{"id":2914,"date":"2026-04-08T10:53:36","date_gmt":"2026-04-08T10:53:36","guid":{"rendered":"https:\/\/www.cutout.pro\/learn\/?p=2914"},"modified":"2026-04-08T10:53:39","modified_gmt":"2026-04-08T10:53:39","slug":"blog-what-is-happyhorse-1-0","status":"publish","type":"post","link":"https:\/\/www.cutout.pro\/learn\/blog-what-is-happyhorse-1-0\/","title":{"rendered":"What Is HappyHorse-1.0? The Mystery #1 AI Video Model"},"content":{"rendered":"\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"595\" data-id=\"2920\" src=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-28-1024x595.png\" alt=\"\" class=\"wp-image-2920\" srcset=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-28-1024x595.png 1024w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-28-300x174.png 300w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-28-768x446.png 768w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-28.png 1387w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n\n\n\n<p>I&#8217;m Camille. Last week I was scrolling through the <a href=\"https:\/\/artificialanalysis.ai\/video\/leaderboard\/text-to-video\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Artificial Analysis Video Arena leaderboard<\/a> \u2014 the one where real users vote on blind video comparisons \u2014 and a name I&#8217;d never seen was sitting at #1. HappyHorse-1.0. No team page. No brand. 
GitHub links that say &#8220;coming soon.&#8221; If you make product videos or visual content with AI tools, a pseudonymous model topping both text-to-video and image-to-video rankings is worth understanding before the hype cycle decides for you.<\/p>\n\n\n\n<p>This is what I&#8217;ve been able to confirm, what stays unverified, and why \u2014 if you care about input-driven video workflows \u2014 the I2V numbers deserve a closer look.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"590\" data-id=\"2919\" src=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-1024x590.jpeg\" alt=\"\" class=\"wp-image-2919\" srcset=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-1024x590.jpeg 1024w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-300x173.jpeg 300w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-768x443.jpeg 768w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image.jpeg 1254w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">What Artificial Analysis Has Confirmed About HappyHorse-1.0<\/h2>\n\n\n\n<p>Let me start with the one hard signal we actually have. On April 7, 2026, Artificial Analysis posted on X that they had added a new model to their Video Arena. Their exact word: &#8220;pseudonymous.&#8221; That&#8217;s the entire confirmed identity.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">&#8220;Pseudonymous model&#8221; \u2014 what that label means<\/h3>\n\n\n\n<p>Pseudonymous means the model was submitted without a verifiable team attached. 
It showed up, generated outputs, users voted blind \u2014 same as every other model in the arena. Anonymous benchmark drops have happened before, but one landing at #1 across multiple categories is unusual.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Current rankings across four leaderboards (T2V\/I2V, with\/without audio)<\/h3>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"742\" height=\"339\" data-id=\"2918\" src=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-27.png\" alt=\"\" class=\"wp-image-2918\" srcset=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-27.png 742w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-27-300x137.png 300w\" sizes=\"auto, (max-width: 742px) 100vw, 742px\" \/><\/figure>\n<\/figure>\n\n\n\n<p>As of early April 2026, HappyHorse-1.0 holds these positions on the Artificial Analysis leaderboards:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">Category<\/td><td class=\"has-text-align-center\" data-align=\"center\">Rank<\/td><td class=\"has-text-align-center\" data-align=\"center\">Elo Score<\/td><td class=\"has-text-align-center\" data-align=\"center\">Closest Competitor (Elo)<\/td><\/tr><tr><td>T2V, no audio<\/td><td>#1<\/td><td>1,333<\/td><td>Seedance 2.0 (1,273)<\/td><\/tr><tr><td>I2V, no audio<\/td><td>#1<\/td><td>1,392<\/td><td>Seedance 2.0 (1,355)<\/td><\/tr><tr><td>T2V, with audio<\/td><td>#2<\/td><td>1,205<\/td><td>Seedance 2.0 #1 (1,219)<\/td><\/tr><tr><td>I2V, with audio<\/td><td>#2<\/td><td>1,161<\/td><td>Seedance 2.0 #1 (1,162)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>A 60-point Elo gap (T2V no audio) means one model wins roughly 58\u201359% of blind matchups \u2014 
meaningful. A 1-point gap (I2V with audio) is noise. Newly added models tend to be more volatile than established ones with thousands of votes behind them. <strong>Check the live leaderboard before making decisions \u2014 these numbers will have moved by the time you read this.<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"673\" data-id=\"2917\" src=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-26-1024x673.png\" alt=\"\" class=\"wp-image-2917\" srcset=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-26-1024x673.png 1024w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-26-300x197.png 300w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-26-768x505.png 768w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-26.png 1127w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">What Multiple HappyHorse Sites Claim \u2014 and What Can&#8217;t Be Verified Yet<\/h2>\n\n\n\n<p>Here&#8217;s where things get murky.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">15B parameters, open source, 38s inference, 7-language lip-sync<\/h3>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"836\" height=\"425\" data-id=\"2916\" src=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-25.png\" alt=\"\" class=\"wp-image-2916\" srcset=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-25.png 
836w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-25-300x153.png 300w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-25-768x390.png 768w\" sizes=\"auto, (max-width: 836px) 100vw, 836px\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Several sites \u2014 happyhorse-ai.com, happy-horse.art, happyhorse.app, happy-horse.net, happyhorseai.net \u2014 each describe the same model: 15-billion-parameter single-stream Transformer, 40 layers, joint video-and-audio generation, 7-language lip-sync (Mandarin, Cantonese, English, Japanese, Korean, German, French), ~38 seconds for 1080p on a single H100, full commercial license.<\/p>\n\n\n\n<p>These are <em>claimed<\/em> specs. I can&#8217;t independently verify any of them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">GitHub and Hugging Face links: status as of publish date<\/h3>\n\n\n\n<p>As of April 8, 2026, the GitHub and Hugging Face links on these HappyHorse sites point to &#8220;coming soon&#8221; pages or return 404 errors. The weights aren&#8217;t publicly downloadable.<\/p>\n\n\n\n<p>Here&#8217;s what makes this interesting: a separate open-source project called <a href=\"https:\/\/huggingface.co\/GAIR\/daVinci-MagiHuman\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">daVinci-MagiHuman<\/a>, developed by Sand.ai and GAIR Lab, <em>is<\/em> publicly available under Apache 2.0 \u2014 and it matches HappyHorse&#8217;s claimed specs almost exactly. Same parameter count, same architecture, same language list, same inference speeds. A 36Kr investigation found the benchmark numbers and website structures to be near-identical.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Team identity \u2014 community speculation vs confirmed facts<\/h3>\n\n\n\n<p>Nobody has officially claimed HappyHorse-1.0. Speculation on X has pointed at WAN 2.7, DeepSeek, Tencent, and \u2014 most persistently \u2014 Sand.ai&#8217;s daVinci-MagiHuman. 
The Year of the Horse timing (2026 in the Chinese lunar calendar) and language ordering on the sites (Mandarin before English) suggest an Asia-based origin.<\/p>\n\n\n\n<p>The prevailing theory, per 36Kr, is that HappyHorse is an optimized daVinci-MagiHuman iteration submitted to stress-test user preference. But theory is not confirmation.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Why Image-to-Video Users Should Pay Attention<\/h2>\n\n\n\n<p>This is the part I care about for my own work.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Elo 1392 in I2V (no audio) \u2014 what strong reference-following means for input-driven workflows<\/h3>\n\n\n\n<p>The I2V no-audio score of 1,392 is the highest on the board. What that tells us: when users upload a reference image and compare blind results, HappyHorse&#8217;s output wins more often. The model appears to follow the reference more closely \u2014 subject identity, composition, visual coherence.<\/p>\n\n\n\n<p>If you&#8217;re doing product videos or brand content where you start with a specific image and need the motion to <em>respect<\/em> that image, reference-following is the metric that matters most. Beautiful motion that drifts from your product shape isn&#8217;t useful. Locked-on motion that moves your subject convincingly is.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How clean cutouts and asset quality change AI video output<\/h3>\n\n\n\n<p>This applies regardless of which model you&#8217;re running: input image quality determines the ceiling of your output video. True for Seedance 2.0, true for Kling, and it&#8217;ll be true for HappyHorse if and when it becomes accessible.<\/p>\n\n\n\n<p>Dirty edges, leftover halos, compression artifacts \u2014 the model reads all of that as signal and amplifies it into motion. 
I&#8217;ve covered this in detail for <a href=\"https:\/\/www.cutout.pro\/learn\/blog-clean-assets-ai-video-seedance-2-0\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Seedance 2.0 asset prep<\/a> and <a href=\"https:\/\/www.cutout.pro\/learn\/blog-seedance-2-0-flicker-edge-cleanup\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">flicker edge cleanup<\/a>. The logic is identical for any I2V tool: a ghost halo around the cap in your photo means a shimmering ghost halo in every frame of your video.<\/p>\n\n\n\n<p>Before you rewrite a prompt, check the asset.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Known Limits and Open Questions<\/h2>\n\n\n\n<p><strong>You can&#8217;t use it yet.<\/strong> No public API, no downloadable weights under the HappyHorse name. If daVinci-MagiHuman is the base model, that <em>is<\/em> available \u2014 but it requires H100-class hardware and <a href=\"https:\/\/github.com\/GAIR-NLP\/daVinci-MagiHuman\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">nontrivial setup<\/a>.<\/p>\n\n\n\n<p><strong>Elo scores are early.<\/strong> Seedance 2.0 has 7,500+ vote samples in T2V; HappyHorse&#8217;s count isn&#8217;t broken out. More votes could shift rankings either way.<\/p>\n\n\n\n<p><strong>Community testing is mixed.<\/strong> Some users on X report it falls short of Seedance 2.0 in character detail and dynamic coherence. Others are excited about multi-shot potential. 
Short blind clips may not reflect all use cases.<\/p>\n\n\n\n<p><strong>Portrait-heavy evaluation.<\/strong> Per 36Kr&#8217;s analysis, portrait and voice-over content accounts for 60%+ of the arena&#8217;s test samples \u2014 giving face-and-speech-optimized models a built-in advantage.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">FAQ<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Is HappyHorse-1.0 made by ByteDance \/ DeepSeek \/ Tencent?<\/h3>\n\n\n\n<p>No official confirmation for any of these. The most discussed theory links it to Sand.ai&#8217;s daVinci-MagiHuman, but that remains unconfirmed. Community guesses are not identification.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Can I use HappyHorse-1.0 for commercial video?<\/h3>\n\n\n\n<p>Not directly \u2014 there&#8217;s no public access under the HappyHorse name as of April 2026. Third-party demo sites offer browser-based generation with their own terms, but they&#8217;re not the model developer. If the underlying model is daVinci-MagiHuman, its weights are under <a href=\"https:\/\/huggingface.co\/GAIR\/daVinci-MagiHuman\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Apache 2.0<\/a> (commercial use permitted) \u2014 but you&#8217;d need H100-class hardware.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Does input image quality affect HappyHorse output?<\/h3>\n\n\n\n<p>Yes \u2014 every I2V model amplifies input flaws into motion flicker. Cleaning your cutouts first saves more time than rewriting prompts.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How often do Artificial Analysis Elo scores change?<\/h3>\n\n\n\n<p>Continuously. A model at #1 today might be #3 next week. 
Always check the live leaderboard rather than relying on any article \u2014 including this one.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-6 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"855\" height=\"434\" data-id=\"2915\" src=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-24.png\" alt=\"\" class=\"wp-image-2915\" srcset=\"https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-24.png 855w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-24-300x152.png 300w, https:\/\/www.cutout.pro\/learn\/wp-content\/uploads\/2026\/04\/image-24-768x390.png 768w\" sizes=\"auto, (max-width: 855px) 100vw, 855px\" \/><\/figure>\n<\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Alright, that&#8217;s where things stand. A model nobody can name is at the top of the most credible video benchmark we have, and the gap between confirmed and claimed is wide. My take: pay attention, but don&#8217;t rearrange your workflow around a model you can&#8217;t access yet. 
The thing you <em>can<\/em> control is how clean your input assets are \u2014 and that matters no matter which model wins.<\/p>\n\n\n\n<p>See you next time.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><strong>Previous posts:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-cutout-pro-blog wp-block-embed-cutout-pro-blog\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"zKxJ0fPHvD\"><a href=\"https:\/\/www.cutout.pro\/learn\/blog-seedance-2-0-audio-guide\/\">Seedance 2.0 Audio Guide: Dialogue, SFX, BGM, and Lip Sync Tips<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Seedance 2.0 Audio Guide: Dialogue, SFX, BGM, and Lip Sync Tips&#8221; &#8212; Cutout.pro  Blog\" src=\"https:\/\/www.cutout.pro\/learn\/blog-seedance-2-0-audio-guide\/embed\/#?secret=0R0iR6sSN4#?secret=zKxJ0fPHvD\" data-secret=\"zKxJ0fPHvD\" width=\"500\" height=\"282\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-cutout-pro-blog wp-block-embed-cutout-pro-blog\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"lcKPgGwiid\"><a href=\"https:\/\/www.cutout.pro\/learn\/blog-ai-image-to-video-online\/\">AI Image to Video Online: Turn Any Photo Into a Motion Clip (Free)<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;AI Image to Video Online: Turn Any Photo Into a Motion Clip (Free)&#8221; &#8212; Cutout.pro  Blog\" src=\"https:\/\/www.cutout.pro\/learn\/blog-ai-image-to-video-online\/embed\/#?secret=Puicpq2Vx8#?secret=lcKPgGwiid\" data-secret=\"lcKPgGwiid\" width=\"500\" height=\"282\" frameborder=\"0\" marginwidth=\"0\" 
marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-cutout-pro-blog wp-block-embed-cutout-pro-blog\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"6p9tnAwlVr\"><a href=\"https:\/\/www.cutout.pro\/learn\/blog-seedance-2-0-image-to-video\/\">Seedance 2.0 Image to Video: Turn One Photo Into a Consistent 16s Clip<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Seedance 2.0 Image to Video: Turn One Photo Into a Consistent 16s Clip&#8221; &#8212; Cutout.pro  Blog\" src=\"https:\/\/www.cutout.pro\/learn\/blog-seedance-2-0-image-to-video\/embed\/#?secret=CFE5U1xUuv#?secret=6p9tnAwlVr\" data-secret=\"6p9tnAwlVr\" width=\"500\" height=\"282\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/figure>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m Camille. 
Last week I was scrolling through the Artificial Analysis Video Arena leaderboard \u2014 the one where real users [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2920,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""}},"footnotes":""},"categories":[3],"tags":[],"class_list":["post-2914","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-video-editing"],"_links":{"self":[{"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/posts\/2914","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/comments?post=2914"}],"version-history":[{"count":1,"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/posts\/2914\/revisions"}],"predecessor-version":[{"id":2923,"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/w
p\/v2\/posts\/2914\/revisions\/2923"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/media\/2920"}],"wp:attachment":[{"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/media?parent=2914"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/categories?post=2914"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cutout.pro\/learn\/wp-json\/wp\/v2\/tags?post=2914"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}