What a Week!

Welcome back to Theoretically News!

Boy, where to begin? Obviously, the headline act of the week was Google dropping Project Genie. I’ll talk more about this in a bit, but as I continue to play around with it, I am endlessly stunned by what it can do.

But of course, that wasn’t the ONLY news of the week! So, it’s a good thing we have this newsletter.

NEWS
Genie Is Out of the Bottle! Game Stocks Plunge?

So, while I promise not to do this often, I have to give the top news spot to Genie. Since I've already covered the basics and my overall thoughts in this week's video (fun fact: the longest video to date on the channel!), I won't repeat all that, but I DO feel this is such massive news that it deserves the headline spot!

In the interview section of my video, Google DeepMind Researcher Jack Parker-Holder mentioned that he thought of Genie as a new class of foundation model—because it can not only create a world but simulate reality within it.

Yes, what we're seeing today has fairly limited controls and interactivity, but that is just today. Two years ago, when Genie-1 was showcased, it could only generate 2D games that lasted one second. Extrapolate that out to two years from now, and it starts to become clear how Genie and other state-of-the-art world models will form a new class of interface for technology.

Add to that SIMA, Google's AI agent that "lives" inside Genie. It will likely be added to the public release at some point (though that might be a while), meaning we're looking at a model where you generate a world that can then be populated with characters or scenarios run by SIMA.

And if that sounds impossible, just remember that two years ago Genie-1 was only capable of generating 2D platform games at a blurry and blobby resolution for one second of “gameplay.”

In related news, Reuters is reporting that shares of video game company stocks fell sharply the day after Genie’s release. Shares of Take-Two Interactive (Grand Theft Auto) fell 10%, while Roblox was down 12%. Game engine giant Unity fell a whopping 21%.

Granted, were there other factors in the market rollercoaster? Possibly. But the market is often quick to overreact to the news cycle, so this will likely end up being a minor speed bump. It certainly won't impede the billions that GTA 6 will make.

NEWS
The Tilly Tax?

Tilly Norwood, NOT photographed on a red carpet

Hollywood's AI TAX? As SAG-AFTRA prepares to sit down with the studios on Feb. 9 for negotiations, news has broken about a potential idea the union is considering called the "Tilly Tax."

Tilly Norwood, of course, is the AI Avatar that crystallized the industry's AI fears last fall. The character caused a bit of a stir on a slow news day by claiming to have multiple talent agency deals. (Note: I have my doubts about that, but the story gained traction.) Now, the union is floating a proposal where studios that use synthetic actors like her in place of humans might have to pay a royalty into a union fund.

So, while the union still can't stop AI-generated characters from appearing in films and media, they're now discussing ways to either discourage their use or at least be financially compensated for it.

But even SAG-AFTRA isn’t sold on the idea, calling it the “best bad idea we’ve got in 2026.”

Hot Take: This tax idea showcases the profound struggle Hollywood is facing in adapting to AI. And frankly, it raises more questions than it answers about how it would actually work. Does it apply to animation or non-realistic characters? What about procedurally generated background actors?

We’ll see how the whole thing plays out, but I’ll say one thing: The character and hype around Tilly Norwood has always felt like an opportunistic rush after a viral news story. The "Hawk-Tuah Girl" of AI, if you will.

If the character’s ultimate lasting legacy is a failed attempt to create a tax based around her name? Yeah, I’m fine with that.

New Models
ByteDance SeedDance/Seedream Updates!

Interesting movement in the AI video space from the Chinese labs, perhaps timed to the Lunar New Year, and for sure making February a hot month for model updates! As reported by The Information, mid-February will see the launch of:

  • SeedDance 2.0: ByteDance’s new flagship video generation model

  • Seedream 5.0: The accompanying image generation model

While we still don't have the benchmarks, resolution, or feature set, I'm willing to wager that SeedDance 2.0 will up the ante on features we've been seeing lately, including audio, reference/omni models, and video editing.

And while I can’t officially report anything now, I have been hearing rumors and whispers about some other big version number updates also coming in February!

As soon as I know more and can confirm, I’ll let you know!

New Models
New SOTA Open Source Video Model

OpenMOSS (the Fudan University-backed research group) has officially released MOVA (MOSS Video and Audio). Their tagline is bold: they aim to "break the silent era" of open-source video generation.

Worth noting: LTX-2 had already done that, but… I’ll let it slide.

Being trapped in the "Genie Bottle" this week, I haven’t had time to try it out myself, but interestingly, MOVA claims to handle Native Bimodal Generation—meaning it synthesizes high-fidelity video and synchronized audio in a single inference pass.

The demos I’ve seen look pretty good. There are still some issues with spoken dialogue (darn those contractions!), but the model is multilingual.

At the end of the day, a new open-source video model is always a good thing, but the hardware reality check here is brutal. The 720p model runs on a massive 32B parameter architecture (that’s 32B total, with 18B active parameters during inference thanks to a Mixture-of-Experts design).
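
To put that in perspective, here's a rough back-of-envelope on the weights alone (my numbers, not OpenMOSS's; real usage also depends on activations, the KV cache, and the VAE). The MoE design saves compute, not memory: all 32B weights still have to live somewhere, even though only ~18B fire per step.

    # Rough memory needed just to HOLD MOVA's 32B weights, by precision.
    TOTAL_PARAMS = 32e9

    for label, bytes_per_param in [("bf16", 2), ("fp8", 1)]:
        gb = TOTAL_PARAMS * bytes_per_param / 1024**3
        print(f"{label}: ~{gb:.0f} GB of weights")

    # bf16: ~60 GB -> no single consumer GPU holds that
    # fp8:  ~30 GB -> just squeaks onto an RTX 5090's 32 GB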

So, unless you have an RTX 5090, native performance is likely out of reach.

Quick tip: Technically, you can run it on an RTX 4080 if you have 96GB+ of DDR5 RAM by offloading the weights (and KV cache). So, if you’re in that Goldilocks Zone, head over to Hugging Face to give it a shot!
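
If you're wondering what "offloading" actually looks like in practice, here's a minimal sketch. Big caveat: the repo id, and whether MOVA ships a diffusers-compatible pipeline at all, are my assumptions, so treat this as pseudocode-with-imports and defer to the actual model card.

    # Hypothetical offloading setup -- the repo id and pipeline class are
    # assumptions; check the official MOVA model card on Hugging Face.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "OpenMOSS/MOVA",            # hypothetical repo id
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
    )

    # Streams weights between system RAM and VRAM one module at a time.
    # This is the trick that makes a 16 GB card plus 96 GB of DDR5 viable,
    # though every step now pays a RAM-to-VRAM transfer cost.
    pipe.enable_sequential_cpu_offload()

    out = pipe(prompt="a cat winking at the camera")  # output format depends on the pipeline

Fair warning: sequential offload trades a LOT of speed for fitting at all, so expect generations to crawl compared to a native run.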



COOL THING OF THE WEEK
WINK

Friend of the channel Momo Wang premiered Wink, an AI short film made in partnership with Adobe, at Sundance this week.

It’s cute, and cat lovers—you’ll be extra happy with this one.

Momo’s background is in traditional animation, and I think that shows through in the short. But the remarkable part is that Wink was created in just 28 days. By Momo’s estimation, if she were working traditionally, a 5-minute short like this would have taken over a year to produce.

I was on a call with Momo this afternoon, and it was very encouraging to hear that the film was well received at Sundance, with people stopping her in the street in the days after the screening to tell her how impressed they were.

Perhaps the tide is slowly turning?

You can watch Wink over on YouTube here: https://youtu.be/Yo29u0I5-Ow?si=uiIPLGyiwyBMEGRO

FROM THE STUDIO
What I Covered This Week

This week, I took a look at Luma Labs Ray 3.14. It's a smaller update for Ray, but one that touts faster, cheaper generations. Though, as we uncovered in the video, the "cheaper" part really only applies to the 720p outputs—not 1080p.

Additionally, I took a look at Decart’s Lucy 2.0 Real-Time Video Editing, which does have a free demo you can try out.

Lucy 2.0 has some similarities to Krea’s recently released Real-Time model (which I did not have time to cover), but there are some distinct differences in their approaches.

Overall, I’d say Real-Time is looking promising, and certainly is a lot of fun—but it does have a bit of that old "AI Video 2.0" era vibe.

Still, don’t let that dissuade you from trying these models out and experimenting with them. As the old saying goes: “This is the worst it’ll ever be.” Real-Time Video, as crazy as that sounds, is right around the corner!

If you missed the video, here it is: https://youtu.be/Rw42ya_u424

And of course, there was my epic video on Google’s Genie.

As a bit of fun “Behind the Scenes”—from time to time, DeepMind will reach out to see if I’m interested in covering a new launch. I have yet to say no.

As is expected, I sign a phone book's worth of NDAs and off I go!

Generally, I’ll get access about 48 hours before launch, which always turns into a whirlwind of testing, filming, and editing.

I can generally produce one of my “typical” YouTube videos in a (longish) day, but when DeepMind calls, they always ask if I’d like to speak to someone on the team as well. Once again, I have yet to say no.

It’s been fun turning these calls into interview segments for releases like Veo and Nano Banana Pro, and I tend to think you all are interested in the thought processes of the actual builders of this incredible technology.

But, of course, me being me, I can’t just run a 28-minute Zoom session as-is (since that would bore even me!). So, these segments turn into a production within a production.

All of which is to say: When it’s a Google Launch Week, I don’t get much sleep.

I mean, I haven’t gotten much sleep since 2022, but on a Google week, I get LESS sleep!

That said, I wouldn’t have it any other way. Super proud of this video!

THAT’S A WRAP!

With that, we close this week and I’m going to get some sleep!

Who am I kidding? I’m going to go play with Genie more!

If you have any feedback or anything you’d like to see, please let me know: [email protected]

Or, just drop a comment on one of the videos! I honestly do see them all!

As always, I thank you for reading…
Tim
