Is This Thing On?

Welcome to the FIRST issue of Theoretically News! And yes, this really is Tim, though I won't deny the final copy might get a once-over from an LLM. Someone has to keep my excessive use of semicolons in check!!

As a quick top line, here’s what I’m playing with for the format:

  • Quick Headlines in the World of Creative AI (and what I missed!)

  • Pro-Tip of the Week

  • Cool Thing of the Week

  • And some closing thoughts...

If you have any newsletter suggestions, drop me a line and let me know! We’re going to build this together. In the meantime, let’s dive in!

NEWS
Minimax Got a Bag of Cash!

I don’t usually cover the financial side of the business on the channel, but this is pretty noteworthy: Minimax officially listed on the Hong Kong Stock Exchange today (Jan 9), and it was a massive debut. Shares surged over 100% on day one, valuing the company at nearly $14 billion. That makes it the most successful IPO for a Generative AI startup to date, raising roughly $600M–$700M in fresh capital.

Why It Matters: That cash is explicitly earmarked for R&D. In my opinion, Minimax has been lagging slightly in the AI Video Race, currently being edged out by heavy hitters like Veo, Sora, and Kling. This $600M might be the exact fuel injection they need to crash back into the top 3. Keep an eye on Minimax in 2026—I think we’re going to see some big moves.

NEWS
LTX-2 Released Open Source & Future Plans

Lightricks Drops the Weights: Lightricks has released the open-source weights for LTX-2. A tad later than initially promised (but always better late than never!), LTX-2 can generate motion, dialogue, SFX, and music, all on consumer-grade hardware.

The "Flux" Moment? Amid a flurry of partnerships with Nvidia and ComfyUI, I’ve overheard at least one person call this the "Flux for AI Video" moment. Honestly? That’s a pretty fair assessment. The model isn’t perfect yet, but the open-source crew is already cranking on the code, solving early issues like dialogue bleed and sync.

What’s Next? LTX isn’t done. In a Reddit AMA this week, Lightricks CEO Zeev Farbman dropped some deets on the roadmap:

We're planning an incremental release (2.1) hopefully within a month - fixing the usual suspects: i2v, audio, portrait mode. Hopefully some nice surprises too.

This quarter we also hope to ship an architectural jump (2.5) - new latent space. Still very compressed for efficiency, but way better at preserving spatial and temporal details.

The goal is to ship both within Q1, but these are research projects - apologies in advance if something slips. Inference stack, trainer, and tooling improvements are continuous priorities throughout

My Take: If they hit that Q1 deadline for 2.5, the open-source video gap is going to close fast.

NEWS
Dreamina Updates With 20 Keyframes!

ByteDance’s Dreamina platform has rolled out a batch of new features, including Image + Video and Video + Video inputs:

  • 20 Keyframes: 20? That’s… a lot.

  • Timing Control: You now have control from 0–8 seconds with 0.5s precision.

  • Local Editing: A new "Lock and Unlock" feature for targeted changes.

My Take: I’ll try to look into this one soon. I’m very curious about the Local Editing part of this. Plus, 20 keyframes? That’s insane.

COOL TOOL
3D Camera Control with Qwen Image Edit

I mean, you KNEW Flamethrower Girl had to make an appearance!

Here’s a neat tool you can try for free! Built by MultimodalArt and hosted on Hugging Face, this is a nifty image editing tool utilizing Qwen Image Edit 2511. (I still say this needs a better name! C’mon Qwen, "Massive Plantain" is just sitting there!)

Why I Like It: What really grabs me is the simplified camera controls. Instead of guessing with text prompts ("pan left, rotate 30 degrees"), you simply use sliders (Azimuth and Elevation) to physically "move" the camera up/down or left/right around your subject. The controls are a bit basic, but it’s still a neat preview of what we might see developed further down the line.
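If you’re curious what those sliders are conceptually doing, it’s just spherical-coordinate math: the camera sits on a sphere around your subject, and the two sliders pick the angles. Here’s a minimal Python sketch of that geometry (my own illustration, not the Space’s actual code):

    import math

    def camera_position(azimuth_deg, elevation_deg, radius=3.0):
        # Place a virtual camera on a sphere of the given radius around
        # a subject at the origin. Purely illustrative of the geometry
        # the Azimuth/Elevation sliders represent.
        az = math.radians(azimuth_deg)    # left/right orbit
        el = math.radians(elevation_deg)  # up/down tilt
        x = radius * math.cos(el) * math.sin(az)
        y = radius * math.sin(el)
        z = radius * math.cos(el) * math.cos(az)
        return (x, y, z)  # camera location, looking back at the origin

    # Example: orbit 30 degrees left of center and raise the camera 15 degrees
    print(camera_position(-30, 15))

Every slider tweak is just a new point on that sphere, which is why it feels like dollying a real camera instead of rolling the dice on a prompt.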

The Verdict: Generation-wise, it may not blow you away every time, but in terms of a practical UI, I really love the concept. It’s a glimpse of a future where we stop "prompting" and start "directing."

PRO TIP
2×2 in Nano Banana Pro, Not 3×3

One of the big recent trends with Nano Banana Pro is utilizing it to create multiple continuous shots for AI Video Generation.

The generally accepted practice is to ask for a 3×3 grid. However, friend of the channel PJ Ace advocates for using a 2×2 grid in a 21:9 output frame. After playing around with it, I tend to agree with PJ’s assessment: a 2×2 grid holds consistency across the frames much better, giving you more usable shots per generation.

The Prompt Template:

Generate a photorealistic cinematic 2x2 grid of still frames from a live-action [GENRE] film depicting [THE CHARACTER] in [THE LOCATION].

[The Character] [ACTION DIRECTION].

The imagery must feel gritty, industrial, and oppressive. High-contrast lighting, sweaty skin texture, rusted metal details, atmospheric steam. No clean surfaces, no vibrant colors, no CGI gloss. (Obviously change this style block to fit your scene)

Each frame represents a continuous moment in the scene captured from different camera angles (wide, over-the-shoulder, close-up) to ensure visual consistency.
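To make that concrete, here’s a hypothetical filled-in version (the character, location, and action are my own placeholders, purely for illustration):

Generate a photorealistic cinematic 2x2 grid of still frames from a live-action sci-fi thriller film depicting a weary engineer in a flooded reactor corridor. The engineer wades toward a sparking control panel, glancing back over her shoulder. The imagery must feel gritty, industrial, and oppressive. High-contrast lighting, sweaty skin texture, rusted metal details, atmospheric steam. No clean surfaces, no vibrant colors, no CGI gloss. Each frame represents a continuous moment in the scene captured from different camera angles (wide, over-the-shoulder, close-up) to ensure visual consistency.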

This works best if you’re utilizing Nano Banana Pro on a platform that allows 21:9 output.

As I mentioned in my Planet Hell breakdown, even if you’re generating the final video in 16:9, dropping a 21:9 image reference has an added bonus: it gives you "wiggle room" to reframe, pan, or scan within the video generator without losing resolution.
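Quick math on that bonus: at the same frame height, a 21:9 image is 21/16 ≈ 1.31 times as wide as a 16:9 crop, so you’re carrying roughly 30% extra horizontal pixels to pan, scan, or reframe across before any upscaling is needed.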

COOL THING OF THE WEEK
Legend of Zelda AI-Generated Trailer

Speaking of PJ, I’d be remiss if I didn’t mention his Legend of Zelda AI Film Trailer as the standout video of the week. Racking up a staggering 9.5 million impressions over the last few days, this is one of those AI-generated concepts that clearly broke through to the mainstream. And with good reason: It’s REALLY good.

The Context: Granted, as often happens with PJ’s work, not all of those 9.5 million impressions came with kind comments. But PJ is no stranger to this gauntlet; he already survived a similar firestorm with his Princess Mononoke trailer back in 2024.

My Takeaway: While I always advise creators to steer clear of AI-generated IP content, PJ has a knack for spinning these properties into viral hits. That said, if you choose to head down this path, be prepared for backlash. PJ has developed a thick skin, but that skin was earned the hard way.

Additionally, as we enter 2026, I think we’ll start to see more crackdowns on Fan/Tribute content, AI-generated or not, particularly on the Star Wars/Disney front, given the news about Disney’s partnership with OpenAI.

FROM THE STUDIO
What I Covered This Week

Bit of a slower kickoff week to 2026, so I decided to check in on the AI releases coming out of CES. This was a pretty fun video, covering everything from Nvidia’s new Rubin AI Platform to robots, AR glasses, and a lollipop that sings to your bones. Yeah, that last one was wacky.

THAT’S A WRAP!

So, there we go! The first issue of the newsletter is out!

If you have any feedback or anything you’d like to see, please let me know: [email protected]

Or, just drop a comment on one of the videos! I honestly do see them all!

As I mentioned in this week’s video, I’ve been really waffling on putting a newsletter out. But now that we’re in it (51 more to go!), I am reminded that the best way to do a thing is to just start a thing.

There’s room for improvement in this newsletter, and your feedback will help guide that—but at least the train is now rolling!

As always, I thank you for reading…
Tim
