Good To See You!
Another week closes on the rollercoaster of Creative AI! It’s been an interesting one, to say the least: we had the “global” rollout of Seedance 2.0 (more on this later), Wan 2.7 dropped, and Google’s Veo is hitting blowout sale prices!
Let’s round it up!
NEWS
WAN 2.7: Seedance Killer? Uh, no…

Always remember Wan launched as WanX. Never forget.
So, just a day after I posted a video saying, "Wan 2.7 isn't available as of this video" — Alibaba went ahead and dropped it. Sometimes I'm convinced they're messing with me…
But yes, Wan 2.7 Video is now available in all the usual places — via API on the major providers and directly on the Wan mothership.
As detailed in my video, 2.7 features the following (I’ve sketched out a rough API call after the list):
First/Last Frame Control (FLF2V): You define both the opening and closing frames of a clip, and the model generates everything in between — turning it from a standard generation tool into something closer to a directed animation tool.
9-Grid Image Input: A 3×3 grid of nine reference images in a single input for multi-angle scene composition.
Up to 5 simultaneous references: The model reads across all of them (mixing image, video, and audio) to infer character, motion, and environment.
Instruction-Based Video Editing: Give an existing clip a natural language instruction — change the background, lighting, or wardrobe — without a full regeneration.
Subject + Voice Reference: Combined in a single pass.
Duration bump: Increased from ~5 seconds to 15 seconds.
Native audio built-in: Includes background music, ambient sound, and synced vocals.
Visual quality hitting commercial 1080p standards: Better skin textures, fabric movement, and physics.
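If you’re planning to hit 2.7 through the API providers rather than the Wan site, most of those features map onto request parameters. Here’s a minimal sketch in Python of what an FLF2V call with mixed references might look like. To be clear: the endpoint, field names, and response shape are all my own placeholders, not a documented API, so check your provider’s actual docs.

```python
# Hypothetical sketch of a Wan 2.7 FLF2V request with references.
# Endpoint, field names, and response shape are illustrative placeholders,
# NOT a documented API -- consult your provider's actual docs.
import base64
import requests

def b64(path: str) -> str:
    """Read a local file and base64-encode it for the JSON payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "model": "wan-2.7",
    "prompt": "Captain Renfield turns from the window and walks to the desk",
    "first_frame": b64("renfield_start.png"),  # FLF2V: opening frame
    "last_frame": b64("renfield_end.png"),     # FLF2V: closing frame
    "references": [                            # up to 5 mixed-media refs
        {"type": "image", "data": b64("renfield_face.png")},
        {"type": "audio", "data": b64("renfield_voice.wav")},
    ],
    "duration_seconds": 15,                    # the new 15-second ceiling
    "resolution": "1080p",
    "audio": True,                             # native music/ambience/vocals
}

resp = requests.post(
    "https://api.example.com/v1/video/generate",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["video_url"])  # hypothetical response field
```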
Now, is it the "Seedance Killer"?
No. It really isn't. I'm still early into testing, but currently, it feels like a point-one update to me. That isn't a totally bad thing — I thought 2.6 was pretty good. Audio has improved, but I am seeing issues with image-referenced characters turning a bit "plastic-skin" looking. The model's omnimodal capabilities do seem to be on par with the Kling 3.0 Omni model, though.

Testing Captain Renfield in 2.7
One thing worth noting: there's no indication this will be open-sourced, and I highly doubt it will be. There have been rumors swirling that 2.5 might be released as open source, but those are just that: rumors. Fingers crossed. If it happens, I think we’d see it in the next few weeks.
The Verdict: I need to test the instruction-based editing, as I think that’s the most interesting feature. If a generated clip is 90% right, you can now fix the remaining 10% without re-rolling the dice. But overall? Temper your expectations. I'll do some more tests over the weekend and roll out a short review on it next week.
NEWS
Netflix Goes Open Source

Well, I didn’t have this on my bingo card for the week: Netflix has released an open-source model called VOID (Video Object and Interaction Deletion) —
And as much as I was hoping it was a video-to-video model that would turn everything into The Upside Down… alas…

It’s actually a research model that removes objects from videos — BUT ALSO undoes all the physical interactions they induce on the scene.
Here's their example: If you remove a person holding a guitar, VOID also removes the person's effect on the guitar, causing it to fall naturally. It generates a physically plausible "counterfactual" — what would have happened if the object had never been there.
It’s pretty interesting. Every other video removal tool can inpaint the background when you remove something, but none of them understands physics. There’s no causality.
This will regenerate footage to show the alternate universe where your removed object didn’t exist. That’s pretty crazy.

Under the hood, it's built on CogVideoX-Fun-V1.5-5b-InP and fine-tuned with what they call "quadmask conditioning" — a 4-value mask that encodes the primary object to remove, overlap regions, affected regions, and the background to keep. It uses a VLM (Gemini) + SAM2 pipeline to automatically identify which parts of the scene are causally affected by whatever you're removing.
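To make “quadmask conditioning” a bit more concrete, here’s a toy sketch of what a 4-value mask could look like, using the guitar example. The integer encoding and hand-drawn regions are my own illustration of the idea, not VOID’s actual constants; in the real pipeline, the Gemini + SAM2 stage identifies these regions automatically.

```python
# Toy illustration of a quadmask: one integer map encoding four regions.
# The specific values (0-3) and rectangles are illustrative, not VOID's
# actual encoding -- see the paper/repo for the real conditioning format.
import numpy as np

H, W = 480, 720
BACKGROUND, REMOVE, OVERLAP, AFFECTED = 0, 1, 2, 3

quadmask = np.full((H, W), BACKGROUND, dtype=np.uint8)

# The object to delete (the person):
quadmask[100:400, 200:350] = REMOVE
# The causally affected region (the guitar that should now fall),
# identified by the VLM + SAM2 pipeline:
quadmask[250:420, 330:450] = AFFECTED
# Where the two overlap (the person's hand on the guitar neck):
quadmask[250:400, 330:350] = OVERLAP

# This mask conditions the fine-tuned video model alongside the input
# frames: erase REMOVE, regenerate AFFECTED/OVERLAP with new physics,
# and leave BACKGROUND untouched.
print(np.unique(quadmask, return_counts=True))
```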
Practical notes: This is a research project, published on arXiv on April 2, with code on GitHub and model weights on Hugging Face. There's a Gradio demo on HF Spaces if you want to try it out, although that can be a bit janky if you don’t kick it up to a powerhouse GPU. Since it requires a GPU with 40GB+ VRAM (think A100), this isn't running on your home machine today.
My Take: Post-Ben Affleck announcement, this is Netflix putting real AI research out into the open. But it's also exactly in line with what Affleck has been talking about regarding the use of AI in video production. Ta-Dum, indeed.
NEWS
Veo 3.1 Hits the Bargain Bin!

Google launched Veo 3.1 Lite on March 31 — a new, cost-optimized tier in the Veo lineup, and the pricing is genuinely aggressive.
The numbers: $0.05 per second for 720p and $0.08 per second for 1080p. That's less than 50% of the cost of Veo 3.1 Fast at the exact same generation speed. An 8-second 720p clip will cost you $0.40.
But more than that: Google is also reducing pricing on Veo 3.1 Fast — new rates will be $0.10/sec for 720p, $0.12/sec for 1080p, and $0.30/sec for 4K. The Veo 3.1 family now spans three tiers (Lite, Fast, and Standard) ranging from $0.05/sec to $0.40/sec.
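If you want to napkin-math your own workload against those rates, here’s a tiny cost calculator using the per-second prices quoted above (the tier names are just labels for this sketch):

```python
# Quick cost check against the published Veo 3.1 per-second rates.
RATES = {  # USD per second of generated video
    ("lite", "720p"): 0.05,
    ("lite", "1080p"): 0.08,
    ("fast", "720p"): 0.10,   # new, reduced Fast pricing
    ("fast", "1080p"): 0.12,
    ("fast", "4k"): 0.30,
}

def clip_cost(tier: str, res: str, seconds: float) -> float:
    return RATES[(tier, res)] * seconds

print(f"Lite 720p, 8s:      ${clip_cost('lite', '720p', 8):.2f}")   # $0.40
print(f"Fast 720p, 8s:      ${clip_cost('fast', '720p', 8):.2f}")   # $0.80
print(f"100 Lite clips, 8s: ${100 * clip_cost('lite', '720p', 8):.2f}")
```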
Now, for the "free tier" nuance. You may have seen headlines suggesting Veo is free now. And… sorta?
Veo 3.1 Lite is currently in its Paid Preview phase and does not support free tiers directly through the API itself. BUT, there are adjacent paths: Google Flow offers limited free daily generations, students with .edu emails can get Google AI Pro free for 12 months, and Google AI Studio usage is free of charge for testing. Ultimately, though, the Veo models themselves require the paid tier.
The bigger context: This is happening right as OpenAI shut down Sora — which was reportedly burning $15 million per day. Google is making the opposite bet: drive costs down aggressively and own the developer ecosystem.
My Take: While Veo 3.1 is obviously getting pretty long in the tooth for cinematic, bleeding-edge AI filmmaking, that’s not a huge deal since we’re closing in on Veo 4, likely coming at Google I/O.
But Veo 3.x isn’t really aimed at us anymore. It's now strictly in developer land. Think app builders whose focus isn’t solely on video. And honestly, the $0.05/sec Lite tier is genuinely cheap enough for them to start seamlessly building video into their apps and workflows.
COOL THING OF THE WEEK
Memory of Princess Mumbi

I was sent the trailer for a very cool-looking independent film called Memory of Princess Mumbi.
I haven’t seen the film yet, but I am now keenly interested. I think after you read the background details and watch the trailer, you will be too.
This is a micro-budget Kenyan feature film, written, directed, shot, and edited by Damien Hauser, currently making the festival rounds:
Memory of Princess Mumbi is set in 2093, as a young documentary filmmaker, Kuve (Abraham Joseph), travels to the kingdom of Umata to document the aftermath of a great war. There, he meets Mumbi (Shandra Apondi), a free-spirited actress who challenges him to make his film without using AI.
Yeah, you see what he did there, right? Our guy made a movie, augmented with AI tools, about making a movie that does NOT use AI tools. Chef’s kiss.
Definitely take a moment to check out the trailer here. And, while I could waffle on with my usual talking points about traditional filmmaking and AI, let’s pass the mic to Damien himself instead:
“Till now, it was always Hollywood telling the stories of the whole world,” says Hauser. “And through this technology…[African filmmakers] are more able to tell their own stories. Not only Africa. We’ll get so many more different perspectives, and different types of storytelling.”
FROM THE STUDIO
What I Covered This Week
I kicked off the week with a platform tour of Recraft. It mostly detailed their V4 model, but obviously, they’ve got a lot more going on over there. This was a sponsored video from them — but as you guys know, I really only do these for companies I genuinely like. And I like Recraft.
I’ve got a few more spots with them coming up in the near future. I’m actually looking forward to digging in to see what I can do now that they’ve brought video generation into the mix!
I also dove into the whole Seedance 2.0 release madness. It’s… frankly, a bit of a mess. But it’s also kind of funny to me? You know when you stub your toe and it really hurts, but you also start laughing? Yeah — kinda like that.
Again, it’s a mess. But I’m sure these are just growing pains for now. Hopefully, within a week or two, we’ll have full global access and the prices will start to normalize. As for content restrictions and nerfing? I think those are gradually lifting as well. ByteDance is well aware that issues like those contributed to the sinking of Sora. They’re not going to make the same mistake.
But probably the most fun I had this week was live chatting with Flamethrower Girl via Pika’s new Agent Video Chat feature.

As mentioned in the video, it’s not perfect. In fact, it's still kind of janky. But the idea of now video chatting with your AI Agent, who actually has a persona? I mean, that’s… Jarvis? But also more than Jarvis, since all he had was a voice!
Our Girl has a Flamethrower!
wait…maybe that wasn’t such a good idea after all…
THAT’S A WRAP!
Another week down!
What’s in store for next week? Well, if I had to guess: more Seedance Drama!
Until then, I thank you for reading! My name is Tim!