Good To See You!
Quick one this week, gang—and yes, I'm a day late. I know, I know. It's been one of those weeks where the backburner projects decided they wanted to be front-burner projects.
Going forward, the newsletter will remain weekly, though I don't know if it'll keep a strict structure on release. Anyhow—this issue is lighter on the news, but at least it has a Seedance trick/theory!
Let's get into it!
NEWS
Sneaking Past Seedance 2.0's Realistic Character Filter

Why does Flamethrower Girl always work?
So, as I mentioned in Thursday's video, and we are all acutely aware of: Seedance 2.0 has a "realistic people" filter that…well, it can be pretty heavy, but also inconsistent. I do believe those limitations will be lifted fairly soon—but the prompt engineer in me can't help but poke at the walls in the meantime.
If you caught Thursday's video, you'll know that some characters sail right through the filter for reasons unknown. Flamethrower Girl? No problem. Lyra? Smooth as butter. Meanwhile, Tom, Sonny, and even Renfield were getting blocked left and right.

(Side note: We DID manage that one shot of Renfield at the tavern, but that was based on an older image—generated back in, I believe, Midjourney V6—not his current character model card.)
Here's where it gets interesting. I came across a video that FAL did on Seedance prompting, hosted by my pal Matt Workman. Matt hypothesized that using Midjourney as your source image generator might be a workaround, since Midjourney tends to give characters a slightly more painterly feel—which could be enough to slip past the realism detector.
It's an interesting theory, considering that Renfield DID pass with his older image reference, and Lyra was also a V6 character.
I'll need to test this more, but so far? It does seem to be working— not all the time, but with a higher success rate.
Plus: if you use the Omni model with a realistic background, Seedance tends to take the realism of the background as ground truth for the generation—meaning your slightly painterly character gets pulled into a photorealistic world.
AGAIN: I wouldn't call this a 100% bulletproof workaround just yet. But it's worth experimenting with if you're hitting that wall.
If you don't use Midjourney, try prompting character details in Nanobanana or any other image generator to give your characters a slightly painterly look. All that said, hopefully it won't be too long before we see a number of the current guardrails lifted.
In the meantime, one overlooked technique with the Seedance Omni model is to give it audio and have it generate video to accompany it. It's interesting, and doesn't always yield 1:1 results—in fact it will often give you "inspired by" results—but it is for sure worth experimenting with.
Pro Tip
The Open Source Seedance Clean-Up Tool!

Flow Denoise — The Seedance Clean-Up Tool You Didn't Know You Needed
In more Seedance news—a fairly common complaint with generations has been interpolation issues, jagged spikiness, and color blotching. I'm sure you've all seen it—that look that sometimes feels like you're watching a bootleg DVD from 2004.
Well, leave it to the open-source community to go out and build a fix.
Flow Denoise is an open-source project from AIMZ-GFX, who also happens to be a professional compositor. Now, what does it do?
Uses optical flow (MEMFOF) to align neighboring frames, then averages them to remove temporal noise
Separates chroma and luma so you can target color flicker without killing detail
Scene-aware — handles cuts automatically (tested on 15-second clips with multiple scene transitions, worked clean)
If you don't know what any of that means, that's fine. Neither do I!
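For the curious, though, the core idea is simpler than it sounds. Here's a rough sketch of the second and third bullets—chroma/luma separation plus scene-cut-aware temporal averaging. This is NOT the actual Flow Denoise code: the real tool aligns neighboring frames with MEMFOF optical flow first, which I've skipped here, and all the function names and thresholds below are my own illustration.

```python
import numpy as np

def detect_cut(a, b, thresh=30.0):
    # Crude scene-cut check: a big mean absolute difference means a new scene,
    # so we shouldn't average across it.
    return np.abs(a.astype(np.float32) - b.astype(np.float32)).mean() > thresh

def rgb_to_yc(frame):
    # Split into luma (brightness, BT.601 weights) and chroma (color residual).
    f = frame.astype(np.float32)
    y = 0.299 * f[..., 0] + 0.587 * f[..., 1] + 0.114 * f[..., 2]
    return y, f - y[..., None]

def temporal_denoise(frames, thresh=30.0):
    # Average each frame's chroma with its (assumed pre-aligned) neighbors to
    # kill color flicker, but keep the original luma so detail isn't smeared.
    out = []
    for i, cur in enumerate(frames):
        y, _ = rgb_to_yc(cur)
        neighbors = [cur]
        for j in (i - 1, i + 1):
            if 0 <= j < len(frames) and not detect_cut(cur, frames[j], thresh):
                neighbors.append(frames[j])  # skip neighbors across a cut
        chroma = np.mean([rgb_to_yc(f)[1] for f in neighbors], axis=0)
        out.append(np.clip(y[..., None] + chroma, 0, 255).astype(np.uint8))
    return out
```

The point of the split: color flicker lives mostly in the chroma, so averaging only chroma removes the blotching without softening the luma detail your eye actually tracks.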
But there IS a GitHub for the project where you can download it and run it in ComfyUI. I know that doesn't make all of you node-haters happy, but because it's open source, I'm hoping to see it implemented into some platform or another soon.
In the meantime, if you've got your comfy pants on, give it a shot. If not, I'll try to do a test run in an upcoming video so we can check out the results!
Hypothetically, this should also work with other video generators, so it'll be worth exploring for sure!
SPONSOR!
Lyria 3 on Artlist!

Artlist just added AI music generation to their All-In-One AI Toolkit — powered by Google’s Lyria 3 and Lyria 3 Pro models.
You type a prompt (or drop in reference images), pick a genre, mood, and theme, and get a commercially licensed track generated in seconds. Lyria 3 Pro goes up to 3 minutes with full lyric and tempo control.
A standout feature is Artlist Sound — a post-processing layer exclusive to Artlist that refines every AI-generated track to match their production standards. There’s also auto-prompt and auto-lyrics if you want the tool to do the creative lifting for you.
It’s included in every Artlist AI plan, no upsell. Text-to-music, image-to-music, custom lyrics, instrumental — all in one place.
→ Try AI Music on Artlist
FROM THE STUDIO
What I Covered This Week
Packed video this week! The big story: OpenAI's next image model is already being tested under stealth names on the Arena leaderboards. I'm not sure it'll end up being quite a Banana killer, but even as just another option, it'll be well worth having.
Doesn’t take the sting out of losing Sora, but hey— at least we got something (hopefully) cool out of it.
But that wasn't all! We also covered Milla Jovovich (yes, THAT Milla Jovovich) open-sourcing an AI memory system called MemPlace, plus PixVerse's new cinematic C1 model, and Galileo Zero introducing a "world critic" for AI video quality control.
Yeah. It was a LOT. If you missed it: CHECK IT OUT!
I also did a deep dive on Seedance 2.0 now that it's fully landed in the US—pricing breakdowns, content restrictions, workarounds, the whole thing. I won't rehash it all here since we're already in Seedance territory above, but one thing worth highlighting from that video: I checked back in on PixVerse's updated real-time video model, and yeah—it's getting pretty cool. Real-time AI video is still early days, but every time I revisit these models, the gap between "fun toy" and "actually useful" gets a little smaller. Worth keeping an eye on.
If you missed it, here's the link!
THAT’S A WRAP!
Kind of a quick one this week—but next week should be fun. I'm headed out to check out an event for Utopia's new features. I covered them a few months ago when they first launched, and since then, it looks like they've been putting in the reps. I'll let you all know what I see!
As always, thank you for reading… Tim