Google's Veo 3 AI video

AI generative video is moving fast. This is a preview of Veo 3 from Google, which had its stable release in May 2025. I gave it some simple prompts to see how it would respond; with older AI video models, consistency was an issue, alongside artefacts and a general lack of performance.

What's currently setting Veo 3 apart is that it also generates music, sound and voice in sync with the footage. It's limited to 8-second clips at 1280x720 in the Gemini app.

Video is watermarked with SynthID to denote that it is AI generated: https://deepmind.google/science/synthid/
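
If you want to run the same kind of experiment outside the Gemini app, Veo is also exposed through Google's Gen AI API. Below is a minimal sketch using the google-genai Python SDK; the model id, config fields and prompt are my own assumptions based on the public preview documentation and may differ from what your account has access to.

```python
# Sketch: generating a Veo 3 clip programmatically with the google-genai
# Python SDK instead of the Gemini app. Model id and config fields are
# assumptions from the public API preview and may change.
import time

from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

# Kick off an asynchronous video generation job (hypothetical prompt).
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed preview model id
    prompt=(
        "Hand-painted watercolour animation: a felled forest slowly "
        "regrowing, with soft ambient birdsong and wind"
    ),
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Veo jobs are long-running, so poll until the operation completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download the first generated clip (8 seconds, 720p at the time of writing).
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo3_clip.mp4")
```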


Here are three clips in different animation mediums, on a theme of environmentalism, to see what it would spit out. It's a scary demonstration of how quickly it can develop micro shorts or explainer content with little prompting. For moodboarding and testing ideas, this isn't going to go away.

Just for comparison, here's the same prompt in RunwayML Gen-4.

As usual, there are some bizarre hallucinations in the background; I'm assuming that's a lumberjack.

It's currently not production ready, with questions around the data it's trained on... But at the rate this technology is improving, once there are cleaner datasets and studios training on their own artwork, this is a paradigm shift in the usual workflow that we will encounter for the rest of the decade. More than that, it raises how the fundamentals of animation and storytelling need to be addressed by artists to keep ahead of the game and innovate over the recycled.

This is where https://www.moonvalley.com/ and others like it will start to become more visible and accessible in the future, as this technology integrates into the pipeline and works alongside designers.