The Potential of Runway Gen 3 Alpha for AI-Generated Animations

As AI continues to revolutionise various creative fields, the latest advancements in generative video models have sparked significant interest. With the release of Runway Gen 3 Alpha, I was eager to test how well it could animate images I had created with different AI generative art models, and whether it could effectively interpret direction and produce smooth animation. While there's still a question of whether such models can be used commercially, integrating them into an established production pipeline offers exciting possibilities. Here, I’ll document my tests to assess how Runway Gen 3 Alpha performs and its potential for production use.

---

Test 1: Animating a 3D Character from Luma AI

For the first test, I used an image I had previously created with Luma AI: a 3D-generated character I call "Tommy the Tomato." The character has a distinct style and aesthetic, and I've written about him in an earlier blog post. The prompt for animating this image in Runway Gen 3 Alpha was:

Direction: "Tommy is looking around in an optimistic and naive way. The camera slowly pans in on his face."*

Results:

- Positives: The render quality was impressive. The textures of the green leaves on top of Tommy’s head were detailed, and the glossy reflections on the tomato's body were spot on. The depth of field effect was well-executed, and the character's face – complete with eyes, nose, and mouth – appeared fully formed. These were all carried over successfully from the original image.

- Negatives: The most noticeable animation was in the eyes, where instead of blinking, the pupils seemed to "close" awkwardly. The movement of the mouth was more convincing, but overall, the lack of realistic eye movement took away from the quality of the animation.

Conclusion: While the animation showed some promise, there were clear limitations. Still, it had potential to be improved with adjustments in the prompt or re-renders.

---

Test 2: Animating a Photo-Real Image from DALL·E

For the second test, I wanted to see how well Runway Gen 3 Alpha could animate a more photo-real image. I used DALL·E, an AI model accessible via ChatGPT, to generate an image of two men in suits shaking hands. The prompt for this image was simple:

Prompt:"Two men in suits shaking hands in agreement."

Once generated, I set the prompt in Runway Gen 3 Alpha to guide the animation:

Direction:"Two men shake hands and then walk past the camera in slow motion, as the camera slowly pans in."

Results:

- Positives: The animation successfully depicted the two men shaking hands and smiling at each other. The pan-in effect was executed, though the men themselves otherwise remained largely static.

- Negatives: The men did not walk past the camera as I had instructed. Additionally, there were noticeable glitches—one of the men’s legs went through a table, an issue that could potentially be resolved through multiple re-renders. The overall animation had an uncanny quality, making it feel less than production-ready.

Conclusion: The animation showed potential but fell short of a polished, production-ready result. With more refinement and re-renders (one way to batch those is sketched below), however, it could become usable for certain applications.
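
Glitches like the leg passing through the table tend to vary from one generation to the next, so in practice the fix is usually to queue several takes of the same image and prompt and keep the cleanest one. Continuing the hypothetical snippet above, a rough sketch of that batch re-render loop might look like this:

```python
import time

def render_takes(runway, image_url, prompt_text, takes=4):
    """Queue several renders of the same image/prompt and return the successful outputs."""
    created = [
        runway.image_to_video.create(
            model="gen3a_turbo",
            prompt_image=image_url,
            prompt_text=prompt_text,
        )
        for _ in range(takes)
    ]
    outputs = []
    for job in created:
        # Poll each task until it finishes, then keep the result for review.
        task = runway.tasks.retrieve(job.id)
        while task.status not in ("SUCCEEDED", "FAILED"):
            time.sleep(10)
            task = runway.tasks.retrieve(job.id)
        if task.status == "SUCCEEDED":
            outputs.append(task.output)
    return outputs
```

Each take costs credits, so in a real pipeline you would cap the number of renders and review them before committing to more.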

---

Test 3: Animating Storyboards with StoryboardSketcher

For the third test, I explored using Runway Gen 3 Alpha for animating storyboards. I used ChatGPT's StoryboardSketcher, which runs on DALL·E, to create storyboard sketches based on simple prompts. The storyboard I generated was:

Prompt: "A 16:9 wide-angle sketch of a man walking past a corner shop."

Once I brought the sketch into Runway Gen 3 Alpha, I applied the following animation direction:

Direction:"The man walks slowly past the shop, and the camera follows him left to right in a cinematic way."

Results:

- Positives: The animation was successful in portraying the man walking past the shop, complete with a realistic shadow following him. The movement was smooth, and despite the sketch-like quality of the image, it gave a nice cinematic feel.

- Negatives: The camera didn’t follow the man as cinematically as I had hoped, but the core movement was impressive enough to see the potential for this tool in creating quick animatics.

Conclusion: This method holds potential for animating storyboards quickly. While it may not be suitable for final production, it could be a valuable tool for creating animatics to show timing and movement to clients or colleagues.

Final Thoughts

The Runway Gen 3 Alpha model demonstrates some interesting potential for integrating AI-generated animations into production pipelines. While it is not yet production-ready, the capability to animate images from models like Luma AI, DALL·E, and StoryboardSketcher shows promise. The limitations are clear—occasional glitches, uncanny movements, and a lack of nuance in complex animations—but for rough previews, animatics, and certain creative projects, it could become a valuable part of a production process.

Ultimately, I see this as a tool that will only get better with time and development. While it’s frustrating that some of the small, nuanced issues aren’t yet fully addressed, its value as a way to rapidly iterate and visualise ideas in early production stages is undeniable. I look forward to seeing how the technology progresses and becomes more seamlessly integrated into creative workflows.
