How AI is Powering “Mateo & The Llama”


If you’ve been following the advances in artificial intelligence, you know it’s now possible to generate everything from realistic art to human-like voices with just a few lines of code. But did you know these models have evolved to the point of fueling entire animation productions? That’s the journey we embarked upon to create Mateo & The Llama—a show that explores the frontier of AI-driven storytelling.

From Idea to Animation

The concept was simple: if AI could help produce music, images, and text, why not harness multiple AI models in tandem to make a fully animated show? Rather than building 3D assets by hand, we utilized a range of AI tools in a carefully orchestrated pipeline:

  1. Runway for the animation
  2. DALL·E for characters, objects, color palettes, and backgrounds
  3. ElevenLabs for both voice-over and sound effects
  4. Suno for the music, with lyrics developed using DALL·E
  5. Hedra for lip sync, used in certain instances to keep character mouth movements aligned with the dialogue
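To make the hand-off between these stages concrete, here is a minimal sketch of how one storyboard shot could flow through the chain. The `generate_*` helpers are stubs standing in for calls to each service; none of these function names come from the real Runway, DALL·E, ElevenLabs, or Hedra APIs.

```python
# Hypothetical per-shot pipeline sketch. Each stub below stands in for a
# call to the corresponding service's API; the names are placeholders.

from dataclasses import dataclass

@dataclass
class Shot:
    description: str  # what happens on screen
    dialogue: str     # spoken line, if any

# --- stubs: in production these would call the actual services ---
def generate_stills(desc: str) -> str:
    return f"stills({desc})"          # DALL·E: characters, backgrounds

def animate(stills: str) -> str:
    return f"clip({stills})"          # Runway: motion

def synthesize_voice(line: str) -> str:
    return f"voice({line})"           # ElevenLabs: voice-over

def sync_lips(clip: str, voice: str) -> str:
    return f"synced({clip}, {voice})" # Hedra: lip sync

def produce_shot(shot: Shot) -> dict:
    """Run one storyboard shot through the tool chain in order."""
    stills = generate_stills(shot.description)
    clip = animate(stills)
    voice = synthesize_voice(shot.dialogue)
    if shot.dialogue:                 # only sync lips when there is a line
        clip = sync_lips(clip, voice)
    return {"video": clip, "audio": voice}
```

The key design point is that each stage consumes the previous stage's output, so the storyboard (the `Shot` objects) remains the single source of truth for the whole pipeline.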

For the story itself, we’ve found that while many AI tools can generate text, crafting the scripts and storyboards manually ensures we capture the exact feel we want for each episode. Once the scripts and storyboard are in place, AI steps in to help bring those ideas to life visually and sonically.

Consistency Across Multiple Models

One of the biggest hurdles in AI-assisted animation is keeping characters consistent when moving from one AI tool to another. Each model has its own style, which can lead to mismatches in character design and facial expressions. By refining how we prompt and manage the outputs—especially in DALL·E—we’re able to maintain a cohesive look and feel.
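One simple way to manage that kind of prompt consistency is to keep a fixed "character sheet" and prepend it to every scene-specific image prompt. This is a sketch of the general technique, not our actual prompts, and the character descriptions below are placeholders rather than the show's real designs.

```python
# Hypothetical sketch: reuse one fixed character description as a prompt
# prefix so every image request describes the character identically.
# The descriptions are placeholders, not the show's actual designs.

CHARACTER_SHEETS = {
    "mateo": "Mateo, a young boy with short dark hair, flat cartoon style",
    "llama": "a friendly white llama with a green scarf, flat cartoon style",
}

def build_prompt(character: str, scene: str) -> str:
    """Combine the fixed character sheet with a scene-specific prompt."""
    return f"{CHARACTER_SHEETS[character]}; {scene}"
```

Because the character description lives in one place, a design tweak propagates to every subsequent prompt instead of drifting shot by shot.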

That same refinement applies to lip sync. While many AI models offer some form of lip-sync capability, we’ve found that the facial movements can still seem unnatural when working with detailed or stylized characters. So far, Hedra has proven one of the best solutions in this space, even though there’s still room for improvement.

Giving each character a distinct personality that stays aligned with the story presented its own challenge, since every tool in the pipeline has to reinforce who that character is rather than flatten them out.

The Power of Hybrid Workflows

Our process blends manual creativity with AI-assisted production at every step. We still shape the story, write the scripts, and plot out the storyboard ourselves. Then we lean on AI tools to execute the animation, create backgrounds, generate voices, and even provide sound effects—significantly reducing the time spent on repetitive tasks.

This same workflow can be applied to 2D cartoon animation, puppets, or even claymation. The core idea remains the same:

  • Develop a solid script and storyboard manually.
  • Feed each segment of the production pipeline into specialized AI tools.
  • Integrate the outputs, refine them, and ensure everything remains visually and narratively consistent.


Overcoming Challenges

Despite the efficiency gains, lip syncing remains one of the biggest challenges. Bridging the gap between a character’s movement and the AI-generated voice still requires manual tweaking. However, as AI tools continue to advance, we anticipate an ever-improving synchronization process—one that will make fully automated animation increasingly feasible.

The Future of AI-Driven Creativity

Above all, Mateo & The Llama is a testament to the idea that AI can spark human creativity rather than replace it. By using these tools in tandem, we amplify what’s possible—freeing ourselves from the most tedious steps so we can focus on telling compelling stories and developing memorable characters.

Want to see how it all comes together? Check out the latest episodes and behind-the-scenes clips on our YouTube channel:
https://youtube.com/@mateoandthellama

Whether you’re an animator, a tech enthusiast, or simply curious about AI’s impact on creativity, Mateo & The Llama is proof that we’re entering a new era—one where humans and AI collaborate to push the boundaries of storytelling.
