
Delivering the Gift of AI for Christmas
Mad Genius thinks Christmas is great. Some might say that our biggest flaw as an agency is caring (about Christmas) too much.
Every year, Mad Genius tries to celebrate the season in a way that gets everyone on the team involved and shows off our capabilities as an agency. Recently, we made a Christmas album with “Mad! That’s What I Call Christmas.” Our agency embraced Christmas drinking in the midst of COVID-19 with the12drinksofchristmas.com. We celebrated the creator of the color-coordinated group photo with “Lost Footage of Barry Styles.” We even helped a local shelter get puppies adopted with “Forever Home for the Holidays.”
This year, everyone has been talking about AI. Whether you’ve been debating AI’s role in the workplace, using it to order 18,000 waters at Taco Bell, or taking some bad diet advice from ChatGPT, you’ve surely been in at least one AI-related chat.
Mad Genius hasn’t been shy about our embrace of AI as a tool to enhance our people’s creativity. So, to celebrate those people, we decided to give them a bit of a Christmas break and let AI do the heavy lifting for our 2025 celebration. We used AI to make three Christmas movie trailers.



We’ll show you the trailers at the end of this blog. But first, here’s the step-by-step process behind this multi-step, process-driven endeavor, full of trials, failures, and learning along the way, at the end of which we successfully turned ourselves into AI-generated Christmas clichés.
Step 01: Writing Character Descriptions
One of the pillars of every Mad Genius Christmas project is that we want to involve as many people as possible. So, we began by writing detailed descriptions of each character, which gave us control over how many characters would be included in each trailer. The character descriptions also allowed us to include as many ridiculous details as we wanted while guiding the story in the general direction we wanted it to go, all without losing the novelty and chaos of having a chatbot write the actual script.
For example, here’s the description we wrote for the main character of what became “A Very Greedy Christmas.”
"Greed McMoneybaggs: He’s a business tycoon who has just completed a hostile takeover of the local soup kitchen just to shut it down. He owns many rental properties and raises the rent every year. Satisfied with another day’s work, he looks out his window at the sign that says ‘McMoneybaggs LLC’ blows out his candle, and settles in for a long winter’s sleep on Christmas Eve. His mattress is like five feet above the ground because he keeps all his money under it. After he’s visited by apparitions through the night, he realizes the error of his ways and reopens the soup kitchen so all can have a Merry Christmas."
Step 02: Having a Chatbot Write the Script
We gave these descriptions to ChatGPT running GPT-5, which is behind a paywall, but there are certainly plenty of free models that could give you a comparable output if you push them in the right direction.
The specific prompt was, “I'm going to give you a series of character descriptions and I need you to write the script for a 60 to 90 second movie trailer that includes all of these characters. You don't need to use every single detail in the descriptions, but each character must reach the end of their arc as described. Narration is okay to include in the trailers.” Then we entered a brief plot synopsis and character descriptions for each trailer. It turned character descriptions like the one above into dialogue like this:

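If you’d rather run this step through the API than the chat window, here’s a minimal sketch using OpenAI’s Python SDK. The model name and input files are our assumptions for illustration, not a record of our exact setup, so substitute whatever model and materials you have on hand.

```python
# Minimal sketch: having a chat model turn character descriptions into a
# trailer script. Assumes the OpenAI Python SDK (`pip install openai`) and
# an OPENAI_API_KEY in the environment; model name and files are placeholders.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "I'm going to give you a series of character descriptions and I need you "
    "to write the script for a 60 to 90 second movie trailer that includes "
    "all of these characters. You don't need to use every single detail in "
    "the descriptions, but each character must reach the end of their arc as "
    "described. Narration is okay to include in the trailers."
)

synopsis = open("synopsis.txt").read()      # hypothetical plot synopsis
characters = open("characters.txt").read()  # hypothetical descriptions

response = client.chat.completions.create(
    model="gpt-5",  # assumption: any capable chat model should work here
    messages=[
        {"role": "user", "content": f"{PROMPT}\n\n{synopsis}\n\n{characters}"},
    ],
)
print(response.choices[0].message.content)
```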
Step 03: Creating an Art Direction Book
Early on, we decided that we’d shoot photos of Mad Genius employees to put them in these trailers, so we created an art direction book, just as we would for any other production. This included details about the overall aesthetic of the trailers and direction on what each character wears. Alas, this was done by human hands; AI is prone to wardrobe recommendations that require a much larger budget.
Step 04: Shooting Photographs
Next, we spent a whole workday creating shareholder value by rotating different Mad Genius employees in and out of our soundstage so they could take pictures in costume and makeup. By this point, we had turned our script into a loose storyboard (another human-hand endeavor) and knew the opening composition of each shot in the trailers.
We took pictures of our “actors” in these starting positions. Veo 3.1, the video-generation model we used to create the videos, has a feature where you can enter a starting frame, which it then turns into a video. In our experience, generative AI video creates a much more consistent, coherent shot if you give it something to work from rather than asking it to create a video from scratch.
Our team also got single shots of each character facing the camera, partly as safeties and partly for use in our movie posters and a years-long back catalog of promotional materials.
Step 05: Generating Backgrounds
We gave the still images to an AI image generator, specifically Gemini’s Nano Banana Pro, to change the backgrounds into the settings that the script called for.
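If you’d rather do this step in code than in the Gemini app, here’s a minimal sketch using Google’s google-genai Python SDK. The model ID, prompt wording, and file names are our assumptions for illustration, not our exact setup.

```python
# Minimal sketch: swapping a soundstage background for a scripted setting.
# Assumes the google-genai SDK (`pip install google-genai pillow`) and a
# GEMINI_API_KEY in the environment. Model ID and files are assumptions.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumption: check docs for current ID
    contents=[
        "Replace the background with a snowy small-town main street at "
        "night. Keep the person's pose, lighting, and wardrobe exactly as "
        "shot.",
        Image.open("soundstage_photo.png"),  # hypothetical starting still
    ],
)

# The edited image comes back as inline data in the response parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("composited.png")
```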

When you look at the successful ones, this whole process can seem like magic. However, it did take some work. Plenty of generations were left on the cutting room floor where they belong.



To be fair, that last image is, in fact, a bit snowier.
Step 06: AI Animation
Once the images had backgrounds, we entered them into the aforementioned Veo 3.1 along with the scene direction and dialogue for each shot.
One successful prompt that yielded a video worthy of the final cut was “NEWSIE bursts in running, waving papers. NEWSIE shouts, ‘Extra! Extra! Soup kitchen open! Christmas saved!’ Keep characters' faces consistent during the entire shot. The NEWSIE never stops running.”
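For the curious, here’s roughly what that step looks like through Google’s API instead of a web interface. This is a sketch, assuming the google-genai Python SDK; the exact Veo model ID changes between releases, and the starting-frame file name is hypothetical.

```python
# Sketch: animating a starting frame with Veo 3.1. Assumes the google-genai
# SDK (`pip install google-genai`) and a GEMINI_API_KEY in the environment.
# The model ID and the frame file name are assumptions, not our exact setup.
import time

from google import genai
from google.genai import types

client = genai.Client()

PROMPT = (
    "NEWSIE bursts in running, waving papers. NEWSIE shouts, 'Extra! Extra! "
    "Soup kitchen open! Christmas saved!' Keep characters' faces consistent "
    "during the entire shot. The NEWSIE never stops running."
)

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumption: check docs for current ID
    prompt=PROMPT,
    image=types.Image(
        image_bytes=open("newsie_start_frame.png", "rb").read(),  # hypothetical
        mime_type="image/png",
    ),
)

# Video generation runs as a long-running job, so poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("newsie_shot.mp4")
```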
As with the static image generation, there were plenty of janky outputs to sift through. And after we were done sifting through them, we compiled them into a blooper reel for you to enjoy:
Step 07: AI Voices
Veo 3.1 can generate audio, including characters speaking their lines. When prompting to create the videos, we simply included the dialogue from the script. The characters in the outputs delivered the lines, but we ran into the problem of the characters sounding mostly the same. Every character seemed to fall into a generic “man voice” or “woman voice” that was almost uniform across all the videos, regardless of the characters’ ages.
We took this audio and ran it through a different tool, Adobe Podcast, to compress and enhance the audio quality. We then fed that audio into another tool, ElevenLabs, which gives you more control over inflection and tone. Since this audio originally came from the Veo output, we just had to add it back into the final cut, and the words were already synced.
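If you wanted to script that last hop, here’s a hedged sketch using the ElevenLabs Python SDK’s speech-to-speech endpoint. The voice ID, model ID, and file names are placeholders rather than what we actually used.

```python
# Hedged sketch: re-voicing a Veo dialogue line while keeping its timing.
# Assumes the elevenlabs SDK (`pip install elevenlabs`) and an
# ELEVENLABS_API_KEY in the environment; IDs and files are placeholders.
from elevenlabs.client import ElevenLabs

client = ElevenLabs()

# Speech-to-speech preserves the pacing of the input audio, which is why the
# re-voiced lines drop back into the final cut already synced.
audio_stream = client.speech_to_speech.convert(
    voice_id="YOUR_VOICE_ID",                  # placeholder voice
    audio=open("veo_line_cleaned.mp3", "rb"),  # hypothetical cleaned-up line
    model_id="eleven_multilingual_sts_v2",     # assumption: current STS model
)

with open("line_revoiced.mp3", "wb") as f:
    for chunk in audio_stream:
        f.write(chunk)
```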
Step 08: Adding AI Music
With the videos complete, all that was left to generate was some Christmas-y music to fill the too-long silences between lines of dialogue. For this, we used a combination of Adobe Firefly and Suno. Some tracks were better than others. Here’s a terrible one:
Ready to give them a watch? Great. Here you go:
You’re welcome. And we’re sorry.
Merry Christmas.
Have you wanted to integrate more AI into your creative? Want to star in your very own AI movie? Schedule a chat and we can talk over some eggnog. It’s gross, but it’s Christmas, so we have to drink it.