Stacey Roy (LEGO Masters) | AI Virtual Talent Production Pipeline

I recently collaborated with my friend Stacey Roy, the Season 3 winner of FOX’s LEGO Masters and a prominent voice in pop culture. Our goal was to push the boundaries of virtual production, using advanced generative AI to digitize her likeness and place her as the commander in a high-fidelity, sci-fi short film.

This project was more than a creative exploration; it was a proof-of-concept for a repeatable AI Virtual Talent Production Pipeline. For agencies and brands, this demonstrates a lean, high-octane methodology for leveraging real-world celebrity equity in limitless digital environments, standardizing the creation of “AI twins” for commercial use.

Here is the exact workflow I used to take Stacey from a photoshoot to a virtual star.

Phase 1: The Dataset & Identity Capture

We began with a controlled, professional photoshoot. We captured about eighty high-resolution images of Stacey covering a comprehensive range of poses, angles, and facial expressions in neutral lighting. This controlled environment was critical; the quality of the input data directly dictates the quality of the final AI model.

Phase 2: Training the Custom LoRA (FluxGym)

Once the images were prepped (cropped, cleaned, and carefully captioned), I utilized FluxGym to build a custom AI model of Stacey, known as a LoRA (Low-Rank Adaptation).
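The captioning step mentioned above typically means writing one plain-text "sidecar" file per image, in the kohya-style convention FluxGym builds on: same filename as the image, a `.txt` extension, and a trigger token first so the trained LoRA can be invoked by name in later prompts. A minimal sketch (the `staceyroy woman` trigger token and the helper name are hypothetical, not the project's actual values):

```python
from pathlib import Path

# Hypothetical trigger token used to invoke the trained identity in prompts.
TRIGGER = "staceyroy woman"

def write_caption_sidecars(dataset_dir: str, details_by_stem: dict) -> list:
    """Write one .txt caption per image, kohya/FluxGym-style:
    same filename as the image, .txt extension, trigger token first,
    followed by an optional per-image description (pose, lighting)."""
    written = []
    for img in sorted(Path(dataset_dir).glob("*.jpg")):
        detail = details_by_stem.get(img.stem, "")
        caption = f"{TRIGGER}, {detail}".rstrip(", ")
        sidecar = img.with_suffix(".txt")
        sidecar.write_text(caption + "\n", encoding="utf-8")
        written.append(sidecar)
    return written
```

Captions describe what varies between shots (pose, angle, lighting) while the trigger token stays constant, so the model associates the token with the identity rather than with any single composition.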

This is where the actual “learning” happens. By setting specific training parameters and running the dataset for several hours, the AI learns the unique, subtle details of her face, how light interacts with her features, and how to maintain that identity across different compositions. The result is a small, portable data file that allows me to accurately generate realistic images of her in any environment.
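The "several hours" of training is ultimately a budget of optimizer steps, and in kohya-style trainers that budget follows directly from dataset size, per-image repeats, epochs, and batch size. A quick sanity-check of the arithmetic (these hyperparameter values are illustrative examples, not the settings used on this project):

```python
# Illustrative LoRA training budget (kohya/FluxGym-style step math).
# These hyperparameter values are examples, not the project's actual settings.
num_images = 80      # dataset size from the photoshoot
repeats = 10         # times each image is seen per epoch
epochs = 16
batch_size = 4

steps_per_epoch = (num_images * repeats) // batch_size
total_steps = steps_per_epoch * epochs
print(total_steps)  # 3200 optimizer steps for the full run
```

Running this math before launching a job is a cheap way to catch a misconfigured repeat count that would otherwise turn a few-hour run into an overnight one.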

Phase 3: Digital Art Direction & High-Fidelity Generation

With the Stacey LoRA prepared, I moved into Stable Diffusion to generate the base images for each scene. I cast her as our space commander, generating countless variations to find the perfect cinematic moments.
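In practice, "countless variations" usually means holding the prompt steady and sweeping seeds (and a few framing keywords), so every render is reproducible and the keepers can be regenerated exactly. A minimal bookkeeping sketch of that sweep (the trigger token, prompt text, and function name are all hypothetical illustrations):

```python
import itertools

# Hypothetical trigger token from the custom LoRA; the real token is project-specific.
TRIGGER = "staceyroy woman"

SCENE = ("cinematic still, {trigger} as a starship commander on the bridge, "
         "volumetric lighting, 35mm film grain")

def variation_queue(seeds, angles):
    """Yield (prompt, seed) pairs for a seed/angle sweep so each render
    is reproducible and the best frames can be re-generated exactly."""
    for seed, angle in itertools.product(seeds, angles):
        prompt = SCENE.format(trigger=TRIGGER) + f", {angle}"
        yield prompt, seed

queue = list(variation_queue(seeds=range(100, 103), angles=["low angle", "close-up"]))
print(len(queue))  # 6 renders queued
```

Logging the seed alongside each output is what makes the later Photoshop pass safe: if a retouch reveals a flaw in the base image, the exact frame can be re-rendered rather than approximated.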


However, AI generation is just the foundation. I took most scenes into Photoshop for significant post-production. I hand-painted details, corrected lighting flaws, and blended layers to ensure the visual narrative matched the premium, high-fidelity aesthetic of a real film set.

Phase 4: Animation, Voice Sync, & Mastering

Once the cinematic stills were perfected, the final challenge was animation. I used Kling AI to bring motion and life to each scene, focusing on creating realistic physics and camera movements.

The most critical step for virtual talent is performance. We synchronized Stacey’s actual voice-over lines to the generated character using advanced lip-sync AI, mapping her real expressions to the animated performance. Finally, I edited everything together in Adobe Premiere, building the sound design and musical score to tie the entire narrative together.
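One practical way to verify that kind of sync before final mastering is to compare audio cue times against the animated mouth movements in frames, flagging anything beyond a small tolerance. A trivial sketch of that check (the 24 fps delivery rate, cue values, and helper names are assumptions for illustration; on this project sync was judged directly in the edit):

```python
# Minimal lip-sync drift check: flag voice-over cues whose animated mouth
# movement lands more than a tolerance away from the audio timestamp.
FPS = 24.0  # assumed delivery frame rate

def seconds_to_frame(t, fps=FPS):
    return round(t * fps)

def drift_report(audio_cues, video_cues, tolerance_frames=2):
    """Pair each audio cue (seconds) with its animated counterpart and
    report any pair that drifts beyond the tolerance, in frames."""
    report = []
    for a, v in zip(audio_cues, video_cues):
        drift = seconds_to_frame(v) - seconds_to_frame(a)
        if abs(drift) > tolerance_frames:
            report.append((a, v, drift))
    return report
```

A two-frame tolerance at 24 fps is under 100 ms, which is roughly where lip-sync errors start to become noticeable to viewers.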


Watch the final result here!


The Evolution of the Pipeline: Optimized Workflows for 2026

Generative AI moves fast: the state-of-the-art workflow I used on this project just a few months ago has already been superseded. While the original methodology (Training a LoRA > Generating Stills > Post-Production > Video Animation) was successful, my current recommendation for this type of virtual talent pipeline has shifted to optimize for even greater fidelity and faster iteration.

If I were architecting this same project for a client today, my revised pipeline would be:

1. Generation (Dual Identity Lock): I would continue to use the custom LoRA within Stable Diffusion as the primary identity reference. However, I would simultaneously utilize photo references within Nano Banana. This combination significantly reduces identity drift at the generation stage, anchoring the character’s likeness with both a trained model and a hard visual reference point.

2. Animation (Native Consistency): The biggest bottleneck in the previous workflow was the transition from a perfect still image to a moving video, which often requires significant post-production stabilization. The updated workflow would prioritize using newly developed native consistency features (such as ‘elements’ or designated character lock profiles) directly within video generators like Kling.

This optimized approach creates a streamlined, end-to-end video pipeline where the generative AI is designed to understand motion and identity preservation in parallel, rather than forcing identity onto motion after the fact.

The Strategic Takeaway: Virtual Production is Commercial Ready

This collaboration is a demonstration of the strategy-to-execution workflow I provide. By clearing the legal rights to her likeness and standardizing this virtual production pipeline, we have created a dynamic, reusable asset.

For brands, this means creating high-concept narrative content, gaming cameos, or complex campaigns featuring their talent ambassadors, all produced at a speed and scale that traditional pipelines simply cannot match. It is a leaner, faster, and more versatile way to create branded entertainment in 2026.

Currently available for freelance projects, contract roles, and agency partnerships.