Over the past 14 years, I’ve designed and produced educational media for corporate partners and learning and development teams, often as part of large, multi-layered learning systems.
Much of that work lives behind client and institutional permissions, so this page is set up as a Representative Media Lab. It brings together short video examples that show how I think about learning media and how I build it using AI.
Every video featured here was intentionally recreated or newly produced using AI-first workflows, from avatar instructors and animation to audio, editing, and post-production. The goal is to highlight both instructional strategy and modern AI production skills in a practical way.
I use AI in video production to move faster, scale more easily, and iterate without compromising instructional quality. The focus is on practical implementation that supports real learning goals, including:
This approach reflects how I design learning media today, using AI as a production partner rather than a replacement for instructional intent.
My role encompassed project leadership, graphic design, and eLearning development.
At the start of each media project, I get clear on the instructional goal, the audience, and where the video will live. Before anything is produced, I use AI to think through the structure, flow, and message so the video is built with purpose from the beginning.
In pre-production, I use AI to:
Tools I commonly use at this stage include:
Using AI early helps me move faster, reduce rework, and make smarter decisions before production starts.
During production, I use AI to generate the core media assets that bring each video to life. This includes everything from instructors or talent to visuals, audio, and motion elements.
At this stage, I use AI to:
Tools I commonly use during production include:
Rather than treating these tools as standalone solutions, I use them together as part of a coordinated production workflow, selecting each tool based on the instructional goal of the video rather than novelty.
In post-production, I bring all of the AI-generated assets together using professional editing tools like Adobe Premiere Pro and Adobe After Effects.
At this stage, I focus on:
This final phase turns individual AI-generated components into polished, intentional learning media that is ready for real-world use.
A curated set of video segments, each showcasing AI-driven production techniques.
This video segment explains how generative AI works and how to communicate with it. A podcaster-style format pairs an avatar instructor with motion graphics to reinforce key ideas.
ChatGPT GPT-5 (scripting), Gemini 1.5 Pro (storyboarding), Nano Banana Pro (avatar presenter), ElevenLabs V3 (audio narration), Midjourney V7 (graphic elements), Google Veo 3.1 (avatar video clips), Higgsfield Vibe Motion (motion graphics)
This video follows a project manager through a typical workday. A documentary-style format uses storytelling and b-roll to reinforce decision making through a case study.
ChatGPT GPT-5 (scripting), Gemini 1.5 Pro (storyboarding), Nano Banana Pro (avatar presenter), ElevenLabs V3 (audio narration), Kling AI 2.6 (avatar video clips, B-roll generation)
This video segment explores effective leadership of diverse teams. An avatar-led talking-head format uses animation to highlight key concepts and practical behaviors.
ChatGPT GPT-5 (scripting), Gemini 1.5 Pro (storyboarding), Nano Banana Pro (avatar presenter), ElevenLabs V3 (audio narration), Midjourney V7 (graphic elements), Google Veo 3.1 (avatar video clips), Higgsfield Vibe Motion (motion graphics)
This video segment explores how to be heard during difficult workplace conversations. An avatar talking-head format uses shifting camera angles to showcase communication examples.
ChatGPT GPT-5 (scripting), Gemini 1.5 Pro (storyboarding), Nano Banana Pro (avatar presenter), ElevenLabs V3 (audio narration), Kling AI 2.6 (avatar video clips)