
3 Ways to Use AI with REAL Filmmaking

By Jon Kent

Summary

Topics Covered

  • AI Boosts Micro-Drama Production Value
  • Hybrid AI Backgrounds Elevate Emotion
  • AI Pickup Shots Fill Production Gaps
  • Match Real Actors to AI Lighting First
  • Retake Refines AI Dialogue Precisely

Full Transcript

This is a real actor, and this is an AI actor. You're looking at a scene from a new type of film I've made: a vertical micro drama, a storytelling format that's exploding in China right now.

I wanted to see if I could make one myself and whether or not AI could help boost the production value. So, for this video, I'm going to make a four-part micro drama series where I'll be using AI in a hybrid workflow to replace

backgrounds, generate pickup shots, create digital doubles, and have real actors perform alongside AI characters.

Let's get into it.

Okay, so before we dive into the video, I want to just quickly talk about what a vertical micro drama actually is. And

the best way to tell you is to actually show you. So I've downloaded one of the most popular apps, which is ReelShort. If we click on that, and then on any one of these titles, we'll go straight into watching one.

The first thing to notice is that when we're shooting on one character, that character pretty much fills the whole screen, either in close-ups or medium wide shots. The second thing I've noticed is that it dives straight into the episode. There's no messing around. And these episodes normally last about 90 seconds. As soon as an episode's done, it automatically scrolls on to the next one.

So hopefully you get an idea of what these vertical micro dramas are all about. The acting is a little bit cheesy and the storyline is a bit repetitive. However, we are not going to make a love triangle. We are going to make a revenge film. Let's get it.

So, first up, let's talk about AI backgrounds and a couple of ways we can use them. The first shot is simple: a static camera, a parked car, and our character stepping out and walking away. But emotionally, I wanted this scene to have a little more weight to it. So I want to use the AI to create a transition from the cool blue sky into a dark storm cloud.

To do that, I grabbed a single frame from the original footage I shot. Then I opened up LTX, went into generate video, and uploaded the image. For this shot, I'm using Veo 3.1 with a 9:16 ratio. I added my prompt and clicked generate. And this is what it gave me.

So, the next step here is blending the AI footage with the real footage I shot.

So, I brought the original footage and the AI clip into After Effects. Above the original footage, I masked out the car window from the AI clip, as it has some cool reflections. The next layer up is the sky and trees, masked out from another copy of the AI video to show the sky transition. And the top two layers are a couple of rotoscoped elements of the actor. This way, we keep the real actor elements I shot, but we also get the AI sky transition as well. And here's the final look.
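The layer stack described above boils down to standard alpha-over compositing: each layer's matte decides how much of it covers the layers below. As a minimal sketch of the per-pixel math (the `over` function name and the sample values are mine, not anything from After Effects):

```python
def over(fg, fg_alpha, bg):
    """Composite a foreground pixel over a background pixel.
    fg, bg: (r, g, b) tuples in 0-255; fg_alpha in 0.0-1.0
    (e.g. from a rotoscoped matte of the actor)."""
    return tuple(round(f * fg_alpha + b * (1 - fg_alpha))
                 for f, b in zip(fg, bg))

# A half-transparent matte edge of the actor over the AI sky:
print(over((200, 100, 50), 0.5, (0, 100, 200)))  # -> (100, 100, 125)
```

Stacking layers in After Effects just repeats this operation bottom-up: car window over original plate, sky over that, rotoscoped actor on top.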

So, this second background shot is actually for the next shot in the sequence. I wanted to test out two things here: one was a green screen, and two was a new treadmill I'd bought. So I had my actor walk on the treadmill in front of a simple green screen setup. Once I'd filmed that, I pulled it into post, did a quick grade, and again exported a still frame of the character.

Then, I took that still frame into LTX and used Nano Banana to generate a new background that matches the environment of the previous shot. Then I used Nano Banana again to remove the character.

And then I added the same model and color of car from the real footage, with the door open. So now we have a clean plate. From there, I used that final AI image to generate a slow dolly movement backwards, giving just enough motion to make the scene feel alive. I then keyed out the green screen from the original footage and dropped in the new AI background. The final step was to add an overlay of rain, a slight camera shake, and finally a little grade to bring it all together. And the final result looks like this.
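A dedicated keyer like After Effects' Keylight does far more than this (soft edges, spill suppression, choking the matte), but the core of green-screen keying is a per-pixel "how green is this?" test. A minimal sketch, with a threshold value I picked arbitrarily:

```python
def green_key_alpha(pixel, threshold=90):
    """Return 0.0 (transparent) for strongly green pixels, 1.0 otherwise.
    pixel: (r, g, b) in 0-255. A real keyer produces fractional alpha
    at edges and suppresses green spill; this is just the core test."""
    r, g, b = pixel
    greenness = g - max(r, b)  # how much green dominates the other channels
    return 0.0 if greenness > threshold else 1.0

print(green_key_alpha((30, 200, 40)))    # screen green -> 0.0 (keyed out)
print(green_key_alpha((180, 150, 120)))  # skin tone    -> 1.0 (kept)
```

Wherever the alpha is 0, the new AI background shows through; that's the "keyed out the green screen and dropped in the background" step in one line of math.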

So, another hybrid workflow where I think AI can really shine is insert shots and pickup shots. These are cutaway shots you would normally grab during a shoot, but sometimes in post-production you realize you could have done with one or two more.

The first shot I needed in my sequence was an aerial shot showing the audience the scale of the location. But before generating anything, I needed to figure out what was actually in the real location. Then, heading straight to video generation, I added the names of the trees and plants to the prompt, selected Veo 3.1 again with 9:16, and generated a slow drone-style shot that actually matches the location we were filming in.

And sometimes you need an insert shot that joins two shots together, like this one. Before our character stands up, I wanted a quick close-up to show what the character is actually doing. So, heading straight to generate images and selecting Flux, I made sure the prompt matched the elements in the scene: the jacket sleeve, the overcast lighting, the environment, and what the hand is doing, which gave me this. I then created a new prompt describing the camera movement and what the hand was doing, and I also described a second character stepping into the frame from behind, which again helps motivate the next shot. The result is a quick AI insert that fits naturally before the real shot and makes the movement flow much smoother.

For the next two pickup shots, I wanted to create a visual metaphor. I needed one shot of a bare tree with a few birds that would fly away after hearing a gunshot, and a second shot of the sun breaking through an overcast sky. For the tree shot, I started by generating the image, making sure the surrounding foliage matched the real location, and then animated the birds lifting off as if startled by the gunfire. The sun-and-cloud shot I generated entirely in generate video, as it was a less specific shot, so I created a prompt and used LTX2 Pro to generate it directly.

And finally, for the last pickup shot, I used one of LTX's new features, called elements. If we go to the top of the page and click the elements tab here, we can upload an image of our character, an object, or something else. Then we can name the character. You also have the option to assign a voice. Once you're happy, click save element. So, if we go back to image generation, I can then create a prompt and use the @ symbol to tag in any element I've saved. For this shot, that meant creating a wide shot of our character walking in a forest in golden-hour sunlight. It was then ready to generate into video using LTX2 Pro. All I needed for this shot was some camera movement and character movement. And this was the final shot of the film.

So, before we film anything, we need to build the AI character first. And there's a reason for that: if we generate the AI character first, we can then match the lighting from that shot with our real actor. Doing it the other way around is just a little bit more tricky.

So, in LTX, I created the images in the image generation tab, selected Flux and 16:9, and here's the prompt I used. The goal here is to get a clean, cinematic first image of my character. This was the image I ended up with. And from here, I had to make one important change: the brown leather sofa the character is sat on, as I don't have that sofa for my actor to sit on. So we've got to make that match. Using my phone, I took a photo of the sofa. Now I select edit, upload the reference image, select Nano Banana, and simply prompt the AI to replace only the sofa the character is sat on, using the reference image, which gave me this. I then moved on to generating a few more images in 9:16, again using Nano Banana. So I ended up with a medium shot, a high-angle close-up, and a medium wide shot face-on.

Once we've got all our shots, it's time to bring them to life. Click create video. For these shots, I'm using Veo 3.1. Next, I copy across the dialogue from my script and add a simple prompt for the actor's movement and camera movement, which gave me this:

>> Maybe.

Where'd you find it?

>> And the final thing I want to show you quickly is a feature called retake. So, let's say I have a clip and I'm happy with pretty much all of it except for the last line of dialogue. Maybe it's just one thing I want to change, but without redoing the whole generation and maybe losing some of the elements I actually liked, I can use retake to replace just that last line. So, here is a clip I generated using LTX2 Pro.

>> You think I watch for sport?

>> I watch because that's what I do.

>> Now, all we need to do is go into edit and select retake. Then, select the section of the clip we want to change.

Here, I'm going to change the last line of dialogue.

>> I watch because that's what I do.

>> And once we confirm the selected part, add a new prompt to say what you want changed. And we have our retake.

>> You think I watch for sport?

It's always been about the money.

>> So, now we have all our AI character shots. The next step is to film the real actor. The first thing I did was take the main medium AI shot, flip it horizontally, and use it as my guide for the actor's angle. Once the camera was set, I set up the lighting with the same direction and the same quality. Once everything was lined up, we ran the takes. We shot the dialogue, plus a few extra versions for safety, and then moved on to close-ups and inserts. The key here is making sure everything is really aligned with that AI shot, so both the shots and the performances look like they're from one scene.

If you want to check out the full finished micro series, there'll be a link here somewhere or down in the description.
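The mirror-guide trick is just a horizontal flip of the reference frame, which any NLE (or ffmpeg's hflip filter) can do. As a toy sketch on a grid of pixel values, to show why flipping turns the AI character's angle into a matching eyeline for the real actor:

```python
def flip_horizontal(image):
    """Mirror an image (a list of pixel rows) left-to-right,
    so a character looking screen-left now looks screen-right."""
    return [row[::-1] for row in image]

# A tiny 2x3 "frame" of pixel values:
frame = [[1, 2, 3],
         [4, 5, 6]]
print(flip_horizontal(frame))  # -> [[3, 2, 1], [6, 5, 4]]
```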

But let me know your thoughts on micro drama series. What do you think of those? And what about the AI hybrid workflow? I'd love to know your thoughts in the comments. I reply to every comment, so definitely drop a message down there.

Thanks for watching. I'll catch you in the next video. Peace.
