I Merged AI With Real Filmmaking — This Is the Future
By Alex Zarfati
Summary
Topics Covered
- AI-Human Hybrid Redefines Filmmaking
- Film Emulation Makes AI Footage Believable
- AI Cheats Impressive Wide Shots
- Match Camera Specs for Seamless AI Integration
- AI Fills Budget Gaps in Titles
Full Transcript
AI and filmmaking is one of the most polarizing topics right now. Some people think you're an absolute traitor and an enemy to the film industry if you support AI filmmaking. AI can write you excellent imitative verse that sounds a little bit like Shakespeare, but it cannot write you Shakespeare. Some people think it's the death of the craft, while others think it's a magic button that completely replaces the craft. But I believe both are wrong. See, the real future of filmmaking isn't AI versus real film. It's a hybrid of both: real actors, real cameras, real lighting, and real storytelling, all enhanced by AI used as a tool, not a replacement. Now, I'm not just talking about theory here. In this video, I'm gonna show you guys exactly how I integrate AI into my real filmmaking process. Not fake cinema, but to expand what's possible on the independent filmmaking level.
For the past few months, my team and I have been working on a film called Last Detective, and the total all-in budget for this project was just $5,000. This
was an incredibly ambitious project. It had multiple locations, actors, and crew members, and as you guys can imagine, that budget disappears pretty quickly, even with an incredibly generous crew and lots of favors. And when we got into the edit, I had so many ideas of how we could elevate this film, but unfortunately, we just didn't have the budget to do that. And that is when I turned to Higgsfield AI. For those of you who don't know what Higgsfield is, it's basically this AI platform where you get access to the world's leading AI models like Nano Banana and Kling, plus their most recent creation, Cinema Studio, which I am super stoked about and I'm gonna talk to you guys about in a second. Now, for the scene, I wanted something that felt extremely real, because our film begins with our main character hungover, waking
up in his apartment, and suddenly the phone rings with a message from his wife. As this message unfolds, it reveals a dark, traumatic past that has to do with his daughter. We already had this photo of our actor with his real daughter in a picture frame. So I reached out to the actor and asked him if he would be interested in allowing us to use his daughter's likeness and create an AI model of her for these little scenes. We got permission from him, and that's when I started uploading a bunch of different images into this AI model and created her character, which I could then feed into the prompt. So now that I actually have this image, I'm going to go ahead and take it into Kling, and I'm going to use Kling to actually animate this image. And what's great is that the image does a lot of the heavy lifting for me already. So all I have to do is put in a prompt: a young girl blowing out the candles at her birthday party with a slow dolly push-in. And this is the video that it gave me. To create
this memory sequence, I wanted to create core memory scenes that this character would go through in his head when he's thinking about his daughter. So I thought of her maybe running in a field of flowers, opening up Christmas gifts, and running on the beach. I was able to take the same character, generate all of these different images, and plug them into Kling. This is pretty much the footage that I was able to create. While it does look pretty good, you can kind of tell some of it is fake. The one thing that we have going for us right now is that none of these shots are going to be on screen for very long. Then
the secret sauce in getting this to look real is actually color grading it. I'm
gonna go ahead and take this into Premiere Pro and I'm gonna go ahead and mess with some of the shadows and the overall exposure and the contrast, but then I'm also gonna open up something called Dehancer. Now, Dehancer is an awesome third-party plugin that actually takes footage and creates a film emulation. So you
could actually go through here and choose a real film stock. For my real filmmakers out there, you know what I'm talking about when we're looking at Kodak 500T. And
if you choose that film stock, you can get something that feels a little bit more real, because it adds realistic grit, grain, halation, and bloom to the overall image that makes it a little bit more believable, like it was actually shot on a real film camera. When I put all of these images together, this is what that memory sequence looked like in tandem with the real footage that we shot. Take a look at this. I think that this is a great addition to the edit, but it's also a great way for filmmakers to add production value to an already great scene. Now,
another great way to use AI, and one way that we used it, was to cheat wide shots in locations that just aren't exactly impressive. And that's exactly what we did for this scene. You see, in this particular scene, we have two of our actors having a conversation in front of an investigation board, which is supposed to be this gritty police station office at night. We started running into little set design issues, like the fact that these blinds right here are just blinds and there's no actual window. So I thought to myself, if these actors aren't really moving, could we get away with replacing the wide shot with something that's AI? And I'm going to take you guys on that journey with me right now. So check this out.
So the first thing that I'm going to do here is I'm going to take a still frame of the actual shot that we got from the wide. Like I
said, there's a couple of issues with this shot. Number one, it's static, and what I really wanted was a nice push-in shot. Then there's no window to motivate that blue light coming through those blinds, and I wanted to make the shot itself a little bit wider than it actually is. So what I'm gonna do is take this into Higgsfield and ask it to expand the frame laterally and slightly vertically to create a wider, more expansive version of this shot. And something that's very important here is to maintain a true anamorphic characteristic. The reason why that's very important is because we shot this on an anamorphic lens. So if the AI tries to mimic a spherical lens look and you go to cut it with another scene in our film, it's not gonna make much sense. And that's something that I think a lot of filmmakers have to their advantage: as filmmakers, we can describe something in great detail.
And the more that you do that, the better and closer of an image you're gonna get to your vision. So these are some of the images that I got. Now, this one right here in particular isn't bad, and I think that this could work. But with all that being said, I want to show you guys one more thing that you can do inside of Higgsfield. Higgsfield has something called Cinema Studio, and what's incredible about Cinema Studio is that it allows you to choose things like the camera, the lens, and the aspect ratio, getting even more detailed than just putting in a prompt and trying to find the right words. So I'm going to go ahead and choose an ARRI camera, and that's because we shot this on a URSA Mini Pro, and the ARRI is probably the closest camera on their list to a URSA. And then I'm going to go ahead and choose their anamorphic lens option, because we shot this, of course, on an anamorphic lens. And
the anamorphic lens option that they have here is something called a Panavision C-Series. Now,
I'm going to choose the focal length that we shot this at, which is a 50 millimeter. And I'm going to go ahead and keep it to an F4 aperture. The reason why I say F4, even though we were at about an F2.8, is that I think an F4 keeps everything nice and in focus, and on a wide shot, that's how I kind of want it to look. Now, while this is rendering, I also put this into Kling so that we can compare and contrast the two images: the one that we get out of Kling and the one that we get out of Cinema Studio. Okay, so overall, this Kling image isn't bad. It's decent.
When I look at the two actors in the frame, I can kind of tell that they're fake. I mean, that's just me being real and honest here. So it's
not bad and it might be able to work if it's very quick, but anything over like two or three seconds, you're totally going to see that that's AI. Now
let's pop back over to the Cinema Studio version and see what it came up with.
For sure, to me, this looks a lot more natural. I can see that the characters aren't really moving much, which is a good thing. They look a little bit more realistic, the color tone overall matches the actual shot that we had a little bit more, and the movement is great. Okay, so now I'm gonna take this wide shot. Any shots that you take from AI that you're trying to mix with real footage, put them right next to each other, A/B them back and forth, and make sure that the color tone matches the rest of your footage. That is very, very important. Something that I like to do right away is mess with the curves. I think the curves help you very quickly get something that's a little bit closer to the footage that you need it to be. And I might even add in a little bit of that Dehancer, just to make it look more like it was shot on film rather than digital. And this is the final image that I have. Now, the big question is, will this cut together when I cut to our closeup shot of our actor? Here's the test.
I mean, call me crazy, call me insane, but that to me could actually work.
I definitely think that we could actually go from this wide shot to this closeup of our actor's face and move into the rest of the scene. You know, say whatever you want. I used my own image, I didn't take from anybody else's image, and I enhanced what was already there. I wanted a clean push-in, I got that. I wanted to fix that window problem, I got that. The actual shot itself is gonna last one to two seconds, and then we cut into a closeup shot. To me, this does that job. Call me crazy, but this is the perfect way to integrate AI into live-action filmmaking. But I am not done yet. Check this out.
One cool thing that we wanted to do was add a title sequence to this film. Me and a small crew went out and captured all of these Miami exteriors, and we got some really, really great stuff. The only issue is that I wanted a drone shot of the Miami skyline, and unfortunately, I broke my drone and didn't have the money to hire a drone operator to come out for one shot. To me, that just wasn't worth it. But as I'm in the edit, I'm thinking to myself, man, it'd be great if I had just a drone shot to end the title sequence. Could we AI it? Could we do that? So what did I do? I took the same process. I went and screenshotted this skyline shot that I got from Miami, and I imported that into Cinema Studio. I actually shot this with a spherical lens, so I put in a spherical lens, kept the ARRI camera, and put a dolly push-in on this drone shot. And the prompt was pretty simple: a cinematic nighttime aerial shot of the Miami skyline. And this is
what it gave me. This isn't bad, but can we make it better? Because to me, if I'm looking at this just by itself, I can kinda tell it's fake. I can kinda tell it's AI. It looks almost too clean, too perfect. But can we do the same trick? Let's quickly put it into Premiere, take it into Dehancer, mess it up a little bit, give it a little bit of film stock emulation, and color it a little bit closer to the other shots. Now, this is what that looks like in the final title sequence.
I want to thank you guys so much for stopping by and hanging out. If you guys are interested in checking out Higgsfield, I'll drop a link down below so you can check it out. If you guys have any questions about my process or anything we talked about, drop them in the comments down below, and I'll see you guys next week.