The NEW Way To Use AI For 3D Artists

By Brandon Lerry

Summary

Topics Covered

  • AI as 3D Hyper-Realism Layer
  • Lock Look with Weavey Nodes
  • Luma Motion Transfer Unites Look and Motion
  • Workflow Shines on Subtle Shots

Full Transcript

AI can generate insanely realistic images and videos right now, but there's one big problem: you still don't get real camera control like you have in traditional 3D software.

So as a 3D artist, I wanted to test something. What if I kept all my camera, animation, and timing inside 3D software like Unreal Engine and only used AI for realism? Could I basically turn AI into a hyper-realism layer on top of 3D? So I put that to the test, and the results were actually pretty surprising.

And I'm going to teach you exactly how to do this yourself. All right, we'll start out by building the shot. The way I did this was I started by rendering an untextured, or clay, animation from Unreal Engine. It doesn't matter what 3D software you're using: Blender, Cinema 4D, Houdini, Maya, all of them can render an untextured pass. As long as you have decent lighting, your models set up the way you want, and your final camera animation, you'll be good to go. Once that's set, we'll render the full animation, and we'll export the very first frame of that as a JPEG.
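
If your renderer writes the animation out as an image sequence, pulling that first frame and saving it as a JPEG is trivial to script. Here's a minimal Python sketch using Pillow; the folder and file names are placeholders, so point them at wherever your 3D software actually writes its frames.

```python
from pathlib import Path
from PIL import Image

# Placeholder paths: point these at wherever your renderer writes the clay pass.
RENDER_DIR = Path("renders/clay_pass")
OUTPUT_JPEG = Path("renders/first_frame.jpg")

# Grab the first frame of the image sequence (sorted by filename).
frames = sorted(RENDER_DIR.glob("*.png"))
if not frames:
    raise FileNotFoundError(f"No frames found in {RENDER_DIR}")

# Convert to RGB (JPEG has no alpha channel) and save at high quality.
Image.open(frames[0]).convert("RGB").save(OUTPUT_JPEG, quality=95)
print(f"Saved first frame: {frames[0].name} -> {OUTPUT_JPEG}")
```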

This first frame is what becomes the visual target for your entire sequence. So the next and most important step is locking the look. This is where we focus entirely on the final look of how we want our video to be. I use Weavey as the main hub for this. You can do this with separate tools like Gemini or Fixfield or Kling, but I prefer Weavey's node-based, all-in-one approach where the models are just built in, and as a 3D artist, it just makes more sense to me. I just love having a bit more control over the final image.

So quick note: if you want to use this exact workflow, I've linked a whole setup below that lets you copy and paste my node layout and all my prompts, just so you don't have to figure it out from scratch. It is all in that link. So the first step to getting the actual final look that we want is to generate the realistic reference image. We'll start by importing that first frame JPEG that we exported, and then we'll add a simple prompt. For my example, I wanted to make a hyper-realistic Formula One driver, so I put that into the prompt. And then what we'll do is we'll plug that into Weavey's prompt enhancer.

And the prompt enhancer basically takes a look at our prompt and the image that we put into it and gives us back a better prompt. Now, with our enhanced prompt and our base image, we'll plug that into Nano Banana Pro or something like Flux 2. What's cool about Weavey is that you can actually just test both at once. For me, I liked the results from Flux 2 Pro just a little better than Nano Banana for this test, because it gave a more photoreal look right out of the box. The reason we spend so much time here is because this is going to be our final look, like our final render.
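
Weavey handles this comparison inside its node graph, but the idea is easy to sketch outside of it too. The snippet below is a hypothetical harness, not Weavey's or either model's real API: generate_image is a stub you'd connect to whatever image model you actually use, and the model names are just placeholder labels.

```python
from pathlib import Path

def generate_image(model: str, prompt: str, reference: Path) -> bytes:
    """Placeholder: wire this up to whichever image-model endpoint you actually
    use (a Weavey node, a vendor API, a local model, etc.).
    It should return the generated image as raw bytes."""
    raise NotImplementedError(f"connect {model} here")

def ab_test(prompt: str, reference: Path,
            models: tuple[str, ...] = ("flux-2-pro", "nano-banana-pro")) -> None:
    """Run the same enhanced prompt + clay first frame through each model
    and save the results side by side for comparison."""
    out_dir = Path("lookdev_tests")
    out_dir.mkdir(exist_ok=True)
    for model in models:
        image_bytes = generate_image(model, prompt, reference)
        out_path = out_dir / f"{model}.png"
        out_path.write_bytes(image_bytes)
        print(f"{model}: saved {out_path}")

# Example usage (the prompt here stands in for the enhanced prompt):
# ab_test("Hyper-realistic Formula One driver, ...", Path("renders/first_frame.jpg"))
```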

So you just want to get this right, even if it takes a lot of prompting and iteration and back and forth. In my example, I liked using Flux for this, but Flux seems like it still struggles with accurate text and logos. To fix that, I actually did a combo of Flux and Nano Banana: I took the Flux image, plugged it into Nano Banana, and added a prompt basically telling it to fix the logos, changing them to a Mobil logo and a Red Bull logo. At this point, you should have a final still image that represents the look of the entire video. That's why this initial image matters so much: it's really your final look. Once we're happy with our final image, we'll run it through Magnific Upscale for a higher resolution reference. This basically gives you a better output and a better image for Luma or Kling to work with.
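
If you want to sanity-check the reference before the motion transfer, a quick script can at least confirm the resolution. The resize here is only a plain Lanczos stand-in for an AI upscaler like Magnific (which invents detail rather than just interpolating pixels), and the file names are placeholders.

```python
from PIL import Image

TARGET_WIDTH = 3840  # aim for roughly 4K before the motion-transfer step

img = Image.open("lookdev_tests/reference_still.png")  # placeholder path
print(f"Reference resolution: {img.size}")

if img.width < TARGET_WIDTH:
    # Plain Lanczos resize: a stand-in only. An AI upscaler will add plausible
    # detail; this just interpolates the pixels that are already there.
    scale = TARGET_WIDTH / img.width
    img = img.resize((TARGET_WIDTH, round(img.height * scale)), Image.LANCZOS)
    img.save("lookdev_tests/reference_upscaled.png")
    print(f"Resized to {img.size}")
```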

So this was the result I got, and I just loved the final image. It was 4K, the skin details were great, and it looked like a real picture. So step three is the look-to-motion transfer. This is where everything comes together.

This is where we'll basically copy and paste our final look image onto our untextured render. To do this, we'll drop in a Luma Modify node. You can use something like Kling's edit video instead, but for me, after testing both Luma and Kling, Luma actually gave better results for this. So we'll drop in a Luma Modify node and bring in three things. First, our AI-generated image. Second, that original untextured render from our 3D software; this is the actual video, and it's what's going to drive the motion and camera. And third, a very detailed prompt basically telling Luma that we want to use our video as the motion and camera driver, but use that final image we made as the look, and to combine them. Basically, this means the image controls the style and realism while the video controls the camera, animation, and timing. So we'll hit run on this and you'll get a fully animated hyper-realistic result.
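
There's nothing to code inside Weavey for this step, but it can help to think of the Luma Modify node as taking exactly three inputs. The sketch below just lays them out as plain data; the prompt wording is illustrative, along the lines described above, not the exact prompt from the linked setup.

```python
# The three inputs to the look-to-motion transfer step, organized as plain data.
# File paths and prompt text are illustrative placeholders.
motion_transfer_job = {
    # Drives the look: the upscaled hyper-real still generated from frame 1.
    "style_image": "lookdev_tests/reference_upscaled.png",
    # Drives motion, camera, and timing: the untextured clay render.
    "driver_video": "renders/clay_pass.mp4",
    # Tells the model how to combine them.
    "prompt": (
        "Use the attached video strictly as the motion and camera driver: "
        "keep its framing, animation, and timing exactly. Use the attached "
        "image as the look: apply its materials, lighting, and photoreal "
        "detail to every frame. Do not change the camera path."
    ),
}

for key, value in motion_transfer_job.items():
    print(f"{key}: {value}")
```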

All right, so moving on, I actually tried to push this workflow even further and tested doing it with animals. I grabbed a pigeon asset from the Fab store, brought it into Unreal, and rendered a simple animation of it. Using the same exact workflow as before, I tested both Nano Banana and Flux to generate the realistic starting frame. I liked both, so I actually just ran them both through the same pipeline. I tested the results in both Luma Modify and Kling using the same prompt and the same workflow, and honestly the outcome was kind of wild. It added realistic feather detail, subtle eye movement, and even generated a convincing city background. When I saw these results, it genuinely excited me, because if you're a 3D artist, you already know how tough it is to get feathers like this, or eye movement, or even just this photoreal quality out of a 3D renderer.

Up until this point, I was testing static shots, but I wanted to see how this workflow held up with actual camera movement. First I tested a subtle dolly in towards the pigeon, and the results were pretty much similar to before. With this workflow, there was a little bit more prompt engineering and a little more iteration to get it right, but honestly not much more effort had to go in to get a final-looking shot like this.

Then I wanted to try an orbit shot around the pigeon, and at first it looked a little bit strange. In my first test, the AI rotated the pigeon instead of understanding the actual camera movement: it kept the camera moving around the pigeon and understood how the pigeon looked, but it didn't keep in context how the background looked and how it all parallaxed against the pigeon. To fix that, I realized that adding background objects in Unreal, such as these pillars and even these random little tiles or squares, really helped the AI understand the parallax of the image.

With this orbit shot, this is also when I started to notice a little bit more artifacting in the image, but after a few more tests and prompt tweaks, I was able to get a pretty solid result.

So for my next test, I wanted to take it a step even further and see if this worked on environments. Could I use an untextured city environment and basically have it texture the entire city for me? For this test, I brought a Google Maps OSM building model of the San Francisco waterfront into Unreal. This time, I actually had a real reference photo of the San Francisco waterfront that I wanted to essentially paste onto my untextured buildings. So I took the first frame of my Unreal render and brought it into Nano Banana with the reference image plugged in, to generate a realistic starting frame that matched my camera and composition. From there, I ran the exact same process as before; the only difference is that this time I used Kling instead of Luma for the motion transfer. And to be honest, with this much camera motion, this is where things started to fall apart a bit.

You can see some of the warping in the lower part of the frame and some subtle building movement and artifacts. The reason this is happening is most likely that the AI is generating its own depth and normal passes, which causes flickering. So I tried using my own depth map from Unreal and definitely got better results, but still not something I would find usable. For a quick test, though, I was genuinely impressed, and hopefully this will improve, or I can find a better solution for shots with more movement.
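
If you experiment with feeding in your own depth pass, it typically needs to be normalized into a simple grayscale image first. Here's a minimal sketch with NumPy and Pillow, assuming the depth pass was exported as a 16-bit PNG; the file paths are placeholders.

```python
import numpy as np
from PIL import Image

# Placeholder path: a 16-bit grayscale depth pass exported from Unreal.
depth = np.asarray(Image.open("renders/depth_pass/frame_0001.png"), dtype=np.float64)

# Normalize to 0..255 so near and far map cleanly onto black and white.
d_min, d_max = depth.min(), depth.max()
normalized = (depth - d_min) / max(d_max - d_min, 1e-8) * 255.0

Image.fromarray(normalized.astype(np.uint8)).save("renders/depth_pass/frame_0001_8bit.png")
print(f"Depth range {d_min:.1f}..{d_max:.1f} normalized to 8-bit")
```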

And for my last test, I wanted to see if I could actually take an already rendered shot from Unreal that looked a little bit gamey and transform it into something that looked photoreal. So I repeated the same exact process for this forest environment shot, and I wanted to see if I could transform it into a snowy environment. The goal was to see if I could swap a standard Unreal render for a more realistic winter scene. The results were actually pretty solid. The good part was that it added the snow and fir detail in the mountain areas, and even the water looked super realistic. But you'll notice some warping in the fine-detail areas, like in the bridge post down here and even a tree right here. Overall, though, as a fast realism pass it works pretty well.

So where this workflow shines: it works best when the shots are static or subtle, the camera movement is controlled, and the prompts are extremely clear.

It struggles more with fast orbiting shots, heavy parallax, or large camera moves. I feel like this workflow can be really exciting, because as 3D artists, getting that photoreal look can take a lot of time no matter what software you're in. So being able to quickly level up our shots like this, I feel like there's a lot of opportunity and potential here.

This workflow is pretty powerful, but tools alone don't build our careers. What actually matters is how we use these things to get attention, get clients, and get paid. So in another video, I break down exactly how I built a sustainable 3D career without relying on one client or one platform. If you want to understand how skills, workflows, and visibility turn into real income, watch this video right here.
