Can AI Survive the AAA Studio Pipeline? (Human vs. Machine 3D Test)
By Class Creatives
Summary
Topics Covered
- Hybrid Artists Replace Traditional Ones
- AI Fails Without Hand-Painted Fixes
- AI's Kryptonite is Ambiguity
- Multi-View References Boost AI Accuracy
- AI Makes Generic Helmets, Not Yours
Full Transcript
Hey there, Class Creatives. In this special edition, we're going to tackle something we've been wanting to talk about for quite some time: all the fear circulating right now about AI replacing artists in the gaming and visual effects industry. We wanted to set the record straight and see what actually happens when you drop machine learning tools into a real high-stakes AAA production pipeline. To do this, we teamed up with two industry heavyweights: David Ardondo, an art director who's worked with Amazon Games, and Anmar Muhammad from Onmars 3D, a veteran technical director who has built pipelines for Epic Games. We thought it would be fun to answer one very specific question: can AI actually spit out a highly specific, production-ready 3D asset from a custom 2D concept, or does the human artist still come out on top? We'll break down the exact modern hybrid workflows these pros are using to crank out incredibly detailed character designs at studio speed. First, Dave will walk us through his 2D concept process. He'll show us how he uses tools like Midjourney and Nano Banana to rapidly generate ideas, and how he fixes and elevates those generations with traditional hand painting. Then, Anmar is going to take those exact 2D concepts and put them through the wringer. He's going to stress test them against top 3D generation tools like Meshy, Tripo, Hitem3D, and Hunyuan to see where they shine and, honestly, where they completely fall apart. We're going to show you how blending traditional handcrafted techniques with these new AI workflows is totally changing game and visual effects development, and how the modern hybrid artist is actually securing their future in the industry. Let's get into it.
>> All right, let's address the elephant in the room right now.
The industry is a bloodbath. Concept artists are getting laid off. Studios are tightening budgets. And artists are terrified they're going to be replaced by machines.
If that is your only workflow, you're going to get left behind.
But AI isn't going to replace the concept artist. The hybrid artist is about to replace the traditional concept artist.
Today, I'm going to show you exactly how to stop fearing these tools and instead start leveraging them to make yourself entirely indispensable to any studio in the world. You heard that right.
You heard me well. All right, let's take a look at the ideation phase. When an art director needs a sci-fi helmet or a character design, he doesn't want just one good design by Friday. No, no, he needs at least 50 great designs by Tuesday. And that's a fact. You let the machine handle the heavy lifting in the initial blocking. I am curating.
But here is where most artists fail. They take a cold generation out of Nano Banana, post it on Instagram, and think they're done. AI has severe limitations. Look at this thing. It hallucinated this extra gear. That doesn't make any sense, by the way.
And then the structural design for this helmet is completely off. And my favorite part, look at this lady. Oh my god, that chin is about to take off, bro. As you can see, it completely missed the benchmark that I actually need for this character. And now, if you hand this raw image to the 3D department, hey, you'll be fired, my guy. Trust me, you'll be fired.
AI cannot fix these specific topological line flaws. I need to paint them by hand. This is exactly why you still need to know how to paint, how to design, and most importantly, to understand anatomy. And that, ladies and gents, is how you take full control in Photoshop. I am completely removing the AI-generated woman from this Nano Banana concept and repainting her from scratch, so her design actually fits our design brief.
You have to be the master of the final pixel. The machine only gets me 70% there, but it is my hand-painted Cintiq workflow that gets me 100% production ready. Honestly, AI gives you a beautiful mess, but it's up to you to turn it into an actual blueprint. And that brings us to the most critical role of the hybrid artist. Your art doesn't just live in a 2D portfolio; it gets sent down the pipeline to be built in 3D for games and the VFX industry.
Thank you. I'm Dave Ardondo.
>> That's right, Dave. And that's exactly where the AAA pipeline shifts. As you just saw, Dave has successfully locked the 2D concept. But in production, a beautiful illustration means absolutely nothing if the topology breaks inside the engine. Now, we hand the asset over to Anmar Muhammad. He's going to take Dave's exact 2D concepts and stress test them against the top 3D AI generation tools on the market to see if they can actually output game-ready geometry, or if human execution is still the only way to survive.
So, I just pushed AI mesh generation through a real production scenario. This wasn't just prompting, "Hey, give me a cool sci-fi helmet." This was a concept given to me by professional concept artist David Ardondo, and he and I worked together to create a real production scenario on how far we can push AI. This is a real 2D concept with a real design and real production constraints. And I wanted to answer one question: can AI make this helmet, or can it only make a helmet? So, let's jump right into it. Here's the concept by Dave Ardondo, and this has given me everything that I need to create a 3D model. This is exactly what I typically receive within a production environment. This includes a full mood board, inspiration, and everything a 3D artist would need to take it from 2D to 3D. The main thing to understand is that this isn't optional. I'm not given this concept and then told, "Make something close to it." I'm given this helmet and told to make exactly this. So, I do what I always do: I get into 3D modeling.
Now, for this instance, what I decided to do was use ZBrush. Because of the organic nature here, I wanted to just start off with a sculpt and not have to worry about any topology. So, that's exactly what I did. This was the first round of review that I sent over to Dave. And like all concept artists, he gave me amazing feedback. And like I said, this is replicating a production environment. My job is to create this as a 3D model based off the concept provided. Now, here's the thing. There were some key things within the design that I completely misinterpreted. And if I, a professional 3D artist of over 15 years, am going to misinterpret a concept, then you better believe that AI is going to do the same thing. We were able to go through multiple rounds of review. He provided fantastic drawovers and everything that I needed to create this as a production-ready asset. Once I took it through that ZBrush pipeline from the sculpt phase, I did a full ZRemesher pass to create the hard-surface version of the helmet. I used that in combination with retopology to create the final helmet. And here's a side-by-side, a before and after. Here we have the before, and then here's the after. And you can see some of the key things that I updated after the feedback from Dave. And again, you can see it here in the feedback. So all of that was no problem at all. One of my favorite things was that I went ahead and modeled this divot here into the helmet when, in fact, it was just supposed to be a planar surface, as we can see in the reference. So I took that as an extrusion when instead it was just a plane of the helmet. So now let's look
at what AI did. I tested the big four, which is Meshy, Tripo, Hunyuan, and Hitem3D. Now, I know there are a lot more AI mesh generation tools out there, but those four gave me the best results through my early testing. And boy, did I test a lot of different AI mesh generation tools, in a bunch of different ways and approaches, until I found a really good core workflow that I'm going to share with you here today. I cover this in full depth in the full video on my YouTube channel. But for here, you can get a high-level view of what I was trying to achieve early on. Not only did I use those four mesh generation tools, I also used Nano Banana (Google Gemini) and ChatGPT for image generation. So
the first thing that I did was see how far AI mesh generation could get with just this side view here. And the results weren't that great. Here we have Meshy. With just this side view, it actually looks okay when you're just looking at it from the side. But the second you start turning around and looking at views it doesn't have reference for, it looks really bad. Which, again, is to be expected, but this gave me some good insight into what AI is doing, or thinking. Tripo, same thing. They all do things a little bit differently. They essentially have to fill in the blanks, right? But things like this were pretty interesting: it almost looks like it puts this weird sci-fi pattern here, in the case of Tripo. Hunyuan here also does some really interesting things. Again, not its fault; it didn't have a lot of information. It looks fantastic from the side, but the second you rotate, you can see where it falls apart. And here's Hitem3D. Same thing, it doesn't look that great. It completely makes up some things and even adds its own LED patterns and ports; you know, it's getting inspiration from the side panel and putting it on the side. So, you get the idea. What is AI's kryptonite here? AI's kryptonite is ambiguity. It really doesn't understand what you're trying to create. All it can do is fill in the blanks. And if you let AI fill in the blanks in production, you're going to have a bad time. The
next part is where I started fixing things and giving AI more information. If I jump to my Miro board here, what I did was take the concepts, feed them into Nano Banana (Gemini), and have it create a front three-quarter, a back view, a front view, and then, of course, the main side view. And here's the ChatGPT version. ChatGPT took it a little bit further and started to change the look and feel. So I kind of got two different styles here, and I just went with it. I just wanted to see how far I could get.
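The habit Anmar describes here, always supplying a matched set of views before handing anything to a mesh generator, can be sketched as code. The request shape and field names below are hypothetical, not any specific tool's API (Meshy, Tripo, and the rest each have their own); the point is simply that a missing view should be a hard error, because every view you omit is ambiguity the model fills in on its own.

```python
# Assemble a multi-view reference set for an image-to-3D request.
# Field names and payload shape are illustrative placeholders, not a
# real API -- consult your generator's actual documentation.

REQUIRED_VIEWS = ("front", "side", "back", "front_three_quarter")

def build_multiview_request(views, target_polycount=5000):
    """Turn a map of view name -> image path/URL into one payload.

    Raising on missing views is the point: partial reference means
    the generator invents the rest.
    """
    missing = [v for v in REQUIRED_VIEWS if v not in views]
    if missing:
        raise ValueError(f"ambiguous input, missing views: {missing}")
    return {
        "mode": "multi_image_to_3d",  # hypothetical field
        "images": [{"view": name, "uri": uri} for name, uri in sorted(views.items())],
        "target_polycount": target_polycount,
    }
```

Treating the view checklist as a validation step, rather than a suggestion, mirrors the "remove ambiguity" tip that closes the video.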
And again, now we're starting to look at what it was able to create. And it's better. Right now, I have all of this here in my Miro board, and you can see the core ones: Meshy, Tripo, Hunyuan, and Hitem3D with Nano Banana, and then ChatGPT with Meshy, Tripo, Hunyuan, and Hitem3D. To make this easy to compare, what I did was bring all of it into Blender, and here we'll see the comparisons. In some instances I have two helmets, because I have both the Nano Banana and ChatGPT versions. We can start to see it gets a little bit better here, but not all that great. Even with this reference, it still just didn't do what I wanted.
Like, this is messy. This is full-on high poly. And if I were to start from this, I would have a lot more work to do. Tripo actually did a really solid job. If I isolate it here and rotate around, we can really see that it's picking up the panels nicely, but the front of the helmet is really where it struggles, right? Because, again, what is it trying to recreate? It's trying to recreate the concept based off of this reference. Now, what I'm realizing is that maybe from the beginning, if David and I had worked through this concept and said, "Hey, let's really push AI to its capabilities and give AI everything that it needs," we would have given it full side, front, front three-quarter, and rear views that were fully shaded, so it wouldn't have to fill in the blanks. But even still, it doesn't give us everything that we need. If we were at maybe 50% before, we're now probably at about, I'd say, 65 to 70%.
Hunyuan did an okay job, but even then it still messes up the rear of the helmet, right? We can see that I'd have to fix up the back completely, but that aside, the front it does okay. It adds these extra panels here at the front. And what should this look like, right? And then we see Hitem3D. And Hitem3D just, I don't know, man, it just didn't do that great of a job. It's okay, but not great. So, what did we learn from all of this? We learned that if we start to remove ambiguity, the results improve drastically. Like, I would say significantly. If we were starting at 30 or 40% accuracy, we're now getting closer to 50 to 60%, from an anecdotal standpoint. But still, it's not that great with design decisions. So
I wanted to give it one more test. That being this: here I have a layer called "done." And this "done" version is the helmet that I created through the traditional 3D art pipeline, meaning sculpting, modeling, retopo, UV mapping, texture baking, and rendering in Unreal Engine. And this is what we got. And this is sitting at about 5,000 triangles as an optimized asset.
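A 5,000-triangle budget like the one quoted here is easy to state and easy to drift from during export. One quick, tool-agnostic sanity check (not something shown in the video, just a small sketch) is to count triangles straight out of a Wavefront OBJ file: each n-sided face contributes n − 2 triangles once triangulated.

```python
def count_obj_triangles(obj_text):
    """Count triangles in Wavefront OBJ text.

    Each face line ("f v1 v2 v3 ...") with n vertex references
    contributes n - 2 triangles after fan triangulation: a tri
    counts as 1, a quad as 2, and so on.
    """
    tris = 0
    for line in obj_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "f":
            tris += max(0, len(parts) - 1 - 2)
    return tris

def within_budget(obj_text, budget=5000):
    """True if the mesh fits the triangle budget."""
    return count_obj_triangles(obj_text) <= budget
```

Running this on every export makes "about 5,000 triangles" a checked constraint rather than an eyeballed one.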
So I said, hey, you know what? If I just give this a front view like this, plus side, rear, top, and a front three-quarter, and put that into AI, will that eliminate the issues and challenges that I was having with ambiguity? Well, this is exactly what it produced, and I was really, really happy with the overall results. This is Meshy. And you can see that it really nailed this front part of the helmet now, because that is where the previous mesh generations had such a hard time. And even the textures look pretty solid. It does mess some things up in other areas, and you can see that it's not really all that symmetrical, but that's an easy fix. And that was Meshy. We'll look at Tripo next. And again, pretty solid. It does a much better job, and I would say it does even better on the rear portion of the helmet. Now, things do get a little bit messy and a little bit wonky. And not only that, it gives it this kind of weird faceted look. But as far as accuracy overall, it is significantly better.
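Anmar notes that the Meshy output above wasn't quite symmetrical. Before fixing that by hand, it helps to locate exactly which vertices break symmetry. Here's a small, self-contained check, a brute-force sketch rather than any tool from the video: mirror each vertex across the chosen axis and look for a partner within tolerance.

```python
def asymmetric_verts(verts, axis=0, tol=1e-3):
    """Return vertices with no mirrored partner across `axis`.

    `verts` is a list of (x, y, z) tuples. This is O(n^2), so use it
    as a spot check on a low-poly mesh, not on dense sculpt data.
    """
    def close(p, q):
        return all(abs(a - b) <= tol for a, b in zip(p, q))

    out = []
    for v in verts:
        mirrored = tuple(-c if i == axis else c for i, c in enumerate(v))
        if not any(close(mirrored, q) for q in verts):
            out.append(v)
    return out
```

Feeding the offending vertices back to a symmetrize operation in your DCC of choice turns "not really all that symmetrical" into a targeted fix instead of a full repaint.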
Hunyuan 3.1, usually solid, really kind of fell apart here, which was surprising. I thought it would do a much better job, but there was something about those images that it just didn't like. And then Hitem3D here just completely fumbled, to be honest, even with better reference. It really just messed up this panel area, this sci-fi panel here. So, all in all, I was really happy with the overall improvement, and I would say we're now probably at about 80 to 85% accuracy. There was still a noticeable amount of work that I would have to do. Now, all of this is just talking about mesh generation. As you can see, this is a lot of detail, right? So what happens when I take this and use these core mesh generation tools to create the low poly? So I have another folder here that I'll show. So, we can
see again we have the low poly here that I created: 5,000 triangles, good, consistent edge flow, focusing on all the key details, the paneling, the silhouette. In some instances, instead of having geometric panels, I just have them fully textured because of the small nature of the detail. And so, let's see what Meshy does here for the low poly. This is now about 5,000 triangles, so this is about as low as it could go. And it is really not that great from a topology standpoint, but from a reduction standpoint it is really, really good. We're looking at about 5,000 tris, but there's no semblance of any edge flow or topology or anything like that. It even struggles on the circular portions as well. Hunyuan 3.1 doesn't have a textured view, so this is Hunyuan 3.1, but it's actually using Polygen 1.5 for the reduction. Now this is starting to look quite a bit more intelligent as far as edge flow and topology go. I'm really happy with some of these details. However, in some areas it just completely goes off in a direction that isn't really helping the form, the silhouette, or even the topology and optimization. And here's Tripo with the smart low poly. And it does a pretty good job. We can see it's giving us details in areas that we want, but because of the faceted nature of the high poly, it doesn't do a great job overall. Still, it's better than I expected. Now, to wrap things up, if I could give you three tips, one of the most important would be to remove ambiguity. Since we are trying to create a very specific helmet, you cannot let AI fill in the blanks, or it's just going to give you something that you don't want.
Another tip, which I go into in the full-length video on AI mesh generation, is to separate complexity. For example, I have this reference here, this really cool image that I found based off of actual clothing. I removed the helmet and replaced it with a mannequin, and that gave me good results. But you can see I have a mannequin's head, a jacket, and pants, and it really starts to mess up the graphics. So as we start to reduce the complexity while keeping all of the key forms, this is what I ended up getting. And again, I used Nano Banana and ChatGPT for the image preparation here. I had them remove the graphics and decals, I removed the hands and the head, and then I separated the jacket and the pants. And this is what I got. And again, I was very thoroughly impressed with this. To start here with this real-life reference and to end here from a mesh generation standpoint, with pretty much all of the tools doing a solid job, produced really, really good results.
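The "separate complexity" tip above amounts to a simple preprocessing discipline: strip the distracting elements, then run one generation pass per remaining component instead of one pass over the whole reference. A sketch of that discipline in Python (the component names and job dicts are illustrative, not any generator's API):

```python
# Distractors mirror the jacket/pants example from the transcript:
# drop the head, hands, and surface graphics, then generate each
# garment in isolation so the model isn't resolving everything at once.
DISTRACTORS = {"graphics", "decals", "hands", "head"}

def split_into_passes(components, distractors=DISTRACTORS):
    """Return one generation job per kept component, in input order."""
    return [
        {"component": c, "note": "isolated, distractors removed"}
        for c in components
        if c not in distractors
    ]
```

Each resulting job carries far less ambiguity than the combined reference, which is exactly why the isolated jacket and pants came out so much cleaner.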
And then finally, looking at all of this: still, without a doubt, treat AI as a tool and not a replacement. It is amazing when you're doing things like this for an ideation phase. You want to quickly and rapidly prototype different concepts. You want to mix and mash and concept-bash all sorts of different ideas. You absolutely can, and it is really good for that, for being fast and iterating. And David goes into that whole portion from a concept art standpoint. And you can do the same for 3D art.
Here are some examples of helmets I generated based off of Dave's AI exploration, and they produced, again, very solid results. But the thing to understand is that these are concepts that AI created, right? We're also finding things like this, where it had a really hard time with the visor, the glass. So, I went through and updated the concept using AI to create a completely opaque visor, and we can see how good and how solid it turned out. And then, bringing it all the way back, this is the low-poly version. We can even see some of this where we have the high-poly helmets based off the complete concepts from AI. And once I pumped them in here, some of these produced really, really fantastic results. Like, this helmet from a low-poly, optimized standpoint was incredibly, incredibly impressive. And it's about 15,000 tris, so it's kind of mid poly, but the details in here look amazing. And this is using Polygen 1.5. We can see Meshy again does okay on the reduction but not on the topology. But Tripo, again, also solid. So this is where it starts to do a really, really good job, but it's doing a good job on a sci-fi helmet, not the helmet that I needed and we needed in the production scenario.
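All of these low-poly comparisons come down to hitting a triangle budget. When you reduce a high-poly mesh yourself instead of relying on a tool's auto low-poly, the first number you need is the reduction ratio. A one-line helper, sketched here with something like Blender's Decimate modifier in mind as one place such a ratio gets used (this is a general sketch, not a guarantee of an exact final count):

```python
def decimate_ratio(current_tris, budget_tris=5000):
    """Fraction of faces to keep so a reduction lands on budget.

    E.g. 100,000 tris down to 5,000 needs a ratio of 0.05. Note that
    blind decimation buys you the count, not clean edge flow -- the
    transcript's point about topology still stands.
    """
    if current_tris <= 0:
        raise ValueError("current_tris must be positive")
    return min(1.0, budget_tris / current_tris)
```

A mesh already under budget returns 1.0, i.e. no reduction needed.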
And here we can see the jackets, the final version, which, if you're interested, I cover in the full-length video. So, to bring this all back home: I've been looking at AI mesh generation since the beginning of 2025, last year. Now we're at early 2026, and here's my updated take. AI mesh generation has improved significantly, which is to be expected, but you have to keep in mind how to use AI and how to get the most out of it. You get the most out of AI when design decisions have already been made, you completely remove ambiguity, and you're okay with it changing the concept and maybe making it more generic. So, to answer my question from the beginning: AI is good at making a sci-fi helmet, but it is not good at making my sci-fi helmet. And that difference is why 3D artists will still matter. So let me know what you think. Did the results surprise you? If you want the full deep dive, including all the blend files and Miro boards, check out the link on my channel and the link below. So, with that, I'll see you in the next one.
>> Well, there you have it. Straight from the masters themselves. AI is definitely an incredible tool for rapid ideation. But
when it comes to hitting a very specific AAA vision with clean, production-ready topology, the human artist is still undefeated. We hope this breakdown gave you a clear behind-the-scenes look at how professional artists are actually using these hybrid workflows to their advantage, rather than just stressing about them impacting their future. By mixing AI generation with traditional 3D mastery, you can build highly detailed, optimized game assets way faster than we used to. A massive thank you to Dave Ardondo and Anmar Muhammad for sharing these high-level industry insights with us. If you want to see the exact nodes, the topology tricks, and the full unedited technical breakdown of how Anmar got that final 3D result, your next step is to head over to his channel. We've linked his full deep-dive tutorial right down in the description. We've got more technical videos coming from Anmar and Dave, diving even deeper into these techniques. So, drop a comment and let us know what specific workflows you want us to tackle next. Are you using these modern hybrid 2D and 3D workflows to create your game and production assets, for your personal projects or as a working professional at your studio? Let us know your thoughts about the new way of creating art in the comments.
And don't forget to like and subscribe, and we'll see you in the next one.