Adobe just changed the creative AI world
By Curious Refuge
Summary
## Key takeaways

- **Adobe Embraces AI as Aggregator**: Adobe is now positioning itself as an aggregator, bringing various AI tools like Topaz, ElevenLabs, and Runway directly into its platform, signaling a shift toward a unified creative ecosystem. [01:06], [01:38]
- **Firefly Image 5 Ups the Resolution**: Adobe's new Firefly Image 5 generator produces images up to 4 megapixels, a significant quality improvement over previous versions, with the added benefit of copyright-cleared models for commercial use. [03:22], [03:35]
- **Premiere Pro Adds AI Masking & Auto Soundtrack**: Adobe Premiere Pro now features a tool for quickly creating masks around characters, simplifying tasks that previously required rotoscoping, and an AI that automatically generates soundtracks matching video duration. [09:06], [09:54]
- **Magnific Precision 2 Leads Image Upscaling**: The latest Magnific Precision 2 upscaler is deemed the best on the market for photorealistic results, outperforming tools like Crystal Upscaler and Topaz Gigapixel, especially for large-scale projects like billboards. [24:39], [27:19]
- **AI Adoption High, Cost Remains a Barrier**: A survey of 16,000 creators revealed that 76% use AI tools for business expansion, but 38% cited the cost of AI models as the primary barrier to wider adoption. [15:38], [16:06]
- **Veo 3.1 Excels in Video Performance**: Google's Veo 3.1 is highlighted for its superior character performance and physics in AI video generation, though its current 720p native resolution necessitates upscaling for higher quality. [20:53], [21:23]
Topics Covered
- AI simplifies common creative tasks across Adobe tools.
- Adobe's AI aggregation is the future of creative workflows.
- High AI tool cost is the biggest adoption barrier.
- Google Veo will win the AI video generation war.
- Magnific Precision 2 is the best image upscaler.
Full Transcript
Adobe just changed the world of AI
creativity. By the end of this video,
I'll explain why. There's a brand new AI image upscaler with better quality than any tool we've used up until this point. And MiniMax 2.3 is here, and the quality looks really good, but how does it compare against Veo and Kling? Well, we'll find out in this week's episode of
AI Film News. Thanks for joining. Now,
before we get going, I want to say thank
you so much to the team at Adobe along
with the members of the Curious Refuge
community that said hello at Adobe Max.
We heard so many incredible stories of
students who went through our training
and landed jobs afterwards. I really
appreciate everyone who came by to say
hello. Okay, cool. Let's get to the
news. So, of course, we're not going to
bury the lead. Let's talk about the big
news coming from Adobe Max. There were a
ton of updates from the Adobe team, but
I think the first one that I really want
to focus on is the fact that Adobe has
officially planted their flag when it
comes to artificial intelligence. While
they were a little hesitant to talk
about AI a few years ago, this year's
Adobe Max was almost exclusively talking
about the latest AI tools on their
platform. Now, the most notable update, the one that I think represents a seismic shift in the way we approach our creativity, is that Adobe has gone all-in on being an aggregator that brings various AI tools together. Using Firefly and various tools across the Adobe platform, you have the ability to
do everything from creating images to
creating videos using some of the very
popular tools on the market. For
example, you can use Topaz, ElevenLabs, Flux, Google, Ideogram, Luma, Moonvalley, OpenAI, Pika, and Runway entirely inside of Adobe's platform.
Now, I'll do more extensive training in
the very near future on how you can use
Adobe tools for end-to-end creativity,
but I do want to show you some of the
newest innovations that we have inside
of the Firefly platform. So, let's hop
in here. You can find a link below this
video. I'm going to be using the mood
board maker inside of Adobe. So, we'll
go ahead and select start mood boarding.
And if you've used other tools like
Miro, it works in a very similar way.
So, you can of course bring in
third-party assets if you want. You also
have the ability to create a completely
new board. I'll just go ahead and select
create a board. And of course, we have
the ability to bring in our assets. I'll
go ahead and click X. Now, the cool
thing about using boards is you can kind
of make it anything that you want
depending on your own creative
organizational style. You can lay out
all your images in a straight line. You
can have it be super messy. Everyone's
different and it really just depends on
what type of workflow resonates with
you. So, for example, all you have to do
is go down here to generate an image.
And from here, you can see we have the
ability to select this menu. And now we
have a ton of different image generators
that we can select. And most notably,
there is Firefly Image 5 at the very
top, which is Adobe's brand new image
generator that generates images up to 4 megapixels, a big quality improvement over previous versions of Firefly.
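To put that 4-megapixel figure in concrete terms, here's a quick arithmetic sketch of the dimensions a 4 MP pixel budget works out to at common aspect ratios (the exact output sizes Firefly uses may differ; this is just the math):

```python
# What a ~4-megapixel budget works out to at common aspect ratios.
# Actual Firefly Image 5 output dimensions may differ.
import math

BUDGET = 4_000_000  # pixels

for name, (aw, ah) in {"square 1:1": (1, 1),
                       "widescreen 16:9": (16, 9),
                       "portrait 2:3": (2, 3)}.items():
    # Solve w * h = BUDGET subject to w / h = aw / ah.
    h = math.sqrt(BUDGET * ah / aw)
    w = BUDGET / h
    print(f"{name}: ~{round(w)} x {round(h)}")
    # square 1:1: ~2000 x 2000
    # widescreen 16:9: ~2667 x 1500
    # portrait 2:3: ~1633 x 2449
```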
What's really important is that, again, Adobe's models are copyright-cleared, meaning that your clients can use the images generated from Firefly and don't have to worry about third-party IP showing up inside the images. But of course, you have the
ability to select other image generators
if you want. Notably, you're not going
to find image generators like Midjourney or Seedream inside of this tool. But
I'm going to go ahead and select the
Firefly Image 5 preview here. And I have
a quick little prompt here. A cinematic
still of a man in a sci-fi film holding
a magical blue stone. Now, what's also
cool is you have the ability to upload a
reference image. You can, of course,
change the aspect ratio. We'll select
widescreen. And they do have this button
right here that does allow you to go in
and select subprompts. Basically, these
are prompt presets that you can select
to kind of push your generations in a
specific direction. And you can also
select random if you wanted to do
something like that. But I'm not going
to select any of those settings. We're
just going to keep this open. And you'll
also see here that it says it's going to use zero credits. Adobe has announced that you get free image generations using any of the image models, not just the Adobe models, until December 1st. And they're also giving you free video generations using Adobe Firefly until December 1st as well, which is pretty cool.
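As an aside, if you'd rather script generations than click through the boards UI, Adobe also offers a Firefly Services API. Here's a minimal sketch of a text-to-image call; note that the endpoint path, payload fields, and output size below are assumptions from memory (and the API is a separate offering with its own credentials), so verify everything against Adobe's current Firefly API documentation:

```python
# Hypothetical sketch of a text-to-image request to Adobe's Firefly
# Services API. Endpoint path and payload fields are from memory and
# may not match current docs -- verify before relying on this.
import requests

ACCESS_TOKEN = "your-ims-access-token"  # obtained via Adobe's OAuth flow
CLIENT_ID = "your-api-client-id"

resp = requests.post(
    "https://firefly-api.adobe.io/v3/images/generate",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "x-api-key": CLIENT_ID,
        "Content-Type": "application/json",
    },
    json={
        "prompt": ("A cinematic still of a man in a sci-fi film "
                   "holding a magical blue stone"),
        "numVariations": 4,                       # iterate, then pick the best
        "size": {"width": 2688, "height": 1512},  # a 16:9 frame
    },
    timeout=120,
)
resp.raise_for_status()
for output in resp.json().get("outputs", []):
    print(output["image"]["url"])  # pre-signed URL for each generation
```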
As always, I'm going to go ahead and click generate a few more times. I like having more images because I just get to iterate faster and pick and choose the best generation for my project. So, we have a few different
images here. We have this first one,
which looks okay. It's not exactly a
blue orb. Number two, it's okay. Number
three, I think is pretty cool. And then
we have number four. And I did not
prompt for him to be wearing a metal
bra, so I'm not entirely sure what's
happening there. But whenever you find
an image that you want, you can go ahead
and select place on canvas and it will
place it on the canvas here. Now I have
the image here that I've selected. And
what's also very cool is you have the
ability to go in and edit and change
these images using settings that you may
be familiar with in other image editing
software like Midjourney or Flux. So I
just want to note that because you do
have the ability to vary your image. You can go in and edit it. And you also do
have the ability when you edit to edit
using tools like Nano Banana or Flux Kontext, which allows you to be a bit
more conversational in the way in which
you change the image. For comparison, here's the image that we generated from Firefly Image 5, and here is the same generation from Midjourney.
You can see Midjourney pushes into a
much more cinematic result. The result
from Firefly does seem a bit more stock.
The lighting does come across as, you
know, stock video, if you will. But I do
think that it's a big improvement over
Adobe's previous image generator. But
for our example, I want to go down here
and select generate video. And you can
see we have a tab that looks very
similar to other video tools on the
market. And again, you have the ability
to select various video models depending
on the specific model that you want. So
you have tools like Marey by Moonvalley, Pika, Luma Ray 3, Runway, and Google Veo 3.1. A lot of the really high-performing
video models are directly here inside of
Adobe. Now, there are a couple of notable exceptions. You're not going to find a lot of the Chinese models like MiniMax or Kling inside of Adobe, but we have Google Veo 3.1, which I believe is the most
intelligent video generator on this
list. So, I'm going to go ahead and
select that one. So, to upload our
image, all you have to do is select the
first frame button, or you can go to
this little dropper, and we'll select
our image from the canvas and select use
as first frame. So, we have first frame
here, and then you can, of course, do
last frame as well if you're trying to
be very specific about how your video
progresses. Because this is Google Veo 3.1,
we're going to make sure we have audio
selected. And it's going to note that this is going to use 400 credits. Go ahead and hit generate. And after about
a minute, it created this video here,
which is really dynamic. It looks really
cool. Maybe I'd prompt out the weird blue flames that pop up here, but I like the idea of this guy kind of holding this orb and presenting it to something. And so I was like, okay, how can we do a two-shot sequence here? The guy presents the orb, maybe to an alien ruler or something like that. And so we
actually were able to use Firefly boards
to sequence this out. And so we
generated a new shot here using Google
Nano Banana. And then we have this shot
of him approaching with the orb, and the alien grabs it. And what's
pretty cool about this video is you can
feel a lot of the weight in the alien
grabbing the orb from the guy's hand.
So, I think it did a good job here. And
I should note that you could totally
expand this. You know, it could become a
huge board with all of your shots. And
what I love is you have the ability to
loop the video so you can see all of the
clips next to each other. So, it's not
technically a video editing timeline,
but it allows you to kind of see how the
sequence of events will unfold, which I
think is just really helpful whenever
you are working on a creative project.
Adobe also announced a few other video
updates that you should know about.
Notably, inside Adobe Premiere Pro,
there's a brand new tool that allows you to quickly create masks around your characters, which basically eliminates the need for rotoscoping on simple projects. And if you work on social videos where you cut out your subject, it's going to be a lot easier to create those cutouts directly inside of Premiere Pro. They also came out with an
update inside of Adobe Lightroom that
allows you to type in a prompt and find
the images that you're looking for. It
also has the ability to pick and choose
the best images from your shoot and, you
know, can just help you save time
whenever you're pulling selects or
working with a client. They also integrated Topaz into their tools, which allows you to upscale images and videos directly inside of Adobe's tools. Adobe
also announced a brand new feature that
allows you to upload a video and it will
automatically generate a soundtrack for
that video that is the specific duration
of your clip. So, let's hop in here and
take a look at it. So, all you have to
do is click the link below this video
and we'll select generate music. So, for
our example, I'm going to bring in an
animation example that I created for a
tutorial earlier this week. It's
basically about a minute long and it
just is, you know, some animals ordering
food in a cafe. And we'll go ahead and
drag and drop that into the video
section here. And you can see
automatically it said we're going to
create a melancholy atmospheric song
with ambient drone style for a dystopian
reflection. It's understanding that this
entire film takes place on a rainy day,
which is uh pretty cool. And of course,
you do have the ability to go in and
edit the style. You can also select different options here depending on, you know, the project that you're working
on. I'm going to change the energy to
medium, the tempo set to medium, and
then it's 68 seconds long, which is how
long the video is. And let's go ahead
and hit generate. Okay, let's listen to
our first track here.
[Music]
Sounds like a haunted house. Let's do number two.
Okay, a little spooky. And let's listen
to number three.
[Music]
All right. You can see it's taking a
very dramatic tone in the generation.
So, as you can see, it will automatically create music for your project. Is that music better than hand-curating music from a stock music library? I don't think so at this point, but it is cool that we're beginning to see automatic soundtracks inside our editing tools.
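If you were scripting a duration-matched soundtrack step yourself, the first thing you'd need is the clip length. A minimal sketch, assuming FFmpeg's ffprobe is installed and using a hypothetical filename:

```python
# Read a clip's duration so a soundtrack can be requested at exactly
# that length. Assumes the ffprobe binary (part of FFmpeg) is on PATH.
import json
import subprocess

def clip_duration_seconds(path: str) -> float:
    """Return the container-reported duration of a media file."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(json.loads(out)["format"]["duration"])

# Hypothetical filename for the cafe-animals example above.
print(clip_duration_seconds("cafe_animals.mp4"))  # e.g. 68.0
```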
They also announced a new tool called Project Moonlight that basically allows you to connect your social media account, and it will create content recommendations, which is pretty cool. Very excited to test out that brand new feature. There's
also a really interesting announcement
called Project Graph where Adobe has
essentially created ComfyUI directly inside of their platform. So if you're familiar with the node trees and workflows of ComfyUI or, more recently, Runway Workflows, you now have the ability to do a very similar thing using Adobe's Project Graph. And if you work on
projects where you have consistent
outputs that you need again and again,
like social media projects or film shots
that have a very specific aesthetic or
character or style, I think that having
some sort of workflow like this could be
really powerful. They also announced a
brand new mobile version of the Premiere
Pro app, which looks a lot like CapCut.
There's a lot of features in there for
allowing you to edit your videos even
faster. And there are a lot of the AI tools that you've come to know and love, like the noise remover feature that you'll find in Adobe Podcast. I
use it all the time, and it's really
helpful if you have a lot of media that
you shot on your phone because a lot of
times, of course, you don't have a good
microphone. So, having the ability to
enhance your audio directly inside of a
mobile application is pretty cool. On
the business side, they also announced a
brand new initiative called Foundry,
which allows large companies and brands
to essentially create custom models that
can run inside of Adobe's platform. The
reason why this is important is because
it connects studios and brands with
Adobe's machine learning researchers to
essentially create the best models for
their workflows. A lot of times brands
create a lot of images with specific
products or in a specific style. So
rather than using a complex ComfyUI workflow or traditional LoRA training,
you can work with Adobe's team to create
a custom model specifically for your
brand. And finally, on the business
side, they also announced GenStudio, which allows brands to train a model not
only on their image data but also data
across their entire company.
Essentially, the value proposition is
you can create different types of
designs and assets from a simple prompt
and it'll do everything from social posts
to website banners and streamline the
entire process. They also announced a
brand new AI assistant inside Adobe
Express. This is really cool because
basically if you have an idea for a
design project that you're wanting to
create, you can type in a prompt and it
will give you recommendations on the
type of design that you could use for
that project. And then you can kind of
go down the creative rabbit hole and
give feedback. You can change the
colors, you can change the assets and
this is entirely from a conversational
experience. So, you don't have to click,
you don't have to learn tools, you're
just having a conversation and changing
the design based on your own creative
taste. And what's also funny is that the announcement that got huge applause at Adobe Max was a new AI tool that will go
in and automatically name your layers
inside of Photoshop. I can't wait till
they come out with this inside of other
tools like Premiere Pro and After
Effects. And you know, it'll just help
organize your projects and make you seem
a bit more organized than you are
whenever you share your projects with
other people. I think one of the most
notable things that I saw from Adobe Max
is a survey they put together with
16,000 content creators and they said
that basically 76% of those people have
used AI tools to expand their business
or brand. I really had no idea that AI
was so popular across the creative
world. Obviously, in this new creative
era, there has been friction, but it
seems like the vast majority of creative
professionals are using AI in their
day-to-day workflow. And from that
survey, they also found that 38% of the
creatives said that cost of AI models is
the biggest barrier keeping them from
using the tools. And that kind of
transitions us to our conclusion about
all of these updates from Adobe. The fact that such a large company is positioning itself as an aggregator, and that it already has so many creative tools that professionals use day in and day out, like Premiere Pro and Photoshop, really says that the future of creativity is going to come from these aggregators. We've talked a lot about it
in our student office hours over on
Curious Refuge along with this channel.
The fact is aggregators are the future
of creativity. The era where you would
have one model on one platform, I think,
is slowly starting to come to a close.
And having a really robust all-in-one
solution that can bring in different
models into a single place, I think is
the path forward. And the fact that
Adobe has such a robust network of
creative tools all in one place already
makes them a leader in the space. But
the big caveat to that is exactly what the people from the survey communicated: cost. So what does it cost to run these
tools inside of Adobe's platform? Well,
if you want to generate a video using a
tool like Luma Ray 3 directly inside of
Firefly, it's going to cost you 500
credits. Now, to put that in
perspective, whenever you pay for their
$10 a month plan, you get 2,000 credits.
So, essentially, it costs you $2.50 to create one video clip using Luma Ray 3. And so if you're working on a short film project, and let's just say it takes you five generations to get the exact shot that you're looking for, which would be pretty impressive because usually it takes more, and your short film has 200 shots, it's going to take about 1,000 clips to get that short film put together. That's going to cost around $2,500. So there's definitely going to be a need to reduce the price.
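As a sanity check on those numbers, here's the arithmetic spelled out (the credit and plan figures are the ones quoted above; the shot and retry counts are the hypothetical ones from the example):

```python
# Cost of generating a short film through Firefly's credit system,
# using the figures quoted above.
CREDITS_PER_CLIP = 500   # one Luma Ray 3 clip inside Firefly
PLAN_PRICE_USD = 10.0    # monthly plan price
PLAN_CREDITS = 2_000     # credits included in that plan

usd_per_clip = CREDITS_PER_CLIP * (PLAN_PRICE_USD / PLAN_CREDITS)

shots = 200              # shots in the hypothetical short film
retries_per_shot = 5     # optimistic generations needed per shot
total_clips = shots * retries_per_shot

print(f"${usd_per_clip:.2f} per clip")   # $2.50 per clip
print(f"{total_clips} clips -> ${total_clips * usd_per_clip:,.2f}")
# 1000 clips -> $2,500.00
```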
The best dollar-for-dollar deal that is currently available on the market is the unlimited Google Veo 3.1 Fast plan, which is $200, but you do get unlimited generations, which I use all the time. Enrollment is
now open for the November session of our
courses here at Curious Refuge. If you
wanted to learn alongside the world's
biggest studios and the most creative
people on the planet, we would love to
have you inside of our program. We had
multiple people from Adobe Max come up
to us and tell us that Curious Refuge
has changed their life and allowed them
to land world-class jobs. We would love
to have you inside of the program.
And we also launched our brand new AI
screenwriting course which is available
over on the website. Be sure to check
out the courses and let us know if you
have any questions. The team at MiniMax released version 2.3, which they say has
improved physics and is just all around
a better video model. It generates the
video clips in 1080p. Let me show you
how to use it. So, I'm here on the Hailuo AI website, and you can see at the very
top here, it's very easy to use. There's
a prompt box. So, we can type in a quick
little prompt. We'll say: handheld shot, a troll walks around a field looking for something. And of course, let's go ahead
and upload an image to create the start
frame for the shot. So, I have this shot
here. I generated this using Midjourney.
And we'll go ahead and bring that into
the platform. And when you're ready, go
ahead and click create video. And after
a few minutes, it generated this video
clip here, which honestly is one of the
more impressive AI video clips I've seen
in a while. There's a ton of weight in the character's steps. You can see the details of the hair all around. It
looks really, really good. Now, there
are a few things that I do want to note.
For example, if you really zoom into the
character and look at the texturing in
their skin, there is a bit of distortion
going on. So, it's not crystal clear. I
do think you'd probably want to use a third-party tool to upscale or kind of fix
some of those compression artifacts that
are coming through. But, I think it did
an amazing job. Now, of course, let's
compare that against Kling and Veo to kind of see what the difference is. So, here's the result from Kling. Again, Kling did a really good job. It's very comparable. I'd say the camera lens qualities are a little more realistic from Kling, but altogether it's not too far off. Both are really good. And then let's take a look at the generation from Veo 3.1. Okay, so as you
can see inside of Veo, the physics and the performance from the character are the best of the three tools. The problem is that Google Veo 3.1 at this point natively creates the video in 720p. So you definitely would have to upscale it in that case, and of course you're going to lose out on quality. So it's definitely a balance between using MiniMax, using Kling, or using Google Veo.
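To quantify how much work that hands to the upscaler, here's the arithmetic for stretching a native-720p clip to common delivery resolutions:

```python
# How far an upscaler has to stretch a native-720p clip to reach
# common delivery resolutions (all 16:9, so one uniform scale factor).
NATIVE_W, NATIVE_H = 1280, 720
TARGETS = {"1080p": (1920, 1080), "4K UHD": (3840, 2160)}

for name, (w, h) in TARGETS.items():
    linear = w / NATIVE_W                     # per-axis scale factor
    pixels = (w * h) / (NATIVE_W * NATIVE_H)  # total pixels to invent
    print(f"{name}: {linear:.1f}x linear, {pixels:.2f}x the pixels")
    # 1080p: 1.5x linear, 2.25x the pixels
    # 4K UHD: 3.0x linear, 9.00x the pixels
```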
I think that once Veo allows you to create videos natively in 1080p and beyond, it will ultimately win the video war, because the physics and the performance are simply better, and of course the audio quality is much better inside of Veo as well. So I wanted to do
a couple other tests here. So let's
start out with a VFX shot here. So, I
have this castle and this wall is coming
down here, and this is the generation from MiniMax, which I should note does not have sound, because MiniMax does not allow you to create videos with sound
effects. Yeah, it looks pretty good. I wouldn't say it's photorealistic, but it's pretty darn close. I think it's almost on par with the professional VFX that you would expect to see in a mid-tier-budget Hollywood production. So, not too bad considering that was the very first generation that we got from MiniMax. Here's the same
result from Kling. Again, it's pretty good. I think it's a little too hyper-contrasted, not quite as realistic. I like how, inside of the MiniMax generation, the smoke kind of shared in the color grade from the video. So, I think the compositing inside of Kling was not the best for this generation. And then let's
see the same result from Google Veo. Okay, Google Veo went really over the top with the dirt explosion there. I
really like the physics a lot, but I
definitely would want to prompt that a
few more times, I think, to get the
generation that we're looking for. And
then for our final example, I want to
show you why ultimately I think Veo is the best tool for day-to-day AI filmmaking work. Basically, I have this
soldier here and I want him to be kind
of, you know, upset and say, "What is
your name?" And uh this is the result
that we got from Miniax. He's not saying
what is your name. You couldn't even lip
dub that because it would not match. So
that's not what we're looking for. Uh
here's the same result from cling.
Okay. The performance looks real, but obviously he's saying nonsense words, so that's not what we want. And then finally, here's the generation from Veo.
"What is your name? Is that..." Okay, he had a bit of a stutter there, but you can see I think the character performance is the best from Veo. And the fact that he actually says
the words that we're looking for is
pretty great. The problem that you may run into in Veo is that it will automatically put sound effects and music behind your character. Sometimes, no matter how good your prompting is, you can't remove that. And so you may want to use a tool that separates background noise and music from the voice, like the voice isolator inside ElevenLabs, if you're working on a film project.
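If you want to script that cleanup step, ElevenLabs exposes its voice isolation as an API. Here's a minimal sketch; the endpoint path and field name are from memory, so double-check them against the current ElevenLabs API docs, and the filenames are hypothetical:

```python
# Minimal sketch: strip music and effects from a dialogue clip with
# ElevenLabs' audio isolation endpoint. Endpoint and field names are
# from memory -- verify against the current ElevenLabs API docs.
import requests

API_KEY = "your-elevenlabs-api-key"  # placeholder

def isolate_voice(in_path: str, out_path: str) -> None:
    with open(in_path, "rb") as f:
        resp = requests.post(
            "https://api.elevenlabs.io/v1/audio-isolation",
            headers={"xi-api-key": API_KEY},
            files={"audio": f},
            timeout=300,
        )
    resp.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(resp.content)  # the isolated dialogue track

isolate_voice("veo_clip_audio.mp3", "dialogue_only.mp3")
```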
So, with all of this being said, is MiniMax 2.3 the very best AI video tool on the market? I don't think that's the case, but it could be very helpful in select workflows. I do think I would primarily use Google Veo, especially Google Veo Fast, for a lot of my film projects. But if I'm not getting a specific shot the way I want, I would of course hop over into other tools like Kling or MiniMax just
to see what we get and compare the
results. The team at Magnific released
Precision 2, which is their AI image upscaler. It's a big improvement over the previous version of Magnific, but I'm curious how it stacks up against other AI image upscalers. So, let's hop
into the tool to see how it works and
then compare it against other tools. So,
to use the Precision upscaler, all you have to do is go to the Magnific upscaler. Make sure you have Precision selected and that you are working in version two.
There are a few different variants of version two, and you can totally pick and
choose the right one for your specific
project. I'm going to go ahead and
select Sublime just to get the maximum
quality possible. Now, for our input
image, I have this image that I
generated inside of Midjourney. I'll go
ahead and drag and drop that into the
input image section. And you can do a
scale factor that's way bigger than your
base image. I'm just going to stick with
times 4 because that image would end up
being very large already. I'm going to
turn off sharpening because I don't like
to increase the sharpness of my images.
But for smart grain, we'll keep it at, let's say, about 4%. And go ahead and
click upscale. So after about 30
seconds, we have an image here. So
here's the before. And then here is the
after. You can see we have a lot more
detail on our character. So before was
very soft. It's very pixelated. And then
after, there's a ton of detail in her
skin. You can see just little bumps,
little cracks in the lips. There's a bit
more of a natural sharpening around the
glasses. The weave of her hat looks much
more realistic. And so it did a good job
at making this overall scene look
really, really good. Her hair looks
very, very good. The way the bokeh kind
of slowly fades the hair into the
background looks realistic. Her
shoulder, you can actually see the film
grain and the texture from the shirt.
So, it really did an amazing job. Now,
of course, we have to compare that
against other AI image upscalers on the
market. Last week, we talked about a new
tool called Crystal Upscaler, and this
was the result from Crystal, which again
looks pretty good. There's a lot of
detail in the skin. You can see, you know, the wrinkles in the chin.
There's a lot of realism in the hat
here. Her glasses have like dust on the
lenses, which looks pretty cool. And
again, it did a really, really good job.
We used Topaz Gigapixel to upscale the image. And you can see her lips look
very plasticky. The skin looks plasticky
as well. So, Gigapixel did not do an
amazing job here. And then finally, we upscaled the image directly inside of Midjourney. And you can see the skin texturing looks super, super fake. So,
with all of that being said, I think
that the new precision upscaler inside
of Magnific is the best image upscaler
on the market. It seems like it's doing a photorealistic job. And if you need
an image to be really really big,
especially if you're working on a
project that's going to be printed out
or put on a large canvas like a
billboard, then I think Magnific is the
best tool for you. From a cost perspective, it costs about 17 cents a generation to upscale your images if you're using that 4x model. That's compared to 40 cents if you're using the Crystal Upscaler that we talked about last week. So, I do think Magnific is just generally your best bet. And you can compare that against Gigapixel, which is $17 a month, so that tool's cheaper, and you can generate up to 10 images at a time. Altogether, I'd say Magnific wins for image upscaling.
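To make that pricing comparison concrete, here's a small sketch; the per-image prices are the ones quoted above, and the monthly image volumes for the Gigapixel subscription are illustrative assumptions:

```python
# Per-image upscaling cost using the prices quoted above. Gigapixel is
# a flat $17/month subscription, so its effective per-image cost
# depends on volume; the volumes below are illustrative assumptions.
MAGNIFIC_PER_IMAGE = 0.17   # Precision 2, 4x model
CRYSTAL_PER_IMAGE = 0.40    # Crystal Upscaler
GIGAPIXEL_MONTHLY = 17.00   # Topaz Gigapixel subscription

for images_per_month in (50, 100, 500):
    gigapixel = GIGAPIXEL_MONTHLY / images_per_month
    print(f"{images_per_month:>3} images/month: "
          f"Magnific ${MAGNIFIC_PER_IMAGE:.2f}, "
          f"Crystal ${CRYSTAL_PER_IMAGE:.2f}, "
          f"Gigapixel ${gigapixel:.2f} per image")
```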
There are a ton of AI film events popping up around the world. We have a Curious Refuge meetup in Denver on November 1st. We host
digital office hours every single Monday
where you can get your AI film, video,
and creative questions answered by our
team of experts. That's on Mondays at
9:00 a.m. Pacific time. And we also have
a Curious Refuge meetup on November 5th
in Palm Beach. Be sure to check out our
film events page over on our website to
see not only Curious Refuge events, but
also the greater AI filmmaking
community. And that brings us to our AI
films of the week. We have a ton of
films that we want to shout out in this
week's episode, but I want to give a
huge hat tip to the winners of our
Halloween competition that we put
together in partnership with Epidemic
Sound and Leonardo. Our judges had a
really hard time picking the winner of
the competition, but at the end of the
day, story was king and we were really
impressed with all of the submissions.
So for our films of the week, I want to
shout out three projects that were
created by our studio Promise in
partnership with Adobe for Adobe Max.
The first film is a project called My
Friend Zeff by Dave Clark. This was a
hybrid project that took traditional filmmaking and AI filmmaking and fused them together. Some of the shots inside
of this project look incredibly high
budget and it's one of the best quality
examples of a hybrid AI workflow that
I've seen to date. The next film that I
want to shout out is called Kaira by
Meta Puppet. Meta did an incredible job of bringing a building back from vintage photographs that he took, and he crafted this really awesome, compelling human story that's also really funny and really showcases his directing and editorial skills. So, fantastic job on
that project. And then finally we have
Nagori by Guom here at Promise. It has a really beautiful 2D animation aesthetic. The story is very compelling, and the voiceover, even though it was AI-generated, is really, really nice. So, I highly recommend checking it out. We're
also going to do an exclusive
behind-the-scenes breakdown with these
filmmakers to share with you a specific
workflow that you can follow to create
similar films of your own in the very
near future. Thank you so much for
watching this week's episode of AI Film
News. As always, please like and
subscribe here on the platform to get
the latest AI news and tutorials
directly on YouTube. And you can
subscribe over on our website to get AI Film News sent to your email inbox every
single week. I hope you have an amazing
week. Best of luck on your projects.