Introduction to ComfyUI and Stable Diffusion
By Trixter Film
Full Transcript
Okay, concentration, good. So, good morning everyone. I'm very happy to see you, and today we're going to talk about Stable Diffusion as an idea, basically: how we implement it and how we operate on Stable Diffusion models in one of the UIs that exist out there.

First, before we jump in: what do I want you to get out of this workshop? After this workshop I want people to have a basic understanding of how machine learning models work, and specifically how Stable Diffusion models work, so that you can then apply that knowledge in any UI. I chose ComfyUI, which is really just an envelope, because in it you can very clearly see how the model...

"Julius, sorry, just a very quick question: do you want to record, or..." "I'm already recording you." "Okay, sorry. Thank you."
Yeah, thanks. So, I chose ComfyUI because after looking at a few of the UIs, I saw that they are a little opaque about what is happening inside, and ComfyUI is not. That is why we will be able to really follow the process that I explain at the beginning. It will not be like some white paper that you read and think, "oh my God, I'm so stupid, I will never learn this, I don't know what to do." You will actually be able to see, read, understand, and implement. We are not going to touch anything complicated or go deep inside the process; we will just sit inside the car and drive.

How long have I been doing this? Let's say two years of dealing with machine learning in some form. With ComfyUI specifically I only sat down one month ago, because I wanted to actually put my hands to work. So I don't have a very deep understanding, and again, what I would really like is for you to simply not be afraid to go in and start moving stuff around.

Okay, that was hopefully not too long an introduction. Let's go to the very first page.
How does Stable Diffusion work, explained very simply? Look at the GIF on the right: a drop of ink going into water. When it goes into the water, it completely diffuses. We are very used to seeing that process; we are completely not used to seeing the process run backwards. Meaning: what if we could calculate how the diffused drop gets back from being fully diffused to the way it was at the very beginning? That is exactly how Stable Diffusion models work: how they save information, and how they bring information back.

Right now you might think, "okay, Alexi, what?" So let's look at it in a bit more detail. Of course, calculating something diffusing inside water is really, really complicated; you need very big computational power. So they simplified things to a certain extent.
Instead of water, they use noise. Let's see what happens; I will actually go in reverse, starting from here, from the fourth image. Let's say we want to save this cat picture inside the model. What happens in Stable Diffusion is that we start to apply layers of noise, one layer at a time; let's call them steps. We apply noise, then more noise, then more noise, and in the end, exactly as with the cup of water, it looks nothing like a cat.

But here is the very important thing: the machine learning model remembers how the noise was applied, which seed of noise was applied to this image, and you can reverse-engineer that. Meaning that later, if you take the same image, which to us humans looks like nothing but noise, and you start to subtract the same pattern that was applied, in the end you come back to the cat. You will have the picture of the cat, even though the noise looks nothing like the picture of the cat and still makes no sense to us.

So: "okay, Alexi, congratulations, you can store an image in a very weird way, and we don't know why." How it is done is honestly rocket science, and it is actually very complicated, but we can come back and have our cat back. Great.
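To make that concrete, here is a tiny numpy sketch. This is illustrative only, not the real diffusion math: the point is simply that if you know the seed, you know the exact noise pattern, so you can subtract it again and recover the original perfectly.

```python
import numpy as np

# stand-in for the cat picture: an 8x8 array of pixel values
image = np.random.default_rng(0).random((8, 8))

seed = 42
noise = np.random.default_rng(seed).normal(size=image.shape)

noisy = image + noise  # "applying the noise": this looks like garbage to us

# because we know the seed, we can regenerate the identical noise pattern
same_noise = np.random.default_rng(seed).normal(size=image.shape)
recovered = noisy - same_noise

print(np.allclose(recovered, image))  # True: same seed, same noise, perfect recovery
```

Same seed, same pattern, and the cat comes back; without the seed, the noisy array is just noise.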
But now let's add complexity. We are not storing just one cat; we are storing hundreds, thousands, maybe millions of pictures of cats. Still, let's keep it simple and store just a picture of a cat and a picture of a dog in the same way. So there is another picture, a dog, and we diffuse them together. Now there is some noise that has both a cat and a dog inside it, and when we come back we will not get a cat or a dog; we will get some mix of the two, which will look very weird to us. We are like: "well, what is that? We don't need it, we don't understand this picture." And here another very important part comes in.
It is called CLIP, or let's say conditioning. Thanks to it, the model is able to communicate with us, or rather, we are able to communicate with the model. When the model was created, during training, it received a lot of pictures; in our case right now, just a picture of a cat and a picture of a dog, very simple. But additionally it also received text, and that text was tokenized, meaning the text became tokens: a letter can be a token, a word can be a token, a few words together can be a token. Tokens are how we talk to the machine. A little confusing, but we will get there very fast. So when the picture of the cat was shown, it came with the caption "a cat", or "a cat looking into the camera, with stripes, green eyes" and so on, with explanations. So while the image is saved in that weird way we don't understand, the machine also saved these tokens, this information, which is just text.
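As a rough illustration of tokenization, here is a toy sketch. Real models such as CLIP use learned subword vocabularies, not a whitespace split like this, but the principle is the same: text becomes a list of integer token ids the model can work with.

```python
# toy vocabulary: each known word maps to an integer id
vocab = {"a": 0, "cat": 1, "dog": 2, "looking": 3, "into": 4, "the": 5, "camera": 6}

def tokenize(text: str) -> list[int]:
    # split on whitespace and map each word to its id
    return [vocab[word] for word in text.lower().split()]

print(tokenize("a cat looking into the camera"))  # [0, 1, 3, 4, 5, 6]
```

The model never sees the words themselves, only these ids, which is why the way a caption is worded affects what gets stored alongside the image.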
And whenever we want to bring our data back from the noise to a picture, we can tell the machine what we actually want to extract. For example, if we just write "a cat" in our prompt, it will go to the noise, and as it comes back it will start pulling toward the direction of a cat. It will not react to details that were saved as details of a dog; it will pull the result toward the cat. Put very simply, if we write "a cat looking into the camera", it will check all the cat pictures it has and try to bring out the details of a cat that is looking into the camera. It does this over a certain number of steps, until the machine says, "well, you know what, I converged", or maybe "I didn't converge". These are the steps we are talking about.

So, a few important terms I have mentioned: steps, which is how many times we applied the noise and then how many times we reduce it; and prompt and CLIP, which is how we pull the result in a certain direction.
Now imagine you have millions of pictures, which real models do. Every model you see announced, like on LinkedIn: "we have 8 billion tokens", "we have 15 billion tokens", "ChatGPT has 80 billion". Strictly speaking, tokens and parameters are different things, but roughly: the more of them a model has, the easier it is for you to talk to it, and the better it understands what you are writing in your prompt. So real models are not two pictures; they are millions or billions of pictures, plus billions of tokens explaining what is happening inside.

So far I believe we are okay. Raise your hand if you feel like you are getting lost. So far: we add noise, we save it in a certain way, and then we bring it back.
The next thing that happens: once all of this is turned into noise, that is how it is stored in the model, and when we want to bring it back, we always apply some additional noise on top, to give us basic variation. Remember, with the prompt we pull in a certain direction, but if our noise were always exactly the same as what was saved, the result would be what we want with almost no variation in the details. That is why, at the end, we apply some random noise on top, so all the stored information gets slightly mixed.

And that is where the seed comes into play: you can alter the seed of the noise you apply on top, and then everything comes back slightly different every time. This is the random nature of generative art, and it is why a lot of what we see is hard to pin down: you change the seed and the picture can change completely, or not at all, or just a little.

So we have already learned a few things: the steps of noise that we apply, and that there are different seeds. The next thing I would like to touch on is latent space.
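A quick sketch of why the seed behaves the way it does. Again illustrative only: a fixed seed reproduces the identical noise pattern (and so the identical picture), while any new seed gives a new pattern.

```python
import numpy as np

def noise_for(seed, shape=(4, 4)):
    # deterministic noise: the same seed always yields the same pattern
    return np.random.default_rng(seed).normal(size=shape)

a = noise_for(123)
b = noise_for(123)  # fixed seed: reproducible
c = noise_for(124)  # different seed: different pattern

print(np.array_equal(a, b))  # True
print(np.array_equal(a, c))  # False
```

This is why "fixed" seed mode in a UI gives you the same image every run, and "randomize" gives you a new variation each time.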
Latent space is not critically important, but it is good to know. Images in most Stable Diffusion models are not saved as a red-green-blue image; they are saved in latent space, meaning we cannot visually see the picture while it is there. This cat will look different in latent space. Why? Compression. If I'm not mistaken, the latent representation is forty-eight times smaller than the normal picture we see, which is why it is far more efficient for the machine to go into that space, do all the calculations, and then come back.

The process of going into this space and coming back out is always very well defined inside the model, and it is handled by the VAE, the variational autoencoder. Don't try to remember the full name; just remember VAE. Each model has one, and it is responsible for taking the input picture and, by a fixed rule, converting it into latent space (encoding it) and then decoding it back. So now, additionally, we have learned about the VAE. We are almost done.
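Where does that factor of roughly forty-eight come from? Assuming the standard Stable Diffusion VAE layout (an H x W x 3 RGB image mapped to an H/8 x W/8 x 4 latent), the arithmetic is:

```python
# element counts for a 512x512 RGB image vs. its SD latent representation
H, W = 512, 512
rgb_elems = H * W * 3                   # 786,432 values in RGB
latent_elems = (H // 8) * (W // 8) * 4  # 16,384 values in latent space

print(rgb_elems // latent_elems)  # 48
```

Forty-eight times fewer values to process at every step is the whole reason the sampler works in latent space.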
I know that right now it feels like "Alexi, you are throwing a lot of terminology at us", although it is actually only something like five terms. Don't worry, we will get there. I will stop with the terminology here, because it is better if we now go and actually look at all of this inside the application. Then we will repeat it; I will repeat myself, add additional terms and additional conditions that will be important. Right now, though, I think that is enough dry information.

One important thing about the slides I am presenting: there are a lot of links in them. The first one, on how Stable Diffusion works, is a really amazing page; I really recommend reading it. There are also more technical pages about the VAE, latent space, the different samplers (I will get to samplers as well), conditions, and tokens.
So, which application will we use? It will be ComfyUI. Also very popular are Automatic1111, InvokeAI, Easy Diffusion, and so on. You can check out each of them: click the link, download it, install it at home if you want, and play with it. I use a site called Civitai a lot for resources; there is a vibrant community there with a lot of additional models (I will talk about models in a bit) and a lot of examples that you can just drag and drop to see how they were built.
For the Trixter people: to start ComfyUI, you just need to run it from this directory. Regarding the folder structure, a small technical note: when you install ComfyUI, it will basically create this location, and then you will have to install a lot of custom nodes. You will discover this very fast: within the first minutes of trying to implement workflows, you will need to install custom stuff. At home it is very easy, because you can just go to the Manager and install from there. I will skip the installation walkthrough for now; it is still a bit of a pain. It is easy, but sometimes things don't work, and you will have to deal with the directories quite a lot when you do it yourself.
A little about the UI, and a kind of warning. When you download workflows from the internet, you will see something like this, and this is not even a bad one. Don't be intimidated, especially after this workshop. I am talking to compositors right now: you have experience with node-based compositing, node-based work. Most of these people don't, and they build something they think looks amazing that is completely unreadable. At the end of the day, all of these workflows are junior-level by compositing standards; there is far less actually happening, it just looks complicated because people don't know how to build node graphs. So when you see something like this, calm down, go inside, spread it out from left to right, and you will see how easy it is to operate. That is the very first thing. And here, for whoever wants to follow this guide later, are a few shortcuts for operating the node graph.
Now let's jump to the first thing we want to do: a text-to-image workflow. It is a very basic workflow, and we have already basically set the scene for it with all the terms: we know we have a model, we know we need to give this model some text, and then we want to get an image. This is the first workflow we will dive into, and this is approximately the first image we will get out of ComfyUI.

We will have our checkpoint. What is a checkpoint? (I hope you can actually see what I am showing; I will zoom in later, but first I want to give you an overview of everything I told you previously, now inside the node graph.) We have to load the checkpoint, and the checkpoint is our model. I really only glossed over Stable Diffusion models before, so let's talk about them in a bit more depth.
There are two model families for Stable Diffusion right now (I am ignoring Stable Cascade for the moment). The first was created a few years back: Stable Diffusion 1.5. This model was trained on small images, 512 by 512. Its file size is still quite big (I don't remember the number), but we will consider it the first one, the small model. The second is SDXL, "extra large", which has existed for maybe six months. It is a newer model; it has the same architecture but it is bigger, and it was trained on 1024 by 1024 images. It is bigger not just in image size but also in the number of tokens; think ChatGPT 3 versus ChatGPT 4, that is roughly the difference. With an SDXL model it is very easy to create something that looks very cool, very fast. So this is our model.
Whenever you load a model, you load a checkpoint, and here I am loading an SDXL model. Next come our prompts. We will make one positive condition, where we say what we want, and one negative condition, what we don't want, which is also very important. We don't want an ugly picture; it really is like kindergarten: "we don't want this foggy, we don't want this ugly", and so on.

Then we go into the heart of the computation, which is called the KSampler. I will explain the KSampler really in depth in a bit. Then we come out of it, and here we can see a VAE Decode node, meaning we leave our latent space. By the way, I missed something about latent space: the KSampler works in latent space, and since we are not giving it any picture, we need at least to tell it the size of the image we are working with. That is what the Empty Latent Image node does: 1024 by 1024, going into the sampler. This is our Read node, our black Constant, very simply put. We go into the KSampler, we come out of the KSampler with our latent after the computation, and then the VAE converts latent space back to red-green-blue space. Now let's look at the nodes one by one, to make sure we are not missing anything. This is how the checkpoint looks.
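As an aside, the "canvas" that the Empty Latent Image node provides can be pictured as a zero-filled tensor at one-eighth resolution. A sketch, assuming the usual (batch, channels, height/8, width/8) shape convention:

```python
import numpy as np

# what the Empty Latent Image node effectively produces: a zero tensor
# in latent space, the "black Constant" the sampler paints on
width, height = 1024, 1024
empty_latent = np.zeros((1, 4, height // 8, width // 8))

print(empty_latent.shape)  # (1, 4, 128, 128)
```

So "1024 by 1024" in the node really means a 128 by 128 four-channel latent under the hood.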
The checkpoint spits out a model with all the data. It spits out CLIP, which is the area where we will create our conditions: there will be prompts, there will be ControlNets, there will be IPAdapter, whatever different conditions we add. That is where we operate on them; it is how we pull the model in a certain direction. And it also gives us the VAE, which tells us how we go into latent space and back out. So that was our first node, the checkpoint.
The second one I want to talk about is the KSampler, and I believe this is the most important node in the computation, because that is where the magic happens, and where I very much hope we will start to see familiar things. First of all, what does it expect? It expects the model data: what are we working with? This model, great, thank you very much. It expects the direction we want to pull toward, the positive condition, and the direction we don't want to pull toward, the negative condition. And it expects the canvas, the "Read" that enters the sampler.

Now the very important parts. Remember the noise I mentioned, the noise added before we denoise? This is the seed of that noise, and as you can see, there is a huge range of seeds you can apply, in a random way or a fixed way; we will see what we do with that later.

Next are the steps. What do steps tell us? Steps are how many times we want to denoise our noise until we say, "well, we are happy with this result". Basically, your goal as an artist, or as an operator, is to give it the number of steps you deem a good amount to get a nice picture. It is not the case that if you give 100 steps, the picture just keeps getting better and better. Sometimes five steps are enough; usually somewhere between 12 and 30, or up to 50, is a good amount, and you will learn this while working. You can easily overshoot: if you keep adding steps, the model will try to extract more and more detail, it will "pull something" for you, and you can overdo it. It is heavy and you don't get a good result. Steps are extremely important. Now, what is CFG?
CFG is... give me a second, where did I write it, because I keep forgetting.

Question from the audience: "Can you change these conditions on the go, or do you need to set them at the start and that's it?" You absolutely can change the conditions on the go. You can say "I want this, I want that", and then execute again. "Okay, thank you."

So, CFG. Guys, I forgot what it stands for. "Control for noise, I think?" No, no, I want the real one; you will find it anyway, I just don't remember it. "Classifier-free guidance." Thank you very much: classifier-free guidance, extremely easy to remember. Now forget the name.
CFG is extremely important, and let me try to explain why. CFG is how much you want to bend the model to your rule. Imagine your model was trained on a certain amount of material; it knows a certain set of pictures. If you put CFG at 10, which is crazily high, the model will try to produce exactly what you mention in the positive prompt and to avoid exactly what you mention in the negative prompt. And if you push it further and further, at a certain moment the model will not be able to produce it for you. For example, you say: "I want a person wearing glasses, standing on one leg, one hand here, this here, and I really want it exactly like that." The model will try to bring it for you, but at some point it breaks: "I will just invent something for you, because I don't have it in my data."

This is something I was discussing with a friend yesterday, and we arrived at a conclusion: it is like a director talking to an actor. "I want you to be more in love." The actor gives you "oh, I love you so much". "No, no, give me more." And after a while the actor breaks: "dude, I cannot be more in love; that's my maximum." So you don't want to push your control too far. On the other side, if you go to very low values, you give freedom to your actor: "dude, give me some love", and every actor will give you love differently, and that may get a little crazy, maybe already not PG-13. So you don't want to give your model too much freedom either, because it will give you an unwanted result.

Usual CFG values are between 3 and 8. You will very quickly find where you want to be; with different models and different LoRAs it can even be 1.5. But again, that is less important at this stage; you will get there. So that was CFG, an extremely important slider.
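The mechanism behind that "bending" can be written in one line. At every step the model predicts the noise twice, once with your prompt (conditioned) and once without (unconditioned), and CFG scales how far the result is pushed from the unconditioned prediction toward the conditioned one. A minimal numpy sketch with made-up numbers:

```python
import numpy as np

def cfg_guide(uncond, cond, cfg):
    # classifier-free guidance: move away from the unconditional
    # prediction, toward the prompted one, by a factor of cfg
    return uncond + cfg * (cond - uncond)

uncond = np.array([0.0, 0.0])  # prediction with no prompt
cond = np.array([1.0, -1.0])   # prediction with the positive prompt

print(cfg_guide(uncond, cond, 1.0))  # cfg 1: just the conditioned prediction
print(cfg_guide(uncond, cond, 8.0))  # cfg 8: pushed much harder toward the prompt
```

At cfg = 1 the prompt is followed gently; at high values the push is so strong that the model can be shoved outside what its training data supports, which is the "actor breaking" moment.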
Next we have the sampler name and the scheduler; sorry, I will consider them together, because basically this is how your steps are applied. Imagine you want to denoise your material: you could remove 5% of the noise, then 5% less, then 5% less, and so on. To arrive at convergence, at the best picture, going 5-5-5-5 you might need 50 steps. Or you could say: I will go halfway first, 50%, then 20%, then 10%, then 5%. The difference in how your steps are distributed is the difference between the samplers.

This really is rocket science, and there are many of them, but it is not critically important at the very beginning. You will be perfectly fine with "euler" and "normal" as a beginner. When you start to play more, maybe you will move to this one or that one; there is a whole list of them. I do have my preferences by now, but by no means do I feel I really know what I am doing there; you can read about it and become a bit more educated. Note that different samplers need different amounts of steps to get a result, and some samplers will never converge at all. For one sampler, 15 steps will be the sweet spot, while 30 steps will be complete madness. So it gets a little complicated later, but for now: easy peasy.

The last one is denoise. We will leave it at 1 and I will not talk about it too much right now, but I will come back to it soon. Denoise is how much noise with this seed we apply to the image, or canvas, coming into our sampler.
So, are we ready? No, let's talk about the prompts themselves. As I already mentioned, we give a positive prompt and a negative prompt. There is also a guide for prompting linked here; after you read it, you will be "the prompt artist". I am joking a little, but frankly, at our level right now it is enough to just write a few words and you will get a very good result. I do think prompts become extremely important at the next level, though, because prompting is the way you talk to the machine, and not everyone can talk in a way that is very efficient. There is a science to how words and tokens are saved inside the model, and if you know which tokens to pull out by wording things properly, you will get your result much more easily. Again, at our level we will just type something, get something, and that will be good enough; if you want to dig into it, there is a huge amount of information out there.

Empty Latent Image is very easy: we are giving it a canvas, and this is the size of our canvas. VAE Decode: we talked about the decoder at the very beginning; it is how we get out of the latent, using the VAE's knowledge, back to an image. And Save Image: basically, present our image. Before we actually go on, a quick word about this panel:
it is the ComfyUI panel where you execute your prompt. You have the queue there, a history; you can save your workflows, load them, refresh, and so on. It will not be very important right now. Now let's build the workflow, starting from the very beginning.
The first node: Load Checkpoint. How do I load it? Sorry for the technicality, but for me it is important how you actually work: a double click brings up this menu. (Yesterday, working in Nuke, I was double clicking all the time, because I have become very familiar with this.) Here are the checkpoints, here are the models. By the way, about checkpoints: I started to explain the models, and I mentioned there are only two base types, SD 1.5 and SDXL. But it is very important to understand that those base models were released by Stability AI, and then the community picked them up and continued training them. That is why, if you go to Hugging Face or Civitai, you will see hundreds of models. You always, always need to be aware of what the base model for your model is, SDXL or SD 1.5; it is written every time. I will grab one of the SDXL models that we downloaded in order to learn ComfyUI: DreamShaper.
Why do people keep training these models further? You have the base model, but say you want to create a model that works better on environments. So you collect a huge amount of photos, each with text explaining what the photo is, you put them in, and you continue training the model. The model starts to learn more and more about environments, so it will produce better environment pictures. It will not produce better pictures of humans, but if you want a very nice mountain, it will do it much, much nicer. This DreamShaper was fed a lot of sci-fi material and so on, and that is why I am using this model.
Then we create our sampler, the KSampler, where all the computation will happen. This part is quite easy: the model output connects to the model input, fine. You cannot connect CLIP directly to the negative input, because CLIP is just the raw material on top of which you put your prompts. I don't want to type a new one, so I have saved here a negative prompt and a positive prompt, taken from one of the random pictures I found on the internet, on Civitai, where people just post the prompt and all the rest of the values. I was thinking that instead of typing I would just reuse that prompt. So I create a CLIP Text Encode (Prompt) node, that is how it looks, and I paste the text in; you get the idea. One here for positive and one for negative.

ComfyUI is not very comfortable if you don't know how to deal with node graphs; it gets cluttered very easily. I will not use any plugins to unclutter it right now, so I will try to be very organized. We come in with the CLIP conditioning, positive and negative.
So far so good. What are we missing? We are missing the latent image. If I try to execute now, pressing Ctrl+Enter, it gives me an error: it is missing the latent image. So: Empty Latent Image, and I connect it here. Then, what was it, 1024 by 1024. Stay with me, we are getting there.

Now we are almost ready, but we can't see a latent, so we should go from latent space back to a normal picture, and that is why we have VAE Decode. As you can see, you can just drag from the output and it will offer you options, or you can double click and type "VAE Decode". This takes our samples, and it expects a VAE, because it doesn't know which model is going to give it one. And that's it, that is how it is going to look. Then the image: Save Image. Now we are ready to go. Let's press Queue Prompt. I am using an RTX 3090, which is why it is pretty fast and pretty nice. And this is the picture we are getting, very much on the fly, immediately.
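The graph we just wired can be summarized informally as plain Python, node by node. This mirrors the spirit of ComfyUI's workflow format but is not its exact schema; the node class names are the standard built-ins, while the wiring notation ("checkpoint.CLIP" and so on) is just my shorthand for showing the data flow:

```python
# hypothetical sketch of the text-to-image graph: each entry is a node,
# string values name which upstream node/output feeds each input
workflow = {
    "checkpoint": {"class": "CheckpointLoaderSimple",
                   "outputs": ["MODEL", "CLIP", "VAE"]},
    "positive":   {"class": "CLIPTextEncode", "clip": "checkpoint.CLIP",
                   "text": "a cat looking into the camera"},
    "negative":   {"class": "CLIPTextEncode", "clip": "checkpoint.CLIP",
                   "text": "ugly, blurry"},
    "latent":     {"class": "EmptyLatentImage", "width": 1024, "height": 1024},
    "sampler":    {"class": "KSampler", "model": "checkpoint.MODEL",
                   "positive": "positive", "negative": "negative",
                   "latent_image": "latent", "seed": 42, "steps": 20,
                   "cfg": 7.0, "sampler_name": "euler",
                   "scheduler": "normal", "denoise": 1.0},
    "decode":     {"class": "VAEDecode", "samples": "sampler",
                   "vae": "checkpoint.VAE"},
    "save":       {"class": "SaveImage", "images": "decode"},
}

# both conditionings share the checkpoint's CLIP output
print(workflow["positive"]["clip"] == workflow["negative"]["clip"])  # True
```

Everything fans out from the checkpoint (MODEL, CLIP, VAE), the KSampler sits in the middle, and the VAE Decode closes the loop back to an image.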
I would like to come back and work more on the KSampler, but for now we are done with text-to-image. Your controls basically go here or here; I will operate on all of that in a moment when we start talking about image-to-image, because that will be a little more fun. But any questions about text-to-image generation from your side? Everything clear? Good.

"Sorry, just one question: is there a specific way to start, in terms of prompting? Is it better to find a base prompt first and then start touching the sampling and everything, and then the model, or...?"

About prompting: as I mentioned, prompting is actually quite a science, so I would recommend going to the link I left in the presentation... somewhere, somewhere... yes, there is a guide there that explains how to prompt. I don't want to go into it right now; it is huge. It really is huge, so let's not go there. Yes, it is very important, but no, we will not go there right now.

Okay, we are done: we produced a picture very similar to that one, and we will play with this quite a lot.
Now let's talk about the next step: image-to-image. Image-to-image is the same as text-to-image, but instead of... let me go back to my workflow and just copy everything I have, Ctrl+C, Ctrl+Shift+V, and let's go over here. Right now, after all this tedious explanation, all this "Alexi, what is this?", you will see that if you try to remember these things, you can start playing with them.

So, text-to-image: we told our sampler what we want from it, we provided the model, and we provided a canvas to paint on, which was painted with this noise, with this seed. But what if we provide this sampler not with an empty latent image, but with an actual image? I will take this saved image, copy it, and Ctrl+V. So right now I have in my possession an image, a PNG, and I want to use it as the base for my work, which means I want to exchange the Empty Latent Image for a normal image.

First of all, of course, we need to transfer this image into latent space, so we say VAE Decode... let me try to be lucky... no, Encode. Good, so it is VAE Encode; usually I just drag from the input and it suggests "you probably want VAE Encode". This VAE Encode goes from red-green-blue space to latent space using the model's VAE, and that's it: we have our latent, and instead of an empty latent image we feed our sampler with the picture.
So, that's exciting; let's have a look at how it works. I press Ctrl+Enter to calculate. Since my seed here is set to randomize, it generates a new seed every time, meaning it wants to recalculate every time, so I will set it to fixed so it doesn't bug me anymore and doesn't calculate twice. I will also set the seed of the sampler to fixed, because right now we don't want many different pictures. I press Enter, and as you can see it loads the checkpoint again, the checkpoint is loaded, and then it calculates the sampler. But this time it did not get a black latent; it got this image. And what did we get? A quite similar result, because our prompt is exactly the same and our seed is the same, but trust me, some things did change. So now let's play. Let me put the images next to each other (this one is supposed to be there, let me just put it here) and let's start playing with the values in the sampler, as promised.
change our seed to some different seed I'm just like going up and I'll press enter so different noises applied and as you can see different noises
applied and our picture did change let's go to very random value of this so right now he will uh apply random uh random noise and we could
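The fixed-versus-random seed behavior is easy to see in a few lines: the seed fully determines the starting noise, so a fixed seed reproduces the same result (which is also why ComfyUI can skip recomputing it), while a new seed gives new noise and a new picture. A toy illustration using Python's random module as a stand-in for the sampler's noise generator:

```python
import random

def make_noise(seed, n=4):
    """Deterministic 'noise' from a seed: same seed, same numbers."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

fixed_a = make_noise(42)
fixed_b = make_noise(42)   # identical: nothing new for the sampler to do
fresh   = make_noise(43)   # a different seed gives different noise

print(fixed_a == fixed_b)  # True
print(fixed_a == fresh)    # False
```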
We can expect our picture to change quite a lot. We could also say that we want a human there, or whatever we want, but for now let's say "Terminator", okay, and let's see what it gives me. Again we have a random seed, our base is the same, and now we've got our Terminator. Actually, I'm very surprised. Let's give it just "Terminator" and remove half of the prompt, because I really don't want exactly the same result. So now we're getting much more Terminator, and with a different seed. Okay, I will set the seed to fixed; we're not interested in the seed anymore, we are
interested in playing with more of the values here. You remember I skipped over the denoise; what is the denoise value doing for us? The denoise is how much of this seed's noise will be applied to our picture. If we apply zero noise... okay, we cannot apply exactly zero, so let's go very, very little: our picture will change very little. Why? Because what's happening is that we give the KSampler a picture in latent space and ask it to denoise that picture, but we know the algorithm that does it, and if we are not adding additional noise it will give us back essentially the same picture. That's why when our denoise level is set very low, almost no changes happen, and when it is set to the maximum of one, maximum change happens to our picture. And why do we need this? Imagine you have a picture and you just want to change it a little bit: nudge it in a certain direction, add more details, make it a bit happier in a certain way, change some slight stuff. Then you say: you know what, I like this picture, let's keep most of it and set the denoise to 0.2, so we stick to it, but I want it to be a little more of a rocket Terminator. And that's what we get. Okay, that's too little: 0.4. And I will stop at 0.4, because I think you're getting the point.
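The denoise knob can be sketched numerically. A common way samplers implement it is to run only a fraction of the noise schedule, so roughly strength times steps of actual denoising happen; at 1.0 the latent is fully re-noised and everything can change, near 0.0 almost nothing runs and the input survives. This is a simplification (real schedulers differ in detail), with illustrative names:

```python
def img2img_steps(total_steps, denoise):
    """Sketch of how denoise strength maps to sampling work:
    denoise=1.0 runs every step (maximum change); small values run
    almost none, so the original image mostly survives."""
    run = round(total_steps * denoise)
    skipped = total_steps - run
    return run, skipped

print(img2img_steps(20, 1.0))   # (20, 0)  -> maximum change
print(img2img_steps(20, 0.4))   # (8, 12)  -> keeps most of the original
print(img2img_steps(20, 0.05))  # (1, 19)  -> almost no change
```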
Okay, more Terminator. CFG: let's force it to be more Terminator. 15, which is a crazy number, and you will see how CFG breaks our picture if we push it too far. Huh, it didn't break it. 25; I will break it, don't worry. Okay, so you can see the picture getting less and less realistic, more and more contrasty. You don't want to go that far, and you will see that sometimes you can break it completely. So I will go back to eight. Basically, what is important for you to take away from image to image is the importance of the denoise: how much noise we apply to our picture in order to change it.
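There is a simple formula behind the CFG value. At each step the model predicts noise twice, once with the prompt and once without, and the final prediction is the unconditional one pushed toward the conditional one by the CFG scale; huge scales exaggerate the push, which is why the image turns contrasty and eventually breaks. A sketch of that combination with toy numbers (not real model outputs):

```python
def cfg_combine(uncond, cond, cfg_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional result and toward the prompt-conditioned one."""
    return [u + cfg_scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.10, 0.20]
cond   = [0.15, 0.25]
print(cfg_combine(uncond, cond, 1.0))   # follows the conditional prediction
print(cfg_combine(uncond, cond, 8.0))   # a moderate, useful push
print(cfg_combine(uncond, cond, 25.0))  # a huge push: values blow up
```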
Questions about image to image? "Yes, just one question: can you change just a bit of the image, instead of manipulating it as a whole?" Yes, you can; we will get there. "Okay, cool, thank you." "I have a really simple question: when you start the process, you say you just press Enter. Do you press Enter somewhere in the interface?" To calculate, I can press Q, or I can press Ctrl+Enter, and it will calculate. I only learned that after about a week, and I was like, oh man; it's very handy. "Okay, good. Just one more thing: when you save an image, you just go on the image and save it, and it gets saved to a default directory, or to your Downloads?" Yes, and additionally, every time you generate an image it is saved inside your output folder, so be aware that slowly, slowly your output folder will fill up with stuff you don't want. There is also an option to make a preview image and not save it, but we don't have that installed yet; it's one of the custom plugins. Most of ComfyUI, you will see, is custom plugins, but right now we are really in the basics. Next question? All right, good.

So I would like to go to the next one. We were able to create a simple workflow for text to image, and a very simple workflow for image to image; now let's see our next step. Just let me check that I'm not missing something. Okay, fine: this is how we load the image. Load Image has two outputs; one of them is the image, the second one is a mask, and that's what we're going to touch in a moment. So far we didn't use the mask: our VAE encoder encoded our pixels into latent space using the knowledge of the VAE, and done, very simple, you see. Next one: okay, let's try the inpaint workflow. A little disclaimer: I'm not going to use
it right now, but for inpainting you should use a special inpaint model, meaning that if you have a DreamShaper model, there will also be a DreamShaper inpaint model. Why? Because when you inpaint with a model which is not specifically an inpaint model, you select the mask, you say "give me the robot there", and it will not look carefully at what's on the left and on the right; it will put the robot in, but it will not be well connected to your image, while the specific inpaint models do this better. There are also the Fooocus nodes: custom nodes for ComfyUI, not part of ComfyUI itself, plus a custom model. What this model does is operate on your CLIP conditioning; you remember conditions. It is specifically trained to help you set your conditions in such a way that every model you use behaves like an inpaint model. So these specific Fooocus nodes and the Fooocus model just help you not to deal with all that. Okay.
This little disclaimer will not matter much for us right now, as we're going to do a very simple inpaint. So again I duplicate: Ctrl and drag, and then Ctrl+Shift+C... that was unexpected. Ah, sorry: Ctrl+C, Ctrl+Shift+V. You'll get the shortcuts eventually, don't worry.

So, how do we inpaint? Right now, as was just asked, we want only specific parts of the image to be changed, and quite logically we will have to create a mask. We could create an external mask, but for now I'll create the mask directly on this image: I right-click and open the MaskEditor. Very simple, a no-brainer. Guys, what are we changing? Choose something simple, please. "The head." "The eyes." The eyes were first. Okay, good; just this morning I tried something with eyes and it didn't work with Salvador, but let's see. Okay: eyes. Save to node. Good, so we have a very simple mask.

Now, what do we want to do? Let me go back to my presentation. We made our mask; now we need to say that we want to apply this mask to our latent, noise it up and then denoise it, but only where the mask is. There is a node called VAE Encode (for Inpainting), but that node is really just a few nodes put together, and every compositor will immediately understand what's happening here: we have VAE Encode, as we know, going to latent space; here we get our mask and apply an expansion to it if we want; and here Set Latent Noise Mask applies the mask to our latent. All of that lives inside that one node, but let's do it the complicated way, so we really know what we're doing. So: Grow Mask, and Set Latent Noise Mask. We go to our latent space; we won't necessarily use the grow, so I'll put it aside, I don't want it. And then, Set Latent Noise Mask, please; this is the little annoying thing, that sometimes the search is not able to find your node. Okay, let's not use any grow, because we don't want to dilate. What do we want? Our samples and our mask, that's it, and we bring this latent into the sampler.
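What Set Latent Noise Mask buys us can be sketched as a per-element blend: inside the mask, the sampler's freshly denoised values win; outside it, the original latent survives untouched; fractional mask values feather the seam. A toy version with flat lists standing in for latent tensors:

```python
def masked_blend(original, resampled, mask):
    """Sketch of masked inpainting per latent element: where mask==1 take
    the newly denoised value, where mask==0 keep the original, and
    fractional mask values blend the two (a soft mask edge)."""
    return [o * (1 - m) + r * m for o, r, m in zip(original, resampled, mask)]

original  = [1.0, 1.0, 1.0, 1.0]
resampled = [9.0, 9.0, 9.0, 9.0]
mask      = [0.0, 0.0, 1.0, 0.5]   # only the last two elements are "eyes"

print(masked_blend(original, resampled, mask))  # [1.0, 1.0, 9.0, 5.0]
```

This is also why a rough, blurry mask works well: the soft edge blends the new content into the untouched surroundings.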
Now let's see what happens; I very much hope it will work. We have a picture, and we want to alter it only in the masked areas. Guys, now we need to give a prompt: which eyes do we want there? "Blue." Okay: blue snake eyes. Biden eyes... I don't think it will do that, but maybe. Okay, blue snake eyes, good. Now let's look at what's happening; let's leave it as it is for now, just to see. Ctrl+Enter. Are we calculating? I pressed Ctrl+Enter... ah yes, it's running. Let's see. Okay, good; as you can see, it was kind of a success. What did we write, blue snake eyes? So it was a very, very minor success: it didn't give us blue eyes. Let's look at our sampler. My first suspect was the denoise, how much I am denoising the original image, and I see it's 0.4, meaning it was keeping 60% of the image. So let's say: no, dude, we want one. We won't go that crazy... I think one will be too much... and yes, it went completely crazy, so let's not go there. By the way, if this were an inpaint model it most likely would not go so crazy. But let's give it, like, a snack: 0.8. Ah, these are snake eyes, and again this is a very good example of how, when the model is not an inpaint model, it will give you something, but it will not sit in the image the way you would like it to sit. Okay.
0.5; I just want to bring it to something that will not be completely embarrassing. Okay, great, good enough. Now, additional things we can do: this is a compositing area, so we could apply color correction to our material beforehand and get better results. I'm not going to touch that; this is not the workflow, and at the end I will show you how real inpainting works. This is very basic, and the really bad part of this workflow right now is how we cut this stuff out. I will show you what I mean: I will go from the latent back to the image and add a Save Image, just to preview what we are feeding the model with, and you will see the problem. Oh, the usual thing: I forgot the VAE, that's why it told me "no, dude, you can't do that". Ctrl+Enter. Actually, it didn't show me, but it did run, and for some reason I don't see the mask applied. Okay, let me do something different; maybe I was mistaken, but it should be quite easy. What was this? Instead of these nodes, let's apply VAE Encode (for Inpainting), because I think it does something different. I take my image, I take my VAE ("don't improvise", I told myself, and here I go, starting to improvise), and I go again from latent space to a decode, and from the decode to a Save Image, and I give it a VAE. Ctrl+Enter, let's see. Ah yes, you see, there is a difference. You see what happens when we apply the mask: it actually puts gray in there, and usually you don't want gray, you want some actual color. So let's try it with this latent instead of what we did previously, and then I'll go to the next example, because otherwise I'm afraid I'll lose you with my rambling. Look at that, it's way better! It's still not blue, though; it has problems with blue, huh. 0.8... now I will try to force it; sorry, I promised you, but I want to force it. Yes! Blue eyes, great. Again it broke, okay.
Good, but this will be the inpaint. Philip, yes, your question? "Just wondering: could it be that blue snake eyes are simply less common in nature than the other ones, and so they're just not in the sample data?" Absolutely, yes. This is one of the really important things: if your model doesn't have it, you can write "blue eyes" until next morning and you will not get your blue eyes. That's absolutely part of it, and it's one of those things where you can spend an hour on it. I did exactly this on my second day; I was trying blue eyes too, but the model just didn't have it. Okay, good. So this was inpaint, but I would like to go to the next one, since we now have this overview. Let me clean up a little: delete, delete, and I'll delete my preview here as well. Yes? "I have a question about the inpaint process: is there a simple way to invert a mask, is there a node for that? Say you first want to change the robot, and you're happy, but then you also want to change the background, and you want to use the same mask." Absolutely: there is Invert Mask, and you will use it a lot. Other questions regarding inpaint? Okay, good, let's go to the next one. And the next one... we're still in inpaint here; by the way, you see, I told it "shark teeth", which was cool, and yes, I used VAE Encode (for Inpainting), so it actually did a better job there. Okay: ControlNet.
Now, first, an explanation of ControlNets. A ControlNet is, again, an additional model, and what these models provide is the ability to add an additional condition to our CLIP conditioning. You see, this is our CLIP, this is our prompt; additionally, we can add a ControlNet node there. What exactly does a ControlNet condition? It conditions the shape, so we can say: I want this to be shaped like that. All the TikTok videos you saw were done with ControlNet. This will be the look of our ControlNet in our example in a moment, and I want to give you an overview before we build it. We will provide a ControlNet model which understands one specific type of control input, here one called Canny. What does Canny do? Canny basically produces line art from your image, and to produce these lines you have to use a node called a preprocessor; there are specific preprocessors for the different ControlNets, as you can see here. This is an example: you have an image, and you run one copy through Canny; a second one through OpenPose, the one you're all familiar with; the next one through depth, or through segmentation. There are many of them, and you use the one you think works best for you, or the one that actually has a license good enough for your use, because I'm not so sure you can freely use the OpenPose preprocessors and ControlNet; there are different kinds of licensing. Back to us: what else do we have to take care of? We have to provide an image, and we have to resize it to the size of our actual material, because it will try to force our robot into this shape. Here I'm previewing what I'm doing with the upscale, with the resize; then I apply the preprocessor, then I preview for myself how much detail I actually get (you can set the thresholds there), and then we feed our ControlNet. Okay.
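A preprocessor like Canny reduces the image to the line information the ControlNet was trained on. Here is a toy stand-in for the idea: mark pixels where brightness jumps past a threshold. Real Canny adds smoothing, non-maximum suppression, and hysteresis thresholds (which is what the node's threshold settings control), but the ControlNet only sees the resulting line image, however it was made:

```python
def edge_map(img, threshold):
    """Toy stand-in for a Canny preprocessor: flag pixels whose horizontal
    or vertical brightness jump exceeds a threshold."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(img[y][x] - img[y][x - 1]) if x > 0 else 0
            dy = abs(img[y][x] - img[y - 1][x]) if y > 0 else 0
            if max(dx, dy) > threshold:
                edges[y][x] = 1
    return edges

# A dark square on a bright background: only the boundary lights up.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
for row in edge_map(img, threshold=4):
    print(row)
```

Raising the threshold keeps only strong edges, which is exactly the "strength of the lines" control mentioned below.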
Easy peasy. Let's start from the very beginning. I could start from this, but because I want to keep everything easy, Ctrl+C: I take the very basic text-to-image, Ctrl+V, and I press Ctrl+Enter to let it calculate. Meanwhile I bring in a Load Image node and pick something... I like... ah, let's go with this. This will be the image that drives our ControlNet. As I mentioned, the ControlNet sits in our conditioning area.

Let's try from memory. I need to load a ControlNet model: Load ControlNet Model, okay, good. Then, since it tells me what it needs: Apply ControlNet, fair enough. Conditioning and image, very good. First the conditioning, then the image. But it would be wrong to feed my image in directly, because it isn't preprocessed, and I want a Canny ControlNet; sorry, I forgot, so let me zoom in so you can see what's happening here. Another important part: there are ControlNets for SD 1.5 and ControlNets for SDXL, and I choose the SDXL one because my model is SDXL. Very important: be prepared, when you start with ComfyUI, for a lot of red screens; error, error, error, because we're missing stuff. Now, the image: we add Canny as a preprocessor, the image goes into Canny, and Canny's output goes into the image input here. So we're ready; we just need to plug our conditioning into the sampler. Let me move it, because I like everything very linear: into the positive. Let's just run it, even without checking what my threshold result looks like; I'll put a Save Image here just to see what I'm getting, and I'll leave the same prompt and everything at its defaults. Now we'll go
back to the ControlNet to see what we can tell it. Okay, first of all I see that my lines are quite cool, and let's see if I succeed... oh, I kind of over-succeeded here, because it really did everything I wanted. Now let's see where we can control it. Let's go back to our ControlNet: we have Apply ControlNet, and the strength is one. "You don't have to use Save Image for preview; there's a Preview Image node." No, I didn't find it, Mel... maybe... "It's in that menu." That's right, Preview Image, okay, good; so you do get a Preview Image node. But let's jump back to the ControlNet. Here is our ControlNet, and there's not so much you can control: you can control, in the preprocessor, the strength of the lines (and I think what we have is great), and you can control the strength of the ControlNet itself. So let's not go so crazy: 0.3, because I think we're really overpowering the conditioning right now. Okay, cool, way better. As you can see, we now have our robot in the pose we defined, but it's not human anymore. I understand it looks terrible, but this is basically your way to control your stuff: you can use ControlNet together with image to image, which lets you push your image in a certain direction while it still looks the same. Of course, right now it has two heads, and we really don't like that, so I'll probably change my seed, something will come out different, and slowly, slowly you get where you want to be. Boom, way better, guys! I believe we can agree that we achieved our goal, at least for this presentation: the guy in the pose we wanted him in. And I would like to jump to the next example. Please, questions regarding ControlNet? No? Very good. Is anyone still awake? Yeah? Okay, very good; stay awake, guys, we're almost there. An hour and a half is impossible to hold, I know. Yes, make some noise, exactly. But we will get there. So, catching up on my ControlNet slide: we mentioned SDXL, we mentioned a few ControlNets there; that's how it looks, and that, by the way, is how I created the Chuck Norris here. It was very hard to push him to be there. But well... LoRAs! Okay, next one, guys:
a fun one, LoRAs. I wrote it out: low-rank adaptations. Again, remember: LoRAs, it's fun. So what are LoRAs? LoRAs are small models trained specifically to push your big model in a certain direction. What does that mean? Somebody really likes anime, so they train on many, many anime images, and then you add their LoRA to your setup: your big model produces something, the LoRA hooks into the big model, and it pushes your result toward what the LoRA was trained for.
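The "low-rank" in the name is the whole trick: instead of shipping a new copy of a giant weight matrix W, a LoRA ships two skinny matrices whose product is a small correction, merged in as W plus strength times (up times down). A plain-Python sketch with toy 2x2 weights (real LoRAs do this for many layers at once, which is why the file is megabytes while the checkpoint is gigabytes):

```python
def matmul(a, b):
    """Plain-Python matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def apply_lora(weight, down, up, strength):
    """Sketch of a LoRA merge: W' = W + strength * (up @ down).
    'down' and 'up' are tiny low-rank factors of the correction."""
    delta = matmul(up, down)
    return [[w + strength * d for w, d in zip(wr, dr)]
            for wr, dr in zip(weight, delta)]

W    = [[1.0, 0.0], [0.0, 1.0]]   # stand-in 2x2 base weight
down = [[1.0, 1.0]]               # rank-1 factors: a 2x2 delta from 4 numbers
up   = [[0.5], [0.5]]
print(apply_lora(W, down, up, strength=1.0))  # [[1.5, 0.5], [0.5, 1.5]]
print(apply_lora(W, down, up, strength=0.0))  # strength 0: base model untouched
```

The strength parameter here is the same dial as the LoRA strength in the node: it scales how far the merged weights move from the base model.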
Let's try it. I will go back to the base setup again, because I want to stay very simple; we can easily combine all of these, and I will show you an example combining all of them, but right now let's just bring in a LoRA. Load LoRA... LoRA... okay. Very simple: it expects the model, it expects the CLIP, and it outputs a CLIP and a model again. I like the Cyber Mech Exo Suit LoRA, and there are, like, thousands of them. This is important not only for our department, not only for the concept guys pushing toward a certain concept; it's also important for DMP and for comp, if we ever use it, because you can decide which style you want a certain object to have. Let's connect it, no-brainer: LoRA here, CLIP, yes, CLIP here; we take its model output, meaning we replace the model going into our sampler, and the CLIP goes to our conditioning here. So that's how it looks: the conditioning goes through the LoRA, into the prompt, and to the positive. Now, just for fun, because robot... no no, let's do it, let's do it. Ctrl+Enter. Boom. "Does it have to go to the negative as well, or not?" No, just the positive. So this LoRA will push our robot to be cyberpunk... not cyberpunk, whatever it was: Cyber Mech Exo Suit. And we can see how much it changed our picture; everything is in the same direction, we're still there, but now we have our suit. With a LoRA, similar to a ControlNet, we can say how much of the LoRA we want applied to our image, basically how much the LoRA will intervene with our prompt. You can mix a few LoRAs, you can put one LoRA after another, you can mix prompts and LoRAs; we're not touching that right now, but it's basically simple merging. As you can see, right now we just slightly added stuff, and he looks really bad. So this is LoRA; there are hundreds of LoRAs out there, and that's how we use them. Questions about LoRA? Very good, let's go to the next one. This was my example of a LoRA from when I was preparing for this talk. Okay, let's try upscales.
Something that we're actually using. Yes, Andrea? Oh, Andrea, hi, didn't see you! "Hey, hello. Sorry, Alex: so who makes a LoRA? Like, how do you make a LoRA?" You can make a LoRA yourself; I tried to make one and my computer kind of died on it. But yes, you can make a LoRA. You need a computer that is not completely crappy, and then you can create a LoRA by yourself; there are tutorials. You can also create LoRAs inside ComfyUI; I didn't try that because it was a little bit... there is a special program for it, called Kohya, and if you're not scared, you can do it. "Yeah, because it sounds pretty interesting. Awesome, thanks." You're welcome.

Let's try upscaling. There are two things about upscaling I would like to touch on; there are many upscale workflows, but I will cover the two base ones. You can upscale the image, or you can upscale the latent. When you upscale the latent, you are upscaling inside latent space, basically just adding some additional noise, but you never leave the latent, and the result will be slightly different; I will show you the difference. When you upscale the image, usually you just say: let's
just upscale it with a certain model, and that's what I will show right now. So let's upscale the image in the very basic way: we'll use an upscale model that you can find on the internet, 4x-UltraSharp and so on. As you can see, this is the whole setup. I'll take this guy, Ctrl+C; we're already at about 1K here. Oh no, I need to save it: Save Image, then Copy Image, Ctrl+V. Okay, so this is our image. I'll fly through this: Upscale Image (Using Model), okay, good. What does it expect? It expects the image and it expects the model: Load Upscale Model, thank you very much. And it outputs an image: Save Image, thank you very much, Ctrl+Enter. It's calculating; it doesn't like that this one is flying around, I'll just delete it, but this is pretty much a no-brainer. Here we go: we have a four-times upscale, and for hard surfaces this is just awesome, really, really good. So this is one type of upscale.
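A model like 4x-UltraSharp has its scale factor baked into the weights, so the output size is pure arithmetic; it is worth keeping in mind that the pixel count (and memory) grows with the square of the factor. A quick sketch:

```python
def upscale_dims(width, height, model_scale=4):
    """A fixed-factor upscale model simply multiplies both dimensions."""
    return width * model_scale, height * model_scale

w, h = upscale_dims(1024, 1024)        # the ~1K render from the workflow
print((w, h))                          # (4096, 4096)
print((w * h) / (1024 * 1024))         # 16.0 -> sixteen times the pixels
```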
Now I will talk about a different type of upscale; I'll keep this here. Just a second, I'm starting to rush, which is bad; I will not rush. The second type: how we upscale inside the latent space. Let me duplicate the very basic setup again, Ctrl+C, Ctrl+V, and let's see where we're going to upscale. This is one of the very base principles of ComfyUI, or of anyone producing quality images: you never go through just one sampler; you go through a few samplers to get your result. So I'll duplicate my sampler, and let's see what we have here. We come out with a latent; exactly as we come in with a certain latent, we go out with a certain latent, and I'll route this latent from one sampler into the other. I'll reconnect my model, and I'll reconnect my negative prompt as well; and, just for fun (that's the negative, not the positive), I will give it a different positive prompt. This will maybe not be crazy important, but I'll just say "high quality"; right now I'm improvising, okay, "high quality". So this will be the second sampler; out of it we go into the VAE and to the picture.

So what happens between this guy and this guy? This is a workflow that is used a lot: the first pass produces an image with not too many steps, to get a general idea of how it will look, usually around 10 steps; then you upscale your latent and calculate more steps in the next KSampler. So I'll add Upscale Latent By, by a certain amount, 1.5 for example, and put this Upscale Latent By between the two KSamplers. 1.5; for interpolation I'll choose bicubic. Let's see. They both have the same seed; you probably don't want the same seed, because the image will "burn", since it will try to extract the same details again, so instead I'll just go with a slightly different seed. Also, you probably want to adjust the denoise value: you don't want to denoise your image completely, because, as you can expect, if you denoise it a lot you get a completely different picture. You want to denoise it slightly, 0.2 for example, and then it gets upscaled and the second pass brings new details back into the upscaled image. Usually.

Okay, you see red: I forgot to connect my CLIP, so I connect my CLIP. Usually you will have to play with this quite a lot. Maybe the upscale will be broken for me right now; I very much hope not, because this is the last workflow I want to show and it would be a shame to fail at the end, but let's see. So: it went through the first sampler, then it upscaled the latent space, then it fed all this upscaled noise back, and we're just trying to improve the quality of our image. As you can see, I did fail, and now it's back and forth for me to understand where my mistake is. Maybe my conditioning is wrong; you're always doing this. Maybe I want to go 0.5 with the denoise... let me keep it like that. I frequently fall into this, and then I'm like: ah yes, of course, I forgot something, like a one somewhere. Let's see; if I don't get it, then it's a fail, and I don't want to waste more time on it, because eventually I would get there, or you will get there. Okay, now it already looks a little better. I don't think it's way... oh yes, it is way better. So my denoise level was probably too low, or my CFG... what did I change? Yes, I changed the denoise. Or maybe my prompt was wrong, and I didn't need to replace the prompt.
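The two-sampler recipe above (quick low-res pass, latent upscale, partial-denoise second pass) can be summarized as a small plan. This is a sketch with illustrative names; the strength-times-steps rule is the common simplification, and exact samplers differ:

```python
def hires_fix_plan(base_steps, upscale_by, second_denoise, second_steps):
    """Sketch of the two-sampler recipe: a fast low-res pass settles the
    composition, a latent upscale enlarges it, then a partial-denoise
    second pass re-details the larger latent without repainting it."""
    first = {"steps": base_steps, "denoise": 1.0, "scale": 1.0}
    second = {
        "steps_run": round(second_steps * second_denoise),  # strength rule
        "denoise": second_denoise,
        "scale": upscale_by,
    }
    return first, second

first, second = hires_fix_plan(base_steps=10, upscale_by=1.5,
                               second_denoise=0.5, second_steps=20)
print(first)
print(second)   # half the steps actually run, at 1.5x the latent size
```

A slightly different seed on the second sampler (as in the workflow above) avoids re-extracting the same details, the "burn" effect mentioned earlier.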
Questions about the upscaling? Okay, good. Now let's see where I am. I very much hope I'm done with showing new stuff, because my brain is boiling. Upscale latent: this, for example, was me upscaling the latent, and you can see how many details it added on top, which is quite cool. Oh no, IP adapters! Okay, the last one, guys, a fun one. So, an additional way to move our footage in a certain direction is IP adapters. Please allow me not to build this workflow live; allow me to just show you how it works, and then I'll show you a workflow I already used. The IP adapter operates on the model, and it tells the model which style of picture you want to produce. It uses its own IP adapter models, which you need to be really specific with, and you apply it at the model level. In this case I found this picture from our Captain Marvel days and applied it to our robot: this is how the robot came in, I told it "I want you to be more like this", and that's what came out. This is very, very powerful, and I specifically left the last 10 minutes to show you something that more closely resembles how it will actually work, and the actual implementation in our tests right now. And that will be this: first of all, I will show you a playblast
of the test that we had so basically um what I tried and you guys probably will be able to read what's happening there I will just go from left to right and I do expect I do expect that you will know
what's happening now first of all I'm using stumps Tada but basically just for you to just very understand that every time when you see that something is not
connected is probably connected to the base stamp very base model so I have my model here I have some conditioning
futuristic prison compound okay um I also have here additional condition we don't care and this is H what is this my original image that I'm going to feed
okay that's how image looks like I'm going to my control net and it will give me the shape of the image and here is IP
adapter it's how I want my prison to look like now guys extremely yes yes okay H jilber you have
Eco can turn off please um sorry yeah so H this is IP adapter and what I found extremely essential you really want to feed the
look with very good IP adapter your Black Point your white Point your intensity of Shadows and the and the sun it will be very important to produce
final result so this is my IP adapter I am scaling it to certain resolution I want I'm applying a adapter and I'm going inside of my sampler and here I
have comparision of what I had to what I have right now okay so basically this is I I I put the strength of U control net very high
So it will really try to follow the input and get as much from it as possible. Then, as the next step, I'm upscaling my prison: I'm going to my second KSampler and giving it more samples. If I compare what I had in the beginning to what I have now, I have better-defined objects in certain areas, and I can play with this much more, but right now these are just examples, and the final result is basically this. And now, if I want to experiment with the look, for example, I can go here and, let's see, instead of denoising at 0.3 I will denoise at 0.7, meaning, if I'm not mistaken (oh, it's loading everything right now), my final result will be very, very different from what I entered at the very beginning. I will let it calculate, but while it's calculating I will go to the next example, which is inpainting. So, this is one of the images that we have, and this is the inpainting process that we went through, just a little bit on steroids, and that's the result that we're getting. There is zero manual inpainting here, and this is very impressive as a base layer to bring into comp, to bring into Photoshop,
and keep going, improving, and fixing things. Okay, now let's have a look at the tree itself. Guys, you can already read it, I believe; if you can't read it, then I have failed miserably. We have a model that we're loading; this model will give us everything we need: model, CLIP, VAE. We have an image, and we have a mask. This time I loaded an external mask, converted it to a mask, and it's used later. And then, you remember our gray areas, that was bad; we can inpaint those using a model.

Philip: Yeah, I'm just wondering, the mask you input, the black-and-white image, was that a very precise, time-consuming mask?

No, just very rough and blurry, which is important. So what I did here, instead of pre-processing and making it all gray, is that I pre-processed with a model called Big LaMa, and this model somehow filled in our holes. It gave me a very nice structure there to start with, and it has a seed, so it took me time, because it kept giving me different kinds of benches that I didn't like. After that I went into the Fooocus inpaint, an additional round of inpainting which happens with a KSampler, and that's what I got here. Then I upscaled it one more time just to get better details, and in the end a final upscale to bring it to 4K resolution. So this is the very practical usage that can be
packed inside one workflow. So we can, theoretically and practically, connect it to Nuke, connect it to Photoshop, to Krita, just give it something, and it will spit out automatically whatever you want, if your workflow is good enough. Let's see what we did here. Okay, as you can see, I denoised it a lot, and we lost all connection to our prison; but if you go to 0.5, 0.6, you start to get very cool, different seeds, very good, different results, and you can start playing with that.

This alien is another example; I was just playing around. I said: I want Wesley Snipes, give me the direction, and then I want to create an alien; and then just in this area I want to exchange his teeth, and then I want to upscale it so it will look a different way, and so on and so on. But this is really not important; back to our workshop, and done. Questions, if you have questions? Otherwise I think that's more than enough, way more than enough, for one day.

A closing statement I probably didn't prepare, but anyway: generally, I'm saving this video, and it will be, as always, accessible for you guys. (Zoe! Zoe! Zoe! Sorry, my daughter is at home. Sorry.) So, what I would recommend, and I'm talking specifically to the compositors, the Trixter guys: we have this for educational purposes right now in the company. Please touch it, please play with it, because the more people touch it, the more confidence we have that we actually can use it at a certain moment. And it is very, very scary; I was very scared to enter it, the same as with the deepfakes two years ago: "oh my God, what are you getting into". It's a tool; you can use it, and eventually you will be able to use it without feeling bad about it in a certain direction, and I'm talking about our chat group. So I would recommend you to try to play with these things. Yes, questions? Hands?

Question: I assume this works for image sequences?

Image sequences I didn't touch; it's something I haven't arrived at myself, and the reason is that it does not work so well for image sequences yet. There are a lot of examples, basically GIFs, which look very weird, and for me as a compositor image sequences are not that important currently, because so far there's nothing that provides very stable, very good results in Stable Diffusion. That's why I prefer to push the things I really want to use, and can use, right now, and I will wait, maybe, for better implementations, or Sora, or something else, and then we'll talk about that. Right now I would not touch movement.
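Going back to the denoise experiments above (0.3 versus 0.7, and the suggestion to try 0.5 or 0.6): in img2img-style sampling, a denoise below 1.0 generally means the sampler only runs the final fraction of its schedule, starting from a partially noised version of the input. A toy sketch of that proportion, not ComfyUI's actual scheduler code (which works on a sigma schedule, not raw step counts):

```python
# How a denoise fraction maps onto sampler steps in img2img-style sampling.
# Toy sketch: real samplers operate on a noise schedule (sigmas), but the
# proportion is the idea -- low denoise runs few steps and stays close to
# the input image, high denoise runs most steps and drifts away from it.

def steps_actually_run(total_steps, denoise):
    """With denoise < 1.0, only the final fraction of the schedule is sampled."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return round(total_steps * denoise)

for d in (0.3, 0.5, 0.7, 1.0):
    run = steps_actually_run(20, d)
    hold = "stays close to the input" if d < 0.5 else "drifts from the input"
    print(f"denoise {d}: {run} of 20 steps, {hold}")
```

That is exactly what happened in the demo: at 0.7 the prison lost its connection to the source image, while 0.5 or 0.6 kept the structure but still varied the look.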
Doug: Yeah, I was just wondering, so this is basically ComfyUI, and is there a specific reason why ComfyUI is really good? I mean, there must be other variants of this, and I'm just curious whether it has a really big user group, whether it's good for particular things while others are good for other things, that kind of stuff.

The answer of a dilettante, which I am, so I don't have much experience: I did play with Easy Diffusion, which is basically an interface in your browser as well. The very big difference between ComfyUI and everything else I saw is this: if you're working in Midjourney, or in Automatic1111, you produce one image, you copy it, you go to the next page, you produce another one, you copy it, you go to the next page and produce the next one. Now try to come back. Maybe you can, but the node-based approach makes it easy: you can always come back, and you have a good overview. It's basically the same comparison as After Effects versus Nuke. So it is very nice in the sense that it's very controllable, and extremely customizable: you can get into everything, everywhere, in every place. People are able to create mind-blowing stuff in other applications, which is fine, but I would like to have full control, and I think this one gives the best of that, for my taste. Yeah, thanks.
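The "you can always come back" point is, as I understand it, what per-node result caching gives you: in a node graph, editing one node only invalidates what is downstream of it, so everything upstream is reused on the next run. A minimal sketch of that idea with hypothetical classes, not ComfyUI's actual executor:

```python
# Minimal sketch of why a node graph lets you "come back": results are cached
# per node, and editing one node only invalidates what is downstream of it.
# Hypothetical classes -- not ComfyUI's real execution engine.

class Node:
    def __init__(self, name, fn, *upstream):
        self.name, self.fn, self.upstream = name, fn, list(upstream)
        self.cache = None

    def evaluate(self, log):
        if self.cache is None:                 # only recompute when dirty
            args = [u.evaluate(log) for u in self.upstream]
            log.append(self.name)
            self.cache = self.fn(*args)
        return self.cache

    def invalidate(self, all_nodes):
        self.cache = None
        for n in all_nodes:                    # dirty everything downstream
            if self in n.upstream and n.cache is not None:
                n.invalidate(all_nodes)

load = Node("load_model", lambda: "model")
cond = Node("encode_prompt", lambda: "prison prompt")
sample = Node("ksampler", lambda m, c: f"image({m}, {c})", load, cond)
nodes = [load, cond, sample]

log1, log2 = [], []
sample.evaluate(log1)        # first run: every node executes
cond.invalidate(nodes)       # tweak only the prompt node
sample.evaluate(log2)        # second run: model load comes from cache
print(log1, log2)            # log2 skips load_model
```

A linear "produce, copy, next page" tool has no equivalent of this: there is nothing to invalidate and nothing to come back to.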
Salvador: I'm wondering about the bit depth of the images that we are outputting.

Currently, at the very base, it's 8-bit, and it's PNGs and JPEGs. To my knowledge it's possible for smart people to convert it to TIFF and to EXR, and the cool thing with ComfyUI is that it's very accessible: there is an API, and, I don't know whether it's written in Python or not, but there are already EXR nodes for ComfyUI. I think it was Magno Borgo who just released a connection of ComfyUI with Nuke. So as far as I know there is nothing stopping you from making EXRs; I may be mistaken. Usually models internally are 32-bit, as is being mentioned in the chat, as you can see, so I was not entirely wrong about that. So yes, you can go quite professional there.
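The 8-bit versus 32-bit point can be made concrete: an 8-bit PNG or JPEG stores only 256 levels per channel, while the model's internal floats have no such restriction, so writing 8-bit files quantizes the output. A toy round-trip:

```python
# 8-bit file output vs. the model's internal float precision, made concrete.
# An 8-bit PNG/JPEG stores 256 levels per channel; the decoded images inside
# the model are floats, so writing 8-bit files throws precision away.

def to_8bit(v):
    """Quantize a [0, 1] float to one of 256 levels, like writing an 8-bit file."""
    return round(min(1.0, max(0.0, v)) * 255)

def from_8bit(b):
    return b / 255

for v in [0.001, 0.5, 0.7031]:
    rt = from_8bit(to_8bit(v))
    print(f"{v:.4f} -> 8-bit {to_8bit(v):3d} -> back {rt:.6f} (error {abs(rt - v):.6f})")

# A 32-bit float EXR keeps the full precision (and values outside [0, 1]),
# which is why bridging ComfyUI's output to EXR matters for compositing.
```

The error per value looks tiny, but it is exactly what shows up as banding once you start grading an 8-bit render in comp.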
Any more questions? Okay, then I thank you for being here. Again, I'm going to save this and put the video in our Confluence, and I hope you will be able to do something with it, maybe not in ComfyUI but in another application; the base knowledge will hopefully already be there. Thank you very much, everyone. Thank you all.