
Create ANYTHING with Sora 2 + n8n AI Agents (Full Beginner's Guide)

By Nate Herk | AI Automation

Summary

Key takeaways

  • **Sora 2 is 6x cheaper via Kie AI**: Using Kie AI to access Sora 2 is six times cheaper than using OpenAI directly, costing 1.5 cents per second compared to 10 cents per second. [01:52]
  • **Automate video creation with n8n**: You can connect Sora 2 to n8n to automate video creation, enabling text-to-video, image-to-video, and storyboard modes within your workflows. [00:03], [00:55]
  • **Polling ensures video generation completion**: Polling is a method where your workflow repeatedly checks whether a video generation task is complete, which is more reliable than guessing how long it will take. [09:07], [10:01]
  • **Enhance videos with UGC style and cameos**: Sora 2 can generate realistic UGC-style videos from images and incorporate cameos of public figures, allowing for personalized and engaging content. [11:44], [15:09]
  • **Storyboards for consistent characters**: Sora 2's storyboard feature allows for videos with consistent characters across multiple scenes by defining scenes and allocating time per scene. [17:06]
  • **Optimize prompts for cinematic quality**: Detailed prompts that describe subject appearance, setting, lighting, and camera style yield more cinematic, higher-quality video outputs from Sora 2. [18:48], [22:15]

Topics Covered

  • How to Slash Sora 2 Video Costs by 6x.
  • Essential Steps for Reliable AI Video Automation Workflows.
  • AI Generates Hyper-Realistic UGC Ads for Any Product.
  • Create Consistent Characters Across AI-Generated Scenes.
  • Optimized Prompts Transform Raw Ideas into Cinematic AI Videos.

Full Transcript

So, Sora 2 has been taking the internet

by storm. So, what I'm going to do today

is show you guys how you can use n8n

to get 10 times the output, higher

quality outputs, no watermarks, and use

Sora 2 for six times cheaper than through

OpenAI directly. So, Sora 2 has some

really fun use cases, and I love that

it's getting people who aren't super

interested in AI interested in AI. But

it's more than just having these funny,

cool videos. Businesses and

organizations are actually using Sora 2

to power their creatives, their media,

their content, their marketing, all this

kind of stuff. Not only does it create

video, but it automatically creates the

audio, too. Just look at this Starbucks

example.

>> Starbucks. Discover your flavor.

>> So, if you understand how to use this

technology, not only can you save

yourself or a business a ton of time,

but you could make a lot of money. So

anyways, today I'm going to be showing

you guys step by step how to connect to

Sora 2 in n8n over the API, but then I'm

also going to be going over these

examples where you can do text-to-video

and image-to-video. You can have cameos, so

people like Mark Cuban or Sam Altman in

your Sora 2 videos. You can create

storyboards so you can lay out different

scenes and have consistent characters

throughout. And then I'm also going to

talk about prompting. So feel free to

follow along. I'm going to give you guys

this entire template for free so that

you guys don't have to actually go build

this yourself. You can just use what I

have here already. And you can access

that by joining my free school

community. The link for that will be in

the description, but I don't want to

waste any time. Let's get straight into

the video. All right, so before we dive

into these examples, I want to start

from scratch and show you guys how I

connect to Sora 2 in n8n. So the first

step is to go to a platform called Kie

AI, which is what we were just looking

at over here. It's spelled Kie.ai.

In the past, you guys have seen me use

fal.ai, which is a very, very similar

platform. Essentially, it's just like a

marketplace for all of these image and

video generation models, as you can see

here. But here's the key difference on

price. Fal and OpenAI are charging you

10 cents per second of Sora 2 video

generation, whereas Kie is only charging

you 1.5 cents per second. So it's six

times cheaper to make these videos. And

of course you're still getting the same

quality of output, if not better. If we

make a 10-second video, that's only

going to cost us 15 cents. Whereas on Fal

or OpenAI, that 10-second video would cost

us a dollar. Anyways, the first thing

you're going to do when you get to

kie.ai is go to your billing information

and just make sure you have some

credits. You'll probably have some for

free when you get in there, but

otherwise just grab five bucks worth.
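That five dollars goes a long way at these rates. The pricing comparison from above works out like this (rates as quoted in the video; check Kie's pricing page for current numbers):

```python
# Per-second rates quoted in the video (USD).
KIE_RATE = 0.015     # Kie AI: 1.5 cents per second of Sora 2 video
OPENAI_RATE = 0.10   # OpenAI / Fal: 10 cents per second

def cost(seconds: float, rate: float) -> float:
    """Cost in dollars for a clip of the given length."""
    return round(seconds * rate, 4)

# A 10-second clip: 15 cents on Kie vs. a dollar through OpenAI or Fal.
print(cost(10, KIE_RATE))     # 0.15
print(cost(10, OPENAI_RATE))  # 1.0
print(cost(10, OPENAI_RATE) / cost(10, KIE_RATE))  # ≈ 6.67, "six times cheaper" in round numbers
```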

It's going to last you a long time. And

then on the left-hand side, you can see

there's a section called API key. And

we'll have to use this in just a minute

or so, but just remember for now, that's

where it is. But anyways, I'm going to

click on models market. We have all of

these different providers and options we

can use, like Google Veo 3.1, Sora 2 Pro,

and for now, I'm just going to go ahead

and click on Sora 2. And so this opens

up a playground environment where you

can write a prompt in here to run Sora

2 and get your outputs right here, just

to play around with how the prompting

works and how the outputs look. But what

we're interested in is using this over

API. So I'm on Sora 2 text-to-video and

I'm going to click on API right here.

And now all we have to do is scroll to

the bottom and I'm going to show you

guys how we can set up a curl which

makes it really really easy. You can see

down here there is a request example and

you make sure you're on curl and then

we're going to copy this right here. I'm

going to go back into n8n. I'm going to

add an HTTP request to our workflow. And

I don't have to configure this method,

this URL, any of this stuff. I'm just

going to go ahead and hit import curl.

Paste in that curl command that we just

copied. And then when I import it, it

basically fills out pretty much

everything that we need. Now, we just

have to make a few tweaks. So, the first

step would be to add your API key. So,

right here it says authorization and

then it says bearer space API key. And

so, this is where you would go back into

Kie. You go to your API key and you'd

copy this value right here. And then

you're just going to paste that right

there like that. And then you'd be able

to access Kie. And this would basically

access your billing information that you

put in there. But because we're going to

be making multiple requests to Kie, I

don't want to copy and paste this every

single time I want to make a request. I

just want to save this. So what I'm

going to do is I'm going to scroll up a

little bit. And right here you can see

authentication. I'm going to go ahead

and open this up and click on generic.

For generic type, I'm going to go ahead

and choose header because you can see

right here, this is a header parameter.

And then you can see I have all of these

different ones that I've already saved.

You can also see that I already have one

for Kie, but I'm just going to go ahead

and make a new one with you guys right

here to show you how it works. So, you

click on create new credential. And

remember, it's the same thing as we just

saw down there. So, for the name, we're

going to type in authorization. And for

the value, you're going to type in

capital-B "Bearer", then a space, and then paste in

your API key. And then when you go ahead

and save that, now you're connected to

Kie and you can name this credential

something like "Kie AI" and save that. And so now I

have this saved every single time that I

need to use Kie, which as you can see in

this workflow, each one of these little

workflows is one where we're going to be

using Kie. So now I just have it saved

and all I'd have to do is choose it

right there rather than go back and copy

my API key every time. So just a little

fun trick. And then once you do that,

you can just turn off the headers cuz we

don't need to send our API key twice.
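What that generic header credential stores is just one name/value pair. A quick sketch of the exact format (the key itself is a placeholder, not a real credential):

```python
API_KEY = "YOUR_KIE_API_KEY"  # placeholder; paste the real key from your Kie dashboard

# Header name "Authorization"; value is capital-B "Bearer", one space, then the key.
headers = {"Authorization": f"Bearer {API_KEY}"}

print(headers["Authorization"])  # Bearer YOUR_KIE_API_KEY
```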

But from here, what we have is the JSON

body. And I'm just going to change this

to an expression and open this up full

screen so we can all look at it. So what

we're looking at here is basically all

of the little filters that we're sending

over to Sora 2 and saying this is the

type of video we want. We can see we

have model is Sora 2 text-to-video. We

have a callback URL, which I'm just

going to go ahead and delete because

that's an optional field. The only

reason I know that's optional is because

that's what this API documentation says.

It basically shows you what you

need to send over. So you can see the

model, that's required; callback URL,

optional; input prompt, required; aspect

ratio, optional. So you can actually go

ahead and look at this API documentation

and understand how you can change the

behavior of the Sora 2 API. So I'm not

going to dive too deep into that right

now. If you want to deep dive, I made a

full course on that. You can go ahead

and watch that video. I'll tag it right

up here. Anyways, going back into n8n,

we can see that we have a prompt. We

have an aspect ratio. We have a

number of frames which we can choose

between 10 or 15. And then we have

remove watermark true. So right here,

what I'm going to do is just change this

prompt. And this is turning our text

into video. So I could just say a video

of a young man throwing a coffee mug

against the wall.
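Putting those pieces together, the JSON body we just edited looks roughly like this. The field names and values here are illustrative, reconstructed from the walkthrough above, not copied from a spec; always take the real shape from the curl example in Kie's API docs:

```python
import json

# Illustrative Sora 2 text-to-video request body, mirroring the fields
# discussed above: model, prompt, aspect ratio, and watermark removal.
# (Exact field names and values should come from Kie's own curl example.)
body = {
    "model": "sora-2-text-to-video",  # assumed model identifier
    "input": {
        "prompt": "A video of a young man throwing a coffee mug against the wall.",
        "aspect_ratio": "landscape",  # assumed value
        "remove_watermark": True,
    },
    # "callBackUrl" is optional, so we simply omit it and poll instead.
}

payload = json.dumps(body)
# In n8n this JSON goes into the HTTP Request node's body; the response
# comes back with a task ID rather than the finished video.
print(payload)
```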

So we have our JSON body ready. What I'm

going to do now is hit execute step. So

this just executed. It gave us a 200

code which is good. It says success. And

then we see our task ID is this long

string. So if we go back to the

documentation now, we can see what

happens is when we make that request,

Sora 2 or Kie basically says, okay, we

got this request. We're working on it.

And so what we need to do next is

actually grab that back. So there's

another endpoint right here that says

query task. I'm going to click on that.

And all I'm going to do is copy this

curl statement once again. So the

request example, I'm going to copy that.

We're going to come back into n8n and

we're going to add another HTTP request

and once again we're just going to

import that curl. And so this is going

to set us up with what we need. The

first thing you'll notice is that we

have a task ID parameter. So basically

it's saying what request are you looking

to get back. So I'm going to delete this

example task ID and all I have to do is

drag in the task ID that we just got

from the previous node. So I'll put that

right there. Now that's a dynamic

expression. And then the last thing you

can see is once again it's sending over

our API key. So, I'm just going to turn

off the headers because we know we

already set this up, above, as a generic

header credential. And then we should have

our Kie credential somewhere in here.

There we go: Kie AI. And so, now we're set up once

again. And so, I'm going to go ahead and

execute this step. And you can see when

I run that, it comes back and it says,

okay, the state is generating. And while

this is generating, I'll just show you

guys in Kie how you can actually check

on your requests. So, if I go to my

logs, you can see right now that this

says it's running. And you can see my

past runs have taken 195 seconds, 227

seconds. So it may just take like three to

four minutes for us. But this lets us

look at all of our inputs. So we can see

our prompts. We can see all of the

things that we've requested from um

these different models. So anyways, I'll

check back in with you guys once this

one is done. Okay, so I ran it again and

you can now see that it says state

equals success. And then down here, what

we get is a result. So we got two

different URLs. We got one with a

watermark and one without a watermark.

So, if I go ahead and copy this URL and

I just basically say go to this URL, it

downloads a file. And when I open up

that download, this is what I get.

Okay, that's pretty ridiculous. And

first of all, what you'll notice is it

was like it was in slow motion. The

sound was a little weird. And the reason

why this all happened is because we

hardly prompted this thing at all. If you

remember in the request that we made, I

literally just said a video of a young

man throwing a coffee mug against a

wall. And so, you really can't expect to

get a good output if you don't prompt it

very well. And so, I'll talk a little

bit later about prompting and how you

can really get some cool outputs from

Sora 2. But anyways, the point of what I

just did there right up here was just to

show you guys that we're making two

requests. The first one we set up to

say, "Hey, Sora, here's the type of

video I want." The second one we set up

to say, "Okay, Sora, is that video done

yet? Like, can I see it?"

So we're basically going to

follow that pattern for all of these

different workflows and it will start to

make sense. But like I said, you'll be

able to download all of this. So you'll

be able to play around with it and see.

The first example that we have here is

turning text into video, which is kind

of what we just did up here. But what I

wanted to introduce to you guys is this

concept of polling. So let me real quick

start this request and then I'll explain

what polling is. So, if you guys got

into this template, all you'd have to do

is go to this video prompt node. And

right here, you could basically just

input your video prompts of what you

want to get back. So, right here, we

have the default example about a

professor giving a lecture and

explaining that Sora 2 is live. So, what

I'm going to do is execute this

workflow. We're going to see this run.

We can see what happened here is it made

the request. So, Sora 2 is currently

working on our video. Then, we have a

wait node because we know this takes

anywhere from, you know, 2 to 3 to 4

minutes. But what you just saw happen is

we checked in to see if it was done

after the wait node and it's not done

yet and it just happened again. So this

is going to be an infinite check where

it's going to, every 10 seconds, go ask

Sora 2 if it's done until we know for a

fact that it is done. And this is called

polling because we're constantly making

checks. And this is better than just

estimating and saying, okay, well, roughly

this takes 3 minutes, so I'm going to

just set my wait for 4 minutes to be

safe. Well, that could be inefficient.

And also what happens if for some reason

it takes five minutes and then your flow

moves on and there's just a big error.
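That wait-and-check loop is plain polling. Here is a minimal sketch of the same logic in Python; the `check_status` function stands in for the query-task HTTP request, and the names are illustrative, not Kie's API:

```python
import time

def poll(check_status, interval_s=10, timeout_s=900):
    """Repeatedly call check_status() until it reports a terminal state.

    check_status() stands in for the 'query task' request and returns a
    state string: 'generating', 'success', or 'fail'.
    """
    waited = 0
    while waited <= timeout_s:
        state = check_status()
        if state in ("success", "fail"):  # terminal states end the loop
            return state
        time.sleep(interval_s)            # the n8n wait node
        waited += interval_s
    raise TimeoutError("video generation did not finish in time")

# Simulated task that needs three checks before finishing.
states = iter(["generating", "generating", "success"])
print(poll(lambda: next(states), interval_s=0))  # success
```

Handling "fail" as its own terminal state is the switch-node idea: without it, a failed task would loop forever.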

So anyways, the reason why this works is

because in this if node what I'm doing

is if I just make this section a little

bit bigger over here, the state equals

generating. And what I'm doing is I'm

saying if the state equals success, then

we're good. And then we go up the true

branch. But you can see we've had six

items come through and they've all gone

through the false branch because the

state equals generating, and then a

seventh one came through. So, it's a super

simple conditional check. We're just

looking to see if it's done or not. Now,

one thing you would want to consider is

potentially using a switch node because

there's other states that could happen.

You could get success. You could get

generating. You could potentially get

failed. And if you get failed, it

wouldn't really know what to do here.

So, if you wanted to make this more

production ready, you would probably

also work in a conditional check to see

if the state equals failed. And that

would send you some sort of notification

or something like that. All right. So,

that just finished up. You can see that

it took 18 tries, so about 180 seconds.

And then what we get at the end is our

final video URL. I did a quick

expression within this JSON variable to

isolate just the link that we want. So,

I'm going to go ahead and open this up

and we'll take a look.

>> And here's the exciting part. Sora 2 is

now available on Kie AI, making it

easier than ever to create stunning

videos.

>> You can experiment, iterate, and bring

your wildest ideas to life right from

your laptop.

>> Okay, I mean, that's pretty good. You

can see there's dialogue. You can see

there's energy. And once again, we

didn't even really put any best

practices with prompting into play yet.

So, we'll show that near the end of the

video. But let's move on to this next

example, which is turning an image into

video, which is really, really cool. And

I think this is what unlocks tons of

potential. So, what we have here is

similar. When you guys get in here,

you'll have an image URL. So, right

here, I have this image URL, which if I

open this up real quick, you can see

it's an AI-generated image of a fake curl

cream product for your hair. And then if

we go to the video prompt, you can see

I'm basically saying a realistic UGC

style video of a young woman with curly

hair sitting in her car recording a

selfie style video explaining what she

loves about the product. So UGC ads is a

huge use case here because that's like

what converts really well nowadays

online on TikTok Shop; stuff like that

is just real authentic people holding

something with a selfie style, you know,

video and just saying, "This product is

awesome. This is why I love it." And so

imagine you have a product and you can

just pump out five organic videos like

that every single day without hiring

actors or anything like that. So what

I'm going to do is just go ahead and run

this workflow and then we'll dive into

once again how it's working. So you can

see we have the polling flow set up very

similar to the way we had it up here.

The only difference really is in this

HTTP request to Kie is we have the video

model, which is not text-to-video; now

it's image-to-video. We have the prompt

once again, but now we have a section

called image URLs. We're sending over

that public image which the model will

use as reference. So in our final video,

it should be holding the product that

looks exactly the same as our source

image. As you can see here, I said

nothing about the product from the

source image should change. It should

appear exactly as given. We're also

telling Sora that the woman in the video

should say, "I absolutely love this curl

cream. It keeps my hair bouncy, curly,

and lightweight all day long. You guys

have to try it." So you can have full

control over what the AI person in the

video is saying. You could also control

their accent and their tone and their

style, that kind of stuff, too. Just

keep in mind that for this image URL, it

has to be a publicly accessible file.
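The image-to-video body is the same shape with two changes: a different model and an image-URL list pointing at a publicly reachable file. Again a hedged sketch; the field names are inferred from the walkthrough, not a spec:

```python
import json

# Same request shape as text-to-video, with the model swapped and a
# public image URL added as reference. (Field names are illustrative.)
body = {
    "model": "sora-2-image-to-video",  # assumed model identifier
    "input": {
        "prompt": (
            "A realistic UGC-style video of a young woman with curly hair "
            "sitting in her car, recording a selfie-style video explaining "
            "what she loves about the product. Nothing about the product "
            "from the source image should change."
        ),
        # Must be a publicly accessible URL, not a local file path.
        "image_urls": ["https://example.com/curl-cream.png"],
        "aspect_ratio": "portrait",  # vertical, TikTok/Reels style
    },
}
print(json.dumps(body, indent=2))
```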

So, it can't be local. And then for

aspect ratio, we said portrait, and

that's pretty much the only difference

because we want it to be like a TikTok

or an Instagram Reel style. So, this is

running. This is doing its polling

check. And I will check in with you guys

once it is finished up. All right, we

just got that one back. Let's go ahead

and take a look at this video.

>> I absolutely love this curl cream. It

keeps my hair bouncy, curly, and

lightweight all day long. You guys have

to try it.

>> Okay, that's insane. If you guys see

what happened here at the end, she puts

it near the camera and this looks pretty

much identical to the source image that

we had. It has the same font. It has all

of the words looking really good and it

has that little logo. One thing you'll

notice though about Sora 2 right now is

that the first like millisecond will be

your source image. Right now, there's

just not great support for this, but I

imagine that will be fixed very, very

soon. But you can see there's the

original image and then there's the

product appearing in the video. So it's

like pretty much the exact same. One

other thing that you may notice is if

you want to do image-to-video with Sora

2, you can't have a person in the

image. It will basically tell you it

can't do this because it's a

realistic-looking person, even if that

person is AI generated. That's unlike

Veo 3, where you could give it an

AI-generated person holding your

product, in case you wanted that to be

like your brand ambassador on all of

your videos; Veo 3 could then take that

image of the person holding the product

and turn it into a video. That would

help you sort of leapfrog over the issue

with the first millisecond being the

source image, but that's very easy to

crop out. But anyways, I think

that those UGC content use cases are

super cool because you could basically

just keep generating those and it will

be a new person every time and it will

look and feel very real and it will look

good for your brand. And once again,

that would only get better if you

actually prompted it with best

practices. Okay, so this next thing is

really cool. This is using cameos, which

basically means you can use famous

people's faces and likeness in your

videos. So on the Sora app, it's kind

of, you know, like a social feed and

people have profiles. So, if I went here

and I searched Sam A, this is going to

pull up Sam Altman's profile. And you can

see here in his profile, if I go to

cameos, we can see all of the times that

people have made videos with Sam in

them. And all we have to do to use Sam's

cameo because he made it publicly

accessible to people is we just copy his

username, which is Sam A. So, in our

Cameo example here, I have Sam A

recording a selfie style video in a car

explaining how Gravity works in two

short sentences. So, let's go ahead and

run this one. And it's going to do the

same polling flow and everything like

that. Keep in mind you can also use

cameos in your image-to-video

generations as well. Right now I'm just

doing a text to video to keep it simple.

But one thing you may notice with cameos

is Sora 2 may be a little bit more

restrictive. So I tried doing some

crazier prompts or I tried throwing in

some other cameos like Shaq and Mark

Cuban, and it was kind of rejecting my

prompts. So it just may be a little bit

sensitive especially right now

especially with so many people using

cameos and using Sora 2. So just keep

that in mind. Anyways, I will check in

with you guys once this is done polling

and once our video has been generated.

All right, looks like we got that done.

I'm going to go ahead and download this

video and we'll see what Sam has to say

for us. So, here we go.

>> Everything with mass pulls on everything

else. That pull makes things fall toward

Earth.

>> Okay, that was kind of ridiculous, but

looked like him, sounded like him. we

maybe just would want to prompt that a

little bit better so he doesn't sound

like he wants to die because I literally

just said a selfie style video in a car

explaining how gravity works in two

short sentences. And what's cool is you

can go ahead and create an account on

Sora and you know do the process of

setting up your own cameo and then you

could use that in your automations. So

you can have you know an avatar of

yourself on your social media and you

can just generate all these videos. So

very cool stuff. All right, so now let's

look at another really cool feature

which is the storyboards. And to explain

how this works, I'm real quick going to

actually switch back to Kie AI and we're

going to click right here on Sora 2 Pro

storyboard. And for those of you

wondering, we see Sora 2 text to video,

Sora 2 Pro text to video. Honestly, I

haven't seen a huge difference. And for

the cost, how cheap regular Sora 2 is on

Kie AI, I would just stick with this for

now. But if maybe you do start to scale

up the system and you want all of your

content going out on TikTok or

Instagram and you want to use Pro, then

go for Pro. But anyways, for the

storyboard, how this works is you're

able to set different scenes. And like I

said, you can have consistent characters

within those scenes. And you're able to

allocate a different amount of time per

scene as long as it adds up to the total

duration, which you can choose between

10, 15, or 25 seconds. But as you can see, it

has to allocate correctly across the

three scenes. So anyways, what I would

do is I would go ahead and copy the

request example once again like you guys

saw in the demo. And what I did here was

I set this up so you guys could

basically put in an image URL. And then

you could have three different scenes.

So this image, as you can see, is an AI

generated image of a curious,

adventurous frog. And then our three

scenes are basically that frog finding

treasure and jumping around in the

forest. So we're going to go ahead and

run this. I will say I have sometimes

had a 500 internal server error from key

when I've been trying to do these

storyboards, but we're going to go ahead

and give it a try. So I'll execute the

workflow. It's going to run our prompts

through this request. It's going to go

ahead and do the polling feature. And I

will also say when I've done storyboards

in the past, it's taken anywhere from

like 500 to 700 seconds rather than just

like, you know, your typical 180 to 250.
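Since the per-scene durations have to allocate exactly across the total (10, 15, or 25 seconds), it's worth validating a storyboard before sending it. A small sketch with hypothetical scene fields:

```python
ALLOWED_TOTALS = {10, 15, 25}  # total durations the storyboard mode accepts

def validate_storyboard(scenes, total):
    """Check that the per-scene durations add up exactly to the total."""
    if total not in ALLOWED_TOTALS:
        raise ValueError(f"total must be one of {sorted(ALLOWED_TOTALS)}")
    allocated = sum(s["seconds"] for s in scenes)
    if allocated != total:
        raise ValueError(f"scenes allocate {allocated}s, expected {total}s")
    return True

# Three scenes for the adventurous-frog example, summing to a 15-second video.
scenes = [
    {"prompt": "Frog discovers a treasure chest in the forest", "seconds": 5},
    {"prompt": "Frog pries the chest open, gold light spills out", "seconds": 5},
    {"prompt": "Frog leaps joyfully between mossy branches", "seconds": 5},
]
print(validate_storyboard(scenes, total=15))  # True
```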

So, this one may take a little bit

longer, but I'll just check in with you

guys and see if it worked or not. All

right. So, while this one's running, I

thought it would be a good opportunity

to just talk about prompting because

this has taken 740 seconds and I've just

been sitting here staring at it and I'm

getting bored. So, let's talk about

prompting a little bit. So, there are

people right now that are making a lot

of money by going into like marketing

departments and teaching them how to

prompt Nano Banana, which is one of

Google's image generation models, and

things like Sora 2 or Google Veo 3. Because

if you understand how to prompt these

things and you can make high quality

like UGC ads or VFX ads or scenes for

movies and TV, whatever it is, there's a

lot of money in that space because it

costs so much money to get drones out

there or have all the mics and you have

to make sure the weather's correct when

you're doing all of this in real life.

And so, if you can just generate these

scenes with good prompting, like I said,

it's a really, really cool opportunity.

So, I'm not a prompting expert when it

comes to these creative generation

models, but what I do know is there are

certain things that you want to have in

your video prompts. So, here is a very

very basic, high-level prompt that I have for

this AI agent that's going to take our

raw input, optimize it, and then shoot

it off to Sora 2. So, I'm sort of going

to skim over this, but you guys will be

able to access this for free when you

download the free template. And so what

I have here is you are an expert AI

video prompt engineer trained to design

optimized prompts for Sora 2. Your role

is to take a raw input concept and

transform it into a highly detailed

video prompt. So this agent needs to

always describe the main subject like

appearance, clothing, age, gender,

expression, and motion. The setting, so

what's the location, what's in the

background, what's the lighting like?

What's the time of day? All this kind of

stuff. The camera style, so the angle,

the lens, the type of video, if there's

any camera movement, all of that stuff

really matters. The idea is that the

prompt should sound like a professional

cinematographer describing a shot to an

actual like visual effects team. So

anyways, this agent will take a raw

input and transform it. So what I'm

actually going to do is I'm going to

stop this generation because we will be

able to check on that in our key

dashboard right here. You can see this

one is still running. And what I'm going

to do is drag this down here and run

this one. And so if you remember what we

did earlier with our text-to-video

prompt, which was up here: we had the one

of the professor saying to his students

that Sora 2 is available on Kie. We have

that exact same system prompt. So this

is the exact same raw input that we gave

earlier. And now what we're having the

agent do is take that raw input and make

it better and optimized for Sora 2. So

you can see now we get this huge input.

We get a lively cinematic classroom shot

as a sequence of natural documentary

style coverage that highlights a

charismatic professor and the engaged

students. We've got a wide shot which is

24 mm, tripod dolly. We later have a medium

two-shot, which is 35 mm, gentle handheld.

We've got a different type of reaction

cutins with 50 mm. We've got the um

quote right here. We have lip sync. We

have all of this kind of stuff. It even

gives us the overall tone, directorial

notes, all of this stuff. So now what's

happening is that got sent off to Sora 2

as you can see and it tells us that once

again it's generating this message and

so I'll check in with you guys when this

one has been completed and we'll compare

it to that video that we had earlier.

All right, so this one finished up. It

was a bigger prompt so it took a little

longer. It took about 300 seconds. So

first let's watch the original one that

had no prompting, just the raw input.

>> And here's the exciting part. Sora 2 is

now available on Kie AI, making it easier

than ever to create stunning videos.

>> You can experiment, iterate, and bring

your wildest ideas to life. Okay, nice.

So, that's still not bad, right? But

look at this one. Because we had the

best practices of prompting worked in,

>> Sora 2 is now available on Kie AI, making

it easier than ever to create stunning

videos.

All right, so I hope you could tell that

the second one felt a lot more cinematic

and there was different shots and there

was like different scenes going on in

there. So, I just thought that that's

pretty cool. And once again, the idea is

all you have to do is give it the raw

input and then the AI agent that you

prompt to specialize in making it into a

video prompt is going to take care of

that for you. And so, yes, it's nice

that we have kind of like a high-level

"here are some good practices for

prompting." But what you would do is when

you have your specific use case, like

let's say it's your UGC ads, on top of

giving it just these basic rules, you

would come in here and really tailor

this towards specifically UGC content

video prompting. And as you refine that

prompt and make it better and better,

your outputs are just going to get

better and better as well. So, for

example, and just to sort of hint at a

future video that may be coming, you

could have a Google sheet like this

where you have a product photo of your

own product, you have just the ICP (ideal customer profile), you

have the features of the product, and

you have a video setting, and that's all

you have to give it. And then your AI

agents in n8n could take all of that and

make the optimized script, make the

optimized video prompt, and then you're

just getting all of these UGC content

videos pumped out automatically because

you could have 10 coming out a day, 10

coming out an hour, however many you

want. Okay, so the storyboard video that

we generated together live here, it took

35 minutes, but it did finish up. So,

let's go ahead and give it a quick

watch.

Okay. So, that's pretty funny. You can

see what it did though is it was able to

use our image URL that we provided right

here. So, this is the character that we

wanted it to be and it was consistent

throughout the three scenes. It would

now just be a matter of making these

prompts a little bit better because all

of these prompts had, you know, not

really the elements that we discussed as

far as lighting, background, camera

movement, camera style, all that kind of

stuff. But hopefully now you can just

get a sense of how those storyboards

work and how you can control the scenes

and timing to create some consistent

character videos. The final thing I want

to talk about here is doing some data

cleanup. So something that you guys

didn't see that happened in this step up

here was our AI agent output this prompt

with new lines. So we had new lines and

then also down here you can see we had

double quotation marks and both new

lines and double quotes will break the

JSON body request. So as a best practice

you can use these expressions which I'll

show you guys in a sec in your request

to Sora 2. So in this body request you

can see I don't just have the output.

Let me actually just delete this real

quick. So this is the output of the

agent and on the right hand side is the

result. So you can see there's new lines

here which would break this. And then

also we have our wherever they are

somewhere down here. Right here we have

the double quotes. Sor 2 is now

available on KI. And so when we put in

this little expression, it's basically

replacing those new lines. As you can

see, they got chopped off. And I'm not

going to be able to find it again now,

but the double quotes would have been

removed as well. And that's how you make

sure no matter what your AI agent

outputs, because sometimes they'll throw

in new lines and double quotes even if

you prompt them not to, you can

basically get rid of that no matter

what. And so what I wanted to show real

quick is if I run this setup down here,

so you guys can kind of just isolate

those variables and look at it. We have

right here we have new lines. So this is

line one, this is line two, this is line

three, and then we have another text

over here with double quotes that says

pizza pizza. And if we run this second

one, we have the replace function up

here to get rid of new lines. And then

we have the replace function here to get

rid of the double quotes. And now you

can see on the right hand side they're

both coming out completely clean. So

that's how those replace function works.
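To make the cleanup concrete, here's a small standalone sketch of those two replace calls, the same `.replace()` chaining an n8n expression would use. The helper name is mine, and the exact expression in your own workflow may differ:

```javascript
// Sanitize AI-agent output before embedding it in a hand-built JSON body.
// Raw newlines and unescaped double quotes both break the JSON string,
// so collapse one and swap the other, mirroring an n8n expression such as
// {{ $json.output.replace(/\n/g, ' ').replace(/"/g, "'") }}.
function sanitizeForJsonBody(text) {
  return text
    .replace(/\r?\n/g, ' ')   // collapse new lines into single spaces
    .replace(/"/g, "'");      // swap double quotes for single quotes
}

// Example mirroring the demo: three lines plus a double-quoted phrase.
const raw = 'line one\nline two\nline three says "pizza pizza"';
console.log(sanitizeForJsonBody(raw));
// → line one line two line three says 'pizza pizza'
```

Swapping quotes rather than deleting them keeps the wording readable inside the prompt while still producing a valid JSON string.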

And then you can basically just copy these expressions and save them, so whenever you need that replace, you've got it right here. The last thing I want to talk about is errors, for when you're getting failures in your Sora video generations. Here are some failures that I got. You can see all of these have error code 500, which means something went wrong internally, and by internally I mean on Key AI's or Sora 2's server side. This could mean they're getting way too many requests, or that AWS went down, or something like that. But it could also mean your content is being restricted: maybe you put something in there that automatically gets flagged, and they're just rejecting the request. Here's an example of a Sora 2 Pro storyboard I ran, and you can see it got rejected.
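When the 500 is a transient server-side hiccup rather than a content rejection, wrapping the generation request in a backoff-and-retry can help. This is a rough sketch under assumptions: the `createVideo` callback and its `{ status }` shape are placeholders, not Key AI's actual response format.

```javascript
// Sketch only, not from the video: retry a Sora 2 generation request when
// the API answers with a transient 500, backing off between attempts.
// `createVideo` stands in for whatever HTTP call you actually make (an n8n
// HTTP Request node, fetch, etc.). Content-policy rejections can also
// surface as 500s, and no amount of retrying will fix those.
async function createWithRetry(createVideo, maxAttempts = 3, baseDelayMs = 2000) {
  let last;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    last = await createVideo();
    if (last.status !== 500) return last;   // success, or a non-retryable error
    if (attempt < maxAttempts) {
      // wait 2 s, 4 s, 8 s, ... before the next attempt
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  throw new Error('still failing with 500 after retries — check for flagged content');
}
```

In n8n itself, the HTTP Request node's built-in retry settings can play the same role without custom code.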

Internal error, and I'm not exactly sure why. But here's another Sora 2 storyboard that succeeded, and this one took almost 7 minutes. So, that was the high-level tour of the different ways you can use Sora 2. If you have specific use cases you'd want to see me build an automation or certain agents for, then definitely let me know down below.

I'd love to bring you more Sora 2 and Veo 3.1 content, because I think this creative AI space is super cool. And once again, you can download this entire template and get all of this stuff to play around with completely free. All you have to do is join my free Skool community; the link is down in the description. Once you join, just go to YouTube Resources or search for the title of this video, and in the post associated with it you'll find the JSON. Download that JSON, import it into your n8n, and this exact workflow will pop up. There's also a setup guide that shows you what you need to connect and all that kind of stuff.

And if you're looking to dive deeper with AI automations and connect with over 200 members who are learning and building businesses with AI every day, then definitely check out my Plus community; that link is also down in the description. Like I said, it's a super supportive community of members who are building businesses and sharing what they're learning, and we have three full courses. There's Agent Zero, the foundations of AI automation for beginners; 10 Hours to 10 Seconds, where you learn how to identify, design, and build time-saving automations; and, for our annual members or anyone who's been with us for six months, One-Person AI Automation Agency, where we talk about how to actually get in front of business owners and sell these solutions. On top of that, we have one live Q&A per week where I get to talk with you and have some pretty fun discussions. I'd love to see you on those calls and in these communities.

But that's going to do it for today. If you enjoyed the video or learned something new, please give it a like; it definitely helps me out a ton. And as always, I appreciate you making it to the end of the video. I'll see you in the next one. Thanks so much, everyone.
