
Seedance 2.0 Officially Public! Full Prompting Tutorial (Claude + Higgsfield)

By Rourke Heath

Summary

Topics Covered

  • One Prompt Makes You the Director
  • Years of VFX Experience Replaced by One Prompt
  • Hundreds of Marketing Videos a Day
  • AI Understands Context Like a Director
  • After Effects Is Now Obsolete

Full Transcript

Check this out. In today's video, you're going to learn everything you need to know about Seedance 2.0: how it can be used with real-life footage to create shots like this, how it can create VFX shots like this, and how it can replace After Effects by adding camera-tracked text [music] into scenes. You'll learn how to use it for effective storytelling techniques so that you become the director and create shots like this in one [music] prompt. "Where the hell am I?" And you'll see how it can be used for marketing, creating Meta ads in [music] one prompt to get seamless video shots like this. There's a lot to cover in this video, so let's dive right in.

Hello everyone, quick introduction. My name is Rourke Heath, and I'm the CEO and founder of GenHQ, the largest creative AI education platform in the world. To get started with Seedance, the first thing we need to do is create effective prompts, and to do that, we're going to use Claude. I've created a Claude skill, which I'm going to leave in the description so that you can download it for free and create effective prompts every single time you prompt with Seedance. I'd recommend you download it right now so that you can follow along.

Once you're inside Claude, we're going to install that skill. Go to the left-hand side, click on Customize, then go to Skills, click the plus button, and choose Create skill. Then press Upload a skill, click Select, and this will bring up your file manager. If you just downloaded the skill, it'll come up right at the top; if you've downloaded it somewhere else, as I have here, you'll find it there. It's called the video prompt builder. Double-click on it, and if it's installed successfully, you'll see it in your personal skills section under the name video prompt builder.
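For context, Claude skills are packaged as a folder (or zip) whose entry point is a SKILL.md file with a short metadata header. The sketch below is only illustrative of that packaging format; the names and instructions are assumptions, and the actual file in the download may be organized differently:

```markdown
---
name: video-prompt-builder
description: Build detailed, timestamped video prompts with cinematic and VFX terminology.
---

# Video Prompt Builder

Break every prompt into 2-second shots with timestamps. For each shot,
specify lighting design, color palette (with hex codes), camera behavior,
particle and atmospheric effects, character presentation, environment arc,
and mood/tone.
```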

Now, each time we write into Claude, all we have to do is something like this: "Write me a video prompt that will create a seamless transition shot where the green car transforms into a transformer." That gets written into the prompt bar for us; I'm using an AI tool called WhisperFlow that lets me dictate, and it types what I say into the prompt bar. If you're interested in WhisperFlow, you can find it on your own. I have no affiliation with them whatsoever, but I use it all the time.

The next step: you can either add in the first frame of the image you're trying to turn into a video, or you can just prompt like this and it will create a prompt for you. But let's say I had an image I wanted to use as my first frame, like this image here, for example. We can just double-click on it in our files and upload it into Claude. Then we hit submit, and what you'll see happen is that the Claude chat says, "Reading the video prompt builder skill." This is the Claude skill at work; if you've never used one before, that's fine, and if you're familiar with them, fantastic. What it's doing is referencing the skill that I've created for you. So what's inside this skill, just so that you have a bit of contextual knowledge?

Well, inside the skill is all of this data, which I have collected and written out, and it breaks every single prompt down in extreme detail. It includes things like lighting design, specific hex codes for colors, color palette, camera behavior, particle and atmospheric effects, character presentation, the environment arc, the mood and tone of the shot, and how to prompt, so it gives Claude a structure to follow and specific elements to include. Another thing I've put together is a frame-by-frame breakdown so that Claude understands the level of detail we want to go into for our prompt. It's full of incredible terminology, and it breaks everything down into actual timestamps, too, so it knows to get granular enough with the prompt to give us incredible outputs. It has an unbelievable amount of context in here, heaps of proper VFX language for our prompts, so we get great output. You'll see here it's now broken this prompt down into shot one, which runs from 0 to 2 seconds; it basically works by breaking every single scene down into 2-second chunks. But let's say I wanted this to be one seamless shot, so it's not broken up with the camera cutting every 2 seconds. Then we just tell it that, and it will actually do that for us. And

because I mentioned the words seamless transition in here, at the end of our prompt you'll see it says, "No cut, no end card. The frame rests on the figure's glowing eyes and the city stretching out behind it." So what you can do is copy your prompt, and note that I only ever copy it from where it says shot one down to the very bottom here.
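The 2-second shot chunking described above is simple enough to sketch. This is my own illustrative reconstruction of the breakdown the skill produces, not code from the skill itself; the function name and label format are assumptions:

```python
def shot_list(duration_s: int, chunk_s: int = 2) -> list[str]:
    """Split a clip's duration into fixed-length shot windows,
    mirroring the skill's 2-second-per-shot breakdown."""
    shots, start, n = [], 0, 1
    while start < duration_s:
        # The final shot may be shorter if the duration isn't a multiple of chunk_s.
        end = min(start + chunk_s, duration_s)
        shots.append(f"Shot {n} ({start:02d}-{end:02d}s)")
        start, n = end, n + 1
    return shots

print(shot_list(6))
# ['Shot 1 (00-02s)', 'Shot 2 (02-04s)', 'Shot 3 (04-06s)']
```

A 15-second generation would come out as eight windows, with the last one trimmed to a single second.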

Don't include the master effects inventory or any of the effects density map; you don't need to worry about those. The only part you need to worry about is the shot list. And let's

say you wanted the video to only be 6 seconds long. Well, in your prompt, you

seconds long. Well, in your prompt, you can just ask it to be 6 seconds long, and then it will write out that prompt for that specific time period. Something

that's also important to mention is that SeaArt only allows prompts that are 4,000 characters or less. And some

platforms only allow 3,000 characters or less. So, if your prompt goes over, do

less. So, if your prompt goes over, do not panic. All you do is copy the

not panic. All you do is copy the prompt, give it back to Claude, and ask it to make it underneath that character limit. So, for example, if this was over

limit. So, for example, if this was over 4,000 characters, I would just copy the prompt, paste it back into Claude, and just say, "Make this under 4,000

characters." I didn't include that in

characters." I didn't include that in the skill because it changes depending on which platform you end up using. So,

you can choose how long you want that prompt to be. So, you'll see here this is now under 4,000 characters, so we can copy this prompt, and then we could chuck that right into SeaArt to get incredible outputs, and we can use that

first frame that we just showed at the start as well to get an incredible result. So, here is that output, and you

result. So, here is that output, and you can see that on the screen right here.
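The character-limit workflow above amounts to a quick length check before pasting. A minimal sketch, assuming the limits quoted in the video (4,000 characters for Seedance, and a conservative 3,000 where caps are tighter); the numbers and names here are assumptions, not documented constants:

```python
# Limits taken from the video's description and may change; treat as assumptions.
PLATFORM_LIMITS = {"seedance": 4000, "higgsfield": 3000}

def fits(prompt: str, platform: str) -> bool:
    """True if the prompt is within the platform's character limit,
    i.e. safe to paste without asking Claude to shorten it first."""
    return len(prompt) <= PLATFORM_LIMITS[platform]

shot_list_prompt = "Shot 1 (00-02s): macro zoom on the watch face..."
print(fits(shot_list_prompt, "seedance"))  # True
```

If `fits` returns False, that's the cue to paste the prompt back into Claude and ask for a version under the limit.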

[music] And you'll see that we got an incredible transformation in one seamless shot. The next thing we're going to learn is how you can use Seedance 2.0 with real-life footage to create cool shots. Here's an example of that right now.

So, I recorded myself in my living room, and I go to snap my fingers here. I paused the video at that point, took a screenshot of that frame, brought the frame into Seedance 2.0, and used the prompt you can see above my head right here. Then we stitched that with the real-life footage to create this seamless transition effect, and it creates this transformation here, which would have taken years of 3D experience before, but now we can achieve it in a single prompt. So, let's walk through how we could create this together. Step

one would be to record yourself on your iPhone. For example, here I have this video of me reaching over and pressing the watch I have right here. What I'm going to do is take a frame from the moment I touch the watch, which gives me this still frame here. We can upload that into Seedance 2.0 as our start frame, so let's do that together right now. You can access Seedance 2.0 on Higgsfield, and I'll leave a link to that in the description as well. If you go to the video section here, you'll see Seedance 2.0. When you click on it, you'll want to upload your media here, so you can simply drag and drop that image in, and you'll see that it says uploading. It actually checks all of the media you upload to make sure there are no copyright issues or anything like that, so it runs through a few checks first to make sure the image you've uploaded is okay to use. Here we go: it now says eligible, so we can click on it, and it will load as our start frame. Now we can simply go to Claude, and I'm going to upload that start frame into Claude as well. With your start frame loaded in, we can simply tell it what we want to happen.

I'd like to create a video prompt that is 6 seconds long where the man touches his watch, and a graphical UI screen pops up with a bunch of different dinosaurs on it. Then I would like the man to scroll through the UI and click on one of the dinosaurs, and then it will load in a dinosaur behind him, which lets out a giant roar, with a really nice satisfying load-in effect.

So, we can run that prompt and see what happens. You can see here we now have the first part being written out: from 0 to 1 seconds, we've got the watch tap, with a macro digital zoom scale-in. It's writing out all of the key terminology: the camera scales in slightly towards the watch face as it activates. It's using terminology I would never think to prompt in my day-to-day because my vocabulary simply isn't that good. Now it's going to write out all of the other shots: the holographic UI browsing phase, a selection confirmation, and then the dinosaur

loading in. You can see that it's gone through sequentially, and it's 6 seconds long. Now, Higgsfield only allows prompts of around 3,000 to 3,500 characters, I believe, so it's probably safest to aim for under 3,000. So, let's make this prompt 3,000 characters or less. Now we can go ahead and copy this new prompt, which is under 3,000 characters, and paste it in. With everything loaded in, you can change the aspect ratio here to 16:9 or whatever aspect ratio you're looking for, and adjust the duration to 6 seconds; you can go up to 15 seconds in this, by the way. We want 6 seconds, and then you can hit generate. And when you connect that with the start of our video, you can see the output here.

all [laughter] this.

So now we've got that clip: we've got the UI, and then the nice dinosaur loads in behind us. That's really effective. Next up, we're going to learn how we can use Seedance 2.0 for marketing purposes. How can we create Meta ads, for example, that are 15 seconds long with one seamless prompt? Well, let's do it together right now. This example I'm about to show you was one of the first generations I made with our Claude skill, and it shows you just how powerful it really is. Take a look at this.

[music] Yeah, that's right. It created all of those VFX, all of the text effects, and edited it all together from one single prompt, and that wasn't even a 15-second-long generation. You can now literally create fully edited, fully customizable videos that are 15 seconds long, and you can create hundreds of generations a day, which you can then run as different adverts with Meta ads. You can upload your product as an Omni reference inside of Seedance, then use our Claude skill to describe the kind of effects you want, and it will create the videos for you. It is mind-blowingly good.

The next thing we're going to learn is how you can use it for effective storytelling. This is where I feel Seedance 2.0 has one of the biggest unlocks for creatives. So, this is a full AI production that our team put together to show you what's possible with Seedance right now. Check this out.

Where the hell am I?

What the hell?

[music] Hello, ma'am?

What the [music] Oh god, please tell me this is AI.

The [music] best video model today.

Seedance 2.0. [music]

Amazing, right? So, this was all put together in one day to create this mini sequence, and all we did was pretty much use the skills we just taught you: we used the Claude skill to write out all of the prompts, and with all of those things combined, it created all of these amazing shots for us, which we were able to cut together into one sequence. So, I'm

going to introduce a new concept to you called Omni reference. What Omni reference means is that you upload ingredients to the prompt bar. For example, we could upload a character sheet of me, an image of a location, and a character sheet of a second character. With all of those ingredients, Seedance will understand all of that context and create a video that contains all of those elements. Now, why would we use this over creating a start frame? I'm going to show you right now.

Let's take a look at a couple of examples. All of the video clips you're going to watch here were made with Omni reference. This one used a character sheet of me and an image of a specific location. And this shot here used an image of a location, a character reference of this mannequin, and a character reference sheet of me. So it has all of this contextual understanding of what I look like and what the mannequin looks like, and it can create effects like this, where the mannequin falls apart and you see the head roll into frame, and it's actually the correct mannequin head. That's because it has that contextual understanding from the character sheet.
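Conceptually, an Omni reference generation bundles several labeled references with one prompt. The structure below is purely illustrative; neither Seedance nor Higgsfield publishes this as an API shape, and every field name and filename here is invented just to show the "ingredients" idea:

```python
# Hypothetical structure -- not a real Seedance/Higgsfield API payload.
# It captures the idea: several labeled references plus one prompt.
omni_request = {
    "references": [
        {"kind": "character_sheet", "file": "me_character_sheet.png"},
        {"kind": "character_sheet", "file": "mannequin_sheet.png"},
        {"kind": "location",        "file": "apartment_interior.png"},
    ],
    "prompt": ("Handheld shot: the mannequin falls apart and its head "
               "rolls into frame; match both character sheets exactly."),
    "duration_s": 15,
    "aspect_ratio": "16:9",
}
print(len(omni_request["references"]))  # 3
```

The point of the shape is that the model sees every reference at once, rather than being anchored to a single start frame.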

Something interesting and worth noting here is that because it has contextual understanding of this character sheet, it understands what my face looks like. You'll notice that this shot is what storytellers call a character reveal, where you can't see the character's face at the start of the video: my head is down like this, and I'm rubbing my eyes because I've just woken up, and then my face is revealed. And it's actually me, right? I mean, it hasn't got my nose quite right, but that's okay; it's not perfect, and I could have made the character sheet better. You'll also see that it's actually doing the camera cuts for us. As we go through these shots, we've got this nice handheld camera motion, and we're using the Claude skill I made to create these shots, by the way. Then it cuts to this seamless transition here, and notice that it's perfect: as my body rotates, the character rotates too. It's a seamless shot, which is exactly how directors think when they're creating shots like this.

It edits together perfectly, and we haven't had to do any post-production work on it. Now, this next shot is where we get the classic VFX. To get this shot, again, I used Omni reference. All I did was upload an image of the general location from the outside, which you can see above my head right here; it's just an image of the streets from a bird's-eye, or aerial, kind of view. And with a prompt like this, Omni reference will actually create a unique shot and a really cool effect like this.

So overall, incredible VFX, and it actually creates multiple shots because we prompted it to. This is a 15-second-long generation, and you can see it goes a little bit crazy towards the end, but this shot here is incredible, right? To think how far AI video has come; this is off the charts. The fact that we can do this for a very minimal cost now is mind-blowing.

Now, here is another example of where you can use Omni reference. Here we used the same reference image that we provided for the previous generation, but this time we also uploaded an image as a reference for the text: a 3D image of text that just said Native Audio.

And with this, it was able to insert that text inside our scene for us. So, if you want to add floating text into your scenes, you can upload your reference of the location and a reference of the text you want to add, and then just tell Claude, "Hey, I want to add this text floating in 3D inside the scene. There should be a shadow underneath the text where it's floating." Then you can create text that is literally built into the scene, right?

Before, you would have had to do this in After Effects, and now you can achieve it flawlessly with Seedance. And not only that: let's say an explosion goes off; you can have the text exploding in the scene as well. And look at it now, going in slow motion and rotating inside the scene. So, everything you need to get started with Seedance 2.0 is available in the description. We have the link to the Claude skill that you can use, and we also have a link to Higgsfield, where you have unlimited generations with Seedance 2.0 right now, as well as access to all of the other best video models in one place. Thank you very much to Higgsfield, as well, for sponsoring today's video. And if you're interested in turning creative AI into your full-time career, then you can check out GenHQ, which is also in the description. Our students have generated over $350,000 in revenue since we launched the program 8 months ago, and our company mission is to create 100,000 new jobs in the AI space in the
