The 86 Minute Guide to Building Apps With AI (Full Course)
By Riley Brown
Summary
Topics Covered
- Vibe Coding Equals Content Creators
- Extract Philosophy from Videos
- AI Extracts Your Unique Ideas
- Nodes Chain AI Models Seamlessly
- One API Key Unlocks All Models
Full Transcript
Hello there. I'm Riley Brown and today we're going to be talking about vibe coding AI apps. This is going to be a complete guide. Software has completely changed because of AI. If you look at the way it was three years ago with Google, you typed in a keyword or a phrase and you got a list of pages. Now we have ChatGPT, and it's much more of a conversation. A whole different experience, one that is much more direct and just a lot better. And this doesn't stop at ChatGPT, right? In Photoshop, you used to have to manually edit all of the layers in a photo. Now with Nano Banana, you can just describe the change to any image and that image will change, right? You used to have to manually shoot all of your footage to get B-roll. Now you can just generate videos that look photorealistic. This is a complete guide to building professional apps that actually use this futuristic technology.
This video will be divided into four main parts, right? We're going to start off by understanding the popular tools. Then we're going to move into comparing the tools and creating workflows, comparing all of the popular models. Then we're going to talk about integrating these tools that we do some testing with. And then we're actually going to build an app. We're going to build two apps, actually: a web app and a mobile app that leverage this technology. It doesn't matter how much experience you have. If you have been asleep under a rock and have never heard of vibe coding, you will still be ahead of 99% of people by the end of this video. I've made a similar video in the past. I called it the beginner's guide to AI, and it was kind of this massive sweeping overview of all of the different AI tools. This one will be somewhat similar, except this one is focused on vibe coding, right? It'll prepare you to vibe code apps that use AI technology. Okay, so who am I? I am a content creator turned startup founder with over 1.4 million followers across social media. Our company does over seven figures in revenue per year, and we are the number one mobile-first vibe coding platform in the world. That's just a little bit about me. So who is this video for? This video is for people with any amount of coding experience. I would say that if you're a zero to a seven in level of coding knowledge, you are going to learn a ton. If you're a seven to a ten, you're going to learn a very good amount. I believe there's going to be tons of value packed into this video, specifically around all of the AI tools that I use and how I integrate them very quickly in the apps and workflows that I create. And so I promise you that this is actually going to be the most valuable video that you've seen on AI, specifically around vibe coding, on the entire internet. So I recommend grabbing a cup of coffee and your laptop, and let's learn how to create apps using AI. We're going to be talking about 10 different tools, building multiple workflows, and building multiple apps with many AI features.
So, first I want to talk about how vibe coding in some circles has a negative connotation, and it's very similar to the way that content creators started. The concept of content was very cringe early on. I remember Gary Vee started saying it a lot in the early days, and whether or not you still think it's cringe, it doesn't actually matter, because content creators have quite frankly taken over the world in terms of narrative dominance, their cultural impact, and the way they drive revenue to different places. I believe we're on a trajectory where vibe coder is the next content creator, or vibe coder is the software equivalent of content creator. In fact, vibe coding is taking over the world. It was just named Collins Dictionary word of the year, and this is mostly because AI is getting really, really good at coding. It's created a world where building, or at least building your MVP, is just one prompt away. And I would say that currently vibe coding is easy to start and hard to master, and this makes it a very valuable skill that you can take on. It's easy to get started, right? You can build some basic applications, but it is really hard to master. You could say, well, why don't you just learn to code? And you could learn to code, but in my opinion, it's much more efficient to just learn how to become a good vibe coder. The tools are now coming out the same way that CapCut was released. You could become a good video editor or a good content creator simply by learning CapCut, which is a much simpler editing process than using a pro tool like Premiere Pro. You can now be a videographer. You can make millions of dollars per year as someone who creates videos without even knowing how to use the professional editing tools. And that is the same thing that's happening in vibe coding, and companies are hiring vibe coders, like, straight up. Visa is hiring a vibe coder: familiarity with vibe coding tools such as Bolt, Lovable, and v0. Here are six other positions posted. It's here. It's happening. And so vibe coding has made the App Store accessible to anyone, right? The hours of learning needed to get an app on the App Store is dropping, and it is dropping to zero. Hundreds of people yesterday submitted their apps to the App Store without writing a single line of code, and Apple accepted them. And so what is vibe coding?
You likely already have a general definition in your head, which is something like building software without touching the code, using AI: text to app or voice to app. And you can vibe code web apps. You can vibe code mobile apps. You can vibe code apps that are actually on the internet that make you money, or you can vibe code internal tools that you and your team can use, or just you personally. What we're focusing on specifically in this video is the tech that you put into your apps, not necessarily the AI coding tools themselves. So it's less about these wrappers that you type into, where you go text to app. There are tons of videos on the internet about that. This video is more about the tech that powers these applications: things like LLMs, image recognition, image generation, image editing, video generation, video editing. We're going to talk a little bit about agents. We're going to talk about workflows, tool calls, web search, APIs, and more. And then we're going to talk about the logic behind it all, right? Like when to use this technology and how to find good ideas to turn into technology, or to turn into an experience with software. That's a concept I've become much more obsessed with: how do you use this futuristic technology that's becoming widely available to anyone, even if you don't know how to code, to create an experience with software that people are willing to pay for? That is a completely new concept that anybody, no matter how much experience you have, can take on. And that's what we're going to be talking about today. So before we dive into those four sections I was just talking about: if you're interested in the biggest vibe coding course ever created, in partnership with the biggest AI companies in the world (I can't even say who those are yet), there's a link in the description if you want to sign up for that early. There are going to be huge benefits if you sign up early. Anyway, I'm not going to talk about this anymore. Let's dive into the meat of the video and start actually learning and using these tools. Strap in. This is going to be a long one.
Let's do it. Okay. So, what I'm going to do right now is rapid-fire go through all of the tools that I use, or most of them. I do not think you should follow along in this section. What I'm trying to do in section one is get your juices flowing so you understand the way that I use these AI tools and the way that our company uses them. I work with seven or eight of the smartest people in the world at our startup here in San Francisco, and they are incredible at using AI. No one has a real rigid way of using it; they just default to it whenever they run into a problem, whenever they need it. We have ways in which we use AI for different things within the business, and anytime there is a repetitive thing that we need to do over and over again, we just turn it into a little piece of software that anyone within the company can use. It has become a massive amplifier, and we're able to grow our company at a rapid rate without increasing headcount, because we all use AI very efficiently, effectively, and most of all creatively. And so I'm about to go through a lot of the tools that I use. All right, so let's get started. This is ChatGPT on chatgpt.com, and we're going to start off with a temporary chat because this doesn't have any memory. If you first download ChatGPT, this will be your experience.
And so let's say we're using a prompt like this, right? "I am writing a presentation for a video. Vibe coding AI apps complete guide. Please write it." This is going to be a very poor prompt. One, it doesn't have any of my experiences, because I'm writing this from experience and the presentation is deeply rooted in what I have learned over the past year. AI has no idea what I've learned over the past year unless you actually give it context. It also doesn't know what a good presentation is, because you haven't told it what a good presentation looks like. You haven't given it that underlying logic or philosophy on how to create a really good presentation according to your terms. Because remember, this AI model was trained and reinforced by people who have differing opinions on what a good presentation looks like. Is a good presentation a bunch of gobbledygook nonsense given at some corporate company, like a Microsoft marketing presentation that's more about checking the boxes than actually delivering value? You want to make sure that you give it proper instructions and the right context, right? So it understands your experiences and can actually deliver the value that you want; you need to give it proper guidance. So this is an example of a prompt that is not very good. Let me show you one way that I now use AI to create really high-quality presentations. It comes from a philosophy in a video that I saw on YouTube, and the video is called How to Speak. I'm telling you right now, if you want to give good presentations — this will come off as, like, the most boring interview. This guy's very monotone and boring. But by the 10 to 15 minute mark, you will be just so into this video, because it's so good. He talks very clearly and concisely about all of these ideas on how to actually evoke strong emotion in your presentations, and it's just very, very good. It's a very good video, and it's something that I want to distill into a philosophy.
Luckily, it's very easy to do that. So, we copy this link, open up a new tab, go to Google, and type in "get YouTube transcript." We can just go to whichever one; this is not sponsored by this company, I've never heard of this company before, and the top result changes. We're just going to copy this. Let me see if that worked. No, it doesn't look like it did. There it is. So now we can copy this transcript. I believe that has copied to my clipboard now. If we were to go to a doc, we could just paste this transcript in. Now we actually have context for everything that he was saying in the video, right? And what we can do is copy this and go to ChatGPT. Let's open up a new chat: "I want you to analyze this transcript and create a concise bullet distillation of all of the important parts of this video regarding how to structure your presentation. I want you to ignore the stuff he talks about" — and this is why it's important to actually watch the video. I watched the video, and part of what he talked about is when to have lectures, because it's a talk about lectures. But I find it a very interesting conversation about how to present and share ideas, and not to have too many words on your slides; it's just a really good presentation on how to structure your lectures. So: "ignore the stuff about lectures and anything not related to how to structure your presentations, because I want to use this as the underlying philosophy for how I'm going to prepare for my YouTube videos." Then I can paste in the transcript and we can run it. Now AI will analyze it, and it's going to create a kind of underlying philosophy, after thinking for a bit, on how to create videos in the style that he teaches in this video. So, we're going from video to philosophy.
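The copy-the-transcript step can also be scripted. Below is a minimal sketch: the commented-out lines assume the third-party youtube-transcript-api package (the video uses a web tool instead, so this library and its call signature are an assumption, not what's shown on screen); only the pure flattening step runs as-is.

```python
# Sketch of turning YouTube captions into prompt-ready context.
# The flattening step is pure Python; fetching is shown commented out.

def flatten_transcript(segments):
    """Join caption segments (dicts with a 'text' key) into one block
    of text suitable for pasting into an AI chat as context."""
    return " ".join(seg["text"].strip() for seg in segments)

# Hypothetical fetch using the third-party youtube-transcript-api package
# (requires pip install; exact API may differ by version):
# from youtube_transcript_api import YouTubeTranscriptApi
# segments = YouTubeTranscriptApi.get_transcript("VIDEO_ID")
# context = flatten_transcript(segments)

sample = [
    {"text": "Welcome to How to Speak.", "start": 0.0, "duration": 2.1},
    {"text": "Start with an empowerment promise.", "start": 2.1, "duration": 3.4},
]
print(flatten_transcript(sample))
```

The flattened text is exactly what gets pasted into ChatGPT along with the "distill this into a philosophy" instruction.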
And there we go. After about 30 seconds — it thought for 20 seconds using the most powerful AI models in the world; with ChatGPT, if you have the Pro plan, it uses the best models — here's a tight distillation of Patrick Winston's How to Speak. And here we go: this is a very good outline of exactly how to create a presentation in this style. Now what we can do is copy this. What I like to do, and what I advise you to do, is start saving these little nuggets of philosophy and refining them over time. So I'm going to open up a notes app that stores everything locally. I'll explain why we do this later, but basically, just know I'm opening Obsidian, which is a note-taking app that allows me to save files locally. Here we just have the notes app; I'm going to full-screen it and close out of this. I'm going to change the title to "writing presentations philosophy," and now we can just paste this in. This is just a very concise outline. So basically, what we just did is solve for the structure. But that still leaves my ideas and perspective. I do not want to give a cliché presentation. I don't care how well it's structured; if it's not based in my own experiences, then it's not even worth presenting.
And this is just what I've learned over constant iteration of creating content: you need to figure out a way to use AI to pull out your ideas and your perspective, so that it can actually create a high-quality, accurate presentation based on your own experience. That is what we're trying to do. So how do we solve for ideas and perspective? One way is you could just list them. Or here's another fun thing you can do with AI, which I'm going to show you right now. If we go back to ChatGPT — since we saved that locally in this Riley philosophy document, you can see here that we have this writing philosophy MD file. We can immediately use it in any AI tool that we want. And you might think, oh, but you can actually save these into Projects in ChatGPT, for those of you who know Projects. And that's true. But as we'll see later, when you get into the more advanced tools like Claude Code, and tools like Cursor as well, having these files locally on your machine is incredibly valuable. I'll get to that later, don't worry.
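The reason local files work across every tool is that they're just plain Markdown on disk. A minimal sketch of the idea (file names and note content here are examples, not the exact files from the video):

```python
# Keep "philosophy" notes as plain local Markdown files (as with
# Obsidian in the video) so any tool that can read files — ChatGPT
# uploads, Claude, Cursor, Claude Code — can load the same context.
from pathlib import Path

notes = Path("notes")
notes.mkdir(exist_ok=True)

philosophy = notes / "writing-presentations-philosophy.md"
philosophy.write_text(
    "# Writing Presentations Philosophy\n"
    "- Start with an empowerment promise\n"
    "- Cycle the key ideas; build a fence around them\n",
    encoding="utf-8",
)

# Any tool can now re-read the exact same file as context:
print(philosophy.read_text(encoding="utf-8").splitlines()[0])
```

Because the note lives on disk rather than inside one vendor's "project" feature, refining it over time benefits every tool at once.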
But as you can see here, I can just drag in this philosophy. I can say: "Given this philosophy of creating a presentation, and the structure defined in it, I want you to ask me five questions, one by one, on what I believe about the topic I'm about to give you. After you have gathered the answers, you will create a presentation that follows the guideline. Right? You are gathering information through five questions, and then you are going to attack the task of writing this presentation. I want you to dig deep and ask me questions one at a time that really pull out the great ideas and thoughts."
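The interview flow above has a simple shape: collect answers one at a time, then bundle philosophy, topic, and Q&A into one final generation prompt. A minimal sketch, assuming a plain-text prompt format (the function name and wording are made up; in the video this all happens interactively inside ChatGPT):

```python
def build_presentation_prompt(philosophy, topic, questions, answers):
    """Bundle the philosophy note plus the one-by-one Q&A into a single
    prompt, ready to send to any chat model for the final write-up."""
    qa = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(questions, answers))
    return (
        f"Philosophy:\n{philosophy}\n\n"
        f"Topic: {topic}\n\n"
        f"Interview answers:\n{qa}\n\n"
        "Now write the full presentation following the philosophy."
    )

prompt = build_presentation_prompt(
    philosophy="Start with an empowerment promise.",
    topic="Vibe Coding AI Apps (Complete Guide)",
    questions=["What is the outcome for the viewer?"],
    answers=["They can build apps with the most popular AI tech."],
)
print(prompt)
```

The point of asking one question at a time is that each answer becomes explicit context the model would otherwise never have.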
So yeah, go ahead and do that. The topic of this video is vibe coding AI apps using AI. And we can even just copy this whole thing: "The official title is the following." Okay. So now AI can analyze this presentation philosophy, like How to Speak and how to give a high-quality presentation. It's going to read it, and now it's going to ask for my perspective, right? We're using AI to ask me questions. And what you're seeing here is a workflow where you might think, oh, I don't want to do that every time I want to create content; I don't want to have to copy it from YouTube. But this entire experience, right here, can be turned into software that makes it immediate. You can use this philosophy and this question flow instantly by baking it into a piece of software. This is a problem that I've solved for myself, and at the end of this video, we're actually going to make a piece of software that helps with this, goes beyond it, and actually makes content for different platforms. I'll get to that later, but I just wanted to foreshadow it; this is the direction we're going. We're taking this philosophy, we're taking this process of it asking me questions and then generating the script, and we're going to create software out of it. So, I'm going to very quickly answer these questions. I don't think it's super useful to do it on camera; I think it'll take me about 10 minutes, and then I'll come back and show you what the final result looks like. And by the way, I really don't like typing, so I use a tool called Wispr Flow. I talk about it in all my videos, and that's how I do the voice-to-text here. I can just press one key and immediately start answering these questions rapid-fire with voice. I like it much more than the voice mode in the OpenAI app; I'm not a huge fan of that. "So, the viewer will be able to create apps with the most popular technology in the world. It's going to be awesome. There's so much technology out there that people can use in their own apps, and they don't even need to know how to code. This is going to be a complete guide that teaches them end to end exactly how to do that." Boom. There's question one, right? You got it.
So, your core belief: what is the fundamental belief you hold about vibe coding that you want the viewer to internalize? "Anyone can do it, right? Vibe coding makes software experiences accessible to anyone. You can just pick up a computer, and if you understand the basic logic and the main AI tools, you can mix and match them almost like Legos to create a high-quality application." So, what do you think is the real reason people still never build their app? "I think people just haven't taken the time, right? The few hours it takes to just mess with these tools, learn how to use them, and then learn where they can use them in different workflows in order to make a difference in their lives. And they don't have a problem they need to solve. They don't have a startup. They don't create content. They don't try to solve problems in novel ways. So, they just never really think it's possible for them. But if they just took the time to learn for five hours, tops, they could really start building really useful and novel things." And so we're just answering these questions.
Okay, this is question five, and I'm just going to say "create the full presentation." And now it has all of the context, right? It's writing this right now. It not only has the structure; it also has the ideas and perspective that are mine. I gave them to it because I asked AI to extract them from me, which is a super underrated use of AI: getting AI to extract your ideas from you. And so if we go back here — all right, here's a clean, high-impact presentation. And this is very cool. "Most people walk around with billion-dollar ideas in their head and never build them. Not because they're not smart enough, not because the tech is too hard; it's because they've never taken a few hours to learn the tools that make software accessible to everyone. This guide fixes that." You know, this might legitimately have been a better intro to the video that I'm making right now. "By the end of today, you will be able to create apps using the most powerful technology in the world without writing a single line of code, and I'll show you how to understand the tools, combine them like Legos, and build AI-powered apps end to end. This isn't hype. This is practical, tactical, complete." So, the core belief: "here's what I believe." Yeah, this is great, actually. And so, this is a high-quality presentation outline. The entire process I went through right there is something that we could turn into a platform that could look unique, feel unique, and become an app that many of you use and maybe even pay money for, if we really refine the philosophy and get it to ask the user really good questions, so that it creates high-quality presentations like this every time. Because we can actually go beyond this.
In fact, I'm going to do that right now. If I wanted to go beyond this, we could open up Claude. And again, I don't expect you to download all these different AI tools. I work at an AI company; I'm in the center of it, so I use all these tools. I'm not saying you need to adopt this philosophy, but I personally like to use Claude. We can paste in that presentation: "I want you to create a slide deck on the following. Please only put a maximum of 20 words per slide, but also have the script for each slide in a collapsible, default-collapsed state, and include charts." Okay, so right now Claude can actually create the presentation. But I don't want it to look ugly, so I want it to have a style. If we go back into Obsidian, we can write a new note that we can always reuse. We could call it "presentation style guide." And what we could say here is: "I want to use the Inter font. Don't have any curved edges in diagrams; make everything right angles. Make it kind of a light Japanese vibe with a very light background. It should be black, gray, and white only. You can use different shades of gray and add borders to the slides. Make it very elegant, like an esoteric professional designer — someone who aspired to work at Apple but went off and did their own thing, and now they're very successful and created a very cool agency, and they love interior design and it impacts how they do their art." I don't know, this is a random example, but again, this creates a file that we can always use, and it will take some refining, right? After you create a presentation with this style, you'll want to adjust it. And so we can always just open up Finder, and now we have this presentation style guide that we can use at any time: "make it in the style of this presentation." Boom. Now we can use Sonnet 4.5. And Claude, by the way, is just like ChatGPT except I think it has a better Artifacts feature. We'll see how it does. You can see here it's actually writing code for this slide deck, and it's going to create the full slide deck based on the presentation, because we gave it the presentation and now we have a presentation style guide. I think you're seeing the whole workflow come together, right? We're going from idea, we're giving it a structure, then we're turning it into an actual presentation, and it's going to come together in this clean slide deck, and we're using all these different tools.
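That idea-to-structure-to-slides chain — transcript → philosophy → outline → styled deck — can be sketched as a pipeline of "nodes," each one transforming a shared context. This is a toy illustration of the chaining pattern only; the lambdas below stand in for real AI model calls, and all names are made up:

```python
def node(name, fn):
    """Wrap a step so it writes its output into the shared context dict
    under its own name, where later steps can read it."""
    def run(context):
        context[name] = fn(context)
        return context
    return run

def run_pipeline(context, steps):
    for step in steps:
        context = step(context)
    return context

# Each lambda is a placeholder for an AI model call (LLM, image model, ...).
pipeline = [
    node("philosophy", lambda c: "distilled philosophy of " + c["transcript"][:12]),
    node("outline",    lambda c: "outline based on " + c["philosophy"]),
    node("slides",     lambda c: c["outline"] + " rendered in style: " + c["style"]),
]

result = run_pipeline(
    {"transcript": "How to Speak ...", "style": "light, right angles"},
    pipeline,
)
print(result["slides"])
```

Because each node only reads and writes the shared context, you can swap which model powers any step without touching the others, which is what makes mixing tools "like Legos" possible.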
We could actually bring this all together into one experience, which we'll get to later. Okay, so it is done, and we can just click on it to view it. And here we go: Vibe Coding AI Apps Complete Guide. Here it has "view script." And so we can just make a quick edit: "I actually don't want the script to be on the page. Instead, put it off to the right, and there should be one toggle at the very top that says show scripts. The script should be off to the right; three-quarters of the width should be the presentation, with the text smaller on the right. Keep the same font. I like it, but put it off to the right, not actually on the slide deck." While it's making those changes, we can take a look at this. Oh, I like this: input, process, output. These are very basic slides, but this is a presentation that we can create. Again, I didn't give it a lot of examples, and I didn't tell it what charts to create. This is just a basic example, but I want to get your mind flowing on the different things that you can use AI for. Okay, there we go. We can hide all the scripts. We can show all the scripts. We have the presentation here on the left. It's kind of weird that you have to scroll down to it, but here we go: we have the slides and the script for each slide side by side. And we've created this using a couple of tools and an underlying philosophy that I got from a YouTube video. It can be your own philosophy as well, right? That's even better. If you can mix and match the things you learn to create that philosophy, you can actually create a consistent output. Okay, this is example one. Let's move on to another example. All right.
So, I have 155,000 subscribers on YouTube. Three or maybe four months ago, I was at 50,000. Part of the reason I've been able to grow so quickly is that I've understood the thumbnail creation process, and it's really hard to create thumbnails, especially if you're A/B testing. What I want to show you is how I use AI to ideate and create thumbnails. One thing I want to show you, again using YouTube as an example: it's an archive of really good data if you want to become a good YouTuber. Let's type in "learn AI." Okay, so here's an example right here. Let's take this one: what if I was making an "AI basics" video last week and I wanted to use this thumbnail? All you do is go to Google and type in "get thumbnail from YouTube video URL." Now we can paste that in and get this thumbnail, and I'm just going to copy this image.
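You don't strictly need a third-party site for this step: YouTube serves thumbnails at a predictable URL built from the video ID. A small sketch (the function name is made up; `maxresdefault.jpg` is one commonly used size, with `hqdefault.jpg` as a fallback when no max-res image exists):

```python
# Build a YouTube thumbnail URL directly from a watch URL.
from urllib.parse import urlparse, parse_qs

def thumbnail_url(video_url, quality="maxresdefault"):
    """Extract the video ID from a youtube.com/watch?v=... URL and
    return the corresponding img.youtube.com thumbnail URL."""
    query = parse_qs(urlparse(video_url).query)
    video_id = query["v"][0]
    return f"https://img.youtube.com/vi/{video_id}/{quality}.jpg"

print(thumbnail_url("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))
```

Downloading that URL gives you the same image the "get thumbnail" sites return.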
Korea is a place where I like to you can do a lot of things on Korea. Actually,
let's go over a basic example. Um, image
thumbnail thumbnail image of a man doing YouTube AI basics. All right. It's kind
of weird to prompt an AI for an image.
It's a whole different strategy here. It's using the model Krea, right? It's using Krea 1. And these are really bad thumbnails. Now what we can actually do is switch to a different model, right? We could use ChatGPT Image. We can make sure that this is the proper aspect ratio. Now we can try the same prompt using a different model. What I like about Krea is that it allows you to generate a bunch of them at once. So now we have six of these generating right now. And again, these are the most powerful models that we're going to be getting into. And you can actually use any of these models in your own applications.
And so we're still in section one, where I'm just using the image tools. In section two, in just a second, we're going to create little workflows around these, and then we're going to talk about how you can actually find these models and insert them into your own applications or automations or anything you create in the future. Right now we're using ChatGPT Image, and next we're going to use Ideogram 3.0. I'll show you that. Actually, while this is loading, let's go ahead and queue up Ideogram 3.0. And now what I want to do is show you this new power, right? This is one of my favorite AI models for images right now because of this single feature, which is character references. Okay, so you see these coming in. These look a lot worse. This is Krea's built-in model. Like, this is just their own model, and their model will probably get better, but I don't think it's nearly as good as ChatGPT's. These are better, but again, it's not me, right? This is not me. What I can do is say 'thumbnail image of a man, AI basics,' right? And we can use Ideogram 3.0. And what we can do is include a character reference. So I
can include a photo of myself. So I'm
going to upload this photo right here.
This is a photo of me from a year ago.
I'm just going to hit open. Now we're
including this as a character reference.
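Character references like this are also exposed through image APIs, not just the Krea UI. The endpoint and field names below are illustrative assumptions, not a real provider's schema (check your provider's docs for the exact names), but the request shape is typical: a prompt plus one or more reference image URLs.

```python
import json

# Hypothetical request body for a character-reference image generation
# call. Field names here are assumptions for illustration only.
def build_character_ref_request(prompt, reference_image_urls, aspect_ratio="16:9"):
    return {
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,  # YouTube thumbnails are 16:9
        "character_reference_images": list(reference_image_urls),
    }

body = build_character_ref_request(
    "thumbnail image of a man, AI basics",
    ["https://example.com/photo-of-me.jpg"],  # placeholder URL
)
print(json.dumps(body, indent=2))
```

The point is that everything we're clicking in the UI (prompt box, reference upload, aspect ratio) maps to a field in a request body like this.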
So now we're going to create an image of me: 'thumbnail image of a man, AI basics.'
And we're going to generate it. And we are now using Ideogram 3.0 with a character reference. And what
this allows you to do is create an image of someone as a consistent character. It can't do cartoons. It's not good at cartoons; it's only good at actual humans. But you can see here, here's an image of this. You notice that this is probably a better thumbnail, technically. Like, the text is much better. It even has the ChatGPT icon, which you could include. This one
has an image of me, right? This looks a lot like me and I'm in different poses.
Like this one's actually pretty good. In
fact, I'm going to download this and I'm going to use this as the new reference.
I just think this is a better photo of me. I'd rather look like this than the image that I gave. So what we can do instead is download this image right here. Let's say I really like this thumbnail image. What we can do is go to the edit tab within Krea and upload that image. Now we can edit this image. We could edit it with Nano Banana, but I would actually prefer using Ideogram, the same model that we used to generate a consistent character of me. Actually, this one is more cartoon-like.
So I don't think this will work as well.
What I want to do here is go back. I want to do this one. Let's go ahead and use this one right here because it looks more like a real human.
And we'll go to edit. We'll upload this image right here. We'll change it to Ideogram. Now, let me show you: you can use the character reference again, but this is actually different. This is an edit. So I can include that image, that good image of me right here. We're going to use this as a character reference.
And then we can use the brush, so we can just select the part of the image that we want to change. Let's go ahead and select the head here. And we'll just make sure that we select everything right here. We'll give it some room to work. I do have kind of big ears, so that's fine. And now we'll select all of this. And what we'll do is type 'man' and hit generate. What this will do is generate a variation of this thumbnail, and it'll have my face. And
boom. Okay, it's done. And look at this.
We have a thumbnail image of me, right?
Maybe I don't like this as much. I can
very easily just keep generating new ones. And boom. There you go. So, these are okay, right? These are okay. That's
because the thumbnail is actually not that good to begin with. What I can do is create a new session and upload this image of this AI basics video. MrBeast almost released a tool earlier this year that allowed you to basically remix other people's thumbnails and just replace the face on them. It blew up in terms of popularity, except he got so much hate for it that he ended up removing the tool, and then he completely backtracked and apologized, etc. One thing that a lot of successful YouTubers are doing now is taking high-performing YouTube thumbnails like this one right here, and what they're doing is the exact workflow that I was just showing you, which is using this character reference.
Now, we're just going to drag a rectangle over the face here, type 'man, serious face,' use this character reference, and generate it. We're simply taking a good, high-quality thumbnail, for a video that I want to make as well, and turning it into my face. Is this unethical? I'm not sure. Would I release a tool that does this specifically? I don't think so. But the point is, you can do that. And here, this is my face. Um, you can basically create this and generate as many of these as you want.
'Man, serious face, clear, smooth skin.' You can generate it again, and you can just kind of learn how to prompt it here. And so these tactics, these ideas. Okay. Yeah, smooth skin. For whatever reason, it doesn't want to do that. 'Serious face, no wrinkles.' Beautiful. Let's try this. You could also do it with four at once. And there we go. Now my face is a little bit more clear. And there you go. We have a thumbnail image of me right here with 'AI basics.' And I know I basically just took someone's thumbnail, but it would be high-performing. It added this weird blemish, but
that's okay. We can go to Nano Banana edit and say the brown logo needs to be a white star logo instead: 'just change the logo to the right of the person.' So we can use Nano Banana to make edits. And this is just a glimpse into the workflow that I've learned from hours and hours of creating thumbnails and creating images. See, here we go. We have that star, right? We can start to do things with AI, and it's really fun. And
okay, so this kind of concludes section one, just showing you how I solved my creating-presentations problem and then my creating-YouTube-thumbnails problem. It's a quick glimpse into how I use some of the basic AI tools. I use ChatGPT and Claude, and then I just recently started using Krea. This is not sponsored by Krea; it's just the tool that I've used to very quickly rotate through some of the most popular models. And just to foreshadow the final section, we're going to be creating a new app. This is an example of an app that I've created with this technology. Remember the Ideogram tech that I was showing you earlier that allows me to create an image with myself? Well, this app allows me to create an image of myself. So I can type in 'man in studio, cinematic close-up image, glowing,' and I can generate a thumbnail here with me, the same way that I could before. I
created a mobile app that allows me to do it really quickly, and then I can use Nano Banana to really quickly iterate on the image. As you can see, here's an image of myself. Now, if I hit the X, I can just use Nano Banana. So we can say 'make the background look like an office with a view of San Francisco.' And now it's loading. And boom. There we go. Now it added text. Again, this app is something I've created really quickly. Um, I would probably refine it so that it doesn't add any text. Here I'm going to say 'remove text,
make glow behind the man a nice glow outline to kind of give this like subtle backdrop vibe to it. We'll see how it does on that. And there we go. We now
have this image and we can say add text on the left that says AI guide and make
the background slightly more dim. Boom.
There we go. And then I added a feature to this app which allows me to export it as a PDF. I can also very easily export all the images, and they will show up in my photos. If I go to photos, they show up right here. All of these versions will show up here, and I can export them.
And this is how I've created an app with this. Before I actually did this, I did a lot of testing, right? The way that I actually found this Ideogram workflow is I used what I call a playground. And there are two main playgrounds that I want to talk about today. We're going to be talking about Krea's nodes, which is a brand-new platform that you can use. This is exactly how I did it here, and I'll build one from scratch really quickly. I don't want to dive too deep. This is not a full tutorial on these nodes, but I'll explain as much as I can, maybe in a 10-minute process. So, what nodes allows
10-minute process. So, what nodes allows you to do is it allows you to like create any workflow. I could actually just find any image, right? I could
actually just drag this image into here and this allows me to use it in future workflows. So, I could actually just
workflows. So, I could actually just drag this out and we could turn this into we could upscale this. And so, we could create an upscale. So it receives this image and then upscales it by a
factor of two. So we can run this node and now here it is taking this image and it is upscaling it. And then what we could do is we could actually make it go
from the upscale to a video, right? We could make it a video. So we could use Veo 3.1, which is the best video model, and we could say 'the text animates out and the logos fly out to the right and the man puts on a cowboy hat.' And now we can run this node. And
AI-generated videos just take longer in general. But I do want to take some time here to explain what's going on. This is the image upload, and this is something that we can change. We can add, let me see if I can just... Nope, I can't duplicate it. But I will add another sticky note here. This is going to be 'upscaling by a factor of two.' And then finally, this note, right? If I add a note here, I could basically give this node, or this board, I don't know what they're called, to my editor, and he could go through the same exact workflows that I do. And here we could say 'taking upscaled image and turning it into a video.' And here we can see that this is now a video. If we full screen
this, I mean, look at that. That is actually insane.
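The 'upscale by a factor of two' node is easy to picture. Real upscalers use learned models that synthesize new detail, but plain nearest-neighbor scaling shows the mechanics: every pixel becomes a 2x2 block.

```python
def upscale_2x(pixels):
    """Nearest-neighbor 2x upscale of a 2D grid of pixel values.
    AI upscalers invent plausible detail; this just duplicates pixels."""
    out = []
    for row in pixels:
        wide = [p for p in row for _ in range(2)]  # double each column
        out.append(wide)
        out.append(list(wide))                     # double each row
    return out

image = [[1, 2],
         [3, 4]]
print(upscale_2x(image))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The AI version does the same shape transformation (an image in, a 2x-larger image out), which is why it slots so cleanly between the image node and the video node.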
Okay, so now I actually just came up with another workflow. I'll break this down very simply. Actually, I'm just going to show you what it does and we'll go along, because it's going to automatically trigger future nodes. So what we're going to do here is I'm just going to say 'man holding a basketball on the basketball court, cinematic image, realistic, vivid colors.' All right. So,
we're going to run this entire workflow.
Now, what it's going to do is use a character reference, which I showed you in the earlier examples, right? You can use character references to generate images using Ideogram, which is just an AI model. And now here it is generating an asset right here. Now it immediately triggered. So now I'm on the basketball court, playing just like this. That's close to me. Now it's upscaling it. You can see here it upscaled it. Now it looks a lot crisper. It's not
super noticeable, but here it used an LLM call. It took this image as a reference, and here's the prompt: 'Look at the image and please suggest a slight animation here to make it dynamic by writing a prompt in three sentences. Make the man [blank]. Make the plane in the sky [blank]. Something like this to make a cool animation based on the input image.' So then it generated this text: 'Make the man dribble the basketball with fluid motion, showcasing his skill and focus as he dribbles. Make the lights on the court slightly flicker to enhance the excitement in the background. Animate the crowd to cheer, creating an electrifying atmosphere.' And
so this is Veo 3.1. And it will actually have sound in it. And so that is a really cool thing that you can do. Okay,
so it's done. Let's give it a look here.
Where am I going?
And so this is just one example. Here's another example. I can say 'man buying bread at the store, cinematic image. He's wearing a top hat.' And we can run this entire workflow here. It's going to generate an image. It might automatically just run this. I'm not sure. I think it will. So here's just input text that we're adding to four different variations at once. So this is now branching out into four directions, but it's all doing the same thing. It's just creating many different variations. You could do this for 10 of them. And here it added 'I'm just a skater boy.' So, I forgot to change this, but here's this image of me right here.
And it added some bread in the background. 'I'm just a skater boy.' And again, oh, it's also running this. I didn't even see this here. It's also upscaling. And it's also turning it into 'man playing a game on his computer,' which is the image prompt. I don't even know where it got that. Maybe it's just the default, but again, you can just have a lot of fun here. And
so if you hit this plus sign, you have access to all of it, right? You have LLM call, line splitter, concat, get video frame, crop video, video speed, stitch titles, all these different things. Lip sync, audio, ElevenLabs, I didn't even see that: text to speech, generate sound, speech, and soundscapes for video assets. Oh, you can do 3D objects. How does that even work? That's insane. And I have no idea how this is going to turn out. It just animated it.
And yeah, so this is just a random example here, but this is similar to n8n or Zapier or tools like Glif, where you can visually start chaining these different tools together. And I think this is just really good for opening up your mind to see how these different AI models can fit together, because here we have an image generator, an AI upscaler, then an LLM that takes the image as an input and generates a prompt for that upscaled image, and then it turns it into a video. And you start learning the relationship between all these different tools, which I think is the most important thing to understand.
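That chaining idea (generate, upscale, LLM-written prompt, video) is just function composition: each node's output feeds the next node's input. A toy sketch with stand-in functions — the real nodes call hosted models, so these bodies are placeholders showing only the data flow:

```python
# Each stage stands in for a hosted model call; the point is the wiring,
# not the implementations.
def generate_image(prompt):          # e.g. an image model with a character reference
    return f"image({prompt})"

def upscale(image):                  # e.g. a 2x AI upscaler
    return f"upscaled({image})"

def suggest_animation(image):        # an LLM looks at the image, writes a video prompt
    return f"animate the subject in {image} with subtle motion"

def generate_video(image, prompt):   # e.g. a video model taking image + text
    return f"video({image}, '{prompt}')"

def run_workflow(user_prompt):
    img = generate_image(user_prompt)
    big = upscale(img)
    video_prompt = suggest_animation(big)  # no human in the loop here
    return generate_video(big, video_prompt)

print(run_workflow("man holding a basketball, cinematic"))
```

Once you see workflows this way, 'add a node' just means 'insert another function into the chain.'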
And every time you learn one of these workflows, your brain starts to open up, and you can picture other workflows, and they'll come up at random times. And especially if you create content a lot, you start to realize that you can actually use this technology in a ton of different areas.
And again, these aren't full tutorials on how to use Krea Nodes. In fact, Krea Nodes is a brand-new beta feature in Krea. There are other tools you can use besides Krea: Flora is a very similar tool that's not in closed beta, and there are a lot of other node-based image tools, like Superstudio by Kaiber, that you can use instead of the Krea nodes tool.
And so, yeah, these are the three things that we've talked about. We talked about ChatGPT along with Claude and how we came up with this workflow to create high-quality presentations. We talked about the Krea models that you can use in isolation, and then we talked about how you can use them with the nodes feature to see the relationships and create these automations between models. Now let's move back to text generation and go to Google AI Studio. This is my favorite place to use Google's models, and Google has a ton of models. I mean, you've already seen it.
We use Gemini here, and here you can actually create apps. This is like your own little vibe coding platform. But what I actually like... the build section, where is it? Yeah. So I like using this chat section right here.
And what I like to do here: you can just use any of the popular models. You can use Nano Banana. You can use Gemini 2.5 Pro, which has a really long context window. And this allows you to use system instructions. So you can create system instructions for the AI: 'you format presentations.' And we can actually use that doc that we were using earlier, right? Remember the doc that we created, this presentation guide? Actually, no, this philosophy doc. We can copy this whole thing and paste it in here: 'You format presentations in the following format. Take the user's idea and create a presentation outline for them.' And yeah, this is what you can do. You can just make this the system instructions right here. And I
believe we can just click off. And I'm just going to download this right here, this presentation that I'm making right now. And I'm going to go back to Google AI Studio. I believe we can just use the file upload right here. And we can say, 'here is my presentation.'
'Please search the internet.' And what you can do is actually tell it to do structured outputs, which I'll get to later once we get to the final section; I think the app we're going to create will require structured outputs. We can also turn on grounding with Google Search, which allows it to actually search the internet. So: 'here's my presentation. Please search the internet before responding and find some facts that back up what he is saying, and find studies.' And we could do a lot of different things here. Media resolution.
We actually don't need to edit that. We can just prompt engineer. And I love using Google AI Studio for these types of things because it's really easy to get the code, right? We can see the code, which I'll explain later, and we can very easily get an API key and add it to our own applications, which I'll talk about in just a second. So Google AI Studio basically just allows us to prompt engineer. This is very similar to Anthropic's console, where you can go in and do a very similar thing, as well as get the API keys for Claude's models, which we'll be talking about in just a second. The only thing is
that Google AI Studio, and Google's models in general, allow you to do more things. Anthropic is very focused on coding, while Google focuses on a lot of different things, and you can actually upload video. So what I can do here is upload a full-on video to Google. 'Did you know that the next generation spends about 90% of their...' This is an ad that I've created before. And so we can just insert this into the model and say, 'please analyze this, write the full transcript. Then I want you to suggest popups that I should add and what those popups should be,' and say these will be simple graphics with some text, right? The AI model will literally analyze the footage. It can analyze the full footage, and then it will suggest what to put and at each timestamp. So this AI model can basically watch the video, and it can watch it faster than it would take a human. It converts it into about 15,000 tokens and analyzes the video. It's going to give me the full transcript and pop-up suggestions with timestamps. And look at that. It
analyzed the full video. 'Did you know that the next generation spends about 90% of their mobile internet time using mobile apps, not websites?' And it goes through this whole thing. Timestamp at 8 seconds: a simple graphic of a crossed-out symbol next to 'no coding required.' Let's see if that would work. '...of their mobile internet time using mobile apps, not websites. That's why you should learn how to build mobile apps without writing a single line of code.' That would have worked really well there: 'no code required,' full-screen popup at that 8-second mark. At the 13-second mark, a graphic showing a smartphone icon with an arrow pointing at the App Store. Let's see. At 13 seconds: 'Well, the most powerful tool in the world to build mobile apps is the Vibe Code app. And you can use it directly on your phone and ship it to the App Store.'
No coding required. That is cool, right?
And it literally can basically watch your videos. I've actually created an app that uses this model before, and we gave it to some of the people on our UGC (user-generated content) team when they were making ads or videos about our app. It analyzed their videos, and you can have it give instructions, if you set your system instructions right. In fact, I can see here that we should have deleted these system instructions for this, because they're probably confusing the model, and we could just run this again without them. But yeah, you can use this model to help people who are creating videos for you change those videos; the AI can actually give smart instructions on how to change them. And that's really cool and super useful when creating high-quality applications. Being able to analyze a video is incredibly useful while vibe coding applications.
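Everything we toggled in AI Studio (system instructions, grounding with Google Search) maps onto fields of the Gemini API request. A hedged sketch of the `generateContent` REST request body — the field casing follows Google's public REST docs, but verify against the current API reference, and the model name is just an example:

```python
import json

MODEL = "gemini-2.5-pro"  # example model name; pick whichever you tested with
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

payload = {
    # the 'system instructions' box in AI Studio
    "systemInstruction": {
        "parts": [{"text": "You format presentations in the following format..."}]
    },
    # the chat message itself (uploaded file parts would be added here too)
    "contents": [
        {"role": "user",
         "parts": [{"text": "Here is my presentation. Find facts and studies "
                            "that back up what is said."}]}
    ],
    # the 'grounding with Google Search' toggle
    "tools": [{"googleSearch": {}}],
}

print(ENDPOINT)
print(json.dumps(payload, indent=2))
# POST this body with your API key (e.g. an `x-goog-api-key` header)
# to reproduce the AI Studio session from your own app.
```

AI Studio's 'get code' button essentially emits a request like this for you, which is why prompt-engineering there transfers straight into an app.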
Okay, halftime. This is something I've been starting to do in my videos, because we've covered a lot here. A lot's going on. We're talking about all these different tools. You're not necessarily supposed to be following along yet. I just want to get your mind flowing, and I want to recap what we've talked about so far before we move to the second half. So, I'm drawing a line here and we're going to talk about what we've covered, right? We talked about how you
can make text responses better by giving the AI a philosophy document, and you can get that philosophy document from a YouTube transcript, right? That's what we did using ChatGPT. And then we had the AI ask us questions, which improved the quality of our presentation. So we generated a presentation script, and from that we generated a presentation deck using Anthropic, and then we even edited that deck. Then we went to Krea.
On Krea, here's what we did. I'll just put 'LLM tools,' and then I'm just going to put 'image tools,' because I don't care which one you use. It does not matter to me. What I'm focused on is the actual underlying technology. And so for the image tools, what we did is we used Ideogram, right? We used Ideogram to generate consistent characters from an image. Then we edited that image with text with Nano Banana. And then we edited an image with Ideogram edit, which allows us to select the portion of the image that we want to change, as well as edit with a character reference. And I showed you exactly how
to do that. And then we went into this Krea nodes tool, and we created workflows. As part of these workflows, we uploaded a YouTube thumbnail that we got from the internet, and then we basically swapped the face with mine using Ideogram. We also turned an image into a cool video. And then we went through this last one right
here. Right. If we go back to Krea nodes right here, with this basketball one, what did we do here? Within Krea nodes we have an input prompt, which is deterministic. In this Ideogram prompt here, we have a character reference, right? In this dropdown, you can see that we're using a character reference, which is me. And we used this character reference; I'm just going to put that there, so we know that we used a character reference. Then we used an AI upscaler. And after the AI upscaler, we had an LLM call with image input and a custom prompt for the video model. And then we just had video generation from image and prompt. So that's what we did there, right? We're chaining these together. It upscaled this image, and
then this image was the image prompt right here. Before we generated a video from this image, we needed a prompt. And rather than writing it manually, we can give it to an AI and say, 'Look at this image. Please suggest the animation and write it in this prompt.' And here it generated me dribbling a basketball. And we're beginning to see how all of these tools are related, because here we used one of these LLM tools, right? We used this LLM right here. And so these image tools and LLM tools can blend together.
And we also talked about Google AI Studio. Within Google AI Studio, we discussed how you can do some prompt engineering, and all of the system prompts and prompt engineering you do there you can basically save to put in your app, which we'll get to in just a second. And then we also did video input. So that's a good summary of what we covered so far. And this is just scratching the surface. This is literally like two to three percent of all of the really cool things you can do with AI. And we're covering a lot of it here. And
so this is just a small percentage of the things that you can do. But now what I want to do is prepare for the next phase, which is building. I want to start building with these tools, and I want to talk about how you can get an API key to plug into your app. So whether you're using Lovable, Cursor, or Vibe Code, you can create a web app or a mobile app really easily that solves a specific pain point. All right. So we're
going to start diving into FAL here. FAL, what they do basically, is provide all of the models that builders can use, specifically creative tools. You can think of it like Krea, the tool that we were using for nodes and at the beginning when we were doing image editing. It has all those models there. It's like Krea without the frontend experience: it hosts all of the models, and you're going to be able to use these in your own apps. So we're going to be able to create apps, and we can have our own hosted experiences that are a lot more focused, and you can create a full experience around a narrow use case, rather than what Krea does, which is this general tool. And
you might be thinking: why would you need anything other than Krea? Why would you want to create your own app when you can do it all in Krea? I would say it's like the tool Notion: it almost does too much, right? And so now tools are coming out of the woodwork that solve one specific use case that people are happy to pay for, because they're very easy to use. Krea has so many different features, and it's just becoming a little bloated if you only need it for one specific use case. If you want to create a thumbnail, you know, you might not want Krea, because it offers way too many options. People just want the best thing, and so curating the best thing is often the most important thing. And I think the best place to find all of the models, in a place that allows you to use them in your own applications very easily, is FAL. This is just fal.ai.
And what we can do here is explore the most popular models. You can see the recently added models. Here's one called Sana Video, leveraging its ultra-fast processing speed to generate high-quality assets. We have an image upscaler. So this one is not as curated as Krea, right? There are just a lot of different models you can use here, and this is where a lot of developers will come and test different models. You can really create cool workflows with these models; there's an unlimited number of models here. And here's a new model called Reve. It's brand new; I've actually never tried it. We're starting with this image right here, which is just a single salamander-looking thing, and I'll say "put a butterfly next to him." We can actually run this. Oh, I need to choose the image. Okay, now we can run it. So this one has inputs: a prompt input and an image reference URL. And this is what the developer tools look like when you're testing them on their site.
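Under the hood, that playground form is just a frontend over the model's HTTP endpoint. As a rough sketch, here is what building that request could look like; the endpoint id and input field names are assumptions based on FAL's general `fal.run` pattern, not copied from the Reve docs:

```typescript
// Build a request against FAL's synchronous HTTP API (https://fal.run/<model-id>).
// The model id and input field names below are illustrative assumptions.
interface FalRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildFalRequest(
  modelId: string,
  input: Record<string, unknown>,
  apiKey: string
): FalRequest {
  return {
    url: `https://fal.run/${modelId}`,
    headers: {
      // FAL authenticates with a "Key <your-key>" authorization header.
      Authorization: `Key ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(input),
  };
}

// Usage: mirror the playground's two inputs (a prompt and an image URL).
const req = buildFalRequest(
  "fal-ai/reve/edit", // hypothetical endpoint id
  {
    prompt: "put a butterfly next to him",
    image_url: "https://example.com/salamander.png",
  },
  "YOUR_FAL_KEY"
);
// fetch(req.url, { method: "POST", headers: req.headers, body: req.body })
//   .then((r) => r.json())
//   .then(console.log);
```

The playground is doing essentially this same POST for you, which is why the API tab can hand you ready-made snippets.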
Another tool like this is, I believe, replicate.com. Here you can see that it added a butterfly right next to it, and this looks like a pretty solid model. I think, yeah, this is really good. So let's say we really like this model and want to use it in an application that we create. Let's start with a very simple example of how to begin using this API. By the way, for this section I'm going to be hopping back and forth: we're going to show how to use FAL to build a simple app, then we might show FAL again and go create a mobile app. We're going to go back and forth here, but I just want to show the relationship between all of these tools; we're not necessarily going in sequential order. Okay, so that's exactly what we're going to do here: we're going to take this cool AI technology that we found and make a very simple web app that uses it. For this, we're actually going to use Replit. Replit is a very simple web app builder. I'm going to log in and sign up right now. Back to this slide: yeah, we're using Replit, which is one of the coding tools you can use, just one of the more popular ones. And we can actually just create an app here. And
what you can do is say: "I want to create an app that lets me edit images. I want to be able to upload an image, then edit it. Don't add the AI functionality yet. Please just make a quick mockup." And I wish I hadn't used Replit; it's actually one of the slower tools. They take pride in their long-running autonomous coding agent, which I actually don't think is that fun, but it is a tool you can use to generate an app. So we're generating an app here, and just like Lovable, Bolt, and v0, it says it's going to create it and we just wait for it to be done. Okay, it's now done, and it has light mode and dark mode, kind of in the colors of Replit, and we can upload an image here. Here's an image of me. Obviously I can't really do anything yet besides "select a tool to edit with." All right, what was my prompt? "I want to create an app that lets me edit images." Okay, now
what I want to do is use this technology from FAL in this app, right? It's called Reve; I had it open right here. What happens when I want to use this exact technology? Well, first of all, I'm going to copy this link right here. Whenever you're using something on FAL, there's a playground tab, which allows you to test it, and then there's an API tab, which gives you the instructions on how to actually implement it in your own apps. So what do you need when you're using an API? Let's go over here for a second. When you're using an API, you're going to need a key, an API key. This connects your account to FAL, because it will charge you money every time you use the API. It might be a tiny amount, maybe 5 cents per edited video, but it basically needs to track who you are. This will also let you take on users, because you can decide how many generations they get per month: if you're charging them $10 a month, you don't want them to be able to use $100 worth of credits. This is how they keep track of how much money you owe them for using their technology, and you can buy credits through FAL, which gives you access to all of this technology.
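That per-user limit has to be enforced in your own app code, since FAL only bills your account, not your users. A minimal sketch of how that check might look; every name here (the `User` shape, the plan size) is hypothetical, not part of FAL:

```typescript
// Sketch of a monthly quota check like the one described above.
// All names here (User, GENERATIONS_PER_MONTH) are hypothetical, not from FAL.
interface User {
  id: string;
  generationsUsedThisMonth: number;
}

const GENERATIONS_PER_MONTH = 100; // e.g. what a $10/month plan includes

function canGenerate(user: User): boolean {
  return user.generationsUsedThisMonth < GENERATIONS_PER_MONTH;
}

function recordGeneration(user: User): User {
  // Refuse the generation before spending your FAL credits on it.
  if (!canGenerate(user)) {
    throw new Error(`User ${user.id} is out of credits this month`);
  }
  return { ...user, generationsUsedThisMonth: user.generationsUsedThisMonth + 1 };
}
```

You would call `canGenerate` before each FAL request and `recordGeneration` after it succeeds, resetting the counter at the start of each billing month.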
Without a tool like FAL, you would have to pay for and get an API key from each of these providers individually instead of from one platform, right? I have $105 worth of credits up here, and I can get my API key by clicking "manage API keys." We're going to add this key here, and I will delete it after this video. Create key. Here's my API key. Never share it with anyone; I am going to delete mine. Hint: store it in your environment. Okay, copy. Now we're going to go to Replit. In Replit, I'm going to click a new tab; I believe they call them secrets, so environment variables or secrets. You can select this right here, and I'm going to create a new secret. Now I'll paste in the key we have and name it FAL_API_KEY. And you can add this as a secret. Now
what we're going to do is say: "I want to use the Reve technology from FAL. Here are the docs for that." I'm assuming it can surf the internet and read the docs before it implements anything; I bet it can, though I actually haven't used Replit in about a year. So we're using this Reve edit API, the one we were testing in the playground when we were browsing models. For this API, I'm just going to copy the docs right here and go back to Replit: "Here are the docs for the FAL Reve model. I want the only edit I can do to be me typing a change to the image. Please make this work, and look up the FAL docs for more info on how to use the API generally." There are instructions for the specific model, this Reve model, and then there are the basic docs. For every single API you use, you can search the API's name plus "docs" and you'll find the documentation for everything. This document can be pasted, and oftentimes it's better to just paste it into whatever AI agent you're using to build your app. And
we can just run this. If you're a little bit confused right now: all we did is go to FAL, which hosts a bunch of models. FAL has model 1, model 2, all the way up to model 3000, and FAL hosts all of them. They each cost a different amount of money: model 1 might cost only a little, model 2 might be very expensive, and model 3000 might be dirt cheap. But the nice thing is that every one of these models can be used with the same API key; one single key gives you access to all of these different models. Then we put that API key into Replit's secrets. If we were using Lovable, we could put it in Lovable; if we were using Vibe Code, we could put it into the Vibe Code env.
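Wherever the key lives (Replit secrets, a Vibe Code env, a local `.env` file), your app code reads it the same way: from the process environment, never hard-coded in source. A minimal sketch, using the `FAL_API_KEY` name we created above:

```typescript
// Read the FAL key from the environment and fail fast if it's missing.
// Hard-coding keys in source would leak them; a secrets tab injects them
// as environment variables instead.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const falKey = requireEnv("FAL_API_KEY"); // throws if the secret wasn't set
```

Failing fast like this turns a forgotten secret into an obvious startup error rather than a confusing API failure later.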
You could also put this in the Cursor env tab; if you're interested in using Cursor for any type of app, I made a three-hour video where you can learn how. The point is that you just need to put the key in your application so you can build an app using that technology. So now it's still working... okay, it's now done. We used the FAL API key and we have created this image editor. Now we can easily upload this image of me and say "make his shirt red," and ideally this will use the FAL API that references the Reve technology and edit the image. In basically one or two prompts we've created a front end for this technology. Okay, there we go. There's a lot here; we can remove this. Here we can see the original and the edited version, which is really cool, and we can download it. So that's just a very simple web interface. If we hit publish, we could call this Reve Gen and publish it publicly. This will create a build and we'll be able to use it on the internet, which is pretty cool. And look at this: after a few minutes, we published it on the internet. You can go here: reeven.replit.app. My account on FAL has $100 worth of credits left. I might just leave this up and let people spend those $100 at that link if you want. It's
not a very good app, but the point I'm trying to show is that we created this little interface that's now on the internet and that you could use. You could send it to your team. It's not that good yet; we're going to make a really high-quality workflow in just a second. But I could say "make shirt orange," hit "edit image with AI," and it should in theory just work. And boom, there we go. We can download the image and see them side by side. This is our Reve editor, created in one or two prompts. The key thing we did is we used FAL: we got an API key, we went to FAL, we browsed the models. Within fal.ai, if we go back home and search Reve, we can see the Reve Fast Remix analytics: the speed and the total cost. Here you can see that this Reve edit costs 4 cents per image edit, and I used it nine times for a total of 36 cents. Obviously, if you have a few users and you send it to your team, this could add up, but if you were to sell it commercially, you would charge people and make a profit; maybe you give them 100 generations for $10 or something like that. So you can actually make a profit, and you can see the cost
here. But yeah: we selected a model that had a certain cost (the Reve edit), told Replit "create an app that does X, here are the docs" (the instructions for using that model), put the API key in the secrets tab, and we were able to create an app. Replit makes it pretty easy to deploy to the internet, and then we put this directly on the internet. So after this step it created an app, then we deployed it to the internet, and now this link will take you to the site and you can actually use it, live on the internet. We did it in two prompts in about 10 minutes. So that is just a little overview, and we haven't created a highly personalized, fun workflow yet, but that's exactly what we're about to do.
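Before moving on, the unit economics just described (4 cents per edit, 100 generations resold for $10) are simple enough to sanity-check in code. The numbers are the ones from this demo; the function names are mine, and cents are used as integers to avoid floating-point drift:

```typescript
// Sanity-check the demo's unit economics in integer cents.
const COST_PER_EDIT_CENTS = 4;  // FAL's price for one Reve edit
const PLAN_PRICE_CENTS = 1000;  // a $10/month plan
const PLAN_GENERATIONS = 100;   // generations included per month

function usageCostCents(generations: number): number {
  return generations * COST_PER_EDIT_CENTS;
}

function worstCaseMarginCents(): number {
  // Profit left if a subscriber uses every generation in the plan.
  return PLAN_PRICE_CENTS - usageCostCents(PLAN_GENERATIONS);
}

console.log(usageCostCents(9));      // the 9 test edits from the demo: 36 cents
console.log(worstCaseMarginCents()); // 1000 - 400 = 600 cents of margin
```

So even a subscriber who exhausts the plan still leaves $6 of margin per month at these prices, which is the whole pitch behind wrapping a FAL model in your own app.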
Okay. Now I want to create a mobile app, starting with a very simple screen; we'll add APIs in just a second. So my prompt is: "Please create a white-background app with black text at the top in a cool font. Pick a new cool font. It should say create new idea, and I want an input field beneath it where I can type in my idea." That's it; we're going to add more stuff in just a second. We're going to create an app. Here we're actually using the Vibe Code app, so right now I'm basically sending this prompt to Claude Code on the phone in order to create a mobile app. Replit is for web apps, and this prompt is literally being sent to Claude Code. While it's loading, we can begin to take a look at the different APIs we can use. Throughout this video we discussed a lot of different APIs, and here in the Vibe Code app they're built directly into the app. As I was showing you earlier, we can directly select Ideogram and Nano Banana and add them to the prompt. Remember earlier how you needed to include an API key and documentation ("here are the docs"), go find the link, give it to the agent, and use your own API key? You don't need to do any of that when building mobile apps on the Vibe Code app, because it handles all of that for you. When you select these APIs, it automatically points at the documentation, and the API keys are hooked up to your credits on the Vibe Code app. So you don't need to get any API keys at all; all you need to do is select them and say, "Okay, I want you to deeply learn how these APIs work and integrate them into this app." This is going to be a YouTube video idea generator, but I want to start with the thumbnails. The two parts you need when coming up with a YouTube video are an outline, which we're going to do in step two and which is kind of what we've created visually already, and a thumbnail. "I want you to add a bottom tab."
"So, the first tab should be idea, and the second tab should be thumbnail. We're creating the thumbnail page. This new thumbnail page should let me upload an image of myself. Then I want you to use the character reference feature within Ideogram, so that if I just upload an image of myself, I can generate an image of me using it as a character reference. Then I want to be able to edit the result with Nano Banana. That's how I want to create thumbnails. I want them to show up in the application and I want to be able to download them. After that we'll work on the idea part." Okay.
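The two-step image flow just described (character-referenced generation, then an edit pass) can be sketched as a small pipeline. Everything here is hypothetical: Vibe Code's built-in APIs aren't exposed like this, so the two provider calls are passed in as functions rather than named SDK methods:

```typescript
// Sketch of the thumbnail pipeline: generate with a character reference,
// then optionally apply an edit pass. The step functions are injected so
// this stays independent of any specific provider SDK (hypothetical shape).
type ImageUrl = string;

interface ThumbnailSteps {
  // e.g. Ideogram-style character-reference generation
  generate: (referencePhoto: ImageUrl, prompt: string) => Promise<ImageUrl>;
  // e.g. a Nano Banana-style instruction-based edit
  edit: (image: ImageUrl, instruction: string) => Promise<ImageUrl>;
}

async function makeThumbnail(
  steps: ThumbnailSteps,
  selfie: ImageUrl,
  scenePrompt: string,
  editInstruction?: string
): Promise<ImageUrl> {
  const generated = await steps.generate(selfie, scenePrompt);
  // The edit pass is optional; skip it if no instruction was given.
  return editInstruction ? steps.edit(generated, editInstruction) : generated;
}
```

Structuring it this way means the app's UI only ever calls `makeThumbnail`, and swapping either model later doesn't touch the workflow logic.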
So that was definitely a really long prompt we just typed in, but we're adding Nano Banana. I want this app to basically have a bottom tab. It's going to be YouTube ideas, right? A similar workflow to the one we saw earlier, where it asks me questions and then I give it a format, and then we're going to have an image or thumbnail; we'll call that Thumbflow. These will be the two bottom tabs. Earlier in the video we came up with this workflow for coming up with really good video ideas and outlines; in fact, maybe this will just be the write tab or outline tab, and then we'll have a thumbnail tab. These can just be icons, and each will have its own interface: when the write tab is selected, it has its own interface, and the thumbnail tab has its own interface when it's selected. So we'll have these two bottom tabs, and we can tell the Vibe Code app to do that in just a second. While it's loading, we might as well generate some icons; we can actually use this technology. The Vibe Code app builds image generation into the app so that you can generate icons.
I'm just going to do "3D Japanese-style blue pen icon," and we can enter this right here and paste it in. I'm going to run this about four times, then change "Japanese-style blue pen icon" to a YouTube thumbnail icon and run that four times. As you can see, these are loading in the Vibe Code app, and we can select them in just a second. You know what, I really like this one, and I also like this one. So I'm going to select this one and say "pen icon," then go to this image and say "thumbnail icon." Then my prompt: "The central application should be divided into two parts, so put two tabs at the bottom, a writing tab and a thumbnail tab, but use the icons above for this. Have the writing interface just be pretty blank, the one we created earlier, and have the thumbnail interface be the one I described in the previous prompt; leave that unchanged. Use these icons big at the bottom and have a little animation: make them animate a little when you tap them as you switch to the other screen." All right.
So we just sent off this prompt, but the previous prompt is done, so we can take a look. We have idea on the left, thumbnail on the right. We can click thumbnail and see if it works. We can upload a photo of me and choose this one. I can't see my face right here, but that might just be an issue with how it's framed. Maybe we should choose one that's square; maybe this would work better. Okay, there we go, square images do look a little better. Now we can generate with Ideogram. Oh, the app is updating. Ooh, look at these: we have idea and thumbnail and they animate a little bit. "Make the idea and thumbnail icons twice as big, but get rid of the text beneath them. I don't need the text that says idea and thumbnail; just have the icons, make them bigger, and make them animate more when I press them." And boom. Okay, it's done; let's take a look. Ooh,
they animate. This is kind of fun. All right, this is pretty cool. Now I'm going to upload that photo. Where's that square one? Right there. Let's upload this photo: "Image of man in studio. Generate thumbnail." Let's see if this works. Okay, it doesn't. Let's go ahead and just hit this fix button. There was probably some sort of rendering error; all you have to do is hit fix error, and these are free to fix within the Vibe Code app. Yeah, it struggled with that character reference API. It'll probably look at the documentation a little more and hopefully get it. Okay, our app is now done; hopefully that error is fixed. If we scroll down to this square image of me... actually, let's use this image right here. Thumbnail: "Image of a woman asking a question. Generate thumbnail." Let's see if the Ideogram portion of this works. Okay, generated successfully. Here it is down below; we can download the thumbnail. It was saved to Photos apparently; we can go to Photos and there it is, downloaded on our phone. Okay, now I can edit it with Nano Banana. Okay, this is a really weird interface. "Make her hair green. Edit thumbnail." Let's see if this works in one shot; the Ideogram part took two prompts, so this should work on the first try, I hope. Edited successfully, there we go. And here it's kind of vertical, so hopefully we can now edit this one: "Make the background dark and her shirt green." Boom. We have this app working. We have
Vibe Code; we're using vibecode.dev. In this thumbnail feature, we can generate with character reference. I discovered this workflow on Krea, and that's why we mess with all of these popular AI tools: I found it on Krea, generated with character reference, and then I can edit with Nano Banana and create multiple variations. So that's what we've done; that's half of this. Now I want to do what we were doing earlier. Remember earlier in the video when we created this presentation style guide? Actually, no, this one right here. If we open it up, we can actually use this style guide. I'm just going to copy the full style guide, the one we got from YouTube. We can put this into a prompt, and this style guide is a little document that will be used every time we try to come up with a YouTube video idea. The first thing I want to do is go from my idea for a presentation to a presentation in the style of this style guide right here.
The way we're going to do this is to first open up the chat within the Vibe Code app, which is Claude Code, and clear the history; every time you switch features, you should just clear the history. Now I'm going to go to the APIs tab. Instead of having to go to OpenRouter or Replicate or FAL to find APIs, we can just use them in this little built-in API browser. I'm going to use Gemini 2.5 Flash for this, and I'm going to say: "Okay, I want you to focus on the first tab, which is YouTube idea. I'm going to put in my YouTube idea, and I want you to generate a presentation using structured outputs in the style of the style guide that I'm pasting below. So the user puts in their idea, you take their idea, and you write a full presentation based on that presentation style guide. Use this API." Now, I already have the style guide saved to my clipboard, I believe. Yep, I can copy it, and on my phone I'm just going to paste it right here. Paste. Now we're going to run it; that's the prompt we're using, and we'll wait for Claude Code to finish. However, when I enter one of these ideas and press generate, I actually want a sound effect to happen. So I'm going to go to the audio tab, choose sound effect, and type "pleasant whimsical ding" as the noise. I'm just going to run this a bunch of times and generate these sound effects.
Ooh, I like that one. Oh, that's not good. This one's definitely the best; yeah, let's go ahead and use this one. "When I press generate on this page, the one that generates the report, play this noise at 30% volume." Okay, it's done; let's go ahead and try it. I've copied just a basic idea. Let's see if this works; we just want to test it. Ooh, it made the noise, I forgot about the noise. Oh, what the heck? Here's the slide. Ooh, the visual. Oh, I can already see this. Okay, this is pretty cool. So here it just kind of describes the visual. Now I want to add that first part: "Okay, great. You did a really good job. I like the slides, I like the scripts, everything about that part. Now I want to add an intermediate step. I want you to use the following API to ask me questions about my idea. When I type my idea, I'm going to hit generate presentation, but before that, it should say: here are five questions that I want you to answer. Make this interface minimal and figure out a way for me to answer the follow-up questions. It should use AI to ask me good questions based on the final output: this new AI prompt should be created in such a way that it asks questions that would yield an adequate amount of information to generate the presentation based on the style guide, so the AI that asks me those five questions should be aware of the style guide as well. I also want to be able to skip the intermediate questions. After I answer the five questions, I want to be able to press generate presentation. Then it takes all of my inputs, my original idea and my answers to the five questions, and gives all of that to the create-presentation AI that we've already created; that process should stay exactly the same." We're just creating a step before it that lets us ideate better, with AI prompting us so that we get all of our ideas out there.
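Since the generation step uses structured outputs, the presentation should come back as JSON with a predictable shape. Here is a rough sketch of what that contract might look like, plus a validator the app could run before rendering; the field names are my guess at a schema, not what Gemini or Vibe Code actually return:

```typescript
// Hypothetical shape for the structured presentation output, plus a
// validator you'd run before rendering slides in the app.
interface Slide {
  title: string;
  script: string; // what you say on this slide
  visual: string; // description of the on-screen visual
}

interface Presentation {
  idea: string;
  slides: Slide[];
}

function isValidPresentation(data: unknown): data is Presentation {
  if (typeof data !== "object" || data === null) return false;
  const p = data as Presentation;
  return (
    typeof p.idea === "string" &&
    Array.isArray(p.slides) &&
    p.slides.length > 0 &&
    p.slides.every(
      (s) =>
        typeof s.title === "string" &&
        typeof s.script === "string" &&
        typeof s.visual === "string"
    )
  );
}
```

Validating before rendering matters because even structured-output modes can occasionally return malformed or truncated JSON, and it's better to retry the generation than to crash the slide view.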
That was a really long prompt. You're following this, right? We are basically creating an app that encodes the workflow I was showing you earlier, the one I commonly use. One other thing I need to do real quick is tell it which AI to use, and since Gemini Flash did a really good job, I think we should just use Gemini Flash again. Now we can send this prompt to Claude Code. Let's take a quick look: we can now generate presentation slide decks with a script and slides in this part, and we can also generate thumbnails using our face, then edit them with Nano Banana. It's a very simple app; we have our own custom buttons and our own custom sound. As you can see, this is how you vibe code using different APIs. And if you have a technology that isn't built into Vibe Code, you can very easily press the environment tab, the ENV tab. Here you can see all of the variables that are built in automatically, but you can also add an environment variable. This is identical to the process in Replit; remember the Replit secrets tab? It's the same exact thing. You can add any external API to your app in this tab by going and getting your own API key, so you can use basically any technology. You use the APIs tab when you want one of the built-in ones (and they're adding a lot more to the Vibe Code app soon), and you can also add variables to the ENV tab, which is very fun. All right, it's done; let's go ahead and take a look.
So here we can say building apps with AI. It should make a noise and it should
AI. It should make a noise and it should give me follow-up questions. Okay, by
the end of this video, what specific type of AI powered app you will be able to build a mobile app using replet vibe code and building a web app with replet
and also you will understand the basic technologies that you need in order to use APIs in your apps properly and also create workflows and learn how to create
interesting workflows. Okay, who is the
interesting workflows. Okay, who is the primary audience? Beginners. We're going
primary audience? Beginners. We're going
to be demoing on the Vibe Code app and we are also going to be demoing on Replet. And I'm just going to generate
Replet. And I'm just going to generate the presentation real quick. Generating.
Okay, there we go. If we go to script here, we now have a full script. We
didn't create the the slides page yet.
We haven't made that. We haven't turned that into an image. We could probably do that. In fact, let's just go ahead and
that. In fact, let's just go ahead and do that. So, we've created an app that
do that. So, we've created an app that lets us outline and it asks us follow-up questions and then it generates a script when we're done. And we can generate
thumbnails, right? I can upload a photo of myself and say: thumbnail image of a man on a mountain. Generate thumbnail. And it will take my face and put it on the thumbnail. And then I can iterate and edit it with AI. I can also ideate with AI. And so we've created this power tool for creating YouTube thumbnails, which are really important, and script ideas. Here's an image of me.
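The workflow this app implements, take an outline, ask follow-up questions, then turn the answers into a script, can be sketched as a simple chain of model calls. This is a minimal illustration with a stubbed `ask_model` function; in the real app each call would go to an LLM API, and the prompts here are assumptions, not the app's actual prompts:

```python
from typing import Callable

def run_script_workflow(outline: str,
                        answer: Callable[[str], str],
                        ask_model: Callable[[str], str]) -> str:
    """Chain: outline -> follow-up questions -> answers -> final script."""
    # Step 1: have the model propose clarifying questions about the outline.
    questions = ask_model(f"List follow-up questions for this outline: {outline}")
    # Step 2: collect the creator's answers (in the app, this happens in the UI).
    answers = answer(questions)
    # Step 3: generate the script from the outline plus the answers.
    return ask_model(f"Write a video script.\nOutline: {outline}\nNotes: {answers}")

# Stubbed model for illustration only; swap in a real LLM call.
fake_model = lambda prompt: f"[model output for: {prompt[:30]}...]"
script = run_script_workflow("building apps with AI", lambda q: "beginners", fake_model)
```

Each step just feeds the previous step's output into the next prompt, which is all "chaining" means here.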
We can say, okay, this one is pretty interesting here. If we go to the... wow, that actually does look like me. I'm on a mountain. I probably could have said "fancy," but that is a really solid image. I don't like how I'm wearing so much brown for this thumbnail. So we can edit it with Nano Banana: make him wear a jean jacket and also add white text that says "The Journey Begins." This is big white text to make the thumbnail epic. Now we can hit edit thumbnail.
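Under the hood, an "edit thumbnail" action like this typically sends the source image plus the natural-language instruction to a hosted image model. The sketch below only builds such a request payload; the field names and model identifier are illustrative assumptions, not the actual Nano Banana or FAL schema, so check the provider's docs before using them:

```python
def build_edit_request(image_url: str, instruction: str) -> dict:
    """Assemble a hypothetical image-edit API payload (no network call made)."""
    return {
        "model": "image-edit",   # hypothetical model identifier
        "image_url": image_url,  # the thumbnail to edit
        "prompt": instruction,   # natural-language edit instruction
    }

req = build_edit_request(
    "https://example.com/thumbnail.png",
    "Make him wear a jean jacket and add big white text: THE JOURNEY BEGINS",
)
```

The takeaway is that the whole edit is just one structured request: an image reference and a plain-English instruction.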
Again, this interface is not amazing. We
can make improvements to this, but this is not what this video is about. This
video is focused on how to use all these different technologies to create a workflow that makes sense. Look at this.
We can download this thumbnail. Again, I
wish there was a way to view it. We
could very easily add it. But again,
this is what we created. Photo of me using me as a reference image. Now we
can edit it. The journey begins. So
yeah, so that was the vibe coding AI apps complete guide, and we covered it all. We talked about understanding the tools. We talked about comparing the different tools using workflows; we used ka nodes and Google AI Studio to compare the different models. We talked about integrating the tools, and we really looked at FAL and using that in Replit. And then we built the apps directly on Replit. We used our API key in Replit to create a web app, and then we used the built-in APIs to create a mobile app right on vibecode.dev.
And yeah, that is kind of what we talked about in this video. And remember, vibe coding is taking off. It is massive. It was the word of the year. And I highly recommend diving in and figuring out if you can actually build a useful piece of software, because once you do it once, I guarantee you, you'll be addicted. And so be warned, right? You will get addicted if you start vibe coding. It's easy. It really doesn't matter what tool you use. We aren't focused on the tools. You're focused on the technology that you can use to solve your own problems and solve others' problems. And once you're able to solve your own problem, you can snap a paywall onto it and you can charge for it. People are becoming millionaires from this. And so here's an overview. This is what we talked about.
Thank you guys so much for watching. I know we bopped around a lot in this one, but I really appreciate it. So thank you for watching, and I'll see you here for the next one.