
Zero To Your First AI Agent In 26 Minutes (no code)

By Tina Huang

Summary

Key Takeaways

  • **AI Agents: Six Core Components**: An AI agent is a software system that uses AI to pursue goals and complete tasks for users. It comprises six core components: the model (brain), tools for task execution, knowledge/memory, audio/speech capabilities, guardrails for safety, and orchestration for deployment and monitoring. [00:30], [00:51]
  • **Build AI Agents with n8n, No Code Needed**: You can build a functional AI agent without writing any code using n8n, a workflow automation tool. This involves setting up triggers, connecting AI models, defining tools, and managing memory to achieve specific tasks. [02:27], [04:32]
  • **AI Agent Guardrails & Error Handling Are Crucial**: Essential components that are often skipped are guardrails (to prevent foul language or abuse) and error handling (to manage failures like tool unavailability). Implementing these ensures the AI agent functions properly and safely in real-world scenarios. [15:56], [16:37]
  • **Evaluate and Improve Your AI Agent**: Orchestration, including deployment and monitoring, is key for AI agents. Implementing evaluations with test cases lets you measure agent behavior and continuously improve its performance by tweaking prompts and configurations. [20:02], [20:34]
  • **From Summary to Audio: Full Workflow**: A practical AI agent workflow can research a topic, summarize the findings, convert the summary to audio, and even email the audio file. This demonstrates how to integrate multiple AI capabilities into a single automated process. [02:34], [11:39]

Topics Covered

  • Build AI agents with zero code using n8n.
  • AI agents require six core components to function.
  • This AI agent summarizes topics and converts them to audio.
  • Guardrails and orchestration are crucial, often-skipped components.
  • Measure your AI agent's performance to improve it.

Full Transcript

This is your quick start guide to build

a fully functional and deployed AI agent

today with zero lines of code. We're

going to start off with the basics of

what makes up an AI agent and then

implement it using n8n. All of this is

doable with no code. As per usual, it's

not enough for you just to listen to me

talk about stuff. So throughout this

video, there's going to be a little

assessment, and if you can answer the

questions, then you are well on your way

to building your first AI agent. A

portion of this video is sponsored by

Lovable. Now, without further ado, let's

get going. All right, let's first start

off with a crash course on what is an AI

agent from a practical perspective. An

AI agent is defined as a software system

that uses AI to pursue goals and

complete tasks on behalf of users. For

example, a customer service AI agent

would be able to take user queries and

help them solve their problems. Or a

sales assistant AI agent would be able

to qualify leads, book meetings, and

follow up with sales prospects. There

are lots and lots of different types of

AI agents, but each AI agent is made up

of six core components. The first one is

model. This is the brain that powers the

AI agent. And this can be ChatGPT, it

can be Claude, it can be Gemini, it can

be small models, it can be big models.

Next up, AI agents need tools to be able

to perform their respective task. For

example, a personal assistant AI agent

would need to be able to have access to

things like your calendar in order to

book appointments. Then there's

knowledge and memory. A therapy AI agent

needs to remember the sessions that they

had with a patient over multiple

sessions. And a legal agent may need to

have access to a knowledge base of

specific cases that it's meant to be

analyzing. Audio and speech. Many AI

agents will have language capabilities

to be able to more naturally communicate

with humans. Guardrails are safety

mechanisms to ensure proper behavior.

You don't want your customer service AI

agent to be swearing at people, for

example. And finally, there's

orchestration. These are systems to

deploy and monitor and evaluate your AI

agents. You don't want to just make your

AI agent and release them into the wild

and not care about what happens

afterwards. You can have all these

different components there, but if you

don't know how to assemble them

properly, then it's also not going to

work out. You can give your AI agent

like the best tools out there, but if

you don't tell it that it has these

tools and it doesn't know how to use it,

then it's completely useless. That's why

people spend a significant amount of

time working on the prompts. All right,

that is our little crash course today on

the theory of building AI agents. If you

do want to have a little bit more

in-depth um explanations for things, I

did make like a full video over here

that you can check out and it goes into

a lot more depth, but this is enough for

us to build our first AI agent and we're

going to implement all of these

different components and the prompt

using n8n. But first, let's do this

quick little assessment which I will put

on screen now. Please answer these

questions to make sure that you fully

understand what it is that we just

talked about.
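Before moving on to n8n, the six components above can be sketched in plain code. This is an illustrative Python sketch only, not n8n internals; every name in it is hypothetical, and a stub function stands in for the real model (ChatGPT, Claude, Gemini):

```python
# Minimal sketch of the six agent components; the "model" is a stub
# standing in for a real LLM call (all names here are hypothetical).

def stub_model(prompt: str, tools: dict) -> str:
    # 1. Model: the "brain". A real agent would call ChatGPT/Claude/Gemini.
    if "search" in tools and "research" in prompt:
        return tools["search"]("vibe coding")       # 2. Tools: task execution
    return "summary of: " + prompt

def guardrail_ok(text: str) -> bool:
    # 5. Guardrails: block obviously unsafe output before it ships.
    banned = {"foul", "abuse"}
    return not any(word in text.lower() for word in banned)

def run_agent(task: str, memory: list) -> str:
    # 6. Orchestration: wire the pieces together in one place.
    tools = {"search": lambda q: f"search results for {q}"}
    memory.append(task)                             # 3. Knowledge/memory
    output = stub_model(task, tools)
    if not guardrail_ok(output):
        return "blocked by guardrail"
    # 4. Audio/speech would convert `output` to sound here (omitted).
    return output

memory: list = []
print(run_agent("research vibe coding", memory))
print(memory)
```

The point of the sketch is the last paragraph of the crash course: the model only uses a tool because the orchestration handed it one and the prompt made the task clear.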

Okay, so this is n8n, which is a flexible

AI workflow automation tool and it's

going to be what we're using to build

our first AI agent. Okay, so after you

sign in, you can create a new workflow.

What we're going to be building today is

a hybrid AI research assistant and

learning assistant. This is actually one

of my favorite workflows. In my line of

work, I have to learn things like a lot

of different things really, really

quickly and keep up with, you know, all

the trends and things that are happening

in the AI world. So, what I do is that I

have this AI agent that collects all the

information surrounding a specific

topic, summarizes it, converts that into

audio format, and I would actually

listen to these condensed summaries to

learn about a specific topic really,

really quickly. I am very much an audio

learner, so this works really well for

me. And it's especially helpful if it's

surrounding a topic where there's not a

lot of like YouTube videos and courses

and resources that's already been

created on that topic. Okay, so coming

back to n8n here, the first step we're

going to do is we need something that

triggers the entire workflow. So in this

case, we want to create a form

submission where the user is able to

input the query that they want to

search. So the title of this form, we

can call it search form. The description is

input your search query to create

an audio version of a specific topic to

learn and the elements that we want in

this form is topic so we can put a

placeholder like I don't know like live

coding for example and then we want to

add another element so we can call that

time period because we want the user to

specify what time period they want the

resources to be drawn from. This field

can also be text. We can say like

past 6 months something like that. We

will make both these required fields as

well and we will execute step to see if

it works. So this is what it's going to

look like. The search form that we have

here the topic. So we can say like vibe

coding and then time period is past 6

months submit. And we see that it was

able to submit as a task. Great. So this

is going to be what triggers it. And

after the user um goes in and inputs

what they want to submit, the next thing

we want to do is have the AI agent. This

is where we're going to start building

the AI agent. All right. So with this AI

agent here, the first thing I want to

do, remember the first component of an

AI agent is the model. So I'm going to

connect a chat model. In this case, I'm

just going to use OpenAI's ChatGPT.

Let's do that. So here you can create a

new credential. And it's super easy. It

literally prompts you exactly what to

do. Um you can ask the assistant as

well. So how do I set up credentials for

OpenAI? This is the n8n assistant. So

it'll tell you exactly how to do that.

So we go here, sign in, go to the API

keys, and create a new secret key. So

n8n project, create new secret key.

Copy that, paste it over here, and there

you go. That credential created.

Wonderful.
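For the curious, what that credential does under the hood is simple: outside n8n, an OpenAI credential is just an API key read from the environment and sent as a bearer token. A sketch of how a chat request is assembled (field names follow OpenAI's public HTTP API; the request is only built here, never sent):

```python
import json
import os

# The key the n8n credential stores; here read from an environment variable.
api_key = os.environ.get("OPENAI_API_KEY", "sk-...")

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "Summarize vibe coding."}],
}

# A real call would POST this to https://api.openai.com/v1/chat/completions
print(json.dumps(payload, indent=2))
```

n8n fills in these headers for you every time the chat model node runs, which is why you only paste the key once.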

Next up, we're going to write the prompt

here. And to do this, I'm actually going

to go into ChatGPT and I'm going to copy

paste this meta prompt here. So, this is

a prompt where you can specify what your

use case is and it would generate a

prompt for your AI agent. Um, and I'll

actually put this prompt in the

description as well. So, you can use

this to get started very quickly. You're

basically telling ChatGPT to create a

complete self-contained n8n-ready agent

prompt for the following use case. So,

this prompt is going to produce a good

starting prompt for your AI agent. So,

I'm going to say create a

research/learning

AI agent that takes the input of a

specific user query and time period to

search for the information to produce a

summary about that topic. So that covers

the role, the inputs and the task. We

also want to add this summary will be

translated to audio format at the end,

but this agent will only create the text

summary first, but make sure that it's

optimized for audio. So that covers the

role, the input, the task, and the

output. So constraints is going to

include make sure that sources are

reputable

sources and base the sources on as many

primary sources as possible. So in terms

of the tools that the agent will need in

this case we'll use perplexity as the

way to gather the information to produce

the research surrounding that topic. So

we need to tell it you have access

to perplexity

API in order to search up the

information to produce the summary. This

is good enough to get started for a lot

of the other parts of this. This prompt

should be able to take care of filling

in most of the gaps. You will also store

the information

in just simple memory for now. So just

have some storage of that information.

So this is good enough to get started.

you know, don't worry too much about it.

For any additional information, this

prompt will fill out most of it for you.
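The meta-prompt pattern itself fits in a few lines. This is a paraphrase of the idea as a template string, not the exact prompt from the video description:

```python
# Sketch of the meta-prompt pattern: one template that turns a plain-language
# use case into a full agent prompt. The wording is a paraphrase, not the
# exact prompt from the description.
META_PROMPT = (
    "Create a complete, self-contained, n8n-ready agent prompt for the "
    "following use case:\n{use_case}\n"
    "Cover the role, inputs, task, output format, constraints, and tools."
)

use_case = (
    "A research/learning AI agent that takes a user query and time period, "
    "searches with the Perplexity API, and produces a summary optimized "
    "for audio."
)

print(META_PROMPT.format(use_case=use_case))
```

The template carries the structure (role, inputs, task, output, constraints, tools), so your one-sentence use case is all you have to write each time.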

So, press enter. All right. So, it has

this prompt over here, which we will

copy paste over here. So, here's the

prompt and there are a few small tweaks

that we do want to make. So, click

expression in order to allow you to use

variables. So, for example, here you

have the research topic and it just

shows like research topic, right? But

here what you can do is actually go to

the schema of the input from the

previous node which is the form and you

can drag this variable. So to the

variable that the user submits. So this

is going to be the topic and then on the

time span it has time window and we can

just replace this with the time period

from the user in the form. This has

other stuff like word limit, audience

regions and things like that and the

focus. Yeah, we can just leave that

because you don't have that information

provided here as well. Interpret input

and normalize time. So we'll just very

quickly do the same thing. So just drag

the topic and the time window is going

to be the time period. Great. So we're

searching with perplexity blah blah

blah. You know, we will fix this in case

it's not good, but for now this is fine.

One more here. I'm just changing the

topic in a time window because I know

that these are things that have been

already submitted by the user. So might

as well include them. Okay, great. Now,

before we can actually execute this

step, we need to provide it with the

memory and the tools that we said we're

going to provide. So, starting off with

tools. So, under tool, we're going to

give it the perplexity tool. Super easy.

All you have to do is search it up on

n8n and then it will show you the tool

over here. So, for the credentials, very

simple. You can also just click create

new credentials. You can ask the

assistant um for the exact way of

setting this up. It's very, very similar

to OpenAI. So, in the interest of time,

I'm just going to use the one that I

already set up here. The operation is

going to be message a model. Model that

we want to use from perplexity is the

sonar model. Um and the text that we

want to get. So what are we going to

actually input into the model? Right? We

would actually like to click here that

says let the model define this

parameter. And in terms of simplifying

the output, we're also going to let the

model define this as well.

Great. So now we have this tool set up.

And in terms of memory, let's include a

simple memory that can just store the

specific sessions in here.

Again, like all of these things we can

change like the important thing is just

to get the components there first and

then we can optimize it later. Okay. So,

and for here the session ID, we'll just

write define below and we'll just call

it summary. So, we're just giving a

variable name to save information as.
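Conceptually, simple memory is just a per-session chat history keyed by a session ID, here the fixed key "summary" we defined above. A minimal sketch of that idea (not n8n's storage code):

```python
# Sketch of "simple memory": a chat history per session ID.
from collections import defaultdict

memory: dict = defaultdict(list)

def remember(session_id: str, role: str, content: str) -> None:
    # Append one turn of conversation under the given session key.
    memory[session_id].append({"role": role, "content": content})

remember("summary", "user", "Research vibe coding, past 6 months")
remember("summary", "assistant", "Vibe coding is ...")

print(len(memory["summary"]), "turns stored under session 'summary'")
```

Because the session ID is fixed, every run of this workflow reads and writes the same history, which is exactly what we want for a single-user research agent.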

All right. Now, we have the moment of

truth and let's actually try running

this AI agent. What it should produce is

a summary. Okay, node executed have a

bunch of check marks. So that is a good

sign. Let's actually see what happened.

Okay, if you go over here and look at

the output. So it looks like we do have

an output. Okay, this is looking

promising. So vibe coding the practice

of guiding to write code. What it is

vibe coding blah blah blah. So findings

3 to seven items. Okay, so we might want

to change the format of this a little

bit. But it looks like we do have the

output here. So that is good. So if you

click on the logs, you can actually see

what exactly the AI agent was doing. So

if we look over here, what the AI agent did first is

the simple memory, and it input the

prompt that it's executing. So it

started with that. Then it went to the

open AI chat model, gave it the prompt

that's over here. The model decided that

it was going to message the model in

perplexity to do the research and use

the perplexity tool in order to gather

the information that's there. Then that

information is passed back to the OpenAI

model where it compiled everything

together into a summary and then stored

it again into simple memory. Yeah. So

this is a great way to just see like

what your agent is actually doing just

to make sure. You can also look at um

perplexity if you're like being paranoid

like I am to see okay like it actually

does have the content that is coming

through. All that information is there.

Great. And then you can also look into

simple memory to double check as well.

Oh, look. It looks like it did. It did

save all this information into simple

memory as well in the chat history.

Wonderful. Great. So, at this point,

what you want to do is actually click

save because if you don't click save,

then you're going to lose your entire

workflow and feel very sad. So, this is

great. Now, we have a summary that's

here. It's not perfect, but it's pretty

good. So, the agent itself has done its

job. Wonderful. But, I do want to have

this translated to audio format. So,

what I'm going to do is add another node

here and call it. This is going to be

another OpenAI node. And under OpenAI,

there's a lot of different actions. And

one of the actions that you can take is

generating audio. So, I'm going to use

the same credentials that I had from

OpenAI audio source. And the text input

that I want to generate here is going to

be the output from the AI agent. So, I'm

going to drag the output variable that

is here. Now I'm going to execute this

step to see if it actually works. You

always want to execute things one step

at a time by the way. So you're able to

catch any errors. And it looks like node

is executed successfully. Let us see.

Ooh. Title: Vibe coding. The practice

of guiding AI to write code has surged

in popularity and capability over the

past 6 months. This rise is reshaping

how developers work and how software

companies view AI assisted development.

Now what it is vibe coding involves

using artificial intelligence models to

generate, explain, test and refactor

code based on user prompts. Create

million dollars in cash signaling strong

market confidence. Source TechCrunch

June 2025. Hands-on experiments show

Vibe coding can quickly produce usable

code for production features when

combined with human validation, although

results vary by domain and data quality.

source YouTube 2025.
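The generate-audio action we just used corresponds to OpenAI's text-to-speech endpoint. A sketch of the raw request as it might be assembled (field names follow OpenAI's public API; the summary text is a stand-in, and the request is built but not sent):

```python
# Sketch of the "generate audio" step as a raw API request.
summary_text = "Vibe coding, the practice of guiding AI to write code, ..."

tts_request = {
    "url": "https://api.openai.com/v1/audio/speech",
    "body": {
        "model": "tts-1",      # or another available TTS model
        "voice": "alloy",
        "input": summary_text,
    },
}

# The response body is binary audio (e.g. MP3) you would write to a file:
# open("summary.mp3", "wb").write(response.content)
print("voice:", tts_request["body"]["voice"])
```

In n8n, dragging the agent's output variable into the text field is what fills the `input` slot here.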

Okay, so this is our first try and it's

honestly not bad, right? We managed to

get it to work with just the initial

prompt that we had. Obviously, there's a

few things that we do want to clean up

here. Like for example, you don't want

to have the citations like embedded into

the audio and you don't need to like say

like what the title is perhaps and maybe

you know there's like little things that

we can tweak. And this is what we would

do with the prompt. I changed the prompt

in order to get the format, to get the

summary to be the way that we want it to

be. And but that is really not bad at

all. Now, to finish off this workflow,

it would be a pain in the ass if I just

had to like go and download it every

time, right? So, what I'm actually going

to do is I'm just going to ask it to

email it to me through my email. So, we

can add another item. We can call it

email. Wonderful. Gmail. And it has,

let's see, let's see. Let's see. Send a

message. Wonderful. Same thing over

here. You can create a new credential in

Gmail. Super easy. You can sign in with

Google and then it would directly allow

you to connect it um with n8n. All

right. So the resource here is

going to be the message. The operation

is that we want to do is going to be

send. So the email I'm going to send is

so tina@lonelyoctopus.com

topic summary. Email type html is fine.

And the message I'm just going to say

here's the audio file. Under options we

can do attachments. So the audio file

that's here, we can use that as the

attachment. And let's now try executing

this step. And now let's actually check

our email. Wow, amazing. Look at that

topic summary. And it has the audio file

here. It's sent as an attachment

titled vibe coding summary for past 6

months. Audience: general.
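What the Gmail node does here can be sketched with nothing but the Python standard library: build a MIME message with an audio attachment. The recipient and the audio bytes below are placeholders, and actually sending it would need SMTP credentials:

```python
# Sketch of "send a message with attachment" using only the stdlib.
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "you@example.com"        # placeholder recipient
msg["Subject"] = "topic summary"
msg.set_content("Here's the audio file.")

audio_bytes = b"fake-mp3-bytes"      # in the workflow, the TTS output
msg.add_attachment(audio_bytes, maintype="audio",
                   subtype="mpeg", filename="summary.mp3")

# A real send would be something like:
# smtplib.SMTP_SSL("smtp.gmail.com", 465).send_message(msg)
print(msg["Subject"], "-", sum(1 for _ in msg.iter_attachments()), "attachment")
```

n8n's Gmail credential handles the OAuth sign-in so you never touch SMTP directly, but the message it sends has this same shape.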

If you want to take your AI agent to the

next level and build a customized and

aesthetic web app without even needing

to write a single line of code, you

should check out Lovable. Lovable lets

you build full stack apps just by

describing what you want. You can take

your AI agent/AI app into a functioning

working product with a backend, a front

end, and a database all integrated for

you. It works with tools that you're

already using like n8n. And it also

has built-in integrations with other

tools like Supabase and Stripe. You get

clean editable code that you can edit

within Lovable itself or you can export

it to wherever you want. There is a free

tier with five build credits per day

that you can get started for free. You

can also use my code TINA20YT in the

next 30 days to get 20% off your first

purchase of the Lovable Pro plan. The

link is in the description. Thank you so

much Lovable for sponsoring this portion

of the video. Now, back to the video.

And yeah, there you go. Pretty cool,

right? Okay, so this is a functioning

workflow at this point, right? And I

think a lot of people on YouTube at this

point will be like, "Yay, wonderful,

amazing, great." You know, and maybe

they'll just tell you to, oh, all you

have to do is deploy it now and then

you're good to go. But, but remember the

six components of these AI agents. We

have not done all six components yet. We

do have the model, we have the tool, we

have the memory. We decided to have

audio and speech functionality. So

there's two more. Pop quiz right in the

description. What two things are we

missing right now? Yes, guard rails and

orchestration. These are the two things

that people always skip out on. And then

when they actually deploy their workflow

and actually use it in real life,

they're going to end up with a lot of

problems because they don't have these

components to make sure that their AI

agent is functioning properly and doing

what it's supposed to be doing in the

long run. So that's why I'm actually

going to add in these two components

now. So, when it comes to guardrails,

the two minimum things that you should

think about. First, that the output doesn't contain things like

foul language, abuse, you know, like

racist stuff. Um, it probably won't

because it's coming directly from

perplexity, but you can imagine if we're

combining a lot of different sources

together and not going through something

like perplexity, you do need to screen

for these kind of things. So, we need to

make sure we have something in place for

that. And the second component that we

want to make sure of is some sort of

error handling ability. So, what happens

if it comes up with something and

perplexity fails for some reason, right?

and it doesn't have the information it's

supposed to have. You don't want your

entire workflow to just break. So you

need to come up with error handling that

anticipates these cases: when

it happens, what should your workflow

do? So let's actually implement these

first. Let's put in a mechanism to make

sure that the summary that's coming out

is not containing bad language. And we

want this to be done right after the AI

agent. So we're going to add another

node here. This is also going to be from

OpenAI and they have an action that is

classify text for violations. How

convenient, right? So yeah, using the

same OpenAI credentials and it's just

going to make sure it doesn't violate

any standard safeguards. So we will drag

the output variable here from the AI

agent which is the summary and we just

want to make sure that it doesn't

violate anything. And let's execute the

step and see what it looks like. So it

says here that it's flagged as false. So

And it's flagging for these different

categories like sexual hate, harassment,

self-harm, sexual/minors, etc.,

etc. And it actually gives a score for

all of this as well. And just to test

this out, like say for example, we're

going to write here like I don't know, I

hate you. You suck. Technically, it

should flag. Yes, it flagged as true.

And the category it flagged is

harassment. Yeah, that's not good. So,

we do know that this works. Wonderful.
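The shape of what the classify-text-for-violations action returns is worth seeing once: OpenAI's moderation output is a `flagged` boolean plus per-category booleans and scores. The response below is a stub shaped like that output, not a live API call:

```python
# Stubbed moderation result, shaped like OpenAI's moderation output.
stub_moderation = {
    "flagged": True,
    "categories": {"harassment": True, "hate": False, "self-harm": False},
    "category_scores": {"harassment": 0.93, "hate": 0.01, "self-harm": 0.0},
}

def violations(result: dict) -> list:
    # Collect the category names that actually tripped the guardrail.
    return [name for name, hit in result["categories"].items() if hit]

if stub_moderation["flagged"]:
    print("flagged for:", violations(stub_moderation))
```

That top-level `flagged` boolean is the single value our next node will branch on.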

Let's put the output back. Now, what we

need to think about is say like if there

is no um flag and there was no issue

with the summary, we probably want to

just go through with this entire

workflow. But what if there is? Well,

there's a lot of things that you can do,

right? Maybe you wanted to redo it

again, like ask the AI agent to redo it

again. You can ask it to send a warning

message. You can go ahead and still do

it, but then just have like a flag when

you're sending the email saying, "Hey,

there's a violation that was flagged."

And maybe in the body of the email,

write that, oh, like um here is what it

got flagged for, just FYI. So, there's a

lot of different ways that you can deal

with this, and there's no right or wrong

answer. It's about how you want this AI

agent to behave. So in this case, what I

want is if it does classify something as

a violation, I want it just to directly

cut this workflow and just send a

warning message. So to do this, I'm

going to add another node after the

violations one. It's called a switch

node. And what we want it to do, so if

the flag value is equal to false, then

we want it to continue on the workflow.

While if the flag value is equal to

true, then we want it to do something

else. and just toggle convert type when

required just to make sure that these

errors disappear. All right, so we have

if it's false it would continue on and

if it's true we want to add another node

that is still going to be like an email

node and just send a message summary

error. There was a text violation flagged;

please check workflow for details. All

right. Now, to test to see if this

actually works, what we're going to do

is over here. I hate you. This should

flag as harassment. And if we click the

switch. Okay, maybe that didn't work.

Let me try that. Let's try again. So,

the text input here. Um, we can do

something like you are terrible. I hate

you. Bad. Execute step. This is flagged

as true. So in the node it should also

go here and it should have sent an

email.
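The switch node's logic, including that "convert types when required" toggle, reduces to a few lines. This is a sketch of the idea, not n8n's implementation; the toggle matters because flag values can arrive as the strings "true"/"false" rather than booleans:

```python
# Sketch of the switch node: normalize the flag's type, then branch.

def to_bool(value) -> bool:
    # The "convert types when required" toggle: flag values may arrive as
    # strings like "true"/"false", so normalize before comparing.
    if isinstance(value, str):
        return value.strip().lower() == "true"
    return bool(value)

def route(flag_value) -> str:
    # False -> continue the normal audio + email branch.
    # True  -> cut the workflow and send the warning email instead.
    if to_bool(flag_value):
        return "branch: send warning email"
    return "branch: continue workflow"

print(route("false"))
print(route(True))
```

Without the normalization, the string "false" is truthy and every run would take the warning branch, which is the kind of silent bug the convert-type toggle exists to prevent.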

Now final last component is

orchestration. So this includes things

like deployment, includes things like

monitoring, evaluating things and

improving the agent over time. So the

easiest thing that you can do is just to

deploy it and hope that it keeps

performing the way that you want it to

be performing. But for most production

ready workflows, you do want to include

something called evaluations. And this

is where you have a lot of different

test cases that you want to run through

your agent. So you're able to see the

agent's response to all the different

test cases. And depending on um what the

results are, you can choose to change

your prompt and tweak it so that you can

keep improving the results of this.

There's a saying that what doesn't get

measured doesn't get done. So only by

measuring your agent's behavior will you

be able to improve that behavior over

time. By the way, if you do want to know

like more details about evaluations,

things like that, I do have a video that

I'll link over here that does dive

deeper into this. But uh for this video,

I'm just going to show you how to do

that. Okay. So, here is the evaluation

spreadsheet that we're going to input.

And here we have different topics like

climate change, AI agents, elephants,

carrots, um and different time periods

that we're going to test out. And the

way that we're going to pass this

through n8n. Uh we're going to come

here to n8n first. And the first thing

we're going to do is actually add

another trigger node. And it's going to

be called when running evaluation. This

is the evaluation trigger. And you want

to connect the Google Sheets, which is

the one that we have over here. You can

have it by creating new credential. I

already have it linked over here, but

it's super easy to link to your Google

Sheets. I just go through the

authorization and then from the data

set, choose evaluations and you want to

choose sheet one. Great. Now, next up,

we want to add a node that is literally

called the do nothing node. So, this one

is really just for like aesthetics kind

of practical purposes that you can

connect two different triggers to

this node going to the agent. Then

coming over here, we have this branch

that is going to be classifying the text

violations, generating audio, etc.,

right? That we already have. But we want

to get another branch that's able to

evaluate all the test criteria. So we

want to add another do nothing node and

then add another node, the evaluation

node. So this one we want to be the set

output node, so we're able to get the

outputs and capture the outputs. So

again, we're going to connect that to

the Google Sheets and we're going to

choose evaluations and choose sheet one.

And now we're going to execute previous

nodes. Add the name. We can just call it

output. And we want to add the value

that is coming in over here. And this

will allow it to actually write um the

output on this column here. And finally,

we want to add add another evaluation

node. This is the set metrics. There's a

lot of different types of metrics like

correctness, how correct it is, how

helpful it is, how good this string

similarity is, how it's categorized. You

can define your custom metrics as well

here to evaluate your tests. In this

case, I'm just going to pick the

helpfulness one. Super simple one. It

comes with a prompt that tells an AI

model to um act as an expert

evaluator that assesses the helpfulness

of the responses and gives you a score

from 1 to five which we can capture. The

model that we're going to use is the

OpenAI model. Again, I just connect that

GPT-4.1 mini. It's good. And configure

this. Putting the user query as query is

fine. Execute this step and we can see

that it gives us a helpful score of

five. All right, let's clean this up a

little bit. Tidy up workflow and let's

actually try running this. So to run

this, we can click save here, go to

evaluations and we can run a new test.

We can click into this. We saw

that there are four total cases and each

of these different cases has passed. We

can also see over here that it wrote

down the output for the information

that's here. So you can try this out, and

there are additional use cases that you can add here

to test this out with. Um right now we

see that the helpfulness scores have

all been pretty high; this one is

the lowest, and it's a four. You

can also add obviously like other types

of evaluations like some other things

that I would recommend adding would be

some sort of metric that will allow you

to see if there's like certain keywords

that are being contained within the

summary. You might also want to test

like the overall um structure of it,

overall length of it, a lot of different

types of evaluations that you can do.
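The helpfulness metric is an LLM-as-judge pattern: an evaluator model is prompted to score the response from 1 to 5, and the evaluation captures that number. Here the judge is a stub returning a canned verdict; a real run would call GPT-4.1 mini with n8n's built-in evaluator prompt:

```python
# Sketch of the "helpfulness" metric: ask a judge model for a 1-5 score,
# then parse the number out of its free-text verdict.
import re

def stub_judge(query: str, response: str) -> str:
    # Stand-in for the evaluator model's verdict text.
    return "The response covers the topic well. Score: 5"

def helpfulness(query: str, response: str) -> int:
    verdict = stub_judge(query, response)
    match = re.search(r"\b([1-5])\b", verdict)
    if match is None:
        raise ValueError("judge returned no 1-5 score")
    return int(match.group(1))

score = helpfulness("vibe coding, past 6 months", "Vibe coding is ...")
print("helpfulness:", score)
```

A keyword-presence or length check, like the custom metrics mentioned above, would slot in beside `helpfulness` as another function returning a number per test case.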

Okay, so for this simple example, we now

have the complete workflow.

Okay, so the next thing we're going to

do is to deploy it. So to do that, it is

really easy on n8n. All you have to do is

go toggle this from inactive to active.

And then to actually see it, go to on

form submission and here we have the

test URL. Just toggle this to production

URL. Copy this and there you go.

Amazing. So let's just try something

out. Say it's called like building AI

agents time period. Let's say it's 2

months.

Submit. There you go. Here is the

summary. So moment of truth

title building AI agent summary for past

two months. An AI agent is a software

system that autonomously performs tasks

by combining artificial intelligence

with tool use and data access. Building

AI agents involves designing workflows,

managing security, and ensuring ongoing

monitoring and updates. Key findings

bulleted three to seven items. OpenAI's

2025 releases include APIs and SDKs,

simplifying agent workflows, integrating

tools like web search, and enhancing

observability for production

reliability. Source: OpenAI 2025. And that

is a fully built and deployed AI agent

that has all the six different

components and the prompt done. Of

course, there are some tweaks to the

prompts that you want to do and based

upon your evaluations, you might want to

go back and tweak the prompt even more

to be able to come up with your perfect

AI agent. But in our goal of getting an

AI agent up and running, we have done

it. So, at this point, there are a lot

of other things that you can do to

improve this AI agent. Like for example,

this form that we have to submit your

topic is not very aesthetically pleasing,

so you can use a vibe coding tool like

Lovable, for example, to create a more

aesthetically pleasing UI like this.

Similarly, the workflow right now just

sends an email, right? But instead you

can vibe code using Lovable, a UI

component that allows you to create the

summary, create the audio file and

actually just download it directly from

the UI as opposed to just having it sent

to your email. You can also add other

components to this as well like a

dashboard for example that showcases all

the different summaries that you've

generated. Many other things that you

can do. Now that you've built your first

complete AI agent, I hope this was a

helpful video for you. I have a final

little assessment. Please answer these

questions on screen to make sure you've

retained all this information that we

have covered. And let me know in the

comments what AI agent that you want to

build yourself. Now, thank you so much

for watching until the end of this

video. And best of luck building your

first AI agent. I will see you guys in

the next video or live stream.
