Nanobot vs OpenClaw: It's Not Even Close
By jordanUrbsAI
Summary
Topics Covered
- Skills Minimize Token Waste
- Sandbox Agents in Docker
- Automate Content Summaries
- Nanobot Lags OpenClaw
Full Transcript
It's been making the rounds, so I wanted to know if Nanobot was the OpenClaw killer.
I mean, 4,000 lines of code versus OpenClaw's 430,000?
Faster, leaner, no bloat? Come on.
So I threw real tasks at both of them, summarizing my favorite podcasts, of course, and I'm going to give you the honest answer. Which one's better?
But first, in case you haven't heard, these autonomous AI agent frameworks are not completely safe yet.
They are open source, and they're new projects, which means there are security vulnerabilities aplenty.
You do not want to experiment with these agent frameworks on your main machine unless you're doing it safely. So I am going to show you how I install agent frameworks to play around with, like Nanobot and OpenClaw, in a dockerized container so they can't access any data on my main machine.
Now real quick, if we just explore Nanobot, we can see it acts like all these other autonomous agents lately. It has a chat app it connects to, and then it goes through the chat messaging response with an LLM in the background that calls tools and then adds skills and memory to its context so it can do things, right? Memory is like memory files, so it has persistent memory
throughout conversations. And skills are workflows, how to do things step by step, so the agent doesn't have to waste any tokens or time researching how to do something; it can just do it. As you can see, we can use it for plenty of different things here, from checking the price of Bitcoin, to developing software, to managing our daily routines, to just having a personal knowledge
assistant using that persistent memory feature. So we are going to install it, but we're going to do so in a bit of a creative way that I've been recommending lately using a different AI agent.
I'm going to use OpenCode. Say "install Nanobot into a Docker container," and I'll paste the GitHub repo there. Now why a Docker container? Docker keeps your autonomous AI agents in a sandboxed environment in a container so they can't get out. This is how Agent Zero works, and this is also good practice for using OpenClaw on your main machine if you don't have an extra Raspberry Pi or Mac Mini or whatever lying around. Don't give these autonomous agents access to your main machine.
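To give you an idea, here's the kind of locked-down `docker run` I'm talking about. The image name, container name, flags, and env var are placeholders for whatever agent you're installing; the point is no `-v` bind mounts, so the container can't see any files on your host:

```shell
# Sketch of a sandboxed launch script -- names and values are hypothetical.
cat > run-agent-sandboxed.sh <<'EOF'
#!/bin/sh
# No -v bind mounts: the agent gets zero access to files on the host.
# Network stays on because the agent still needs to reach the LLM API
# and Telegram; cap memory and CPU so a runaway agent can't eat the box.
docker run -d \
  --name nanobot-sandbox \
  --memory 8g \
  --cpus 2 \
  --restart unless-stopped \
  -e VENICE_API_KEY="$VENICE_API_KEY" \
  nanobot:latest
EOF
chmod +x run-agent-sandboxed.sh
```

That's roughly what OpenCode ends up generating for you anyway, which is why I'm happy to let it drive.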
Don't do it. It's not going to be worth it if something happens, like a security breach or whatever. So this is why I love using OpenCode for something like this. I hate working with Docker containers, especially from the command line. It's stuff that I just never learned. I
kind of have an idea of how it all works. But when you just plug in an LLM to take care of it for you, it knows exactly the commands to execute. So I actually recommend that technique for any kind of agent you want to install and mess around with. Just doing a Docker container before you invest in a full machine, you know? Boom. Nanobot is installed. And I want to use Venice AI as my
provider. So I'm going to say modify the provider list so I can use Venice, which is OpenAI-compatible.
This is another reason I like to use something like OpenCode to install my open source software: I can make little tweaks. Okay. So now it added Venice as the provider. Now I'll say, okay, set the API key, and I'll paste that there, and the default model to GLM-5, which is also what I'm using here. So it's open source and completely private, if you're into that kind of thing. Okay, launch it. Now this is why we use the AI. See, it got a 404 or something.
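If you're curious what that provider tweak amounts to, it's roughly this shape. The file name, keys, and model ID here are hypothetical (Nanobot's actual config format may differ), and double-check Venice's OpenAI-compatible base URL against their docs:

```shell
# Hypothetical OpenAI-compatible provider entry -- verify the base URL
# and model ID against Venice's documentation before relying on this.
cat > provider-venice.json <<'EOF'
{
  "provider": "venice",
  "api_type": "openai-compatible",
  "base_url": "https://api.venice.ai/api/v1",
  "api_key_env": "VENICE_API_KEY",
  "default_model": "glm-5"
}
EOF
```

The nice part of OpenAI-compatible providers is that this is usually the whole change: a base URL, a key, and a model name.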
Like, great, just fix it. Okay, boom, there we go. Nanobot. How can I help you today? Let's set
you up as a Telegram bot. Now bear in mind, I have not even looked at what the directions are, but I gave this whole Git repo to OpenCode, so it knows what we need to do. So let's give it my token. Now, if you've never used the Telegram BotFather, it's pretty easy. You make a new bot and then you copy the token. Super easy. For your user ID, use the userinfobot. Just message userinfobot and it'll tell you your numerical user ID. So once you have all that, pop those into your AI or your config or however you're doing this, but hopefully you're doing this the easy way like this.
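Those two credentials end up as something like this (hypothetical file and variable names; the token format shown is just the shape BotFather tokens take). Pinning the allowed user ID to your own means strangers who stumble on your bot can't command your agent:

```shell
# Hypothetical env file for the Telegram gateway -- field names vary
# by framework. Token from @BotFather, numeric ID from @userinfobot.
cat > telegram.env <<'EOF'
TELEGRAM_BOT_TOKEN=123456789:REPLACE_WITH_TOKEN_FROM_BOTFATHER
TELEGRAM_ALLOWED_USER_ID=987654321
EOF
```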
And now it reloads the gateway and everything. And the reason this is important is because it's really annoying to use the Docker execution commands from the command line outside of the Docker container, but the AI knows how to do it. Okay, bada bing, Nanobot is now connected to
Telegram using Venice. Hello, how can you help me? Tell me about yourself. What is something cool we can do together?
Okay, so here we go.
We more or less have a Claude bot, OpenClaw-type setup here.
So what do we want to do with it?
Well, we can schedule things.
We can schedule tasks, workflows, whatever that might be.
Let's see what skills come built in.
A skill is just a detailed workflow for the autonomous agent to follow.
It has every little instruction, so the agent knows what to do with the least amount of context rot possible.
It doesn't waste any tokens; it can just do the thing and not have to mess around figuring out the right way to do it, because the skill says how to do it. So we have a memory skill, so it can remember things.
Weather. Clawhub, so it can install new skills from Clawhub. Perhaps dangerous,
but at least we're in a Docker container. Cron for scheduling reminders, creating new skills.
Tmux, GitHub. All right, let's install Summarize.
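Under the hood, a skill like these is typically just a plain-text instruction file the agent loads into context when it's relevant. A sketch of what one might look like, with the path and format being my guess rather than Nanobot's actual layout:

```shell
# Hypothetical skill file -- check the repo for the framework's real
# directory structure and file naming before copying this.
mkdir -p skills/summarize
cat > skills/summarize/SKILL.md <<'EOF'
# Skill: summarize-video
When asked to summarize a video:
1. Fetch the transcript for the given URL.
2. Extract 3-5 key ideas as short bullets.
3. Reply in one Telegram message; never dump the raw transcript.
EOF
```

Because every step is spelled out, the agent spends its tokens doing the task instead of rediscovering how to do it each time.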
Now, at the time of me recording this, this repo is pretty new.
It's only a couple weeks old, which means we probably don't know the complete ramifications of this whole system from a security perspective.
So once again, that's why I like to run it in a Docker container.
Better safe than sorry.
Yeah, this was just launched hardly two and a half weeks ago.
And now you see OpenClaw will be the same way.
Agent Zero will be the same way.
It's just going for it.
So I'm just installing the summarize skill.
It needs to upgrade Node to make it work.
And then what I'm going to do with that is create some kind of a workflow that scrapes YouTube videos of podcasts I like so I don't have to actually listen to them.
I can just get some summaries.
So it's installing it in the background.
I'm going to say once we have it installed, I want you to summarize my favorite YouTube channels and podcasts twice a day.
Alex Hormozi, obviously.
Tim Dillon Show.
That's where I get my news.
What else do I like to listen to?
I don't even know.
Where's my phone?
Let's do Modern Wisdom with Chris Williamson.
Oh, yeah, of course.
Creator Science.
Jay Klaus.
Let's do What is Money?
Robert Breedlove.
Okay.
Maybe let's do Lex Fridman.
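For what it's worth, the twice-a-day cadence I'm setting up here boils down to two cron entries under the hood. The times and the command names below are placeholders; the agent's cron skill manages the real entries for you:

```shell
# Plain cron lines for a twice-daily job -- the schedule itself, not
# the framework's actual registration mechanism.
cat > podcast-summaries.cron <<'EOF'
# 9:00 -- first chunk of channels
0 9 * * * agent run summarize-chunk-1
# 14:00 -- second chunk
0 14 * * * agent run summarize-chunk-2
EOF
```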
You can separate these summarizations into two chunks if it makes more sense. Yeah, let's do that. So we'll separate into two different chunks, so we'll get half of them in the morning, half of them, I don't know, later in the afternoon. And the whole heartbeat feature of OpenClaw, Nanobot, whatever, will bring it alive to be like, hey, it's time to go check Jordan's podcasts out and give him the updates.
I've mentioned it before, but I really hate consuming content. I don't watch YouTube. I listen to podcasts sometimes. I don't really often have the opportunity to do anything like that. I hardly even listen to music anymore.
And for context, we can look here. Nanobot.
It's using 3.87% of my CPU. And Docker as a whole only has about 8 gigs of RAM allocated. So yeah, it's really tiny. I mean, it's not using anything.
Now is probably a great time to tell you about the AI Captain's Academy.
This is where if you're new to all this stuff, you can learn from the ground up the same way I learned.
This is about frameworks and mental models and context-first thinking instead of going, oh, what's the new tool?
I have to have it.
I have to have it.
This is really about getting the fundamentals.
So it basically teaches you how to use Claude Code, use it in VS Code.
Then in the next module, we get into Agent Zero, using Venice for private workflows.
using Whisper to voice prompt your AI, using subagents and n8n workflows. And by the end of it, you make a whole little app to capture leads all on your own. It then goes into a module for developing a whole knowledge base that you can give to your AI, so it knows everything about your project. And this is a precursor to the knowledge infrastructure course, which I'm almost done with. And what this does is teach you what context engineering is and how to optimize your context with AI so you get the best quality results every time. So that is skills, that is harnesses, that is autonomous agents like Nanobot and OpenClaw here. And overall, this is just like the current thing with AI. It's knowledge infrastructure, it's context engineering,
like how do we do it better and better and better. We have weekly calls. If you join the premium tier, there's a co-working call where you can come ask your questions and find out like my day-to-day, like what I'm building and working on. And it's kind of turning into a mastermind. It's pretty
cool. So there's my little call out here. I'm done now. Let's get back to nanobot. And yeah,
it is interesting how much it's showing me here. I don't really want to see all that.
Don't show me every little thing you're doing like the code exec commands. LOL.
Okay, so we got a little hiccup there. Maybe that's GLM-5 doing it.
I don't know.
Okay, it's catching up with the messages now.
Compared to OpenClaw, that doesn't happen.
I noticed that OpenClaw will just keep sending messages in real time.
It doesn't queue its responses to me.
I could be talking to OpenClaw about three or four different things.
Not that that's smart.
Not that that's good practice, but I do it anyway.
And it will just respond as it goes.
This is obviously queuing its responses based on whatever it's doing.
Also, I think it worked, so I don't even know what it's dealing with there.
Let's just run through our summarize automation first with three of the six podcasts.
Now, I'll also say, if I decide, okay, I'm not liking GLM-5, I'll say switch model to Claude Opus 4.6, and it would probably give better results.
I could probably do that straight to here as well, but I'm just noticing it's kind of, I don't know.
It's a nanobot.
Okay.
So real talk here.
I just had to switch OpenCode to use Opus, and it's working way faster and more intelligently.
So it reset the gateway and now nanobot should be using Claude Opus 4.6.
Great.
So now it's setting it all up.
What model are you using?
And yes, 9 a.m. And how about 2 p.m.? Actually, more like... yeah, my computer's not going to be on before nine, probably. So yeah, so now we've got Claude Opus running and we're going to get better performance. It's just how it is. Now it does seem to be working on a good model, that's fine. Run the 9 a.m. summarization right now. But first, switch your model to Claude Sonnet 4.6, because we want to save some money. And so this is like the final little point here. You can change your model inside the conversation. Okay, it can't do it. So we got to go back to OpenCode: there is a Docker container named nanobot; change the model to Claude Sonnet 4.6. And it's going to use way too many credits now? Oh well, I don't know what it means by that; you need to update that in the Venice settings. That's not true with OpenClaw, where you can change the model right there. But of course, Opus 4.6 inside OpenCode is kicking ass. Cool. So now the whole thing restarted. So I should say: you were just reset, continue the 9 a.m. summarization task, checking the skill. All right, there it goes.
Now it's spawning three different agents in parallel. That's cool. We're still getting that message, though. All right. So now we're seeing it didn't work quite smoothly. We got to give the right URLs here.
Okay, so for real, this has been super frustrating, even with Opus. I finally got it working, back and forth, back and forth. It kept failing. And I finally just said, hey, why don't you use Apify? We did get the Alex Hormozi video, The Myth of Following Your Passion, because the word passion comes from the Latin root meaning suffering. All right, great. But that's the only thing that worked with this whole summarize skill. So not terribly stoked on that. Just the whole operation here, I was not happy with. Like, if I wasn't making a video about it and forced to show some success to you, the viewer, I would be like, done with Nanobot. Maybe it's because it's in a Docker container. Maybe. I don't know. But anyway, I told it to install Apify. I gave it an Apify MCP key, an API key. And all right, so now we have all those. So let's say: finish the last two summaries. Use Apify if you need to.
Meanwhile, I also gave Apify to OpenClaw, which I have also in a Docker container. This OpenClaw gives me daily Twitter digests every morning. I want to keep up with things, but I don't like the FOMO and fear-mongering that happens on social media, all the noise. So this just is supposed to send me signal. So I said, okay, OpenClaw,
I want you to use Apify now and scrape more broadly around social media. Tell me what's really going on, because I'm probably going to make content about it, right? If something's trending, I want to learn about it so I can make content. Background task completed.
Okay. All right. Now we've got OpenClaw giving me all its instructions late, but at least it worked. And both of these are running Claude Opus 4.6.
I'm wondering here if we just didn't do a new conversation. I mean, I'm like hardly ever doing new conversations with OpenClaw. I know I should, but I just don't, and it still works.
So, new conversation.
Run tomorrow's 8 a.m. digest now. This is OpenClaw.
Can we do new? New conversation?
Is this frozen?
Am I out of credits?
Does my internet just suck?
Yeah, Nanobot's just stuck. So, I don't know. I'll just stop the container.
I've never had to do this with OpenClaw.
We'll restart it.
Okay, let's try this again.
Run the morning podcast summarizer now.
Okay, so now it's just doing it fresh.
I don't know if it's using Apify, but we're giving it a fair shot since we hadn't cleared the context, which is just lazy of me, really.
Okay, Nanobot pulls through.
Morning podcast summary chunk one.
Alex Hormozi, Modern Wisdom, and Creator Science. Alex Hormozi: "I make top 1% income working one hour a day." Yes, you do. Key ideas. Okay. Wow. So it really, really dissected that. And it's, you know, a pretty brief summary. I could read this, you know, while drinking the morning coffee.
Modern Wisdom: "You can't stop your productivity addiction." Oh, really? I might need to listen to that one. "Ambitious people can never fully enjoy their wins." You know.
Creator Science: "Don't become a human AI wrapper."
Oh hey.
Yeah, the mid-slop, he calls it.
Like, it's not quite AI slop.
It's not really quality content either.
And then at 2 p.m., it drops the next one.
Let's just say, oh, it's already doing the second one because it's actually 3 p.m. now.
Cool.
Nanobot did it.
And then meanwhile, OpenClaw did its daily AI research, and I'm still working on that to go deeper, maybe to do more Instagram, stuff like that, Reddit, but it's working.
So there you have it, folks.
Nanobot versus OpenClaw.
It seemed to do the trick once I started a new conversation.
I think maybe that's the key takeaway, because I've had problems with OpenClaw too: it's just like regular AI, where you need to start a new conversation.
Just make sure it has the skills to remember how to do things if you're teaching it in a conversation.
And then obviously it's got its persistent memory.
So that is all happening there on the backend.
Nanobot, what is this?
That's an Apify thing.
Yeah, in short, Nanobot's brand new.
So let's give it some time for now.
I think OpenClaw does a great job and there's a reason it went viral and was a hype tool.