How I Built My 10 Agent OpenClaw Team

By The AI Daily Brief: Artificial Intelligence News

Summary

Topics Covered

  • AI Learns You Best
  • Heartbeat Enables Autonomous Work
  • Match Agents to Mobile Tasks
  • Persistent Research Drives Insights
  • Non-Techs Build via AI Coach

Full Transcript

Today on the AI Daily Brief, yep, I did it. We are talking about the 10-agent team that I put together with OpenClaw, how I built it, where I'm finding value, where I'm not, what I think you should do, and much, much more. So I'm in the midst right now of a marathon 24-hour trip down to

South America, where I'll be for a couple weeks during which the show will proceed as normal. But since I'm not exactly sure when I'll be up and running, I

have preloaded episodes for Thursday and Friday, meaning, of course, apologies if there's some big news that I'm not covering. I am sure that I will get to it as soon as I can. This is one that I've wanted to do for a while, though, and while it might have been an operator's bonus before, I think there is

enough interest around OpenClaw that it's worth doing as a normal episode. OpenClaw

has, at this point, very much jumped from a hypey thing that some early adopters were excited about to a key part of this inflection point that we're living through, which is, in and of itself, rapidly expanding outside the early adopter set, and even more than that, showing the patterns and primitives that everyone is going to be using even if they are not with OpenClaw in just a few months to come. What

you're looking at right now on the screen is a mission control that I built for the set of agents that I have running. I can see what interactions I have scheduled, certain things that they've found, costs, and things that are waiting on decisions from me. But how did I get here? Specifically, why jump on this particular trend

as opposed to any of the other million trends that we've seen? First, I think that the promise of digital employees, not just AI assistants but actual workers who can be doing things for you when you are not working, is a level-up goal of AI that we've been trying to achieve for a number of years. It felt like

this might be the first time that we actually had something like that, and specifically in a way that was flexible and customizable. What I liked about OpenClaw is that instead of being boxed into a particular type of digital employee with a bunch of assumptions programmed in, I could just customize it entirely for my specific purposes and use cases. The third part is that this is one where the more people who do

it, the better it is for everyone. The network effect around OpenClaw doesn't just get it more press, it gets all of us more resources, more experiences to draw from, better documentation, more learnings, more lessons, as well as more skills and capability sets that people keep building into OpenClaw and then sharing with the rest of the world. Now,

if you listen to my recent episode, How to Learn AI with AI, you'll know that my first step on this journey was to set up a Claude project to act as my coach, mentor, build partner, et cetera, for this entire initiative. And this

is something that I can't stress enough. I am non-technical. Until the advent of Vibe Coding tools, I had never pushed code in my life. And to get from zero to this mission control center with 10 agents running actively, I watched exactly zero YouTube videos, followed along with exactly zero web or Twitter or X tutorials, because as valuable as many of those resources are, and they certainly are, in fact, the training site

that we're setting up is going to have an extensive library of all of those resources. I still think the big thing that has changed that for whatever reason people

haven't fully caught up to is that the best way to learn some new thing in AI or to build some new thing in AI is to just let the AI help. This is especially the case for something like OpenClaw that has so much

documentation that you can point it to. So you can see my OpenClaw agent project that has dozens and dozens of messages, plus a ton of files, many of which are just context handoffs between different instances of these chats. If I can convince you of anything that will save you agony later, it's to do this. Doesn't really matter

whether you do it in ChatGPT or Claude or whatever system you're using that has some sort of project setting. And projects, of course, are just about access to context.

But the point is, even if you are completely non-technical and this is ridiculously overwhelming, you can tell Claude, I am a neophyte and a nincompoop and I need you to walk through everything step by step in the tiniest little incremental ways, and it will do so with infinite patience. At least until it hits the end of its context window, at which point, honestly, if it's rushing you or trying to

compress things, that's a dead giveaway that it's getting tired, so to speak. Next up,

after setting up Claude Code, I did actually go out and buy a Mac Mini for this. It is absolutely the case that you do not have to do this.

It doesn't require some super powerful set of resources. You can certainly run this thing on any old laptop you've got kicking around. But the Mac Mini approach appealed to me, one, because I wanted a totally fresh environment where I could very incrementally give it access to the systems that I wanted to give it access to without fear of it bleeding into other things because those things just didn't exist on the Mac

mini. And I also did want a dedicated machine that was always on, always running so that I could port in and access it from anywhere. I'm not going to go through the full step-by-step Mac mini setup, but to give you a sense of what you're in for, whether you use a Mac mini or another device, first you're going to install something like Homebrew, which is a Mac package manager that is going

to let you install everything else. You're going to install Node.js, Claude Code, which is used for building things on the machine, and your build coach is going to walk you through disabling sleep so that the machine stays awake as a server even with the lid closed or the screen off, or even if you use it without a screen in a headless way. You'll also set up something like Tailscale for remote access.

Tailscale creates a private network between your computers so that you can reach your Mac Mini from anywhere: your iMac, your MacBook Air, or even your phone. From there, your build partner Claude will show you how to access the Mac Mini you have running OpenClaw from any other computer. But at this point, it's probably worth talking about what OpenClaw actually is. They bill it as the AI that actually does things. And what

that means is that it runs on your machine. It has access to your system with the ability to read and write files and execute scripts. You can also give it access to your browser and extend what it can do through skills and plugins.

OpenClaw has persistent memory so that it learns and gets better over time. And you

talk to it through a chat app like WhatsApp or Telegram. Each agent in OpenClaw loads a set of markdown files at the start of every session that are basically the agent's personality, instructions, and memory. It's got an identity, which is a simple name, descriptive emoji, and a one-line description. Its soul.md file is how the agent thinks and behaves. It's its personality, communication style, what it cares about, what it should and

shouldn't do, effectively the character sheet. Agents.md is the employee handbook. It's the operating instructions, protocols, how it should handle different situations, and rules for interacting with other agents or systems. User.md is everything the agent knows about you, your name, role, preferences, time

zone, or communication style. One of my favorite things was when early on as I was building out, Claude added in a section about how I work in the user.md

file: "Will push back hard if something feels wrong. Productive, not hostile." Tools.md is what the agent has access to. That could be file paths, APIs, services, and accounts. Memory.md

is its long-term curated memories, which are important things the agent should remember across sessions.
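Mechanically, a session start like this amounts to reading each of those markdown files and concatenating them into the agent's working context. Here is a rough sketch of that idea (the loader and the lowercase file names are my illustration, not OpenClaw's actual code):

```python
from pathlib import Path

# Illustrative file names, per the convention described above.
AGENT_FILES = ["soul.md", "agents.md", "user.md", "tools.md", "memory.md"]

def load_agent_context(workdir: str) -> str:
    """Concatenate the agent's markdown files into one context string,
    skipping any file that doesn't exist yet."""
    parts = []
    for name in AGENT_FILES:
        path = Path(workdir) / name
        if path.exists():
            parts.append(f"# --- {name} ---\n{path.read_text()}")
    return "\n\n".join(parts)
```

The point of the sketch is just that the "personality, instructions, and memory" are all plain files on disk, which is why you can edit them by hand or have your build partner edit them for you.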

But then we get to something really cool, and part of the simple genius of OpenClaw. What has always excited people about the idea of agents is the idea that

they can work even when you're not there to help. OpenClaw gets at that in a couple of ways. The first is called Heartbeat. Heartbeat.md is instructions for what to do on autopilot. The default setting for the heartbeat is to fire every 30 minutes.

When that happens, the agent reads that file and runs whatever tasks are listed. If

there's nothing to do, it's supposed to reply "heartbeat okay" and go back to sleep.
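That mechanic boils down to a timer loop: wake on schedule, read the instruction file, run whatever is listed, and otherwise report that all is quiet. A toy version of the control flow (my own sketch, not OpenClaw's implementation):

```python
from pathlib import Path

def heartbeat_tick(heartbeat_file: str, run_task) -> str:
    """One heartbeat: read the instruction file and run each listed task,
    or report 'heartbeat okay' when there is nothing to do."""
    path = Path(heartbeat_file)
    tasks = []
    if path.exists():
        # Treat each "- ..." bullet in the file as a task to run.
        tasks = [ln.strip("- ").strip() for ln in path.read_text().splitlines()
                 if ln.strip().startswith("-")]
    if not tasks:
        return "heartbeat okay"   # nothing scheduled: go back to sleep
    for task in tasks:
        run_task(task)
    return f"ran {len(tasks)} task(s)"
```

In the real system the model interprets the file rather than parsing bullets literally, but the shape of the loop is the same.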

You can also schedule cron jobs, which are basically just scheduled tasks at a certain time. So for example, we'll talk about my project manager agents, but they're all set

to fire around 8am with a status update from the day before, as well as do a quick check-in at 5pm to see if there's anything I want to add at the end of the normal human workday. So that's the basic architecture of an OpenClaw agent. Next up for me, the question was to decide which agents. And

I think this is actually a pretty important piece for any of you who are considering using OpenClaw at home. As I was thinking about which agents I wanted to build, there were a few parts of what OpenClaw could do that were really intriguing to me and that I wanted to match the agents back to. Some of the reasons to want to use these agents is to be able to work on the

go. I liked the idea of mobile management where I could just use Telegram or a chat app to instruct an agent to do something the second it came into my brain. Doesn't matter if I'm driving, at the gym, in a meeting supposed to

be doing something else, I love that capability set. And so I tried to work backwards to which of the tasks that I have would be a good fit for that sort of work on the go. Likewise, I also thought about which tasks and categories of work of mine would benefit from either A, persistent work, i.e. work around

the clock, or B, scheduled work where certain work at certain times was the name of the game. So basically, I went through all of what I do and asked which of my work would benefit from those different things. One thing that I knew that I wanted was a builder bot. I wanted a coding agent as part of this team. I already use Replit's mobile app, and honestly, Lovable's mobile web interface is

pretty good. But again, I wanted a builder on call that could build on the go, and ideally which would build while I slept. I didn't want to be shackled to my computer going back and forth in an iterative fashion in the way that I work with either of those VibeCoding tools or something like Claude Code. So the

builder was actually the first agent I built. And right away, I got my first real practical experience lesson. As much as I liked the idea of some big complex task that it could work on overnight, it turns out that I kind of just don't have those types of coding projects. In reality, although I have lots of build projects, they're fairly discrete and very iterative. They require a ton of feedback from me

because I'm often working my way through features or designs in very incremental ways where I can't just let it go run on autopilot. At this point, I love having access to the builder and I do use it occasionally, but it is actually one of my least used of this whole team. Next up was something that more clearly benefited from persistence. And that was around research. You may have heard me talk about

this new thing that we're building, AIDB Intel or AIDB Intelligence. It's basically a research and benchmarking platform. Two of the products that I'm really excited about are called Opportunity Radars, which are basically a way to organize use cases around particular functions into a set of different categories in terms of how applicable they are for different types of businesses. And the other one that I'm really excited about I call Maturity Maps, which

is basically a way to visualize where departments within organizations are relative to where they should be around six dimensions of AI maturity, including use cases, systems integration, data access, outcomes, people, and governance. I won't get too deep into the methodology here, but suffice it to say that both of these require a huge amount of input. A lot

of it comes directly from people and companies that are experiencing AI and feeding back into the system. But a lot of it also comes from research. Every week, dozens and dozens of new sources, new studies, new surveys, new research enters the ether around AI. And so I set up dedicated research agents with one honed in on maturity

maps and the other honed in on opportunity radars that are literally around the clock, surfacing, cataloging, and integrating new resources into the set of information that's informing the maps and radars. And those agents aren't just cataloging what we're finding. They are actively integrating

it in a way where a big part of their job is to make proposals to me around how we might change some aspect of either the maps or radars.
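That "propose, don't just collect" pattern is worth copying: each new source either updates the catalog or generates a proposal that waits for human sign-off. A minimal sketch of the flow (the data model is hypothetical, named after the maps-and-radars example, not anything OpenClaw ships):

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    # Hypothetical record: what the agent wants to change and why.
    target: str                 # e.g. "maturity map: systems integration / marketing"
    change: str                 # the suggested edit
    evidence: list = field(default_factory=list)  # sources backing the suggestion
    status: str = "pending"     # pending -> approved / rejected by a human

def triage(proposals):
    """Split proposals into what still needs a human decision and what's settled."""
    pending = [p for p in proposals if p.status == "pending"]
    settled = [p for p in proposals if p.status != "pending"]
    return pending, settled
```

The design point is that the agent never silently edits the methodology; every change rides through a queue a human can review.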

For example, what we think the on-track line is for systems integration for marketing departments, based on all the information we have access to. What I've found with the research agents is that I've had to do some amount of quality calibration with them, first in terms of helping them understand the difference between a good, great, or not-so-good resource. I've also had to calibrate the quality of their writing and justification when

it comes to their proposals for how we change the maps or radars. But that

calibration certainly wasn't overwhelming. Another thing that I've found that's more on the technical side is that heartbeats can be flaky, and this is certainly something that I'm not alone in experiencing. You will often find that for whatever reason, the agent just drops off

for a while and you kind of have to reset it. And there's a million different reasons why that happens. But in general, I'm still getting a ton of clear, persistent research. One other thing that I found is that a couple of times, I

basically requisitioned one of those agents to do a different type of unrelated research. And

it did a good job without losing its mission focus. Now, the next set of agents that I built were my group of project managers. I've got one for AIDB Intel, one for a new Superintelligent Compass product, one for growth initiatives around the podcast, and one for the AIDB training platform that we're experimenting with now. Initially, these are, I will fully admit, glorified to-do list managers. They are better ways for me to segment my

own brain. Every morning, when I first brought them online, I gave them a huge brain dump about everything going on with those particular projects, including challenges, to-dos, things that I was thinking about, decisions I needed to make. And I basically have them harangue me on the things that I know I need to get done but for whatever reason just haven't been. It is not uncommon for me to say something to them

like, send me a pile of skull emojis every half an hour until I actually make this decision. Sort of my agent equivalent of a snooze button on an alarm clock. This, however, is not the end state for how I'm imagining these project managers.
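The skull-emoji harangue is just a repeat-until-acknowledged reminder. Sketched as plain code (illustrative only; in practice the agent does this through its heartbeat and the chat channel):

```python
def harangue(decision_made, send, max_pings: int = 5, emoji: str = "💀") -> int:
    """Ping on every cycle until decision_made() returns True,
    with a cap so a forgotten reminder can't run forever."""
    pings = 0
    while not decision_made() and pings < max_pings:
        send(emoji * 3)   # a small pile of skulls per reminder
        pings += 1
    return pings
```

The cap is the important part of the sketch: an agent nagging on autopilot needs an explicit stop condition beyond "the human finally answered."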

We're in the midst of organizational redesign across everything we do at Superintelligent, AIDB, etc. And the way that I imagine these project manager agents evolving is that they won't just be interacting with me, but they will be interacting with other systems to be able to also inform me of the state of those projects beyond just what I'm doing with them. There are a bunch of different ways that that interaction can happen.

Some of it might be via skills where, for example, I let them loose in Slack. Some of them might be talking to the agents of other folks who are

involved in those same projects. You can kind of think about it like phase one, personal assistant without access to a phone or an email. Phase two, true project manager who actually coordinates. Now, I also have built a chief of staff that is kind of frankly sitting idle until that second phase of those project managers gets up and

running. When their remit expands, the idea of the chief of staff is to triage across all of these so I can start my day knowing what's really important and what I absolutely need to focus on. The last agent is the one that I use most frequently, certainly, which is my NLW tasks agent, which is basically just an interactive to-do list. Now, everyone has their own way of managing tasks, and I would

never argue empirically that for everyone, the agentic interaction approach is the right one. I

imagine for many of you, you're fine with your Apple Notes or your Notion Docs or whatever else you use. I'm an inveterate Notion user, or at least I was until NLW Tasks came along. What I like about this interactive mode is that it can map perfectly to my brain, i.e. I have a million different types of lists.
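For what it's worth, the underlying structure of a list manager like this is tiny; the whole win is the chat interface sitting in front of it. A sketch of a multi-bucket to-do list (bucket names follow the enumeration below; everything here is illustrative):

```python
BUCKETS = ["today", "this_week", "next_week", "future", "icebox"]

class TaskList:
    """To-dos split across buckets, with a move operation so an item can
    graduate from the icebox to today the moment it becomes relevant."""
    def __init__(self):
        self.lists = {b: [] for b in BUCKETS}

    def add(self, item: str, bucket: str = "today"):
        self.lists[bucket].append(item)

    def move(self, item: str, src: str, dst: str):
        self.lists[src].remove(item)
        self.lists[dst].append(item)
```

An agent wrapping something like this just translates "remind me about X next week" into the right add or move call.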

I have a today list, a this week list, a next week list, a future list, even an icebox for things that I don't know when I'm going to get to but I don't want to forget either. The moment I think of or remember something, I can just talk into Telegram and have an update. And the TLDR is, I just really, really have enjoyed managing my to-dos in that way. So my setup

in practice then is not some crazy high-tech thing. It's really about a better user experience for managing my brain, plus the beginning of a particular type of 24-7 digital employee that really benefits from the continuous sweep of information that can be programmed via the heartbeats. You'll notice that I'm not giving OpenClaw access to a ton of

systems right now. I don't have it responding to emails. I don't even have it monitoring my inboxes yet, although that's something that I'm considering adding. I also haven't started using a bunch of skills, which seems to be kind of a good thing, given that a ton of them were initially found to contain malware. I talked earlier this week about how much OpenClaw was doing to remedy that situation. And I do think

that the security situation is literally getting meaningfully better every single day. I've got my eye on a few things I might want to integrate, like Supermemory. But overall,

I'm doing a really simple version of this. The other thing that I'm not doing that's way more complex and will frankly, I think, unlock way more value even than the stuff that I have going on is that I don't have a complex system where agents hand off to one another and have to fully interact with one another.

There is some amount of interaction in terms of shared context and things like the chief of staff having access to some of the system files for the other agents.
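Even without full hand-offs, a chief-of-staff view can be computed by reading the other agents' state and rolling it up into one summary, which is essentially what the mission control shown earlier does: scheduled interactions, findings, costs, and decisions waiting on me. A hypothetical roll-up (field names are mine, not OpenClaw's):

```python
def mission_control(agents):
    """Roll per-agent state into the dashboard's headline numbers:
    total spend, everything surfaced, and decisions waiting on a human."""
    return {
        "total_cost_usd": round(sum(a.get("cost_usd", 0.0) for a in agents), 2),
        "findings": [f for a in agents for f in a.get("findings", [])],
        "awaiting_me": [(a["name"], d) for a in agents
                        for d in a.get("pending_decisions", [])],
    }
```

Read-only aggregation like this is much simpler, and much safer, than the sequential trigger-the-next-agent pipelines discussed next.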

But that's very different than things like what Vox on Twitter is doing, where one completed action from one of their agents triggers the next step for another, and so on and so forth. The funny thing is that's actually what created the need to build this Mission Control Center. A lot of the dashboards that people are building right now are being optimized for that different type of sequential work. You're seeing

a lot of Kanban boards and things like that, which are awesome and I think are going to be super useful for that use case, but I wanted something where I could just monitor everything going on as effectively the complement to my Telegram-chat-by-Telegram-chat view. Beyond a shadow of a doubt, building this mission control has been the most technologically demanding part. For those of you who are considering doing all

this, this is the part that I'm not sure that I think is actually worth it. I wanted to do it for learning, and it absolutely does fill a gap

that Telegram doesn't. But I am so certain that there are going to be off-the-shelf options for this extremely soon. And as we start to round up, I really want to come back to this idea that the key thing that you can do with all of this is to get your build partner in ChatGPT or Claude or

Gemini or Grok or whatever up and running so that it can manage the whole process. If you looked at these dozens and dozens of chats in my OpenClaw Agent

project, it would be embarrassing. Here's me asking what the gray indicator light means again, with Claude reminding me and then also suggesting that since it's not self-explanatory, we should go tap into Claude Code to fix that. Every single prompt that I put into Claude Code, every single problem that I run into, is going into some chat. I

am absolutely shameless about taking even the most simple and infantile instruction, like these four lines of commands where my Claude build partner said, run the commands one at a time. And instead I said, if those things are four separate commands, copy them one

at a time, please. Which it dutifully did, because again, infinite patience. This comes back to a conversation that we've had a couple of times on the show recently about to whom these experiences are accessible. If you go down this path and you start building out one agent or a set of agents with OpenClaw, it is almost certainly

the case that there will be a meaningful period of time where you are negative ROI, at least from a time perspective. You are, I promise you, going to spend hours going back and forth with Claude or ChatGPT, figuring things out, hacking your way through it. But I can also promise that there will be no point at which

you get fully stuck. At one point, after setting up 10 agents, I tried to force an upgrade to Opus 4.6 before it was officially supported by OpenClaw, and I thought I had wiped out all of the tens of hours of work that had gone into setting up all of those agents. Except I hadn't, and we were able to

work through it. The point that I'm trying to make here is that if you have the will and are willing to put in the time, it doesn't matter how non-technical you are. You can go build an agent team with OpenClaw right now, today, without asking anyone's permission to do so, without needing to secure any additional resources first.

And that is a pretty cool thing. As you can tell, I decided with this episode to focus less on the technical aspects, because again, your Claude partner is going to handle most of that for you, and instead on how I thought systematically about the actual system, what I wanted to build, and where it would be valuable, to hopefully provide a little bit of inspiration. That is going to do it for this

very different type of AI Daily Brief episode. Appreciate you listening or watching as always.

And until next time, peace.
