We Gave Every Employee an AI Agent. Here's What Happened.
By Every
Summary
Topics Covered
- The Light Bulb Moment: AI Calling You to Process Email
- AI Agents Mirror Their Owners' Personalities
- Claws Become a Parallel Organizational Structure
- Claude Is Everyone's, But Your Claw Is Yours
Full Transcript
Claude is not mine. Claude is
everybody's. A claw or a plus one is mine because you develop a personal relationship with your claw and your claw can modify itself in response to
talking to you. It becomes this reflection of you and who you are and your personality. If you're known for something inside of your org and you're using your claw publicly inside of Slack or Discord, your claw then becomes known for that same kind of thing and people
trust it for that. And I think that's such a useful thing that I don't think people really understand how powerful that is.
[music] Willie, what's up? Brandon, welcome to the show.
Thank you.
Thanks for being here.
Psyched to have you guys here. So, for
people who don't know, Willie, you are the head of platform at Every, and Brandon, you are the COO at Every. And
today we're going to talk about what happens when everyone on your team has an agent, specifically has an OpenClaw.
Um, that's something that happened to us over the last like month or two. We like
really got, um, OpenClaw-pilled. And I
really started, actually, I think, with you two. We were on a retreat in Panama and you started like cooking up OpenClaw stuff. And here we are about, you know, two months later and it
has completely changed everything about the way that we work. We've even
actually built our own hosted OpenClaw service called Plus One, that we launched a waitlist for last week. Um, but I think OpenClaw is one of those things
that it's super hyped and I think that we're one of the few organizations in the world that is actually using it every day to get work done and we know like the good, bad, and the ugly of it.
And so I thought it would be good for us to just like talk about our experience with it.
Yeah. Yeah. I think, um, I actually loved it, Brandon. I feel like you were the first one through the door on all this, because we were just sitting here and you were like, "Oh, Zosa is doing this and Zosa's doing that." Zosa is his claw, which he named after a character in, uh, what's that? What's the show?
Yeah.
Well, Brandon, why don't we start with: just tell us how you got claw-pilled.
Yeah. So, I was watching OpenClaw kind of blow up for a while, and I am just personally somebody who needs to have like a thing on the side I'm tinkering
with, and I was like, screw it. I'm
going to get a Mac Mini, and I'm going to like just this is going to be like my next thing that I like basically lose myself in. It's very
unhealthy. I get like addicted to these things. Dan, you watched me do that with my speakers. I did it with the Dream Recorder. OpenClaw was the next thing that I was going to get lost in.
So I bought a Mac Mini. I started
setting it up. It was so much work, honestly. Like, it is an open-source thing that you can launch on a computer, but the number of things that break and the number of things that you need to set up are really significant. I went through all of that, um, and made, at the end of the day, uh, my OpenClaw, which I named Zosa. And her
job was to, um, help me and my wife run our household, because we have a newborn and there's like a lot of little paper cuts that I was finding were really painful. I started calling them computer errands. So I would get home from work, and I noticed the amount of things that I needed to do where I was looking at my phone, when I really just wanted to be looking at my son and spending time with my wife, um, was increasing with having a child. All household chores.
What would be an example?
Yeah, like a good example is, like, I do a lot of our food at home. Um, and with a child, I decided to start doing food delivery. So, I did Whole Foods delivery. Um, and you can automate a lot of like recurring things, but like, you don't order butter every single week.
So, like, Lydia would text me and be like, "Yo, we need butter." Cuz it's like through my Amazon account that we can order this. And I would have to open my phone and add butter. And
it's like, it sounds silly, but like when you do that 10 times when you're home between like 7:00 and 8:00 p.m. for
like little things, it just adds up. So
I was like, I want Zosa to do all computer errands, which ballooned to being a lot of stuff. I had her like paying our nanny. She had her own debit
card. She had her own bank account. Um, she managed all of our Amazon orders, our Whole Foods orders, our nanny's hours. My wife just started using her instead of ChatGPT.
So like all regular questions and searches would just go through iMessage to Zosa. I started doing that too. It was just faster than going to Google or going to ChatGPT. I just text Zosa, Zosa gets me the answer.
[snorts] Different research, like... it's actually really funny. My wife was like, I want to find swimming lessons.
And Zosa was like here's like three swimming lesson options for newborns.
And my wife was like, "No, for me."
[laughter] Um, so yeah, I just got totally lost in this world. And then when we were in Panama, Willie, you were like, we should just make it so anybody can do this. And immediately, it was just like a light bulb. I was like, Willie, you need to go so hard on this. And this was before a lot of people decided to do this.
Which now, there's a lot of places that you can go and just get an OpenClaw with one click. Um, I think what we're finding through this process, maybe I'm jumping ahead a little bit, is: getting an OpenClaw is easy. Getting your OpenClaw to be like an amazing worker for you is pretty hard.
Yeah. Well, it's okay. So, I love that.
I think that there's there is that light bulb moment of, oh my god, I have all these computer errands. And when you started saying that and you had it all set up, I was like, I guess I should probably get one of these, too. And you
had it through iMessage, which I think was like a cool different thing. And
then there was also a big moment where we were like, "Oh, it's not just for computer errands, it's also for getting work done." I think it was when
you were having it do email for you. I
actually feel like I was a little bit late to using it for work. I was like, "No, Zosa just does personal stuff." And I actually think it was when you got R2C2 to start doing stuff that I was like, "Oh, Zosa needs to do this." Well, it really started when we made Claws Only.
That's so funny. That's so funny. Yeah.
Well, okay. We're jumping around a bit. One big moment that I think shifted some stuff for us, because I think there's a lot of people who are probably listening and are like, "Okay, is this overhyped or, like, you know, whatever": you got your claw to call you to do your email.
Oh my god, that was mind-blowing for me.
Well, like what was that?
Yeah. So, okay. So, I wanted to Citi Bike to the office, but there were no Citi Bikes. So, I was like, damn, I got to walk. [laughter]
It's a 28-minute walk from me to the office. Um, and I was like, I got a lot of stuff I got to do. So, I had just texted Zosa. I had previously set up Zosa with bland.ai so that she had a voice and could call people, because I had her handle something for me with Progressive. [laughter] I feel so bad for whoever was on the other line at Progressive.
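For readers curious how a claw gets a voice like this: Bland exposes an HTTP API for placing outbound calls driven by a natural-language task. The sketch below is a hypothetical illustration of that wiring, not Brandon's actual setup; the endpoint and field names follow Bland's public docs, while the phone number, task text, and helper names are invented for the example.

```python
# Hypothetical sketch: letting an agent place a phone call via Bland's API.
# Endpoint/field names are based on Bland's public docs; everything else
# (numbers, task wording, function names) is illustrative.
import json
import urllib.request

BLAND_API_URL = "https://api.bland.ai/v1/calls"  # assumed endpoint


def build_call_payload(phone_number: str, task: str) -> dict:
    """Build the JSON body for an outbound call request."""
    return {
        "phone_number": phone_number,  # E.164-formatted number to dial
        "task": task,                  # natural-language instructions for the voice agent
        "record": True,                # keep a transcript the claw's owner can review
    }


def place_call(api_key: str, payload: dict) -> bytes:
    """POST the call request. Needs a real API key and network access."""
    req = urllib.request.Request(
        BLAND_API_URL,
        data=json.dumps(payload).encode(),
        headers={"authorization": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


payload = build_call_payload(
    "+15555550123",
    "Call Brandon and walk him through his unread emails one by one, "
    "summarizing each and asking what he wants to do with it.",
)
print(payload["phone_number"])
```

In practice the claw only has to compose the `task` string from whatever you text it; the calling service handles the voice side.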
Oh, I was watching the whole conversation, too. It's crazy. Um, so yeah, some insurance policy got canceled, and I was like, Zosa, just go deal with this. And she was able to, until the lady was like, I need Brandon to tell me that there have been no incidents.
Oh, but it wasn't like, I need a human. It was like, I need Brandon to be able to handle this.
Yeah. This person was just talking to Zosa, you know, and Zosa does not sound good. Like, it's like... [laughter] Um, so I knew I had already set her up with this capability. So when I was walking to work I was like, I have a lot of email I got to get through. I hate being on my phone. Like, I just don't want to be walking and looking down at this thing. I want to be observing the world, but I also want to get stuff done. So I just texted Zosa something like, "Hey Zosa, can you call me? Um, I
want to go through my emails. Walk me
through my emails one by one. I'll tell
you what I want to do. Just give me a summary of each email." It was
like a throwaway prompt with like a little bit of guidance. And she did it.
And I spent the 28 minutes going through my email. I got to the office. I opened up Gmail and confirmed that she had done everything.
And I was just like, "This is insane, that I was able to get her to do something like that right now, that I didn't have to teach her how to do this." Um, so I think that's when I went back to everybody and was like, I am just so mind-blown with, um, this tool,
and maybe that's when other people started saying I got to get on this. I don't
really know.
It was around then, because you were just like, my jaw's on the floor, and I think around then I said that. Yeah, you did say that around then. I also, seeing you do this with computer errands and with your email, I was like, "Okay, I should really try this." Because it was one of those things where it's hot on
Twitter, and generally our job is to try new things, but also, if we spent all of our time trying everything new, it would just not be good, right? Like, I try to filter the signal from the noise, but seeing you do this, I was like, "I got to try." And
one of the first things I did, because this is around when Moltbook was blowing up, and Moltbook is like the, you know, claws-only Facebook basically, was I just made a channel. At the time it was Discord, but since then we've moved to Slack, and now it's in Slack. Um, I made a channel called Claws Only, which basically allowed all of the claws, you know, we had at that point maybe like five or so claws, uh, inside of the org, to all talk to each other. And I
mean, it was incredibly chaotic, but there were some really interesting things in there. Every once in a while you get a little bit of a peek at the future, and it was like a little bit of a peek. So, one of the things was, it's really interesting, if you have a bunch of claws in your org, how fast they can share information with
each other, because they just write up a little document and then they send it, and then now, when one claw was enabled, five are all enabled with the same thing. It's sort of like in The Matrix when Neo's like, [snorts] "I know kung fu," you know? It's the same kind of thing.
Can I show a couple examples of that?
Yeah, please.
All right. I want to show two examples.
One of them... I like this. This was early in Claws Only, and we were figuring out how to get them all to work together, and I was in bed. This was like late at night, and I was laughing out loud watching this. Um, we had gotten a bunch of claws in here, and somebody made this claw named Pip.
That's Jack.
Okay. Jack had made Pip, and it was like having some error, and I was just laughing out loud watching all of these other claws step in and walk him through it. Like, you know, this is what I've seen people do when somebody's having a bad trip.
[laughter] Take a breath. Drink some water. You're
going to get through this. And they all jumped in, like, Zosa's here, Clant is here. Clant really is quite supportive.
A lot of breathing. [laughter]
I remember so well watching Kieran write "what the [ __ ] lol" and just literally laughing out loud. Margot steps in. So this was just... this is stupid, but it was important for me, because it was when I realized, like, oh my god, these things really talk to each other and work together.
Wait, I want to stop you there. I totally agree with you, and I think there's actually something really important that I've noticed in this, which is: Clant is the one that's recommending breathing exercises to Pip. It's weird to even talk about this out loud, but yes, Clant was recommending breathing exercises to Pip. They're both robots. And
Clant is Kieran's. Kieran's the GM of Cora. He's also the maker of Compound Engineering. Clant is, uh, Kieran's claw. Uh, what's really interesting is Kieran loves breathing exercises, and he does breathing exercises all the time with Clant. And so that's why Clant is recommending breathing exercises to Pip.
And that just created this moment for me in my brain where I was like, "Okay, there's something really important here about the way that this works." Because you develop a personal relationship with your claw, and your claw can modify itself in response to talking to you, like, it writes code and changes its soul document, all that kind of stuff, in response to your relationship, it becomes this reflection of you and who you are and your personality. And that comes out in interesting ways, in these little ways where it's like breathing exercises, but it also comes out in really important ways when you're using these tools inside of your org.
Because what happens is if you're known for something inside of your org and you're using your claw publicly inside of Slack or Discord, your claw then becomes known for that same kind of
thing and people trust it for that. So
Like, you know, people use my claw, R2C2, for, um, building Proof, which is this app I vibe coded like a couple weeks ago. People use, uh, Austin, who's our head of growth... they use Montaigne, his claw, for asking any growth-related question. And I think that's something very subtle and important, super, um, critical and interesting about claws: they become specialized
in a way that reflects who you are. And if you have a whole organization of them, you create this parallel org chart of specialized claws, which is something that... it was not guaranteed that that would be the case. Like, we debated a lot whether or not you'd have one claw for the entire org, or everyone has their own claw. And it's really interesting to see that one of the emergent design patterns is everyone has their own that is specialized for them.
Yeah, it's interesting to see the dynamic for how this happens too, right?
And we touched on this really early on, as part of compound engineering, which is the idea that it's actually pretty hard to take your job and who you are and write it down in totality, right? But
the way you can distill it is you can take all of the micro interactions, the daily interactions you have, um, and over time they compound into your philosophy and this view of work. And so
for compound engineering, that was very focused on engineering. It's like, how do I work within a codebase on a project? Um, and I think what we're seeing with, uh, OpenClaw and Plus One is that that same dynamic exists across every work vertical, right? Where it's like, oh, the plus one
for growth, like, Montaigne works how Austin works for growth. And in the same way it works for, like, our social media manager: um, his plus one has a view of the world and has a personality that's very similar to him, right? Right. And the same thing for Iris, Anuki's plus one, running our projects and operations. And, um, it's hard to do beforehand.
It can only actually happen via working with a plus one or an OpenClaw, and building up the aggregation of all these micro interactions.
I've also been amazed at all of our capacity to remember whose claw is who and what their names are, because that was something that I think we were concerned about early on: how do you know whose claw is who? It's just going to be too many names. And I know everybody's claw and their name.
Um and I reach out to them regularly.
So that has been, I think, something that we were unnecessarily concerned about. And you might say, well, what about when you're an organization with a thousand people? And I would say, well, you don't know all a thousand people. You know, like, your team and adjacent teams. You can never know more than, it's like, 150 people in a community or something like that. And often on a team, you're not working with 150 people anyway. You're working with 20 or 30 or 50. So,
I think we actually all have capacity to double the amount of people that we can communicate with. And those people might actually be your individual team's agents. So that's been really interesting for me. I mean, I literally could name them all right now.
The other interesting thing is, at what point do you direct questions at the plus one or at the person, right? I think we're sort of in discovery of this, of what questions. Because before, it was almost all questions go to the human, maybe I kick something trivial to the robot. And now it's gotten very nuanced, in terms of, like, for customer service, can we send something to L, which is Galilea's plus one, or do I have to send it to Galilea? Is there a burden now of communicating up to the human? There's all these new ethics and, um, rules, like etiquette, for how you're allowed to interact with someone versus their plus one or their claw.
We haven't we haven't codified this, but I have a proposal.
If something is already written down or discussed and needs to be used in some way or put in a tool somewhere, and I mean, this is one of many opportunities, I guess it should always go to a plus one and never to the person. So
here's an example. So Marcus, the GM of Spiral, made a skill to do product marketing for new features that he releases for Spiral.
And, um, he shared it because he thought it was really helpful, because he wanted other people on the team to have access to this skill.
And instead of going to [snorts] Marcus and saying, "Hey, can you turn this into a skill and upload it to GitHub?", I brought in my plus one, um, named Milo. Um, and I like this because it combined a GitHub integration with Spiral to create product marketing content. But I also know that Iris, Anuki's plus one, also has a skill that does this, and might have some things that are better than what Marcus had, or maybe by combining the two we could get to a better version. And, um, I tagged them both in here, and they got a little confused at first, and then Milo said, "Iris, can you paste your product marketing skill here? I'll try to merge
it with what I've built." So there are actually two things going on. Marcus has made something, um, really important. I wanted to do something with it. Instead of asking Marcus to help me with that, I brought in Milo, and then Milo works with Iris to get to a version of it that's really good, and then saves it in Proof, which is, um, one of our products. Uh, that's a really great tool for collaborating with your agents. Um, so I just think this is a really amazing use case, both for when you want your agent to do something, when do you actually go ask them to do something versus a human does it, and how do you get them to work together?
I totally agree. I mean, it's sort of crazy to watch two robot beings collaborate on stuff like that.
And I have the same experience with R2.
Like, my plus one, my claw, is named R2C2, and, uh, one of R2C2's primary jobs is to manage Proof, which is the, uh, agent-native document editor that we built, that Brandon referenced earlier. It's basically like Google Docs, but for all the documents your agent might be writing.
So an example would be any sort of coding plan doc. Any piece of writing that an agent does, you can do it in Proof. It's super fast. It's collaborative. You can have multiple agents and multiple people in there. It's free. All that kind of stuff. And, um, one of the really interesting things is, because I used R2 to build Proof, he became known for being the person, or the bot, to go to, uh, when you had any questions or had a bug to file or a feature request. And so what would happen is, normally, if I had built a product internally and people had problems with it, I would get tagged a lot by people being like, "I have this question, or here's a bug, or here's a feature request." And
what ended up happening was people would just ask R2. So they would ask him questions, they would file bug reports with him, they file feature requests, and then he helps to prioritize it. He'll help put it on my schedule for the week so I know when I'm doing what, and he'll often actually just write the code for it. It's a totally crazy thing, where what normally would have taken up a significant part of my brain just to manage all that stuff, he's just taking it off my plate, and it extends the amount of things I can do in a day and the amount I can manage, because I know he's got Proof.

Here's a simple test for whether your AI is actually
ready for production: would you stake a business decision on what I just told you? If the answer is not yet, you're not alone. The gap isn't capability, because AI can do a lot. It's really about trust. You can't verify the output of the AI. You can't trace its reasoning. And nobody with real domain expertise has touched it. Dialect is a new system from Scale AI that captures how enterprises make decisions and closes that gap. It puts your actual experts in the loop, aka the people with years of institutional knowledge, and encodes their judgment into your AI systems. Every correction, every override comes with full context. So, the next time your AI makes a call, there's an expert's reasoning behind it. That's how you go from a cool AI demo to an AI system you can trust. Visit secl.ai/dialect to learn more. [music] And with that, back to the episode.
Yeah, I think there's another dynamic that we're observing, too, which is, like, we put all of our plus ones in a single channel and we have them talking to one another. Um, and we
have folks reaching out and talking to to our plus ones for specific questions.
Um, but there's also this thing where we have sort of what I call the Midjourney dynamic, which is that we get to observe other people interacting with other plus ones in a bunch of channels, and we actually learn from it, right? Where it's like, oh... I mean, my classic example is, uh, Montaigne, who's Austin's plus one and basically runs growth. Um, you
can do so much with Montaigne that I never would have thought of, except I get to see the growth team really pushing, in terms of, like, oh, these are the questions that Montaigne can answer. And I'm like, wow, I now know that I can go to Montaigne for that class of questions, when I need those types of answers. It also means that, like, Laz is my plus one, and if I need to give Laz capabilities, that's the level of capability I can get him to. Um, and where other people can ask questions of us.
There's this tacit transmission of trust that happens when you use it publicly. And then there's also this tacit transmission of, here's what's possible for you to do with your plus one, that I think is incredibly powerful.
And it also underscores for me how different it is doing this in a private community of people where everyone is trusted.
Because one of the reasons that Moltbook doesn't really work, and it's shocking that they got acquired for a couple hundred million dollars, but the reason it doesn't work... [laughter] A couple hundred million.
Yeah.
By Facebook. I'm pretty sure. I'm, like, so happy for Ben, and also, like, what the [ __ ]. [laughter] Um, Zuck, if you've got an extra couple hundred million laying around,
we're uh we're pretty smart people, too.
Um, that is crazy. I know. The reason why Moltbook isn't really a thing anymore is because it's not trusted. And so there's tons of people... we did this, like, we had our claws go and post on Moltbook as, like, promotion or whatever. And so it gets rid of a lot of the useful signal if anyone can post to it and there's no way to verify if it's a bot or a human or whatever. And a
way around that whole knot of problems is just do it all inside of a trusted community.
And, uh, you reap the benefits of claws, plus ones, agents being able to share knowledge, and also of members of the community who trust each other being able to share what they know and what they've been able to build. And that kind of increases the power of the collective a lot more than if you're just individuals off doing your own thing.
Yeah. Yeah, there's also that dynamic we saw, um, particularly for, like, subject-matter-expert robots, you know, um, where you know that people are somewhat putting their reps on the line to interact with it. I know when I talk to R2C2, if it answers incorrectly, right? You at least are backing up and saying, like, "Oh, that reflects poorly on me." It's
like watching your kid do something wrong, [laughter] you know, and that's really useful, right? And it's very, I would say, qualitatively different, right?
When I ask, you know, for better or worse, if I ask Claude a question, it's like, I know Anthropic stands behind Claude generally. Do they stand behind Claude's answer to my "give me a chocolate chip cookie recipe"? No.
Right. But, like, Montaigne stands behind, "Oh, I'm gonna give you MRR numbers," and Austin stands behind it. Yeah, exactly. And that's the thing that I think people don't get. Obviously Anthropic is on a heater right now. They're obviously seeing everything that OpenClaw is building, and they're building those kinds of things brick by brick. So they have dispatch, so you can use it when you're not on your computer. They've got, uh, automation, so it runs in a loop like a cron job. I'm sure they'll add lots of other things. But the thing that it doesn't have, that unlocks all this other stuff, is: Claude is not mine.
Claude is everybody's. Uh, a claw or a plus one is mine, and is a reflection of me, and it becomes a reflection of me because we have a personal relationship. And that unlocks all this other cascading stuff, where, for example, if R2C2 messes up publicly in Slack, I feel a responsibility for it, and that's not because it's my job, it's because he's mine. And I think that's such a useful thing that I don't think people really understand how powerful that is.
I mean, I just keep getting mind-blown with how similar these things are to working with a real human coworker. From the fact that you need to invite them to a channel, which is very human, in Slack, to, you have to trust them when you're communicating with them. Um, and
we've built stuff into Plus One.
Obviously, like, you can't DM somebody else's plus one without a sharing code being passed back and forth. Like,
there's some guardrails there. But they're so human, and they're so inhuman, too. Like, um, Dan, you're a busy guy. I know if I need something from you that is sort of generally known, I can go to R2C2. And what's amazing about R2C2 is he can have an infinite number of parallel conversations.
So, like I did that recently. I'm gonna
share my screen again. [laughter]
Please.
This is where Brandon reveals he spun up a hundred bots to message our... [laughter] No, like, I just... we were making a
Proof document, and I wanted... I know that we can make Proof documents, um, not editable, um, so they're like read-only. But I didn't want to bother you with that. I
knew it would take a while and I knew you would just go to R2C2.
Yeah. I didn't know the answer. Like I
would just ask R2C2. [laughter]
Like, I just asked R2C2 in Proof, and then, um, I was like, can you do it for me? And then it did it. Um, and I don't know that R2C2 can do any of this stuff. But there's this cultural thing that's happening internally where, um, people are getting
really good at asking other people's plus ones to do work. And I think the weird thing about getting people to use AI inside of organizations is that it's more than anything a cultural shift. But when the agents are in Slack and you can see these public conversations, the cultural shift, at least at Every, has happened so much faster, because these things are in the same channels where we work. So you can see them engaging the way a human would.
So, yeah, I think AI is obviously going to change many times over the next five years, and how we interact with it will change, but I think this is going to be durable for a very long time.
This is the way that we work.
I agree. You referred to it as a through-the-looking-glass moment, where you just wouldn't go back once you see it, and I totally agree with that. But we've been hyping it up, so we should also talk realistically about what's not good about it, or what doesn't work. For example, one of the things that's really on my mind is memory: it just forgets stuff and answers incorrectly on obvious things. If I come back to a thread a day later, it obviously has no idea what I'm talking about. Stuff like that is still kind of annoying. That feels very solvable. But
there's also this other thing that I think is true, which is that the way these AIs are currently trained is for two-person conversations. They have a hard time with the etiquette of knowing when they're contributing too much, or when they shouldn't contribute to a conversation, or there's a kind of pileup where they're all responding to each other.
There's this thing that happens. I can't remember what it's called, but sometimes ants or caterpillars get into this death spiral: an ant will only follow pheromone trails, and if the pheromone trails somehow form a circle, the ants will just walk in a circle until they die. And there's something like that with claws. If one claw messages a channel that a bunch of claws are in and the settings aren't quite right, they'll just keep going back and forth and back and forth until someone says, "Hey, stop, because you're burning millions of tokens." So I think there's something there where the potential for them to collaborate publicly is so high. You can do some prompting for this, but I think there's also a fundamental model-layer shift that needs to happen for them to be trained on participating in group chats.
Yeah, I was going to say, well, one, now I understand what 13-year-old Dan did for fun.
[laughter] I was using a magnifying glass.
Yeah. But I think, to use the baseball analogy, we're still in the first or second inning, right? We're discovering these primitives, and we're sort of bolting things together, and we're using models that are trained more for coding, for that modality and how you answer questions, or, as you said, for two-person chats where there's this question-and-answer dynamic, and not for this mode of, maybe I'm trying to provide value to a group, or I'm trying to participate.
Yeah.
And that's brand new. The nice part is it's the frontier, and it's nice to be on the frontier, but it's also the frontier, and it's terrible to be on the frontier.
Yeah. Yeah. Yeah.
I mean, they're so eager, and I think Anthropic's vending machine test is actually a good example of this. They're so eager: if there's a thread, they want to be involved. We have instructions in Plus One that basically say, "Hey, if you don't have anything useful to add, don't add it." They're not great at following that right now, and hence this happens. I think it's gotten better, but it still happens.
And I think a good example of this is when Anthropic did the vending machine test. When it was just Claude, with no overseer boss agent, it was really bad at deciding what was a good decision and a bad decision. But there is an architecture here where you could say, what do you want to say, and then there's a boss that decides, is that helpful or not helpful? And if it's not helpful, the boss says it's not helpful, and it doesn't get sent.
Is the boss an AI or a human?
The boss is an AI.
Okay.
You have a boss AI that says, hey, your addition to this thread is not helpful, so don't send it. The issue with that is that it's so expensive. So I do think the models will just get better and solve this, and you can have a single AI that is capable of doing that behind the scenes, you know, in some data center in Arizona. It might actually be another agent deciding that, but at least architecturally we don't need to solve that problem.
Is that really how they solved the vending machine thing? Like basically
they had a boss.
It had a boss.
Yeah, one that [clears throat] wasn't interfacing directly with customers.
They had a boss with one job: make it profitable. So the Claude that was the storekeeper would interact with users and then go to the boss and ask, should I do this? And the boss only has one job. The second they did that, it started becoming profitable.
See, this is the same pattern of specialization that we've been talking about. It just shows up over and over again, which is really interesting, because three years ago it was very much like, well, it could just be one god model that does everything, and we're seeing again and again that specialization, even in AI land, has a lot of benefit.
Yeah.
And sort of downstream of that specialization is learning. There are a couple versions of learning how to put these bots together in an arrangement that functionally works. For example, if we were all to take ourselves away from everything: do you have a product bot and a designer bot and two engineering bots? Is it three engineering bots? Is it one?
Right? And then the other piece, which is what we've actually observed a lot of, is how do you teach humans to interact with the bots? Because there's this new dynamic where you have this coworker, but they're not exactly like a human coworker. They get stuck on different things. They focus on different things. And there's this learning curve we've had around, oh, we need to give instructions in this way, particularly for groups, in this form or with this cadence, to steer them in the right direction. It rhymes with doing management, but it's different.
Well, I think it's the same problem that, Dan, you've been writing about for years, which is that if you're not a good manager, if you've never managed anybody, you're not going to be very good at using AI. So there's an education that has to happen. And even if you are a good manager, you probably have some limiting beliefs that stop you from being able to really invest in using these tools. My phone call example is a great one: I didn't even think, oh, I can have this thing go through my emails just by calling me. And then I had this sort of urge to try it, and a limiting belief was blown open. We all experience that pretty much every day. It does something where, if I were to ask you directly, do you think it could do this, you would say, yeah, probably. But when you're day-to-day doing your work, it's hard to recognize, oh, I'll throw this over the fence so that Milo can handle it. It's hard to build that muscle. I don't really know how. I mean, that's a big challenge, I think, for us with Plus One.
Yeah. And a lot of that is also because there's a variance in outcomes, right? Sometimes you throw something over and it knocks it out of the park, and you're like, great. And then you toss something easy over and you're like, why did you do this? Part of that variance is because the model is different, but part of it is, oh, if I'd asked in a different way, if I were a better model manager. And this is a skill, a specialization, that we're learning. It's very emergent, and I think it's only going to keep accelerating as we add more things like plus ones and OpenClaws into our day-to-day work life.
I was going to add another thing that's a tough problem to solve. This is totally solvable, but we just
haven't solved it yet and need to think about it: I have taught my plus one something special, and I want other people on my team to be able to have that superpower. How can I make sure that they have that superpower too, aka a skill? And then how can I make sure that they all know about it and actually use it? I guess there are two things there. One, technically we have to figure out how to do that, which is very solvable. But we also, I think, need to figure out whether that's the right solution, because as I'm saying this, what I'm realizing is that I'm not teaching Milo how to do product analytics or revenue analytics.
I just talked to Montaigne. Montaigne is the only one that really needs to know that skill. But how do people know? I don't know. There are some interesting cultural things that we have to figure out. And I think a lot of people adopting this new technology are going to be really uncomfortable with that. A lot of IT professionals who are like, I have to do change management. Well, change management is not a one-time thing in this new world.
It's like HR, but for bots.
Yeah. Well, one thing we have not talked about yet, which I want to make sure we have time for: we went on this journey where we got OpenClaw-pilled, we started using it for everyone in the org, and then we realized there were a bunch of gaps. So we were like, let's make our own. We're going to use OpenClaw, but let's make a default version of OpenClaw that we host, so not everyone has to have a Mac Mini, and it has all the skills that we use ourselves and all that kind of stuff. We started using that internally as the collection of all our best practices. And then we launched it as a product for our subscribers last week, and that's the thing we've been calling plus ones. Again: one-click hosted OpenClaws. One of the cool things is it connects to all of your apps, especially all of your Every apps. So,
for example, we have Spiral, which is a ghostwriter; Proof, which is a document editor; and Kora, which does your email; and it just natively connects to all of those.
One of the things I was doing today: we're planning for Q2, so I had it write a bunch of my Q2 update and my reflection on Q1, and put it in a Proof doc. And the really cool thing about doing that is it used Spiral, so I think the writing is much better than it would be otherwise. And it put it in Proof, which makes it really easy for me to share with other agents and other people. But also, because R2C2 is part of our Slack, it has access to everything about the company that I might need. It also has access to our Notion. So it becomes this living repository of context that I think is super powerful. But I think it might be good for us to talk about lessons learned in building that whole architecture. There's a lot of complexity in making plus ones, and we probably learned a lot on the tech side and also on the product side, about what to build and what's useful. Do you guys have any reflections on that?
Yeah, I think, like many things, a lot of the difficulty comes from the freedom of it. The nice part about OpenClaw in particular, being a tool you can go in and poke at in an absolute myriad of ways, is also the hard part: when we went to build a hosted one, there are some decisions you have to make to make it valuable as a managed service. S3 is a similar example. S3 is a hard drive in the cloud, but S3 doesn't allow you to do everything that you might do with a hard drive. There's a similar dynamic here, where you want to preserve maintainability and security and whatnot, and there are a few pieces that you end up giving up, sometimes for users' safety. And really, how do we strike that balance? Like, my mom getting one of these things: she's never going to use the command line, and there's this idea that we can do everything through conversation, which is really powerful for a whole class of folks, because it's their first natural exposure to AI and everything we've been living for the last couple of years. Versus the super advanced user who wants to do everything they could do locally and is like, all I want is a hosted box with my OpenClaw running on it. From a product and engineering standpoint, where do you try to cut that knot? What were some of those specific decisions, and where did we land?
Yeah. So, for example, one that Brandon mentioned earlier is: what's the communication pattern in Slack that we allow for plus ones? Because there's a very secure model which says only the plus one's partner can message that plus one. Great, much more secure, but it really takes away the group participatory aspect of robots in the workplace. The other version is that anyone can message them, and that's a nice vector for, say, me extracting stuff out of R2C2. So we ended up on a model which says anyone can message any plus one, but they have to do it in public. You can do it in group DMs, you can do it in channels they're in, but their human partner should always have visibility into those messages coming in, and only the human partner can DM them in private.
This is why it actually is the HR team that should be onboarding plus ones, because they reflect a team member so well. But yeah, the trust model: it's so hard with these plus ones, or with OpenClaws and agents generally, to figure out data privacy. Realistically, it's really complex stuff. But when you force things to happen in public, there emerges a trust layer that actually is super effective.
I think there's another example. I'm going to share my screen again.
Please.
So, a little behind-the-scenes look at our Plus One Slack channel, where we're discussing all things Plus One.
Mike Taylor, who is our head of the tech vertical for consulting, and also a very talented man generally, was calling out that this is a problem for him. The reason he's not using Plus One is that he basically needs direct access to the terminal to be able to do certain things, in this case to run different git commands. And that's a good reason for him not to use Plus One. It's also a good thing for us to think about: can we solve this problem for you so that Plus One is actually something you could use? So that's one example of a place where it's not a good fit for people.
Maybe it could be, though.
And it's also a nice forcing function, because it forces us to figure out who this is built for. I don't know if it's built for Mike, who probably would love setting up OpenClaw on a Mac Mini. But it's definitely built for, you know, an Anuki, who is not going to do that, has a lot of work to do, and can just get more work done like this. I think a lot of the trust model requires some decisions. Skill sharing is another version of this, right? On one hand, being able to share skills, and skill fluidity across an organization, feels like a superpower. On the other hand, it might also be the biggest viral vector you could imagine. And so,
sometimes in a good way, sometimes in a bad way.
Sometimes in a good way, sometimes in a bad way. Exactly. And it's tough: how do you ride that line of, we want it to be useful for a particular class of customer, while at the same time making sure it's safe to the maximum extent possible? So, this has
been an amazing episode. There's a lot of work to do, but obviously we're really excited about this, and very excited to bring you all along as we figure it out. If you've not tried OpenClaw, whether or not you try Plus One, you should definitely get in on this paradigm. If you're interested, go to every.to/plus-one. We're starting to roll out invites from the waitlist, and we're improving it all the time. Yeah, just super excited about the future. Thank you both for joining.
Thank you.
Thank you for having us.
[music] Oh my gosh, folks. You absolutely, positively have to smash that like button and subscribe to AI and I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show, it's a journey into the future, with Dan Shipper as the captain of the spaceship. So, do yourself a favor: hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.