
From Note-Taking App to AI Workspace: The Simon Last Interview

By No Priors: AI, Machine Learning, Tech, & Startups

Summary

Topics Covered

  • GPT-4 Made AI Real
  • Embeddings Ignore Organization
  • Rewrite AI Every Six Months
  • Notion Shifts to Agent Management
  • Agents Turn Engineers into Managers

Full Transcript

Hi listeners, welcome back to No Priors.

Today I'm here with Simon Last, co-founder at Notion. We talk about their new vision for Notion in the AI age as a platform for humans and agents to collaborate, how the engineering and product org at Notion is changing, and these new tools for thought.

Welcome, Simon. Thanks for doing this.

>> Hey, of course. Yeah, it's really fun to be here.

>> Notion's at scale: an amazing platform, lots of users. You did start quite a while ago. I think of Notion as one of the companies that has really embraced AI quite aggressively. I was told you first got your hands on GPT-4 at a company offsite in Mexico. Is that true? What is the origin story of starting to work on this stuff?

>> Yeah, that was 2022. I'd been watching what was going on in general; I was just super curious about the technology and fascinated to try everything and think about how we could apply it. But it wasn't until I played with GPT-4 that it became really real. When we got access to it, it was sort of a proto-ChatGPT-like interface. My co-founder Ivan and I both got access, and two big things were immediately clear. One is that it was just pretty smart: it could follow reasonably complicated instructions, it could write things for you, and you could edit things. The second big thing was that the scope of its knowledge was extremely interesting: super deep and broad world knowledge. When we played with it, it became instantly clear to both of us that the time was now to start thinking about how to apply this. It's only going to get better.

>> We were talking about Mexico and GPT-4. You guys saw it was clearly the time. Did you start with a particular vision of what you should obviously be able to do with AI in Notion, or did you start pulling people from different teams, or recruiting people and saying, let's experiment? How did you begin?

>> I think we immediately had a long-term and a short-term vision. I'll start with the short-term one. The thing that was immediately obvious was, oh, it could be a writing assistant. It could be in your document: you could select some text and have it rewrite it, you could have it write text for you, maybe look something up and then give you sources or more information. That was the thing we immediately got to work on. We started a tiger team around it, and we were able to launch it two or three months after that.

And then the long-term vision we immediately had was: oh, the thing that looks like it may be possible is more of a general assistant. What if you could give it all the tools inside Notion that a human would have: the ability to create its own databases, query and manipulate them, create documents, edit them, and weave all these things together to do a longer-range task? So we immediately started on both. The short-term one we were able to ship very quickly, and the long-term one didn't really work yet, so that took much longer to get working.

>> The first launch of AI-specific Notion features and products was when, last year?

>> No, it was February 2023 when it launched.

>> My timelines are wrong. Are there a few specific learnings or breakthrough moments since you began releasing that you think are interesting?

>> Yeah, it's been a slog over multiple years at this point, with many, many learnings. Just to give you the arc of what we shipped: the first thing was our writing system. We called it AI Writer. It was the easiest to get working: it's a single-step task, rewriting and editing text. There's no retrieval aspect; it was just raw access to the model to write the text. The next big thing we immediately started working on was Q&A: building a semantic index of the entire workspace, then letting you ask a question and getting an answer that's grounded in the sources. That was also immediately obvious to us, that it'd be super useful, so we started work on it. That one we launched in October 2023.

We started a beta before then, but our GA was in October.

That was just a much bigger effort to get working. Obviously, we weren't just plugging in the LLM; it was actually doing a real-time updating index.

>> Right.

>> We had to get much more serious about the evals and the quality there as well. Q&A has been a multi-year journey. Basically, as soon as we got the Notion index working, it was obvious that we should index everything else as well, and so we indexed Slack and Google Drive, and we're launching new connectors on a regular cadence. Now we have a fairly complete set.

>> One could argue those are very difficult problems that those products natively haven't solved perfectly themselves. So how did you think about taking that on? I don't know if that's an offensive thing to say to other product teams, but it's not working yet.

>> Yeah, it's kind of true. This has been something we talk about a lot, because it's almost like, what right do we even have to do this? But it turns out that most companies are somehow pretty bad at making their own indexes. It's honestly baffled us a little bit.

>> Right.

>> My take, after dealing with all of this and working with the team to get it working, is that there's a little bit of AI-pilled savviness that's pretty important, and then most of it is honestly just craft and attention to detail. In particular, with this indexing and retrieval stuff, in order to really get it working you have to be quite empirical and iterative, actually trying queries. Each data source is a little bit special: you can't apply a one-size-fits-all approach to querying Slack versus querying Google Drive, say; they're completely different kinds of information. We found that there's a bit of craft and love that has to go into it: actually trying a bunch of different queries, actually using it every day, and constantly iterating, rethinking, and tuning how the retrieval works.

>> How did you think about the diversity of how people organize their workspaces? Even Notion use is not homogeneous, right? I'm probably part of 15 workspaces as an investor, and I look at them and think, well, mine's a mess and these people are really organized, and the workflow is reflected in how their Notion works.

>> Yeah, totally. The interesting thing is that with embeddings, it almost doesn't matter anymore. The AI doesn't really care what the tree structure is. All the AI cares about is that there's a snippet of text with the context you need, and then it can retrieve it. So actually, we kind of advise people now: don't worry as much about organization, just find a way to get it all piped in and thrown in there.
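The idea that embeddings make organization mostly irrelevant can be illustrated with a minimal sketch. Everything here is hypothetical, not Notion's actual pipeline, and a bag-of-words Jaccard score stands in for cosine similarity over real embedding vectors; the point is that the index holds flat (text, vector) pairs, so the page hierarchy never enters the ranking.

```python
import re

def embed(text: str) -> frozenset:
    # Stand-in for a real embedding: a bag of words. In production this
    # would be a dense vector from an embedding model; the point survives
    # either way, because similarity is computed on snippet text alone.
    return frozenset(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a: frozenset, b: frozenset) -> float:
    # Jaccard overlap as a toy proxy for cosine similarity.
    return len(a & b) / len(a | b) if a | b else 0.0

# Snippets pulled from anywhere in the workspace: nested pages, flat pages,
# a messy inbox. The index stores (text, vector) pairs; where a snippet
# sat in the page tree is never consulted.
index = [
    (s, embed(s))
    for s in [
        "Q3 planning doc: ship custom agents by October",
        "Grocery list: eggs, milk, coffee",
        "Launch retro: custom agents GA went out last week",
    ]
]

def retrieve(query: str, k: int = 2) -> list[str]:
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: -similarity(qv, pair[1]))
    return [text for text, _ in ranked[:k]]

print(retrieve("when did custom agents launch"))
```

A messy workspace and a tidy one produce the same flat index, which is why the advice reduces to "get it all piped in."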

>> You still make decisions that could change performance quite a bit, like chunking strategy.

>> Yeah, that's super important. But that's transparent to the user, and independent of their particular method of organizing things.

>> Mhm. It still seems like a difficult technical challenge, given how different the content bases are.

>> Yeah, that took a lot of iteration: the chunk sizing, how retrieval works, the different steps in the retrieval pipeline.

>> Ivan said I should ask you how many times you've rebuilt Notion AI and rebuilt your harnesses.
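The chunk-sizing decision mentioned here can be sketched as a simple overlapping window splitter. The sizes and overlap below are illustrative defaults, not Notion's parameters; tuning them is exactly the kind of empirical iteration described in the conversation.

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows.

    `size` and `overlap` are illustrative: too-small chunks lose context,
    too-large chunks waste tokens and dilute the embedding. The overlap
    keeps a sentence that straddles a boundary retrievable from at least
    one chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i : i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "word " * 100  # 500 characters of stand-in content
chunks = chunk(doc)
print(len(chunks), len(chunks[0]))
```

Real pipelines usually split on block or sentence boundaries rather than raw characters, which is one of the per-source decisions that has to be iterated on.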

>> Yeah, it's kind of a running joke at this point. We rewrite our AI harness probably every six months or so, and the time to rewrite has been decreasing, just because progress has been accelerating. I think this is a really key thing, and something a lot of companies get wrong: doing one thing and just sticking with it. You really do have to stay keenly aware of the current state of the models and the technology, and then design the harness, the system, and the product deeply around that. It basically means you have to rewrite it every six months, and I find it pretty fun. It's part of the process: you get to restart and rethink it. We're about to release a new version of the harness in the next week or two, and we're already thinking about the one after that.

>> I think that leads to a set of questions I had for you on how Notion works as an engineering, product, and research organization now that you have the power of coding agents, because I imagine your willingness to rewrite the harness goes up dramatically if agents are going to help you do it.

>> Yeah, that's extremely true. It's been really fun to use the coding agents. The ambition of what I even consider building has gone up a lot.

>> What do you think has most dramatically changed in how you think engineering and product should work at Notion over the last two or three years?

>> Yeah, it's definitely changed multiple times. In terms of the coding agents, we went through multiple eras. There was the tab-autocomplete era, then we got into inserting and rewriting some code, but it wasn't really until the agents started working that things changed. Early last year we started to adopt the agents; I started using Claude Code around April of last year, and that was a huge unlock. The big shift is that you can really push on getting these agents to implement, verify, and maintain stuff end to end, but it requires pretty significant thought about how you architect things and what the verification loop is. The upshot is that if you do it well, you can be much more ambitious about what you're building and also make it much more robust than you could have with humans writing it. The flip side is that if you do it badly, it's all slop.

>> Does that change your lens on what teams should look like at Notion, in terms of size, seniority, anything like that?

>> I would say the fundamental effect is that everyone's individual impact, in terms of their output, can be much higher, and your output increasingly depends on your ability and willingness to use the tools. That's the fundamental thing that's happening. How does that play out? I don't think we've seen much impact on team size, really. We like to work in smallish tiger teams for the most part. If you can make teams small, it's almost always better; that was true before, and I think it's still true, maybe a little more now, but not that much. The main thing is just to really harness the tools.

>> Do you think something different happens to the median engineer in an organization versus the 10x engineer, or the engineer who's 10x more willing to use the tools?

>> Yeah, I think the gap is bigger. You can be a 100x or 1,000x engineer if you're using the tools right now. The minimum bar has not changed, but the maximum bar has increased enormously. One impact it's had internally is that broadly, things feel a little more messy and chaotic, but I kind of love that. There are way more prototypes. For example, our design team made an entire git repo they call the design playground. It's essentially a simplified Notion with a bunch of UI primitives in it, and they've made it really sophisticated: it has an agent in there, and it's pretty cool because all the designers can spin up super-high-fidelity prototypes really quickly. So it's no longer pointing at a mock and asking how it will look; they'll give you a URL to a prototype that's been deployed. That sort of thing is true all the way up and down the stack, for all of engineering: a little more chaotic, more stuff happening, and all the PRs are more ambitious.

>> Do you draw a line somewhere about stuff that's more dangerous or sensitive to touch, like there could be risk of data loss over here, or is it all fair game?

>> We still do reviews on all the pull requests, and all the pull requests are now written by agents. They're often larger and more complex, which is the worst part, but the better part is that they're often much better tested, and we can demand much better testing for the things that merit it. I never produce a PR that hasn't been fully tested anymore. So you can get to a pretty high degree of confidence that it works, but it requires that you're not just vibe coding by saying the thing you want. You're thinking carefully about what the change is you're trying to make, how it can be verified, and how it can be deployed safely, and then enlisting the agent to help you with that process.

>> You said earlier that the general assistant doesn't quite exist yet. What do you imagine Notion's agents being able to do over the next year or two that they're still blocked from, either by capability or by your harness work?

>> We struggled for a few years to build an agent. It always sort of worked but then wasn't that useful, largely because it was too early. We tried to build an agent three or four times, and then we finally launched it last fall, around August or September. So if you use Notion AI now, it's the full agent, with access to pretty much everything in Notion. That totally works; a lot of the original vision we had totally works now, and it's fully shipped. Last August or September, we shipped our personal agent, so pretty much every user in Notion has an agent, and it has access to all the things that the user has access to. It can create a database for you, update things, create documents, search the web, and do research. The second big thing, which we just launched last week, was custom agents. You can create a new custom agent and give it a name, and unlike the personal agent, by default it doesn't have access to anything. You have to grant it access, but once you do, it can run autonomously in the background. For example, you can give it access to its own database to file tasks, say, and then attach it to a Slack channel, and it will start responding to people on Slack and filing tasks. That's one use case. Another: you could give it access to a database of weekly reports and then let it search the web or search your workspace. So a custom agent represents some knowledge-work task that you want done autonomously. One thing I'm really excited about going forward is that we want it to be extremely good at bootstrapping its own capabilities, from an initial kernel, allowing it to bootstrap itself to do anything. So even, for example, building an integration that we don't support yet, deploying it, and then using it.
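The custom-agent model described here, named, no access by default, explicit grants, woken by triggers, can be sketched as a small data structure. The field names, the resource strings, and the Slack trigger format are all invented for illustration; this is not Notion's API.

```python
from dataclasses import dataclass, field

@dataclass
class CustomAgent:
    """Illustrative model of a custom agent: named, least-privilege,
    and woken by explicit triggers rather than running ad hoc."""
    name: str
    prompt: str
    # Unlike the personal agent, a custom agent starts with no access.
    granted_resources: set[str] = field(default_factory=set)
    triggers: list[str] = field(default_factory=list)

    def grant(self, resource: str) -> None:
        self.granted_resources.add(resource)

    def can_access(self, resource: str) -> bool:
        return resource in self.granted_resources

# The example from the conversation: an agent attached to a Slack channel
# that files each report into the one database it has been granted.
triage = CustomAgent(
    name="support-triage",
    prompt="Respond in #support and file each bug as a task.",
)
triage.grant("database:tasks")
triage.triggers.append("slack:#support")

print(triage.can_access("database:tasks"))    # True: explicitly granted
print(triage.can_access("database:payroll"))  # False: never granted
```

Starting from zero access and granting per resource is what makes autonomous background execution tolerable: the blast radius is whatever was explicitly granted.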

>> So you imagine that Notion agents fit the broader definition of agent, where writing code is a tool?

>> I think it's pretty key, yeah. I think of coding agents as the kernel of AGI; AGI will be a coding agent. Code is just a really useful primitive for representing deterministic logic. The exciting thing about applying it to a knowledge-work agent is that it can bootstrap a capability. Like I said, if an integration doesn't exist, it can build it. If it needs to connect itself to a new data source, it can do that.

>> Notion is at scale, but it's operating in a landscape of productivity and platform players at even more scale. Many of these will end up with their own agents; lots of people, from the labs to the Microsoft world, are trying to integrate other data sources. So you have these crossed attempts to integrate and index. How do you think that plays out? What do you imagine Notion's agents are best at, or what do they have the right to go do?

>> If you look at the landscape, I would say there are the labs, then the software platforms, and then infrastructure. In terms of the labs, we see ourselves as kind of the Switzerland for models. We think, and our customers agree, that they don't want to be locked into a certain lab's model. The labs are always releasing new versions; in any given month, one is better than another. So we want to be a place where you can easily get access to all the best models at any time, and easily switch around.

>> Do you think open source plays into that as well?

>> Yeah, absolutely. The open-source models are actually getting really good. There are like four different Chinese models now that are quite good. We actually just released one of them in our agent last week, and we're going to do all four for sure. They're quite good, and way cheaper than the frontier models, so there are a lot of use cases where you'd want that, and we want to give it as an option. In terms of the rest, we think of our role as taking all the best models we can, creating really high-quality, state-of-the-art agent implementations where people can easily and conveniently get access to them, and then making a collaborative workspace that's really good for humans and agents to coordinate in. I think it's something that's very needed in the world, and we're just trying to do it in a really tasteful, well-executed way.

>> You were describing how you need the index to make the agents good, and you give the agents access to the tools that we humans have in Notion. How do you think about the structure of Notion, and where it's useful, or not useful or relevant, for agents, like blocks and databases and such?

>> It's all still extremely useful. There has been a challenge, though: we want to make it really convenient for the agent, and that's a new thing that didn't exist before. In the past, it had to be convenient for humans, and then we also made APIs convenient for humans writing code against our API. So we essentially have a new customer, which is the agent. At first that was definitely a problem. For example, our API uses this crazy JSON format for blocks that by default is really verbose and horrible for the agent. But we took on that challenge and designed really convenient APIs for the agent. We created a Markdown dialect that looks like normal default Markdown but is enhanced with all the Notion blocks, and the models are really good at it. It works really well; that's how the agent reads and writes pages. For databases we use SQLite, so the agent basically speaks SQL, which also works really well. The default thing did not work well, but we took that on as an engineering challenge, and I would say we now have extremely convenient APIs that the agents are naturally good at.
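The contrast between verbose block JSON and a Markdown-style surface the models already know can be made concrete with a toy converter. Both the JSON shape and the dialect below are invented for illustration; they are not Notion's real formats.

```python
import json

# A toy block tree in the spirit of a verbose JSON block API
# (this shape is invented, not Notion's real schema).
blocks = [
    {"type": "heading_1", "text": "Launch plan"},
    {"type": "paragraph", "text": "Ship custom agents."},
    {"type": "to_do", "text": "Write announcement", "checked": False},
    {"type": "to_do", "text": "Update docs", "checked": True},
]

def to_markdown(blocks: list[dict]) -> str:
    """Render blocks as a Markdown-like dialect. Models have seen vast
    amounts of Markdown in training, so this surface plays to their prior;
    the JSON form spends far more tokens saying the same thing."""
    lines = []
    for b in blocks:
        if b["type"] == "heading_1":
            lines.append(f"# {b['text']}")
        elif b["type"] == "paragraph":
            lines.append(b["text"])
        elif b["type"] == "to_do":
            box = "x" if b["checked"] else " "
            lines.append(f"- [{box}] {b['text']}")
    return "\n".join(lines)

md = to_markdown(blocks)
print(md)
# Rough token-economy check: the JSON encoding is several times longer.
print(len(json.dumps(blocks)), "chars as JSON vs", len(md), "as Markdown")
```

The same logic runs in reverse for writes: the agent emits the dialect, and the platform parses it back into blocks.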

>> How did you figure out what would make the API better for agents?

>> That's a good question. It's a combination of things. It's very empirical: we're just playing around and noticing, oh, it's not very good at that, or, that's way too many tokens, how can we make this smaller? And then a little bit of first-principles thinking: what are the models trained on, what's in their prior, what do they know, what would they naturally be good at, how does the agent loop work, and what would be the convenient, efficient pattern for accessing these things? And then just a lot of playing around.

>> I hear user research, where the user is actually the agent, and then ongoing evals.

>> Yeah, you just chat with it.

>> The user is always there. It's ready to talk to you.

>> Actually, that is wonderful, that you have infinite access to it.

>> You have infinite access to it, yeah. And you can script and scale the access as well.

>> I assume you have agents running right now; actually, I know you do, because you walked in and said, "Hey, I need to get access to the Wi-Fi. I need power. We can't block the agents while we're doing this." What do you have running right now? Tell me about your setup.

>> I'm working on a new prototype, so I have a couple of agents working on that. And my setup these days is just either Claude Code or Codex. I like the CLI tools; they're super simple and work pretty well. I'm pretty comfortable in the CLI.

>> You don't need my generated game CLI commands.

>> It's a very cool idea. My whole goal these days is essentially to have as many agents running as possible, and to run them all the time. So, for example, every night before I go to bed, I'm like, "Okay..."

>> Let's go, guys.

>> Yeah. Basically, what I have to do is make sure I've given it enough stuff that by the time I wake up in the morning, it will still not be done.

>> That's victory.

>> Yeah, that's victory. I've done that pretty well the last five nights. My personal record is that I've had a coding agent running for, I think, 13 days straight without stopping, just working through tasks.

>> Well prompted. I admit to having woken up in the middle of the night multiple times this week and just being like, "Are you still going?"

>> Yeah, I know. It's kind of nerve-wracking. I'll always check it one last time before bed and make sure that it's still spinning.

>> What about on the Notion agent side? Do you have a workflow there that's core to daily work?

>> Yeah, I use our personal agent all the time. It has all the context about our company and everything that's going on. So, for example, last night I was asking it how the custom agents launch was going and what signals we were getting from it. It's super useful for that. And then I have many custom agents running. My personal favorite is an email triage agent. It has access to all of my work and personal emails, and it just wakes up every day and archives all the stuff I don't need to see. I've trained it over time to learn my preferences.

>> Do you actually label data for it?

>> It's pretty easy to do, actually. All you have to do is make the agent, give it access to email, and make a blank page that's its memory. You let it edit that page, and then you say, okay, now go look at my emails and interview me. So it will propose things that it thinks it should archive, and you can correct it, and it will use that to generate a list of rules about what it thinks is correct or not. For the first couple of days I was correcting it on things. After a couple of weeks or so, I dropped the approval entirely, and now it just automatically archives all the things I don't need to see.

>> Wow.

It completely solved my email problem, because I don't use email that much for work stuff; it's mostly in Slack. 95% of the personal and work emails I get, I don't need to see at all, so it's just a waste of time. It completely solved that. Now my inbox is only stuff I need to see. I've got lots of custom agents running. There's another one I built that triages all internal feedback and bugs. We have a Slack channel where people post random product feedback and bugs. In the past, it would sometimes get answered, but sometimes haphazardly get ignored, just because there are so many teams. So this agent's entire job is to route each report to the right place. It uses a similar memory pattern, where it learns on the fly where it's supposed to file bugs, and over time it has built up hundreds of rules that it learned. For example, if there's a bug about the mobile app, it knows to route it to the mobile team and file a task in their database.

>> Do you look at that generated and updated memory, since it's legible to you, to check whether it makes sense?

>> I did at first. But once you trust that it's kind of working, you just ignore it, and if it ever breaks, I'll go fix it. It'll break every now and then.

>> But the benefit is not reading your email.

>> Yeah, I just don't read it. Generally, the pattern I follow is: I build it as a prototype, I have it in an approval mode where I'm watching it closely, and then after it runs a bunch of times, you kind of trust that it's working.

>> Is there anything you do internally at Notion to make sure non-technical teams have the intuition for how to build agents, or how to express that productivity too?

>> Yeah, it's a great question. I mean, we do workshops and hackathons pretty frequently. So for example, like a month ago I did a hackathon with the people team. The people team has been amazing; they're actually one of the highest adopters of custom agents.

>> You know, they do all these kinds of workflows in Slack and Notion, kind of manual work like that. And yeah, people are super excited to try it, and maybe just need a little bit of a push in terms of intuition and getting started. But honestly, I've been super impressed. I think the concept is kind of intuitive once you get past a bit of the technical barrier: what is a prompt, what is the agent, how does it get triggered and woken up, how does that even work? But once you get past that, I think it's actually a very human-like interface.

>> Yeah. Maybe the biggest barrier is actually just getting people to try it, and assuming it's going to work at all. Right.

>> Yeah. Yeah. You and Ivan originally met on the internet, in the tools-for-thought community. It feels like the tools we have for thinking are very different now. Has your core conception of Notion changed over the last few years because of all the AI stuff? What thinking does the tool do for you? What should agents do for you? What do you get to do?

>> Yeah, I would say it's changed quite a lot. Broadly speaking, before AI, our goal was to create the best tool for humans to directly perform their work.

>> And now the goal is to create the best tool for humans to manage agents to do the work for them.

>> That's a big shift.

>> That's a pretty big shift. It's pretty fundamental. But it turns out that you need most of the same primitives. Actually, all the primitives that we built are still extremely useful. It's more that we just needed some new primitives, like representing what an agent is and how it interacts with your pages and databases. But you still need the same primitives. You still need a document. It's an unstructured way to write stuff. Agents love to write markdown documents.

>> Yeah.

>> It's still very relevant, and you still need a database; you still need structured data. You know, if you're working with your swarm of like 100 background coding agents, you don't want to have 100 chat threads. You want a kanban board. It's, you know, the same as before.
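The point about structured views over many agents can be illustrated in a few lines: a kanban board is just the same records grouped by one column, so 100 agent runs become three columns instead of 100 chat threads. The data below is a toy sketch with invented field names, not Notion's schema:

```python
from collections import defaultdict

# Toy records for background coding agents; field names are invented.
agent_runs = [
    {"id": 1, "task": "fix login bug", "status": "in_progress"},
    {"id": 2, "task": "bump deps", "status": "done"},
    {"id": 3, "task": "add tests", "status": "in_progress"},
]

# Grouping by the status column turns the flat list into board columns.
board = defaultdict(list)
for run in agent_runs:
    board[run["status"]].append(run["task"])

print(dict(board))  # {'in_progress': ['fix login bug', 'add tests'], 'done': ['bump deps']}
```

The same structured data can back any view (board, table, timeline), which is why the old database primitive carries over directly to managing agents.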

>> Makes sense. You still need the coordination structure. Since you're ahead on this stuff and trying to figure out how to bring Notion and its users along with you: what is something that's really changed about how you personally build, even in the last six months?

>> I mean, it's completely changed. I haven't written code since like last summer. I don't type code anymore.

>> Yeah, it's completely shifted. I mean, we went from humans typing all the code, to we're still typing but with tab complete, to we talk to the agent and it does little tasks for us but we're still in the outer loop. And then now it's more like I design an end-to-end task that involves making some change and verifying it end to end, and I'm just the outer verifier, double-checking at the very end that it's correct, and monitoring it if it's going off the rails. So it's a complete shift: I'm now the agent manager instead of the coder.

>> Amazing. Well, thanks Simon. This has been a super great discussion about how we're all going to become agent managers, and hopefully in Notion.

>> Cool. Yeah.

>> Find us on Twitter at NoPriorsPod.

Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
