
How Linear Turned AI Agents Into First-class Users

By Every

Summary

Topics Covered

  • Understand AI before implementing
  • Everyone will have many agents
  • Activity metrics mislead AI productivity
  • Find the right problems slowly, build fast
  • Humans retain product intuition edge

Full Transcript

Everyone will have many agents and companies will build their own agents.

[music] Linear becomes kind of like a system for guiding the agents and like building this context.

This is the perfect business for this era because it's still SaaS. You're the one who has the sticky interface, because it's where everyone is kicking things off from and where they're recording all the information. But you don't have to pay for any of the actual tokens.

[music] Karri, welcome to the show.

Oh, thanks. Thanks for having me.

Really, really great to finally meet you. You are the co-founder and CEO of Linear. Little-known fact: the first time I ran into Linear was because we were using it in 2020, at the very beginning of Every, as our content management system for the newsletter. At the time it was this very hush-hush thing. You couldn't get access to it, but if you knew, you knew that Linear was amazing. We used it for a while and really loved it, but then we realized it was made for software, not publishing articles, so we moved off of it. It was really cool while it lasted, though, and I've always admired the level of taste and craft that you bring to what you build, and also the level of thoughtfulness and patience that you build it with. One really interesting thing is the way you built the company originally: keeping it closed for a while, not raising too much money, not putting crazy expectations on the company, and being patient and willing to build something quality over the long term.

And I think that also has something to do with how you approached AI. You guys are really in AI right now. When I think about the companies started in the pre-AI era that are successfully transitioning into this moment, Linear is definitely on that list. OpenAI came out with Symphony the other day, and the main thing it hooks into is Linear; you've successfully transitioned the product to be really agent-native.

But when GPT-3 first came out, I didn't see anything about that on Linear. So I'm curious about that transition for you. What was it like emotionally to have built this product for a particular way of working and building software, then see the world change, not be totally sure this was going to be the thing, and then eventually say: this is the thing, we need to rebuild the product, or change how it works in a significant way? Talk to me about that.

Yeah. Well, first of all, thanks for being an early user. The thinking has always been the same: we just want Linear to be the best product in this category, helping companies move work forward and often build software products. In some ways this new AI stuff doesn't really change that mission; maybe it even improves it. Our goal was always: can Linear take more of the burden of running these product teams, figuring out what to do and when to do it, and let the product teams and the individuals actually build the things? And now they build it with the AI, or the AI builds it. So in some ways the mission didn't change. I actually think AI is making it better, because now we can automate more, take more of that burden, and let people use their craft, their taste, their thinking. But yeah,

I personally have always had this way of addressing problems: I come from a design background, so the way I approach things is to first try to understand them. That sounds obvious, but what happens in the tech world a lot of the time is that people don't try to understand things. They jump into "I can do this, so I'll do it now." But did you think about whether you should do it, or whether it actually helps you? That was our thinking with the early AI and the chatbots. Every company was rushing into the moment: hey, we are now an AI company because we have this chatbot integrated. We tried that too, internally, and we just realized it's not really that useful. What is the workflow where you would actually need this or use this? So we have now spent a couple of years trying to understand these workflows and how people actually

want to use these things. And we did a couple of things well. We released an agent platform, an open platform with very good docs, and agents can build the integration themselves using those docs. Because of that, most of the coding agents out there are now integrated with Linear; OpenAI brought their Codex cloud agent in because we just had this available. We saw a world in which there isn't going to be one agent. Everyone will have many agents, and companies will build their own agents, which we're now seeing with Coinbase and Ramp, who are our customers: they built their own homegrown coding agents, which then integrate with Linear. So Linear becomes a system for guiding the agents and building this context. But we don't try to own everything in this market; we can play with other companies too.

So the approach was much more: how do we understand the workflows, what is actually valuable, what could people use these tools for, versus just jumping in because everyone else is doing it. And by the way, we are now adding a chat interface into Linear, but it's grounded in tools and skills and the understanding we've gathered about how you should use it. You can use it to synthesize customer requests, because Linear can handle that; Linear is the place for customer problems, requests, and other things. The Linear agent can natively work through those and see patterns. We're trying to bring clarity and context to the organization, which teams can then use as part of their AI building workflows.

Because once the AI builds more and executes more, the problem really becomes: how do you productively harness this in a good way? You can task a million agents with doing something, but what should they be working on? Probably not all of it; if you don't think about it, a lot of that work is not necessarily useful. You need some kind of decision-making process: is this actually important? Should we do this? Linear is a way to do that, to build that intent and that context, and then go build it with the agents.
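The request-synthesis workflow described here, an agent working through customer requests and surfacing the patterns behind them, can be sketched as a grouping pass. This is only an illustration of the idea, not Linear's actual data model: the record shape and the `reason` field are invented, and in a real pipeline an LLM would extract the underlying reason from each request's text.

```python
from collections import Counter

def synthesize_requests(requests):
    """Group customer requests by the underlying reason they were filed,
    so the most common core problems surface first."""
    counts = Counter(r["reason"] for r in requests)
    return counts.most_common()

# Hypothetical requests for a "multiple assignees per issue" feature.
requests = [
    {"title": "multiple assignees", "reason": "pair programming"},
    {"title": "multiple assignees", "reason": "shared on-call ownership"},
    {"title": "multiple assignees", "reason": "pair programming"},
]
print(synthesize_requests(requests))
# → [('pair programming', 2), ('shared on-call ownership', 1)]
```

Counting grouped reasons rather than raw votes is the point: hundreds of people can ask for the same feature for two or three very different underlying problems.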

There's something interesting going around right now, I don't know if it's a meme or a mind virus, but the stock market thinks that SaaS is dead. And I think you're pointing to something really interesting, which is the dynamic of a couple of years ago: a lot of companies, including a lot of SaaS companies, rushing to do chatbots. A big part of that is, well, we know this thing is happening, so we have to at least show that we're doing something. And the public markets are now starting to look at that and require it. I imagine that when the AI stuff was coming out and you were maybe testing AI features but not releasing them, there was some pressure, maybe from investors, from yourself, or internally, to do something. It seems like you waited until you had the fat pitch. I'm curious whether that's true, what it was like, and what you think it means for all the public-market SaaS companies that are down right now and whose CEOs are thinking, well, I guess we really need to launch an agent platform or whatever.

Yeah. We don't really have pressure from the investors; that's one benefit of picking the right investors, and they trust us to make the right calls. We obviously did talk about it, but we also had the discussion: we just don't see the value in doing it this way right now; we need to find real value here that actually helps these companies. So it wasn't that bad, though there was definitely internal pressure. And now the speed of the market has picked up a lot. Every month, or every couple of weeks, something changes, and we track those changes and try to see where all of this is going. But that also creates a lot of noise: this week someone is doing these loops, and a couple of weeks later people say no, loops are a bad idea. Those things are signals you should read and understand, but you also need to know that a lot of this stuff is not tested, and a lot of the people testing these things are not testing them in a large organizational context where it actually matters whether they work or not. We haven't tested all these things, so we can't make predictions about exactly how things are going to change.

On the SaaS narrative, I do think it's probably directionally correct: with SaaS companies, as an investor, there's more uncertainty about future cash flows, because if the landscape is changing you can't expect everything to stay the same. But the narrative is kind of simplistic, this idea that people will vibe-code their own CRM tools; I don't think that's exactly what's going to happen. What might happen is that new companies come out. A lot of the public companies are not the most flexible or the most robust solutions out there; they are the big solutions that big companies use, and there's a certain inertia there. I think the public companies probably get hit the hardest here, because their moats are kind of disappearing. Even for us, we consider that we need to live in this day-one world again, where we can't rely on our previous decisions anymore. We have to look at these problems in a fresh way: what happens when these things change, when agents come into the product development process? What new problems come out of it, and how do we help with those? We shouldn't be tied to our past experience or our past product; we should see what the future product should be. I think this is harder for large companies, companies that have existed for decades, so it's not an easy task. Growth companies and startups can do it a lot better.

How big is the team now?

About 120 total. I would say about half of them, 60 people, are on the product team.

And what has that transition been like? I assume that over the last couple of years there have been a lot of divided opinions: Is AI coding really a thing? Is it just glorified autocomplete? Is it going to eliminate programming as a job? How has that change cycle gone, actually changing your workflow and figuring out what the new programming workflow looks like? How did you get the team in shape to do that, and what did you learn in the process?

Yeah, there was definitely a time in the company when we had to encourage people to use these tools more. There can be habits: you've always done stuff this way, so you become less and less interested in trying new tools. But now probably all of engineering, and sometimes our designers and PMs too, are using agentic coding tools. We don't track any specific metric. I joke about this sometimes on Twitter: the biggest vanity metric now is how much of your code is agent-written, or how many PRs you are merging, and I think that's not the right metric. It measures output, but what does that output do? Does it actually generate value? Is it improving the product? If you're measuring these kinds of metrics, you need some kind of counterbalance: what is the actual quality of this work, and is it meaningful? I think that's also what's playing out in the market. We have large companies that are token sellers, and when your business model is for people to spend more tokens, your revenue and your market share will be higher. So there are a lot of incentives telling people to spend more tokens, and not many saying: think about things and spend them well. People are maybe looking at it too simplistically, as if just spending more tokens makes things better, but I don't think that's ever been the case in building products. There's some value in speed and in making changes, but you should also understand that any change or addition you make can have a negative impact. Activity is not always positive; sometimes it can be negative too.

What do you think is a more nuanced metric? If you're judging how well you're doing in this AI world at figuring out these new workflows, adapting to them, and using them in your own work, and if tokens, the number of PRs submitted, or the percentage of agent-generated code are not the right metrics, maybe not even in isolation, what do you look at? How do you think about it?

I mean, I think it's still the classic metrics, profits or revenue or user love; some of those things are what you should be aiming for.

Those seem like lagging indicators, right?

Yeah, they are. But there isn't really one number. I think you should still measure some of these things, like token usage per person or by team, but you shouldn't take it to the extreme of "this is the only metric that matters now." Use it as a signal that we're doing something, and then ask: is the product actually improving? Do we have any indication of that? Do we get comments on the new features? Are there fewer bugs? Bugs are actually a very measurable metric if you run an honest bug-tracking process where you actually track them. And with agents and AI, I almost feel like: why do you even have bugs in your product? There's no excuse for it anymore. Internally we have a zero-bugs policy: we have a Linear triage team, any bugs go there, and there's a one-week SLA in which every bug needs to be fixed. Now the coding agents can do the first pass, and once the fix is done, the agent tags the engineer on it. If the engineer doesn't like it, or there are changes they want to make, they can do that inside Linear too, and they can review the code in Linear. So there's a very good workflow for it now. But it still starts from the question: do we care whether our product is buggy or not? We have made the choice that bugs are bad things, mistakes, and we should fix them as quickly as we can, and that's a priority for everyone. So it's still a choice whether you care about the quality of the output or you just want more of the output.
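The zero-bugs policy as described (bugs land in a triage team, every bug has a one-week SLA, an agent takes the first pass and then tags an engineer) can be modeled as a small scheduling check. A minimal sketch under invented field names; Linear's internal automation is not public:

```python
from datetime import date, timedelta

SLA = timedelta(days=7)  # every bug must be fixed within one week

def triage(bugs, today):
    """Stamp each bug with a fix-by date and flag SLA breaches.

    In the described workflow, a coding agent would pick up each open
    bug for a first-pass fix and tag an engineer for review; here we
    only compute which bugs have breached the one-week SLA."""
    for bug in bugs:
        bug.setdefault("due", bug["filed"] + SLA)
        bug["overdue"] = bug["status"] != "fixed" and today > bug["due"]
    return [b for b in bugs if b["overdue"]]

bugs = [
    {"id": 1, "filed": date(2025, 1, 1), "status": "open"},
    {"id": 2, "filed": date(2025, 1, 6), "status": "fixed"},
]
print([b["id"] for b in triage(bugs, today=date(2025, 1, 10))])
# → [1]
```

The point of the sketch is that "fewer bugs" only works as a metric if the SLA clock is actually enforced, which is why an honest triage process matters more than the number itself.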

What are the ways these tools have changed your product-building workflow, both personally and as an org? What are the most effective ones that might be surprising?

[sighs]

Yeah, on the product side it's definitely a lot better. In Linear I have this skill: I fed it some of our internal docs and blog posts about how we think about product development and made it into a "Linear way" skill. Then I can tell it, for example: look at this feature request and help me understand it. We have things we call feature requests inside Linear, and there's a request like multiple assignees per issue that's been requested by hundreds of people. I can tell it to go synthesize: help me understand the different reasons people want this. It starts by explaining the problem, trying to understand the core problem, which is usually what I want to know. So when I see a new request, I might go into Linear and ask: do we have this kind of request already? Help me understand it. That understanding then helps me decide whether we should tackle this now, later, or maybe never. So before we start building anything, it helps me understand the problem very quickly; I don't have to go ask around or find people to do it for me.

On the design front, I actually don't personally use it much. I like the manual design process. I still have Figma open, and when I have a problem or an idea, I just draw it there. My work is often more about exploring things.

So I actually don't think the speed really helps there; I like the slowness of the manual thing. When you draw things manually, every time you draw something you have to check yourself: why am I drawing it this way? Should I draw it a different way? But the broader design team, when they work on problems, are now building a lot more prototypes, and we have a quite robust build system. You can make a PR, it will run the build, and you get a preview link to the build, which you can then use live in the product. So it helps the testing, or the prototyping stage. But I still tell the designers to explore more freely in Figma first, or wherever, and to think about how to approach the problem. There are projects where it's very clear what needs to be done and you can just jump into doing it, but on a bigger project I think they should still spend that time. The engineering side is probably similar to a lot of other companies: we can fix problems a lot faster once we identify them and decide to do it.

We also use Slack a lot, and with our Slack agent, we'll have a discussion and eventually decide, yeah, we should do this, and then we just tag Linear in there: hey, can you create the issues out of this conversation? And it will do it.
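That Slack-to-issues step can be sketched as pulling decision messages out of a thread into draft issues. A deliberately naive stand-in: the real agent would use an LLM to identify action items, and every name here is invented for illustration.

```python
def issues_from_conversation(messages):
    """Turn decision messages from a chat thread into draft issues.

    A real agent would read the whole conversation; this sketch just
    picks up messages explicitly marked as decisions."""
    issues = []
    for msg in messages:
        if msg["text"].lower().startswith("decision:"):
            issues.append({
                "title": msg["text"].split(":", 1)[1].strip(),
                "source": msg["user"],
            })
    return issues

thread = [
    {"user": "ana", "text": "The export button is confusing."},
    {"user": "ben", "text": "Decision: redesign the export flow"},
]
print(issues_from_conversation(thread))
# → [{'title': 'redesign the export flow', 'source': 'ben'}]
```

The design choice being illustrated: the conversation stays in Slack, but the decision is captured as a tracked, assignable artifact the moment it's made, rather than waiting for a meeting.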

And so it helps us track it, come back to it later, and make it actionable right away, versus: we need to have a meeting, then we start a project, then we start assigning people. I would say the pattern in all of these things is that it shortens some kind of loop and makes it faster. You can do the thing right away instead of waiting until next week or some other time; it's very little effort to do it right away.

Which is interesting, because sometimes it seems like that's the exact opposite of your preferred outlook, you know? Actually, we shouldn't do things faster; actually, we should take things a little bit slower. How does having tools that make you go much faster interact with that outlook?

Yeah, and it's a good point. I think it's more that we shouldn't go fast in deciding things, speed-running the decisions or not even making a decision. Some people do that now: they have an idea, they build it, and now we're all looking at this idea that no one really knows why it exists, or whether we should even do it. Every new prototype or idea can seem useful, but then you don't have a good way of framing how useful it is versus other things. Should we spend the time committing to this idea, when we had already decided on some other ideas? There's this danger when you don't have some kind of decision-making process. We don't have a lot of processes at Linear, but it's more that we want to commit. Once we commit to the thing, the fix, or the project, then I want it to improve fast; I want the loop for working on the problem to be fast. But I don't want the problem-finding to be fast. You should take the time to find the right problem and the right approach to it, and once you've decided that, then you can go faster on it.

Here's a simple test for whether your AI is actually ready for production: would you stake a business decision on what it just told you? If the answer is "not yet," you're not alone. [music] The gap isn't in capability, because AI can do a lot; it's really about trust. You can't verify the output of the AI. You can't trace its reasoning. And nobody with real domain expertise has touched it. Dialect is a new system from Scale AI that captures how enterprises make decisions and closes that gap. It puts your actual experts in the loop, aka the people with years of institutional knowledge, and encodes their judgment into your AI systems. Every correction, every override comes with full context. It's actually really interesting. So the next time your AI makes a call, there's an expert's reasoning behind it. That's how you go from a cool AI demo to an AI system you can trust. Visit scale.ai/dialect to learn more. [music] With that, back to the episode.

One thing that what you're saying makes me feel is that I totally get that approach. And also, for myself as a product builder, I often don't know what I'm doing until I do it. I can't think it through until I've done five different things that I can't explain, and then I'm like, "Okay, here's the thing, and I understand it." Is what you're saying different from that, or is it the same thing said differently?

Maybe it's different, but I can see that workflow. That workflow is a form of understanding: you're trying to understand what you're doing.

Yeah, it's building as understanding. Making things as understanding.

Yeah.

And I think that's fine. The problem just becomes that sometimes you don't know where you are. I think of it as conceptual work: in design, I consider it conceptual work when the output is a concept. It's not something we necessarily deliver, but I went through the process of understanding the problem and I have a concept for it.

What's an example? Because I would assume that the output of a design process would be a Figma file, something you could export. So what's an example of a concept that comes out of a design process?

Well, in the past, in large companies, I've used the term "concept" so as not to scare people. Usually it means rethinking some area completely. It's like a concept car: this car won't go into production, but here are some ideas that could influence the next car. Sometimes, and this is partly a large-company thing but it can happen in small companies too, once people see something very different, their fears start coming up: well, if we change this, what else is going to happen? What's going to break? But the point is not to decide that right now. We just decide: does this concept, this new idea, have merit? Do we think it's important enough to take further, and then deal with the problems later? So you're trying to divide the decisions: which decisions are you actually making now? I've used this in our company too: I'll completely rework a surface and say, hey, I think the project should look like this, which is completely different from what it currently is. And people say, oh, that's actually interesting, or, well, it won't work for this and that, and I say, okay, that's fine. It's a way of exploring, maybe a form of design exploration or a prototype. Even with all this tooling, the output shouldn't always be that we ship something. Sometimes the output can be something internal: hey, we now have a better understanding of the problem, we can tackle it better, and we can make it into a shippable thing later. But we first try to think about it before doing it.

Right. And to you, thinking about it can include building stuff. It's just that the reason you're building stuff is not to ship it the next day; it's to understand it better. Thinking can be designing, it can be writing, it can be talking about it, that kind of stuff.

Yeah. And something I did have to share with the company recently: we always care a lot about quality, and this thinking process of "are we doing the right thing" is what we're trying to decide. Sometimes now, with AI, it's actually hard to tell. The tooling changes all the time, and the LLMs are not deterministic anyway, so you don't always know how useful the thing could be. Then there's a moment where you just have to decide: yeah, I think we should try this. We can try it internally, but we also need to try it with customers, so you put it into some kind of beta. There's definitely nuance to this right now. As always with product building, there's a limit to how much you can think about something inside your company before you need to put it in front of someone else to use, and then you learn from that use. But again, at every stage you have some kind of goal in mind. Once we've put it into beta, the goal should be to understand the workflows, how people use it, and how they want it to be better, not something else, not to ship it as fast as we can. We should be honest about what the actual goal for this stage is.

[sighs and gasps] So we've talked about how AI has changed your internal workflow. I'm also curious how it has changed your product strategy and how you think about building products not like the actual work of

building products but what kind of product to build and what for example should you like AI agents uh connect into your product which I know you've

done versus build your own AI um like into the core feature should you have both um what should they be able to do like yeah how does it affect your your product strategy and your vision for

Yeah. We're now adding a Linear agent that has the context of your work, the organization, and the products you build, and that you can use in different ways, including PM workflows. As a designer, you can also use it to understand problems. We will also do a coding agent, where you can actually start writing code with the agent and see the diffs online.

So it's kind of like a cloud conductor environment where you can see the changes and guide the agent. Our strategy has definitely changed; we're trying to understand what the problems of today are. One thing that is changing: historically, people thought of issue tracking as a ticketing system for the kitchen. An order comes in, someone orders fish, and now a "make fish" ticket goes into the kitchen, except for engineering. We never thought about it that way. For us, Linear is more the backbone we rely on for collecting signals, collecting problems, and collecting decisions that we should do this thing.

So there's definitely a shift. We have to teach people that this product is really meant to improve your team's workflow, not to be a weird ticketing system for different parts of your organization. That's probably going away with the agents anyway; you don't need it anymore, because the agents can take those tickets and complete them. But we think there's still value in collecting that context, shaping the work into something actionable, and providing agents good context from the environment.

But the one lesson we learned with the agents is that it's tough when we're not ourselves in control. We want to support all companies and all agents as much as we can, but if we have ideas for them, we can't act on the ideas; it's on them to do it.

So one of the reasons we're doing this coding agent is that we see a much smoother end-to-end workflow where you start some of your tasks in Linear. You don't have to do everything there, but you can ask the agent, "Hey, does this thing exist already? If not, make an issue, make a work stream out of it," and then start working on it, start writing the code, watch the diffs come in, review them, merge them, or look at the prototypes.

One of the problems I see when I use tools like Claude or ChatGPT or Codex is that I always have to explicitly tell the agent what context to bring. The value with Linear is that the context lives there, and if we inject it smartly as part of the work stream, it feels much more natural: we can design a flow that makes sense and doesn't spam the context windows.

We see Linear as the multiplayer, organizational context of what's happening in the product and what its potential future state is. You might still run local agents, but there are situations where you should just automate some of the bug fixes or small tasks: do it in Linear and let it run in the background in a sandbox while you run your own work on your own computer or somewhere else.
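The kick-off step described here (create the issue, then delegate it) maps naturally onto Linear's GraphQL API. Linear does expose an `issueCreate` mutation, but the idea of modeling delegation as assigning the issue to an agent user, and the placeholder IDs below, are assumptions for illustration. The sketch only builds the request payload rather than sending it:

```python
import json

# Linear's real GraphQL endpoint.
LINEAR_GRAPHQL_URL = "https://api.linear.app/graphql"

# issueCreate is a real mutation; the exact input fields accepted may differ.
ISSUE_CREATE = """
mutation IssueCreate($input: IssueCreateInput!) {
  issueCreate(input: $input) { success issue { id identifier } }
}
"""

def build_delegation_request(title: str, team_id: str, agent_user_id: str) -> dict:
    """Build the HTTP payload for creating an issue and assigning it to an
    agent. Treating the agent as the assignee is an assumption about how
    delegation might be modeled, not confirmed API behavior."""
    return {
        "url": LINEAR_GRAPHQL_URL,
        "body": json.dumps({
            "query": ISSUE_CREATE,
            "variables": {
                "input": {
                    "title": title,
                    "teamId": team_id,            # hypothetical placeholder
                    "assigneeId": agent_user_id,  # hypothetical agent user ID
                }
            },
        }),
    }

req = build_delegation_request("Make a pure-black dark theme", "TEAM_ID", "AGENT_ID")
print(req["url"])
```

In a real integration you would POST this body with an API key header; here the payload alone shows how "start the task in Linear" could become a single structured call.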

That's really interesting. From a product strategy perspective, I'm curious about the decision to integrate your own agents, because before we did this interview, I didn't know about the Linear agent, and I was sitting here thinking, wow, this is the perfect business for this era, because it's still SaaS. There are no AI token costs, but it is the place where you control all of the AI. All the other companies have to deal with the coding agents and the token costs, OpenAI and Anthropic and whatever, but you're the one with the sticky interface, because it's where everyone is kicking things off from and where they're recording all the information, and you don't have to pay for any of the actual tokens.

It sounds like you're adding a layer where you will have to pay for the tokens, and you may prefer that; I think the reason is that a tighter integration between the two means you can do more interesting, more powerful things. How did you think about that from a business perspective, changing your margin profile that much? I don't know off the top of my head how much Linear costs a month, but I assume there's a lot of interesting discussion about how adding in token costs changes the business model.

Yeah, honestly it's something we'll have to see more of in the future. We definitely thought about it and have some calculations and thinking on it. For the coding agents we do have to offer usage-based billing, because it can get very expensive. The basic Linear agent functionality, the part that answers questions for you, should be more included in the system, and we'll have to see how much the usage actually is. But Linear is still going to be a fairly focused platform for a certain kind of thing. You shouldn't be running random things here; it should be pretty clear what you're doing inside Linear and what kinds of workflows you're running there.
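A rough sketch of what the split he describes could look like: an included token allowance for the basic question-answering agent, with metered billing beyond it for the coding agent. Every number, rate, and parameter name here is a made-up assumption, not Linear's pricing:

```python
def monthly_agent_bill(tokens_used: int,
                       included_tokens: int = 1_000_000,
                       usd_per_million: float = 10.0) -> float:
    """Charge only for tokens consumed beyond the included allowance."""
    billable = max(0, tokens_used - included_tokens)
    return round(billable / 1_000_000 * usd_per_million, 2)

print(monthly_agent_bill(800_000))    # within allowance -> 0.0
print(monthly_agent_bill(3_500_000))  # 2.5M billable tokens at $10/M -> 25.0
```

The design choice this models is the one from the conversation: the vendor's margin on the flat seat price is untouched by light usage, and heavy agent usage passes token costs through instead of eroding margin.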

So we're not trying to build a very generic agent platform. It's more of a product-context or product-memory platform where you can integrate those agents: you can use Linear agents from other tools, or bring other tools into Linear.

It's a way to work around your product, kind of an API into the product thinking, versus the more typical tools where you always have to tell it to go fetch this thing, go find this thing, because it has no understanding of what you generally do or what context might already exist.

Can we see a demo? I'd love to see it.

Um yeah.

Um, is the screen sharing okay?

Uh yes.

All right. Yeah. So this is actually my real Linear instance. [clears throat] What we have coming up: if you open a new tab inside Linear, there will be the classic box of "what do you want to do?" There's also another interface where, if you're inside some context like a project, you can do the work there. For example, we will have skills, and with skills we'll have guidance, both organizational guidance and personal guidance, and you can have personal skills or organizational skills.

So, as I was mentioning earlier, sometimes I want to understand problems. Say I want to understand this problem of multiple assignees. I made a skill where, essentially, I fed in some materials from our blog and told it to act like a Linear product teammate, and it has a format: it starts with the underlying need, then works through the problem in a set way. I made this to help my workflow, to quickly understand a feature request.

Well, let's do multiple workspaces. We have this collection of stuff about multiple workspaces, and the skill can go through it. There are probably many different requests in there, and it will start thinking through them: it will look into the customer activity, and it will look through the different things.

What model is it under the hood?

I think we'll eventually allow multiple models, but for now we use Claude for this. Sonnet, or Opus, actually. No.

So it starts going through it: okay, there's a real need, but it's more complicated than it sounds. Companies may want multiple workspaces for different reasons. My understanding generally is that they want one place for billing and governance, but they might have multiple different divisions in the company, so they want to divide the workspace further while still keeping some kind of overarching control. So it goes through, trying to explain what is missing and what is good about the request.

It also makes a few recommendations on product direction: do this, or do that. So it helps turn something quite complicated into an actionable thing that we can talk about as a team.
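The skill demoed here has three ingredients: a persona, source materials, and a fixed analysis format. One plausible way to represent that is a small prompt template like the following. The structure and field names are guesses for illustration, not Linear's actual skill format:

```python
def build_skill_prompt(persona: str, materials: list, steps: list, request: str) -> str:
    """Compose a skill into a single prompt: who the agent acts as, what
    reference material it has, and the fixed order it must work in."""
    parts = [f"Act as: {persona}", "Reference materials:"]
    parts += [f"- {m}" for m in materials]
    parts.append("Work through the request in this order:")
    parts += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    parts.append(f"Request to analyze: {request}")
    return "\n".join(parts)

prompt = build_skill_prompt(
    persona="a Linear product teammate",
    materials=["blog post on how we shape work"],  # hypothetical material
    steps=["state the underlying need",
           "survey customer requests",
           "recommend a product direction"],
    request="multiple workspaces",
)
print(prompt.splitlines()[0])  # -> Act as: a Linear product teammate
```

The useful property of a template like this is that the expensive part (deciding how a product question should be analyzed) is written once, and each new feature request is just a new `request` value.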

A more micro example: say I want to make a new theme, a new dark theme.

What's a theme?

Themes are just, in our app, the way it looks.

Oh okay, got it.

So maybe I want to create a new version of the dark theme: make it just black.

So I can now task a coding agent with it, and it starts looking into it. It can look into the code base, obviously, and try to understand it. First it turns the request into an issue and then delegates it. So it created this issue, it's in progress, it's delegated to Linear, and now Linear starts working on it: it's spinning up the sandbox. One of the benefits of this is that people know I'm doing this. The team knows, and I can say, "Hey Nan, FYI, I'm doing this," and he can also come here to look at what is happening. The agent session is visible to everyone, to me and to him, so once it's done (it will take a while), we can both jump into this chat and tweak it together if we want to, or just see what happened. It's similar to what you do on your own computer, but now it's happening in a shared context, and there's more understanding of where it came from. Okay, this one came from me, but it could have come from a customer discussion.

The shared context is interesting. So two people can be in the same chat?

Yeah.

I don't have Nan ready to demo this, but we did have an instance of it happen kind of accidentally, and we noticed it's actually useful sometimes. Nan, who is our head of product, and Connor, who is our head of design, were both working on some tweaks to the inbox, so a PM and a designer could go back and forth: "no, it's not quite right, let's fix this thing," and they could both see the preview link.

Let's see if I have something here. Okay, so there's one of my previous pull requests. We will have pull requests here. You can see the activity, but you can also see the code. You see the code diffs, and if you want to comment on them or work on this code with the agents ("no, this is not right"), you can. A similar workflow works for code reviews: an engineer might come in and say, "this is not right," and just task the agent to fix it, versus telling the other engineer to fix it. So it kind of collapses the collaboration loop a lot more, and it allows multiple people to use the agents to work on one thing. This one is only a backend change, so there's no client review I can do, but if it were a client-facing change, I could open the preview link and actually see how it looks live.

That's interesting.

But yeah, those are a few things we're adding.

I'm curious about this. One interesting thing is that it seems to increase the surface area of the product a lot. You have to build a lot of different things that already exist to some degree somewhere else. Obviously there are things you can do differently: you can have multiple people in a chat, and it's more plugged into Linear generally. But you kind of have to recreate a lot of stuff that's already being built by a lot of other companies. How do you think about the trade-offs of that, and of doing it well, especially entering something like AI coding, where all the big companies are going as hard and as fast as they can to build AI coding agents?

Yeah, there's definitely a question we need to keep asking ourselves: what is our unique advantage here? Honestly, I don't think we will solve all the different coding needs, but we also don't have to. The value we see is in sitting upstream, where the work is coming from. There's really good leverage there that we can offer companies: work comes in, bugs come in, and they automatically get delegated to agents. Engineers never even see them, or if they do, they see them once there's already a fix being built. It maybe doesn't work for every situation. It's not the kind of agent where you say, "hey, build me a new product"; we don't think that's where we should be working. It's more: you have a large company, a lot of things requested of you, a lot of bugs filed. How do we reduce that workload for you automatically? You can use the other coding agents for other kinds of work; this is where we focus.

That's really interesting.

But generally, we've been thinking about the problems this way: we don't want to be a kitchen-sink product that does everything for everyone. Companies sometimes end up in that state because of enterprise buyers and checklists; you just need to get the check mark into the right spot on the checklist, and we don't think those things create a good product experience.

The way we've always thought about it and built the product is to try to feel what the natural next step in the workflow is. If we go from an issue, the natural next step is that someone needs to fix it, so how do we help people fix it faster? One option is these cloud agents: they can fix it, but now the cloud agent has done this stuff, and how do you know it's good? Then you need to see the code, see the diffs, run the builds, and so on. We're always focused on the workflow and how to improve it, how to help companies output better and faster, versus trying to own every surface. We don't have to own every piece of the surface; we're trying to find an optimized workflow for people to do certain kinds of product things.

So, we're almost out of time. My last question for you: if you had to project how product development will change over the next five years, what will be different, and what will be the same?

I think the one difference is that there's going to be more of a self-driving aspect: you can set up rules or guidance, and we're even building something around project memory. A common workflow for us is that we have projects going on. A project is often a feature, part of the interface or part of the product, and we have a lot of feedback and requests coming in. I think there's an opportunity to turn that into something where the product, or that feature, is kind of an agent itself. It tries to make decisions based on the input it gets. It can still ask for a certain amount of input, but it could run automatically: "Hey, I'm seeing these patterns, the patterns point to this solution, the solution seems like something that could work for people; I made a build, sent it to some customers, and the feedback is good." So it does things on its own based on some context and a rule-based system, or some kind of guidance.

But I do think people should still think, even in a world where agents do some of the thinking and run automatically to some degree. I think it forces people to be a lot more explicit about what they want, what is worth doing, and what areas we should be working in. A lot of this, humans having meetings or discussions, writing issues, writing documents, reading documents: there's still going to be a place where humans need to understand this stuff too. You can't just outsource the thinking purely to the AI or the agents. But the more you can clarify your own thinking and strategy, the better it is for your team, and the better it is for the agents too, because then you can codify some of those strategies and that thinking into these actual autonomous things.

So I personally don't see the future as one where we're replacing humans. I don't quite believe in it; maybe I don't want to believe in it. But things will change. The roles will change. Maybe there's some movement around exactly what engineering does, how many engineers we'll need, and what the job is in the future. But I still don't see how AI actually does all the thinking and makes the choices or decisions. I think product building is still kind of a craft, or an art.

A lot of the time we talk about intuition: we just decide things based on how we understand the problem. We hardly use any data as part of decision-making. Sometimes we use it to look at something, but it's more like a signal. I never personally believed in A/B testing and data-driven product development. I think that could work well for agents, but it doesn't work for all kinds of products, and I also think the best products aren't necessarily built that way. You still need the human touch of what is interesting, or what would make this good.

I love it.

Thanks so much for joining.

Yeah, thanks a lot for having me. This was great.

Oh my gosh, folks. You absolutely, positively have to smash that like button and subscribe to AI and I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show, it's a journey into the future with Dan Shipper as the captain of the spaceship. So, do yourself a favor: hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.
