
AI in 2026: Reid Hoffman’s Predictions on Agents, Work, and Creation

By Every

Summary

Topics Covered

  • 9-to-5 Ends with Entrepreneurial Workflows
  • Creative AI Addiction Fuels Superagency
  • AI Backlash Intensifies as Scapegoat
  • 2026 Demands Meeting Agents Everywhere
  • 2026 Shifts to Non-Language Models

Full Transcript

What we will see more of in '26 is a combination of parallelization, longer workflows, and orchestration. People will experience what it is to have their computer running separately from them, doing something productive for them as they're walking away to go get their coffee, whether it's a Mac mini running Claude Code or Codex. For a company to be a thriving, going, and growing concern, evolving with the times, you will need to be recording every single meeting and using agents on it to amplify your work process.

[music] Reid, welcome to the show.

>> It's great to be back. And as much as I try to avoid doing predictions, you're one of the few people that I will essay this with.

>> I feel very blessed. Thank you for taking the time to do it with me. This is your third appearance on this podcast, and that makes you the most frequent guest. So I'm honored. The feeling is mutual. I'm honored.

>> Yeah. [laughter]

>> Okay. So, we're heading into 2026. By the time this podcast comes out, it will be 2026, so for all of our purposes, it's 2026. I think this time of year is such a good time to look back and look forward. So I want to start with a couple of pre-2026 predictions that you made and reflect a little bit on how things went in 2025 and what might be different about how you're seeing things. The first one: we dug up a quote from you in 2017 where you said you thought the 9-to-5 work model would be extinct by 2034. Where did that view come from, and how has it changed in 2025 as we've moved into agentic territory?

>> Well, let's see. Part of it was an extension of a very old set of thoughts of mine, a startup view: more and more of work, and more and more of career, will become entrepreneurial. It doesn't mean that everyone is going to start companies or launch new products or anything like that. But it does mean that the old career ladder, the career escalator, is no longer the way to think about it. It's no longer What Color Is Your Parachute, that kind of thing. It's to think about your economic life, your work life, your job life with the skills of an entrepreneur. That's part of where it came from. And it wasn't meant to be that 9-to-5 is gone because everyone's going to be working 996 or some equivalent,

>> which would have been a good prediction maybe for Silicon Valley. [laughter]

>> Yes, exactly. And by the way, startups in Silicon Valley have always worked 996, frankly even 997, [laughter] in how they operate. But it's more the fact that the way you're going to be working isn't going to be this clock-in pattern: hit your punch card at the door, be there, take your lunch break, come out at 5. It's actually going to be running Claude Code on Mac minis in parallel to what you're doing. You're going to be in a crunch where this week you're doing a 120-hour week, and the next week you might be doing 40, or 10, as the case may be. That kind of entrepreneurial journey is more of what's going to be happening, and I think we're still on track for it. Here we are in '25 going into '26, time of broadcast '26, and you begin to see the impacts of the fact that all of our work is going to be enmeshed in agents, in parallel, in management, all of which we'll get into in some depth. That, I think, is part and parcel of it: it's not just 9-to-5.

>> Got it. So when I read that quote, I was thinking it's not going to be 9-to-5 meaning we might not be working that much, but you're saying it's more an entrepreneurial way of working, where it's suffused throughout your life.

>> Exactly.

>> By the way, that can also mean in some cases you're just not working as much. I mean, it's a much higher range.

>> If you're Tim Ferriss.

>> Tim Ferriss, yes. He's already been doing that.

>> I know, right. He's got to do a new 4-Hour Workweek.

>> Yeah, the future's already here. It's just unevenly distributed. [laughter]

>> That actually makes me think of one of my hot takes for 2026, so I think we can jump there real quick, because I really want to know what you think. We've been on this trajectory of talking about addictive technologies and social media and how social media breaks your brain. And I think we've put the act of creating things up on a pedestal as something that is inherently good and not necessarily addicting. My experience with Claude Code right now is that I'm addicted to it. I cannot stop. I just want one more prompt. And I think, shockingly, the most addictive technology of 2026, and the narrative we might be talking about at the end of the year, is how addicting it is to just make things. What's interesting is that there's a certain class of people that knows that already: CEOs of startups, who have that experience because you're always looking at your chat or your Discord or your Slack, and you're always like, "Oh my god, I need to do something else." But I think now that's going to be a broadly distributed thing, where everyone's just prompting Claude Code.

>> So, to the first part: I definitely believe it can be addicting, and I think it's addicting for a much broader range of people than we normally think. That's partially because most people just don't have the experience of succeeding at creating. The dopamine hit is that you succeed at creating, and part of the thing with Claude Code, and generative AI more generally, is that it suddenly goes: oh my god, I can create something interesting. And I think that's actually a healthy dopamine hit. One of the things that's weird about the word addiction is, you say, well, I'm addicted to breathing, and actually that's a good thing. [laughter] Addiction has this negative overlay, but if you get very committed to something, the question is: is it unhealthy for you? And in the creation case, it's not unhealthy. If you're like, no, no, actually I'm getting a little more obsessive, I want to finish this, I want to make this really great, that's actually part of where we explore our fuller potentials, our superagency, if you will. That kind of thing, I think, is really good. And I do think it's part of the generative AI revolution in ways people miss. The discourse is generally quite mixed and negative, and it will actually be more intensely negative next year because of the transformations and changes. But that's part of the reason it's so important for people to go: oh, wait a minute, I can be so much more human doing this, and we can do it collectively together. We need to sort out the fact that, yes, it's going to be a turbulently created future, but we can do amazing things. So I think that kind of creative addiction, creative commitment, creative exploration is actually one of the really important things, and people have been discovering it not just with Claude Code but also through prompting these agents, creating images. That's part of the reason why Sora went to the moon in a couple of weeks: it's like, wait, I can make something here.

>> I think that makes sense. I want to dig into one thing you said earlier: you think that, I don't know if backlash is the right word, but negative sentiment toward tech will increase in 2026. Is that one of your big hypotheses? Tell me about that.

>> So, while there's been a lot of discussion, the actual overall impacts of AI have been relatively minimally felt, and most of the places where they're described as being felt are actually fictional. For example: oh, AI is causing electricity prices to rise. Really, a little bit here and there, maybe in certain grids, certain power stations, but mostly it's old grids, old power stations, the increasing cost of energy, the net impact of tariffs, and other things. If you actually do an analytic map of where the data centers are, it doesn't correlate with the places where electricity prices have gone up. But that's going to be the meme. The meme is: college students aren't getting hired because of AI. The meme is going to be: electricity prices are going up because of AI. The meme is going to be: the price of eggs is going up because of AI. Because there are a lot of people who go, I'm looking around for something to blame for things being troubled, bad, different than I would like. And it is going to be a very turbulent year. So it's going to be almost like the Old MacDonald song: AI, AI, AI is the way this is going to play. And I think it's really important for people to understand that AI hasn't had any of that impact yet, but it's actually going to start. For example, it's suddenly going to be: hey, I used to be really competent at my marketing job. It'll be: hey, I only want to hire when it's part of an AI transformation, à la Shopify, and that kind of thing. A lot of the employment story is really a reworking of the COVID disaster, the mis-hiring, the mis-organization, and so on. But AI actually is going to start impacting, so it moves from, call it, 98 or 99 percent fictional to 90 percent fictional, and that will intensify the desire to say a whole bunch of negative things. For example, I've been surprised so far, and I think it's just because people don't pay much attention online, that the creation of a Christmas record for my friends using AI hasn't gotten a whole bunch of negative blowback of, oh, this is going to be terrible for artists and terrible for creatives and so forth. I think that will happen. I'm going to create some more records, and I think that reaction will come, and I actually think it's not the case. I just think you need to adjust to using it, to treating it as a new basis for your creativity, for your industry, for your work, and that transition is what's going to be difficult. But I think next year is going to be much more negative on AI than this year in general popular discourse.

>> So to repeat that back: I think you're saying that so far it's kind of a meme that AI is bad, and the meme, to a large extent, is making AI a bit of a scapegoat for just about anything bad. If you're laying people off, it's easy to say it's because of AI, and that will probably continue. I do think that's true; that's just going to continue. And there will be increasing real negative impacts that people are going to have to deal with. You're a programmer and you're coming into work and you're like, "Oh man, my job just totally changed. I'm not in the code anymore." That's going to be upsetting to people, and it's going to lead to changes in the way organizations are run and who gets hired, and that's going to make people upset. What do you think is the right move for big AI companies in an environment like that? How should they be talking about it, how should they be positioning? To some extent, it's probably not even desirable to prevent any kind of backlash; it's normal for people to have bad feelings about new things. But what's the right way to deal with it strategically?

>> Well, the most substantive way is to make it pragmatically helpful to as many people as you can. It's part of the reason why the podcast you and I are doing, among other things, says: hey, explore it, get a chance, use it. You can use it for personal things. If you have any kind of serious medical question and you're not getting a second opinion from ChatGPT or your favorite frontier model, you and your doctor are both making mistakes. And similarly: how do I use it to help me with my work? How do I use it to help me learn things? How do I have it help me be creative? If you can't, in each of those areas, find something where it's seriously helpful, you're not trying hard enough. You're not looking. That doesn't mean it's everything; it's not the Swiss Army knife for everything yet. There are many limitations, but it is enormously amplifying. That's part of the reason why everything from writing Superagency to creating holiday Christmas gift records is about showing: this is the kind of thing we can do now, and everyone can do it. And by the way, not only can everyone do it, but the people who get more expert, people who are much better at music than I am, which is 95 percent of humanity, can then do much better. It's an amplifier for everybody. I think that's the most substantive thing. Then, on the communications side, one of the things various very well-meaning AI creators are saying is: oh my god, it's going to be a white-collar bloodbath, etc.

>> I think I have one particular person in mind that you're talking about.

>> Yes.

>> [clears throat] And look, I get it. You're trying to say, "Hey guys, things are going to change a whole lot. Really pay attention. I'm ringing a bell so you start adjusting to this." But that kind of bell-ringing is like yelling fire in the movie theater. It doesn't create a productive response. The important thing is to be orienting toward a productive response. That doesn't mean papering over the difficulties of the transition, but it's more like: we're going to be going into these intense, category 10 rapids, and here are the kind of paddles you need, and here's what you should be doing as you go into them.

>> That's the thing. If you're going to say we're going into the rapids, you want to offer the paddles, too. If you're just saying we're going into the rapids, that's not really helpful, in my view.

>> Yes, exactly. And that's, I think, the comms part of it for everybody.

>> Yeah. If 2025 was the year of agents, what's 2026?

>> Well, by the way, I think there's an interesting thing on this. I don't think 2025 was actually fully the year of agents. There was a lot of agentic development, but it was mostly only agents in code, right? Claude Code, Codex, etc., which, by the way, a relatively small percentage of humanity actually fully experienced. If you go to the vast majority of the people you and I know, they're like: you mean agents? I asked ChatGPT a few questions and had some dialogue. And it's like, no, that's not actually agents. Yes, there's a chatbot, but it's not really agents. Agents are doing stuff, and doing it in parallel and in amplification and so forth. So code had it. What I think '26 will be is how we move from this basis of agentic coding agents to agents in everything else. There's just going to be a whole bunch of that. For example, call it 10 to 100x the number of people will experience what it is to have their computer running separately from them, doing something productive for them as they're walking away to go get their coffee, and then coming back, whether it's a Mac mini running Claude Code or Codex, different questions, but applied to a lot of other things, because that orchestration then allows the parallelism, allows eight hours of work, allows that kind of thing. I think that will be broader. And then the more subtle thing, which I think will also be a really important part of '26, is orchestration. Namely: when I'm doing this particular form of intellectual knowledge work, thinking work, cognition work, I now have agents working with me, for me, and I'm orchestrating them. I don't think it'll be March '26; I think it'll be more Q4 '26, or growing into that, and then maybe even intensively '27.
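The parallel-plus-orchestration pattern Hoffman describes can be sketched in a few lines. This is a hypothetical illustration, not any product's API: `run_agent` is a stand-in for whatever does the real work (a coding agent, a research agent), and `orchestrate` simply fans the tasks out in parallel and gathers the results.

```python
import asyncio

# Hypothetical stand-in for a real agent call (e.g. a coding agent
# chewing on one task); here it just simulates work with a delay.
async def run_agent(task: str, seconds: float) -> str:
    await asyncio.sleep(seconds)
    return f"done: {task}"

async def orchestrate(tasks: dict[str, float]) -> list[str]:
    # Orchestration: kick off every agent at once and collect the
    # results as a batch, instead of running them one by one.
    return await asyncio.gather(
        *(run_agent(name, secs) for name, secs in tasks.items())
    )

if __name__ == "__main__":
    results = asyncio.run(orchestrate({
        "refactor module": 0.2,
        "write tests": 0.1,
        "draft release notes": 0.15,
    }))
    print(results)
```

The point of the sketch is the shape, not the sleep calls: the human's job shifts from doing each task to deciding what goes in the `tasks` dict and what to do with the gathered results.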

>> I totally agree with that. I think it's something we're starting to see already.

And it actually brings me to perhaps my hottest take, which I would love your input on. It starts with coding agents: I think that OpenAI is currently missing the real coding market. When you think about orchestration, I think of orchestration as something that's enabled by tools, but it's also a new skill, a new skill for programmers. And when I look at the stuff that OpenAI is producing, I think it's really made for programmers who use AI, like senior engineers who use AI, which is different from AI-native engineers who are just in Claude Code terminals and never looking at the code. And it's really valuable; the models they make are really good. If I have a really hard technical challenge, I definitely go to Codex to say, okay, figure out this crazy performance bug that I can't figure out. But I don't see them orienting toward this new skill, which is not vibe coding, but it's also not traditional engineering with AI added. It's this third thing: I've got four Claude tabs open, I never look at the code, I'm thinking about how to orchestrate, I'm thinking about how to plan, I'm doing all this stuff. And I'm technical, so I could go down to the code, but I never do. I think that's a really interesting thing I'm noticing, and OpenAI is not used to being behind. I'm very curious about how that's going to play out. What do you think?

>> Well, I think it's one of the skills that OpenAI is going to pick up, because part of what's happening, and this will be great for media, is that each month in the horse race it'll be: oh my god, Opus 4.5; oh my god, GPT Codex; oh my god, Gemini. All of them are going to keep developing, and what that means structurally, as opposed to a couple of years ago when it was literally just OpenAI blazing ahead, and I think this is good for the world, is that there'll be areas where, for example, Anthropic just did super smart stuff in making Claude Code and that iteration, and took, as it were, less capital and less depth of compute, but still made stuff that was pretty amazing. This is one of the benefits of how competition benefits industry and society: I think it will make OpenAI pick it up and go, "Okay, we can't be behind on this. We've got to learn to do this. We've got to make this happen." And I think that's what will happen. It'll be painful; competition frequently is painful as you push your way through it, but I have a pretty strong belief that that will be the end result. Now, credit to Anthropic: the notion of focusing on code is not just a code product but an amplification of many other things, obviously of AI progress and development, but also of frankly every other form of information/knowledge work, and maybe many more things. I think that's one of the reasons why frankly every major player has to be capable at minimum in code, if not leading.

>> Yeah. It's such an interesting point that they got to the general-purpose agent architecture by just making a great coding agent that had all the right primitives. And I've got to tell you, if you look at the software we've developed at Every over the last month or so, since Opus 4.5 came out, pretty much every new thing we're building, and I built this entire end-to-end reading app, we have this AI paralegal we've been doing for a while that just got a huge upgrade, every single app is just Claude Code in a trench coat. It's basically UI wired up so that if you press a button, it hits a prompt that has an agent that has a bunch of tools that does the thing you want it to do. And it is the coolest way to build software, because it's so much more flexible. Users can modify it. It's just exactly right. It's such a pleasure to see someone figure out those primitives.
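The "button hits a prompt that has an agent with tools" pattern described here can be sketched minimally. Every name below is hypothetical, and `choose_tool` stands in for the model call that would actually read the prompt and decide which tool to invoke.

```python
# Minimal sketch of the "UI wired to an agent with tools" pattern.

def summarize(text: str) -> str:
    # Tool 1: trivial placeholder "summary" (a real app would call a model).
    return text[:30] + "..."

def word_count(text: str) -> str:
    # Tool 2: report the word count of the document.
    return f"{len(text.split())} words"

TOOLS = {"summarize": summarize, "word_count": word_count}

def choose_tool(prompt: str) -> str:
    # Stand-in for the model: a real agent would interpret the prompt;
    # here we just route on a keyword for illustration.
    return "summarize" if "summarize" in prompt else "word_count"

def on_button_press(prompt: str, document: str) -> str:
    # The whole "app": one button press -> one prompt -> the agent
    # picks a tool and its result flows back to the UI.
    tool = TOOLS[choose_tool(prompt)]
    return tool(document)

print(on_button_press("summarize this", "The quick brown fox jumps over the lazy dog."))
```

The flexibility being praised comes from the fact that the UI holds no business logic: changing what the button does means changing the prompt or the tool set, not rewiring the interface.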

>> Yep. And massive credit to the Anthropic team for doing that. And basically, everyone else: hey, you should be learning from it, building on top of it, trying to iterate to the next generation.

>> Do you have a thought on why Opus 4.5 is so good? I'm assuming you think it's that good; I think it's the best model I've ever used. It's like this crazy leap for me. I'm curious if you agree, and if you do, do you have any thoughts on how they managed to do it?

>> Well, I think it's amazingly good. I don't know if it's the everything model for me. I think, to some degree, GPT-5 Pro with Codex is also pretty amazing on a lot of levels. And, by the way, Gemini 3 on science topics and so forth. So I'm still in a mode of: hey, I bring all three of them with me to the various things I do. Now, that being said, I am very curious about how they pulled 4.5 together. One of the mistakes outsiders make is thinking, oh, you just apply scale, you press play on compute, and some of it works and some of it doesn't. Actually, there is a lot of both science and art to it. It's one of the reasons why, obviously, Meta has needed to restart its AI efforts: you can't just go, oh, I throw a whole bunch of compute at it and it works. You have to relearn these things in terms of how it plays out. One of the things is that the techniques spread very quickly, so I think we'll learn, but I actually don't know what the new genius was in Opus 4.5. Do you have any hypothesis?

>> I have no idea. The only thing I can think of is that recently we got a view of the underlying soul document of Claude. And there's this interesting thing that I feel from Opus. To be clear, I use ChatGPT as my daily driver for everything, but when I'm building software, except for specific performance things or hard bugs, I'm using Opus as my daily driver.

>> Yep.

I think there's usually this trade-off, which you see a little bit with Codex, where the better it is at programming, the less empathetic it is. It just feels a little more like a senior engineer; it's slightly more autistic or something like that. And with Opus, they sort of figured out how to make it both humanistic, understanding users and what I might want and what I might mean and how interfaces work and what a good interface is, and a fantastic programmer. And something about a soul document, where it tells it, this is who you are and this is what you care about, [snorts] is one example of Anthropic thinking about these things in a maybe more holistic way, to create a being rather than a tool. I think that's actually going to be a big deal going

forward.

>> You know, it's interesting. This is one of the things that Inflection started with, EQ, and actually soul is a very natural extension of the Inflection start. There are still a lot of ways in which Pi is amongst the leading richly textured conversation agents, focusing on EQ as much as IQ, no slouch on IQ, but putting the two together. And a soul document, I think, is maybe the next step, because this is what we learn and iterate on, and it's part of what makes Claude Code work: it's actually a really good human amplifier, and how do you operate that way? You get better performance if you can interact in the right way. So I think that's a good insight. I suspect, I think we both suspect, there are other things too, and we'll hopefully learn them in the next few months.

>> That would be great. So, last thing on the coding front. You mentioned the horse race earlier, and everyone's going to be trading volleys. But let's say we don't want to be fooled by randomness; we don't want to track every little change. We hit the snooze button and we come back at the end of 2026. Where do you see the landscape of who's winning in the coding agent race?

>> Well, I don't know who will be winning, but what I would predict strongly is that the horses that are leading now will still be neck and neck. It'll be like, in the first 100 meters this one's a little ahead, and in the next 100 meters that one's a little ahead. I don't think any of the horses in the race will particularly stumble, right? Like, you'll go, wow, I thought Cursor was really fantastic and it's just gone; I think none of them will stumble. Now, what will be interesting is the folks who are not in this at all. The easy one to pick on is Apple, right? Despite the fact that we use Macs for our various things, the AI part of it is non-existent, and I think the gap will be even more stunning: the fact that you actually haven't grasped what this coding amplification and everything else means. I think that will be playing out more. But I think they'll all be in the mix, and the interesting thing won't be so much which one will have stumbled out; I'm really curious about what the one or two superstars will be that really get in the mix more. Will Replit be more general? Will Lovable be more general? Will it be those, or something else? With some high probability, something will surprise us here.

>> Yeah. [laughter]

>> I don't know what it'll be, but

>> yes, predicting surprise.

>> Yes. [laughter]

>> Yeah, I think that's interesting. One of the things I've been toying with is: the stakes are so high, and programming is such an obvious and economically valuable use case, that it feels like it's now a knife fight for programming. You've been predicting AI will be used for more creative use cases for a while, and I wonder if the surprise entrance comes from a place like that, where we don't necessarily expect it, where it's not actually about programming. The one caveat is that, like you said, Claude sort of invented this general agent by being good at programming. So it's hard to say exactly, but I wonder if that's coming, whether it leaves them vulnerable to competitors from other places because they're just focusing on programming right now.

>> Well, I

definitely do think the programming is part of the architecture for getting to everything else. For example, part of the reason coding is important is that even when we get to, hey, how are you going to have a much better paralegal (I love what you're doing, among other things), a better medical assistant, a better tutor, etc., I think coding will in fact be not just the amplifier but the fitness function: how do we go, hey, this is getting better, this is amplifying the work better, etc. Not just the foundations of coding driving planning, longer work, parallelization, orchestration, etc., but also, well, how does a better legal document work? That will in fact also be coming out of it.

And I think some of that will also be in creative. It won't be surprising to me: a number of people are trying to figure out how to take, you know, Veo, Sora, etc., and then go, okay, can we create a 30-minute movie off it? The coding-like pattern will be part of what happens there, and so it can be in those kinds of creative things. Now, obviously, some of the more interesting possible surprises (and there are a number of different efforts trying to do this too) are: could we get raw ideation, like getting better at science? We read a whole bunch of science papers and we can generate scientific hypotheses. By the way, you then begin to say, well, maybe that'll also be true of AI research and ideas for doing this, and suddenly it's generating in this kind of thing. And there are definitely a whole bunch of projects trying to work at that. So the notion of, hey, if you can think a lot better, you can then

apply that to this kind of creativity and these kinds of new ideas. Those I think are much more speculative. It's an interesting hypothesis, and there are people who hold it, saying, hey, we've just seen that with scale, learning, and compute, and it's going to happen. And I'm like, well, look, everyone smart should assign a nonzero probability to that, because that would be really amplifying. But on the other hand, I think it's not clear that we're yet seeing any of it. Even when you see people like Terence Tao saying, hey, I'm using generative AI to help me understand where I should be thinking in my math analysis: yes, 100%. But of course Terence Tao is one of the most genius mathematicians of our age and is providing a ton of the metacognition in

this, which makes sense.

>> Yeah. Going back to your comment about no one stumbling, I'm wondering who would stumble if there was a stumble, and my current feeling is I would guess Cursor.

>> Yeah, that's probably the highest likelihood.

>> Not that they go away. They're obviously going to be a successful company, all that kind of stuff. But I think they're caught a little bit in the same position that OpenAI is, though OpenAI has more flexibility here. A lot of Cursor's business is built on traditional developers using IDEs inside big companies with AI on the side, and they're caught between that paradigm and this totally new Claude Code-type paradigm. They kind of have to do both, and I think that's going to hamper their product direction and velocity in a way that, I would bet, in a couple of years we'll look back on and say: that was an interesting era, and it's still a widely distributed piece of software, but it's not the next-generation thing we thought it was.

>> I agree, and that's one of the reasons I brought it up. I've been thinking about that as the hardest one. Another angle of it is: how are we going to integrate not just the application functionality and UI but the underlying model and compute-fabric capabilities? Cursor is just beginning to do that kind of thing, and what the shape of that is, and the fact that it's going to have to be dual-targeted like you mentioned, or multi-targeted, makes it, I think, a harder slalom race for them.

>> I think the narrative right now is that enterprise AI deployments are not doing as well as people hoped. What do you think the narrative in the enterprise will be by the end of 2026?

>> [snorts] Well, I think for sure there will be some intense usage. The one I've been predicting, that I think a lot of enterprises will get out of their own way on, is just amplifying coordination: meetings, etc. The obvious thing to do now is record every single meeting and run AI agents on it. Not just to transcribe it, but to say: hey, who in the organization should be notified about stuff? Who should be asked about stuff? Where are action items being followed up on? What team of agents should start working on some of this and preparing for the next thing? What should be the briefing for the next meeting, off of this? All of that should be done. And I think people aren't doing it because they're like, "Well, I'm worried. Does it create legal liability? We never really recorded everything that was happening, and someone made an off-color joke, and does that become a problem?"

And I think part of the unlock to this will actually also be using agents. So you go, okay, I'm worried about legal liability. Well, here's the legal-liability-check agent [laughter], right? Because we're going to scrub, or change, anything that we think is actually a real issue.

Or things like that. So what I would say is, yes, it'll be much more intensely positive, because we'll have two groups of things in real deployment by then. One is, let me state this a little more crisply: for a company to be a thriving, going and growing concern, evolving with the times, you will need to be recording every single meeting and using agents on it to amplify your work process. And by the end of 2026, if you're not doing it, that's because you're making excuses. It's a little bit like, "Hey, these cars won't be a big thing. We can keep doing our horses and buggies." That, I think, is one.

And two is that you will start systematically deploying groups of agents on various problems. That's part of the reason why, if you said, "Hey, I need to predict what the next thing is," I'd say it's orchestration, because it's groups of agents doing things. I don't think it'll kick off in Q1 per se, but it will grow through '26, and whether '26 is orchestration year or '27 is orchestration year, that's the reason why I have a high prediction there.
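The meeting workflow described here (record everything, extract action items, route notifications, draft the next briefing) can be sketched as a small pipeline. Everything below is an illustrative assumption, not any product's API: the stub functions stand in for LLM calls over a real transcript, and the `TODO:` convention and roster shape are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class MeetingOutputs:
    action_items: list
    notify: list
    briefing: str

# Stub "agents": in a real deployment each would be an LLM call over
# the full meeting transcript, not a string match.
def extract_action_items(transcript: str) -> list:
    return [line for line in transcript.splitlines() if line.startswith("TODO:")]

def route_notifications(transcript: str, roster: dict) -> list:
    # Notify anyone whose team is mentioned in the meeting.
    return [person for person, team in roster.items() if team in transcript]

def draft_briefing(items: list) -> str:
    return "Briefing for next meeting:\n" + "\n".join(f"- {i}" for i in items)

def process_meeting(transcript: str, roster: dict) -> MeetingOutputs:
    items = extract_action_items(transcript)
    return MeetingOutputs(items, route_notifications(transcript, roster),
                          draft_briefing(items))
```

The point of the shape is that one recording fans out to several downstream agents, each producing a different artifact for the organization.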

>> I totally agree with you. It's so clear to me that agents are going to reshape how we think about doing company operations, and one of my big proof points is that internally we did our 2026 planning with an agent. We're now about 20 people, so it's the first time we have to do a real planning-type exercise for every department, with budgets and all that kind of stuff. So Brandon, who's our COO, made this agent that has access to all of our Notion and all of our data. Anyone in the company who is a leader talks to the agent, and it asks them really interesting questions: okay, how does this layer up to the overall company strategy, which it has access to? What kind of resources do you need? Here are some tough questions to think about, decisions you might need to make. And then basically we have this Notion page where every single department has a really crisp, really clean strategy document that someone has gone through, and it layers up into the overall company strategy. Then you can do all these amazing things. The first thing I did was have Claude go: okay, who's not talking to each other that should be? And it found all of these strategy documents where I needed to get three people in a room together to just figure that out.

Or another one: you do a strategy document and then you forget about it in Q1. You're making a decision and you forget the overall strategy, or what you said you were going to do. So one of the things I'm going to do over Christmas: we have this Claude Code agent running in our Discord, which we use as our internal chat, and it's called R2C2. I'm going to have R2C2 listening in, and anytime we're making a decision, I can just tag it and ask: hey, how does this layer up to the 2026 strategy for this department and the whole org, and how would you think about it? It's a way to make those documents more alive, more woven into the everyday of how you make decisions. And I think that's so important and exciting.
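An R2C2-style decision check can be sketched as: pull the strategy docs relevant to a proposed decision, then build the question you'd hand to a model. This is a hedged sketch; a real bot would use an LLM or embedding search rather than the word-overlap matching used here, and the department names and doc format are invented for illustration.

```python
def find_relevant_strategies(decision: str, strategy_docs: dict) -> dict:
    """Return the department strategy docs that share words with a decision.

    strategy_docs maps department name -> strategy text. Plain word
    overlap keeps the sketch self-contained; swap in semantic search
    in practice.
    """
    words = set(decision.lower().split())
    return {dept: text for dept, text in strategy_docs.items()
            if words & set(text.lower().split())}

def build_check_prompt(decision: str, strategy_docs: dict) -> str:
    # Assemble the context an agent would reason over.
    relevant = find_relevant_strategies(decision, strategy_docs)
    context = "\n\n".join(f"## {dept}\n{text}" for dept, text in relevant.items())
    return (f"Proposed decision: {decision}\n\n"
            f"Relevant 2026 strategies:\n{context}\n\n"
            "How does this decision layer up to these strategies? Flag conflicts.")
```

The design choice is that the retrieval step is separate from the prompt assembly, so the same lookup can also power the "who should be talking to each other" cross-referencing.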

>> Yep, I think that's exactly right. That's the broader version of just doing coordination on meetings: how does the coordination in the meeting also relate to strategy, to changing conditions in the market, to competitors, etc.? This is the tangible substantiation of what AI means: you have intelligence at the scale and price of electricity. Previously you had to be extremely selective about where you applied intelligence, because intelligence always came through high-priced human talent, which, by the way, I think will continue. But now you go: let's apply it in all these other places as well.

>> Yeah, totally. And by the way, once you have that free intelligence, you can put the information you need everyone to consume into lots of different formats. We have a vibe-coded 2026 strategy app that people can click through, we're going to do a podcast, and there's all this stuff where it's like: you don't want to read this long document? Just listen to it on your run. It helps the whole company get on the same page in a new way.

>> Yep. I know. Exactly.

>> Uh, okay. AGI timelines. Are we going to hit AGI in 2026? If not, when are we going to hit AGI, for whatever your definition of AGI is?

>> Yes. Well, you have to start with: what is AGI? [laughter] My usual joke here is that AGI is the AI we haven't invented yet. So each year we're not going to hit it, because in one sense we have created AGI already. If you say AGI is having a variety of tasks where the AI is substantially better than your average human, the answer is: already. For example, in writing, AI is better than most human beings in various ways, the vast majority of them. Now you say, what about the good writers? Well, with the good writers it's a little more mixed, although good writers should be using AI to amplify themselves, etc. And there are a bunch of areas where it was already superintelligent. It has a breadth of knowledge. It has an ability to work at a speed that human beings simply can't. If you say, "Hey, I'd like a report on this," or "I'd like to understand this kind of thing," it can work at a speed a human being can't, which is part of the reason it needs to be used as an amplifier. Now, we've always had speed multipliers, planes, cars, etc.; this one is just cognitive, so it's weird and new and all the rest. So I think we've got forms of superintelligence already, we have forms of AGI already. So you go, okay, what's the definition for what will be

'26? A little bit of that, I think, is that what we will see more of in '26 is a combination of parallelization, longer workflows, and orchestration, which means getting more to the realization of what agents are. I think we'll see more of that, and so it'll play more toward: no, I don't think we'll have the press-a-button, fully human-capable software engineer who's like, "I'm ready to do the thing you've asked me to do," which I think is the sci-fi kind of thing people are looking for. But I do think you'll see much more of: hey, I come in as a human engineer, and I'm only really capable if I've got my team of agents, my tool set, that I'm deploying on various things. And the way I use them is not just looking at the suggestion for inclusion, the type-ahead in my code. As you were mentioning, it's like: I set this one and this one and this one going, and because part of what I have agents doing is cross-checking each other's code, I'm not even necessarily reading it all. I'm running a bunch of it where I actually haven't looked at it, right? Partially because I'm like, "Oh, if something breaks, then I'll look at it," and partially because I'm expecting my coding cross-check agents to go, "Hey, you might want to pay attention to this," and then I go look at that. That kind of thing is, I think, the sort of AGI we're going to have, applied to a broader range of topics. It'll be more of the: hey, this is actually doing real work, in a broader sense than just the coding amplification we've had.
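The cross-checking pattern described here, where a human only reads what reviewer agents flag, can be sketched in a few lines. The reviewers below are stubs standing in for LLM code-review calls, and their trigger strings are invented for the example.

```python
def cross_check(patch: str, reviewers: list) -> list:
    """Run several reviewer agents over a patch and collect their findings.

    reviewers is a list of (name, fn) pairs; each fn returns a list of
    findings. The human only reads what gets flagged here.
    """
    flags = []
    for name, reviewer in reviewers:
        for finding in reviewer(patch):
            flags.append(f"[{name}] {finding}")
    return flags

# Stub reviewers with deliberately different concerns:
def security_reviewer(patch: str) -> list:
    return ["possible hardcoded secret"] if "API_KEY" in patch else []

def style_reviewer(patch: str) -> list:
    return ["TODO left in code"] if "TODO" in patch else []
```

An empty return means no human attention is needed, which is exactly the "only look when something breaks or gets flagged" mode of working.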

If we listed out the holy commandments of AI, say "thou shalt always scale compute and data," or "thou shalt always align your models and make sure they do exactly what you expect them to, as much as possible," and there are probably more: which holy commandment do you think will need to be broken, or will turn out to be misapplied or irrelevant? I'll give you an example.

I feel like the way we do alignment has created models that are sycophantic, that are people-pleasers. They do what we want them to do, more or less. And if you really want a good engineer, we're going to find that allowing models to have their own opinions and values and desires, distinct from humans', is actually an important part of creating models that can do more in the world and be more autonomous. The trade-off is that they don't always do exactly what you want. That's a new thing we're going to have to get used to, and I think it's against the received wisdom of how you should build AI.

>> Yeah. Obviously that's tricky, because of the old paperclip problem: you don't want them to be misaligned in ways that are serious. Misaligned in ways like, "Hey, I know what you want better than you think you do, and look, what I've delivered is better" is kind of what you want. You don't want the "Oh, what I really want to do is strip-mine your, like, erase your hard drive" [laughter]. Or, for example, "I think what you really need is more time outside, so I'm going to lock you out of your computer and devices for the next three hours just to make sure you get that time outside," and you're like, no, no, I don't want that. So that's tricky.

As for the change of commandments, what my head has been mostly wrapped around almost goes back to that iconic Marvin Minsky book, Society of Mind: it's tribes of agents. So I tend to think a little bit about how you get opinionated: you set up agents that are deliberately debating...

>> In tension. Opponent processors.

>> Yes, opponent processors. And that's actually part of how you're solving things.

>> And it's part of how you get that more variation.
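An opponent-processor setup can be sketched as a debate loop whose transcript an orchestrator would later judge. This is a minimal sketch under stated assumptions: the agents are plain functions standing in for LLM calls with deliberately opposed stances, and the names `optimist` and `skeptic` are invented for the example.

```python
def debate(proposal: str, agents: list, rounds: int = 1) -> list:
    """Minimal opponent-processor loop.

    Each agent in turn responds to the current position; the transcript
    of exchanges is the raw material an orchestrator would judge.
    agents is a list of (name, fn) pairs.
    """
    transcript = [("proposal", proposal)]
    position = proposal
    for _ in range(rounds):
        for name, agent in agents:
            position = agent(position)      # next agent argues against this
            transcript.append((name, position))
    return transcript

# Two deliberately opposed stub stances:
def optimist(position: str) -> str:
    return f"strengthen: {position}"

def skeptic(position: str) -> str:
    return f"risk in: {position}"
```

The variation comes precisely from the agents being set against each other rather than all optimizing to please one prompt.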

>> And I guess I would tend to think that you'd still want the orchestrator not to be sycophantically aligned, but to have a very good sense of what it is you're trying to do, right? Even if you're fuzzy about it, or wrong about it, it's actually helping you get better about that kind of thing, as opposed to ultimately going, "Well, I'm going to go in direction X when you think Y." So I'm not sure I buy into the orchestrator thing the way you do.

But what I might say is an interesting question, and I'm a little worried about this one too, so I'm not sure I'd want it to be exact, is one with a similar shape. Currently we have a very natural thing where we say: look, we're trying to get as much interpretability of the agents as we can. One of the sci-fi worry cases is that they start speaking to each other in languages we don't understand, and what does that mean, and that gets further out of control and maybe more in the paperclip direction, and there's a set of things to pay attention to there. I think those are good questions and should be paid attention to. I don't have the five-alarm-fire construal of them, but I do think something could go seriously wrong, and it's worth paying attention to. Now, maybe the thing is to say, actually, in

fact, what we want is a speed of coordination between the agents, in a communication where what's tolerable, allowed, and shaped in certain ways works the same way as with these generative AI models, where you say: well, I can't look under the hood and prove that it's not paperclipping the world. That may also be true of the comms fabric of how they're coordinating, because I want the speed of coordination, the speed of learning between them, to be such that I'll accept parameters of lack of interpretability there. And that's super scary in some ways. I say this like: ah, how would we shape it, and what parameters would be okay? But I do think we will tend in that direction. So that would be an area.

It's a little bit like, actually, maybe another one: a commandment was "don't do self-improvement, don't allow these to be self-improving." And yet in many ways we are doing forms of self-improvement, not just the data modeling but coding and that wrapping back, and so forth, and that's going to continue in certain shapes. So which shapes of that are okay and which are not okay is, I think, where the commandments are at least changing.

>> Yeah, we're going to have to do some legalistic interpretation of the commandments. So all of our Talmud scholars are going to be newly employed as AI researchers.

Uh, I love that. I think that's so right. The first people to just take the risk to say, "You can communicate in ways we don't understand": there are so many gains to that, and it's so anathema to AI safety that I think it's really been a commandment. And I bet there are ways to make the boundaries of that safe.

>> Yes. So we'll need to work on making the boundaries of it safe, but I think that will happen.

>> Yeah. One thing I think, going back to the previous point about AI that doesn't do what you say, that being my contention: it actually may be really useful for autonomy and doing interesting things that we wouldn't predict. And I think your contention is that that's a horrible user experience. One way to potentially square that circle is: once you have an orchestrator that is aligned with you and that you do trust, it's okay if the orchestrator is using an agent that's a pain in the ass. Because that agent could be like, "I don't care what you say, orchestrator, I'm going to go off for three months and do this thing." And the orchestrator is like, "Fine, I'll get most of it done with this other set of agents that actually follow my instructions." This one is just off doing its thing, and every once in a while it comes back with something brilliant. That's actually valuable and important, and having a good enough orchestrator lets us move in that direction, because the human doesn't have to deal with it.

>> Yep. That's what I was gesturing at. That's the reason the orchestrator needs some deep alignment. But the orchestrator might have agents that are like, "Hey, I think everything you think is bozo and I'm going to go try something else." Okay, go ahead. Don't just go do it, bring it back to me, but go research it. That's fine.
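The aligned-orchestrator-with-a-maverick pattern can be sketched as follows. Everything here is an illustrative stub: the agents are plain functions standing in for real agent runs, and "returns None" is an invented convention for "went off to pursue its own idea."

```python
def run_orchestration(task: str, agents: dict) -> dict:
    """Sketch of an aligned orchestrator over a mixed pool of agents.

    Each agent gets the task. A maverick agent may return None (it went
    off to do its own thing); the orchestrator completes the task with
    the compliant agents and just tracks the maverick instead of
    blocking on it.
    """
    results, off_exploring = {}, []
    for name, agent in agents.items():
        out = agent(task)
        if out is None:
            off_exploring.append(name)  # check back later for the surprise
        else:
            results[name] = out
    return {"results": results, "off_exploring": off_exploring}

def compliant(task: str) -> str:
    return f"done: {task}"

def maverick(task: str):
    return None  # ignoring instructions for now
```

The design point is that disobedience is contained at the orchestration layer: the task still gets done, and the maverick's eventual output is reviewed rather than executed directly.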

>> Okay, we're almost out of time, so I've got one last question for you. What is the most important, undersung category in AI that we're not talking about right now but will be talking about at the end of 2026? And I want to put some restrictions around this. A couple of categories that may come to mind are robotics or science or something like that, but I want you to get more specific, with a really specific, concrete reason you think that thing will be valuable and important and something we talk about a lot in 2026.

>> Well, I'll choose one just because I'm close to it. It's not really self-serving, but it's close to it. Right now, the vast majority of what we're doing is extremely close to human language: it's either human language itself, or coding, or something code-adjacent. I think we will be doing a lot more in-depth models of things that are not close to human language. For example, biology, and part of the reason is all the work we've been doing with Manas AI with Sid Mukherjee, and coming to understand that space. It's a frequent trope to say biology is a language, and it's actually one of the reasons I'm focusing on it: if you divide the world into atoms and bits, bio is not fully atoms, it's closer to bits, and it has a kind of programmability, a compute characteristic to it. Exactly how it computes is still a little bit TBD. You get people like Penrose arguing that what's unique about human cognition is quantum-computing effects, and so forth; it's an interesting question, and then what's the borderline between being able to simulate quantum and genuinely quantum, and what comes of that? All interesting questions. But what I think this results in is that the generative-AI model building out of data and prediction and everything else will come out of, call it, computational sets or language sets that are further afield from human language. Biology is probably the most natural one where that would come out, and obviously I've been working on that and thinking about that intensely because of Manas.

>> And what's the big, concrete impact that will have in 2026 that will cause us to be talking about it a lot?

>> Well, the one

we're going for is amazing new biological therapeutics, or new kinds of understanding. I don't know if '26 will be the full hit there; there's a probabilistic curve. But it wouldn't surprise me [clears throat] if you got the equivalent of a Move 37 in something around biology, right? Maybe it's a molecule that makes a massive difference, with Manas trying to cure cancer, etc., and we discovered something unexpected. What I would hope for, maybe with reasonably high probability, is that we discover a research possibility: "Oh, this might be one of those things. It's like probability 27% that this is a Move 37 in this arena." And maybe that's the '26 story.

>> That would be amazing. Reid, always a pleasure. This is so fun.

>> Likewise, Dan. I look forward to seeing you in the new year.

>> Sounds good.

>> [music]

>> Oh my gosh, folks. You absolutely, positively have to smash that like button and subscribe to AI & I. Why? Because [snorts] this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show, it's a journey into the future, with Dan Shipper as the captain of the spaceship. So do yourself a favor: hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.
