Superagency: Reid Hoffman's Bull Case for AI
By Bankless
Summary
Topics Covered
- Super Agency Elevates Everyone
- AI Democratizes Superpowers Massively
- Bloomers: Accelerate Intelligently
- AI Improves the Existential Risk Portfolio
- Innovation Drives AI Safety
Full Transcript
can you guarantee me that killer robots will never be built? The only existential risk for human beings is not killer robots: there's pandemics, there's asteroids, there's nuclear weapons, there's climate change, and the list kind of goes on. And so you have to look at existential risk as a portfolio, namely it's not just one thing, it's a set of things. And so when you look at any particular intervention, you say, well, how does this affect the portfolio? My very vigorous and strong contention is that AI, even unmodified at all, is net, I think, very positive on the existential risk portfolio.
Welcome to Bankless, where today we explore the frontier of AI. This is Ryan Sean Adams, I'm here with David Hoffman, and we're here to help you become more bankless. The question for today: will AI give us super agency, or will it be used to enslave us? We have Reid Hoffman on the podcast today. He gives his bull case for AI: why it's good, why we should accelerate AI into the future, and how it will turn each of us into super agents, and to him that equals more freedom for everyone. I think the Bankless journey is all about becoming a more sovereign individual. That's what
David and I have talked about since Inception and it's increasingly hard to imagine being a sovereign individual without crypto which we've talked a lot
about, but also without AI. Like, crypto gives you the ability to own things, but AI seems to be the ability to control your own destiny, and that's why we're doing an AI episode today with Reid
Hoffman to help stay ahead of the AI curve few things we discuss super agency doomers bloomers and Zoomers what could go right how to use AI American super
intelligence, and finally we end with the question to Reid: what if this whole AI thing is overhyped? Stay tuned for that answer. Superagency is the title of Reid Hoffman's book, which is coming out this week if you are listening at the time of release, and so this is all on the back of that. Ryan, of the two co-hosts of Bankless, read the book; I did not, and so I'm more along for the ride. I'm in listening mode, asking a few questions here or there, but it's really Ryan in the driver's seat for this episode. So I hope you guys enjoy the episode with Reid Hoffman. But first, before we get there, a moment to talk
about some of these fantastic sponsors that make this show possible. Uniswap Labs is making history with the largest bug bounty ever: $15.5 million for critical bugs found in Uniswap v4. This isn't just any update. Uniswap v4 is built with hundreds of contributions from community developers and has already undergone nine independent audits, making it one of the most rigorously reviewed codebases to be deployed on-chain, and with $2.4 trillion in cumulative volume processed across Uniswap v2 and v3 without a single hack, the commitment to security and transparency is rock solid. Now Uniswap Labs is taking an extra step to make v4 as secure as possible with a $15.5 million bug bounty. Head to the link in the show notes to dive in and participate in the Uniswap v4 bug bounty. All the details, from eligibility
and scope to the rewards, are there. The Arbitrum portal is your one-stop hub to entering the Ethereum ecosystem. With over 800 apps, Arbitrum offers something for everyone. Dive into the epicenter of DeFi, where advanced trading, lending, and staking platforms are redefining how we interact with money. Explore Arbitrum's rapidly growing gaming hub, from immersive role-playing games and fast-paced fantasy MMOs to casual luck-battle mobile games. Move assets effortlessly between chains and access the ecosystem with ease via Arbitrum's expansive network of bridges and on-ramps. Step into Arbitrum's flourishing NFT and creator space, where artists, collectors, and social converge, and support your favorite streamers, all on-chain. Find new and trending apps and learn how to earn rewards across the Arbitrum ecosystem with limited-time campaigns from your favorite projects. Empower your future with Arbitrum. Visit portal.arbitrum.io to find out what's next on your web3 journey.
What if the future of web3 gaming wasn't just a fantasy but something you could explore today? Ronin, the blockchain already trusted by
millions of players and creators, is opening its doors to a new era of innovation starting February 12th. For players and investors, Ronin is home to a thriving ecosystem of games, NFTs, and live projects like Axie and Pixels. With its permissionless expansion, the platform is about to unleash new opportunities in gaming, DeFi, AI agents, and more. Sign up for the Ronin wallet now to join 17 million others exploring the ecosystem. And for developers, Ronin is your platform to build, grow, and scale. With fast transactions, low fees, and proven infrastructure, it's optimized for creativity at scale. Start building on the testnet today and prepare to launch your ideas, whether it's games, meme coins, or an entirely new web3 experience. Ronin's millions of active users and wallets means tapping into a thriving ecosystem of 3 million monthly active addresses ready to explore your creations. Sign up for the Ronin wallet at wallet.roninchain.com and explore the possibilities. Whether you are a player, investor, or builder, the future of web3 starts on Ronin.
Bankless Nation, very excited to introduce you to Reid Hoffman. He is a founder and investor. He co-founded
LinkedIn which I'm sure many of you have uh used in the past he's extremely active in Silicon Valley um particularly over the last couple of decades and more recently he's been very close to what uh
we would call the epicenter of this whole AI thing. So he was serving on the board of OpenAI starting in 2018, notably, I should mention, because whenever someone talks about the board of OpenAI a lot of things will come up, but he actually left to go focus on AI investing before Sam Altman and the ensuing drama, and you guys remember all that. He's also a gifted writer and communicator. I've read several of his books; I think one of the canonical books for tech founders is this book called Blitzscaling, which is just phenomenal on how to grow an internet-scale business. And all of this preamble to say: now he's written a book on AI called Superagency, and I'd pretty much describe this as maybe Reid Hoffman's thesis for artificial intelligence and how it will impact us in the decades to come. Reid Hoffman, welcome to Bankless. It's great to be here, and I look forward to not only this conversation but future ones as well. Yeah, I mean, I think we're going to really focus on AI
because that's the subject matter of your your book but maybe in a a future episode we get into crypto because I know you have a lot of uh thoughts on that yeah I know I actually think I
bought my first Bitcoin in 2014, a little late, but, you know, it's earlier than most. Yeah, yeah, that's a good seasoning of time to buy Bitcoin, for sure. So let's talk about this book, let's talk about your thesis for artificial intelligence. When I heard that you were writing a book called Superagency, my first question, without reading anything further, was like, okay, super agency: who's Reid talking about? Like, who are the super agents? Is this the
humans do they become the super agents or is he talking about the the AIS themselves do the robots become the super agents so may maybe you could kind of start there could you define what you
even mean by super agency and like who gets it yeah so let's actually start even a little bit earlier with agency and then get to your excellent question which is what is agency agency is the
ability to kind of make plans do things in the world you know kind of make parts of the world you know according to your intentions and desires and to and to kind of you know kind of um uh Express
yourself in the kind of ordering of the world around you. And obviously nobody has perfect agency; that's, you know, theoretical deistic-like creatures, like God, has that. Maybe. Yeah, perhaps, and it depends even on what your particular theology is, so that's the reason I was being a little bit more vague. Right, so non-
denominational today in this podcast yes exactly and so um super agency um is the the precise term is kind of when uh
millions of human beings get access to the kind of a an elevating technology a transformative technology um what kind of the superpowers not only
they get as individuals, but society gets transformed. And so, for example, a canonical example is cars. So you go, well, it gives me superpowers because I can go farther, I can drive, I can get to
farther farther distances but as other people in this in society also get cars you know like suddenly you'd had to go down to the doctor's office to get an appointment now the doctor can come to
you and obviously later instantiations of this is you can get you know in card deliveries and you know all the rest and so super agency is kind of how we all get superpowers and so to your opening
question about are humans the super agents or are AI the super agents to some degree it's both um but it's but the important emphasis is that rather
than us as human beings and Humanity losing our agency we are gaining agency and by the way in a very similar pattern to the way that I gain agency when you
guys also get cars right it's it's it's not just me that gains agency with my own car I gain agency when you guys get cars and so that's the elevation of agency and hence super agency I was
almost thinking about your, like, book title: if you used a different synonym besides super agency, if you just titled the book Superpowers, right, and, like, who gets them, that's almost the
same discussion like or maybe maybe um when we get into this term agency is the term superpower and super agency are they kind of synonyms is the short form
version of this to just like we get all of this additional Choice surface area we have new abilities to do things that previous generations could not have imagined that feels like a superpower to
me. Is it kind of one and the same in your mind? Well, superpower, it's deeply related, the Venn diagrams have a high overlap, because the kind of elevation of capabilities are
superpowers, and every kind of new major technology gives us new kinds of superpowers. Now, some of it is, with a superpower, and as lots of people get superpowers, you know, individuals, institutions, societies, governments, etc., your agency changes some. So it isn't like, for example, the agency of people who were kind of driving horse-and-buggy carriages, that changed with cars, because it was like, well, no longer are the streets set up for you, no longer can you be doing this thing that you had been doing and were planning on doing, you know, no longer, for example, was the horse transport industry, you know, kind of as, um, kind of
Central and so these kind and by the way even like earlier Technologies Like Trains those changed uh in the kind of the ways that people would Express their agency and and be able to to work on it
so superpowers are a way that you extend your agency, but when it happens in a super agency context, it also transforms it and changes it. So that's the reason why it's not 100% the same, but closely related. Reid, something that we share in common is we actually both have podcasts. You have a podcast called Possible, and I think, Ryan, I stumbled on your podcast, and we noticed that you did an episode, an interview with yourself, but yourself was, we might call it in the crypto world, an AI agent. Now maybe this illustrates what you mean by super agency, and maybe you can take that metaphor all the way home, but how do we know that we are actually talking to the real Reid Hoffman and not your AI co-host bot that now is with us, actually, and the real human Reid Hoffman is somewhere else doing work in a different direction? How do we know, how do we know you're the
real Reid Hoffman? Well, that will get to be a more and more complicated question. At the moment, the video avatars are not actually, in fact, real time, so the Reid AI discussion has to be a little bit scripted, even though it looks like it's on a podcast, that it's a completely real-time thing. There's actually, in fact, you know, kind of running it through the ChatGPT instance that's trained on 20 years of my writing, and then more specifically getting the audio and video produced with the right kind of quality doesn't really, you know, enable that for kind of a full real-time stack today. But, you know, part of the reason of course I did it, put it on Possible, because what could possibly go right, was to start getting people familiar with the future universe, just as you guys are doing, you know, in kind of all of the technology broadly, but also around, of course, crypto and what do sovereignty and identity and all the rest of that mean. It's kind of like, here's a lens into the future, and we don't know exactly where the future is going, but we're trying to get everyone, you know, kind of participating, navigating well, etc., and that was part of the reason for doing Reid AI. But there is, you know, obviously at some point one
could get to, that is an interesting question, and, you know, my own hazard of an answer here is something a little bit more like, you know, well, crypto signatures and identity is surely what's happening, but of course, given that I'll probably have both the crypto signatures for me and for Reid AI, you know, that might still even be a live question.
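To make the crypto-signatures-and-identity idea Hoffman gestures at concrete, here is a minimal sketch, assuming the Python eth-account library; the challenge text and the generated key are purely illustrative assumptions, not anything Hoffman or Bankless actually uses. The human signs a fresh challenge with a private key only they control, and anyone can check that the signature recovers to a previously published address.

```python
# Sketch: answering "is this really the human Reid?" with an Ethereum-style signature.
# Assumes the eth-account package (pip install eth-account); the key, address, and
# challenge text below are made up for illustration.
from eth_account import Account
from eth_account.messages import encode_defunct

# In practice the human would hold this key in a wallet; here we just generate one.
human_account = Account.create()
KNOWN_PUBLIC_ADDRESS = human_account.address  # published ahead of time

# The interviewer issues a fresh challenge so an old signature can't be replayed.
challenge = encode_defunct(text="Possible podcast, 2025-01-28: prove you are the human guest")

# The human (not the AI avatar) signs the challenge with their private key.
signed = Account.sign_message(challenge, private_key=human_account.key)

# Anyone can verify: does the signature recover to the known address?
recovered = Account.recover_message(challenge, signature=signed.signature)
print("Signed by the known key:", recovered == KNOWN_PUBLIC_ADDRESS)
```

As Hoffman notes, this only proves control of a key, so if both the human and the Reid AI agent end up holding valid signing keys, which one actually signed stays a live question.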
Yeah, it's really interesting though. There's something very empowering about the experiment that you're running with Reid AI, because it leads to a promising future where, if individuals are sovereign over their own kind of AI agent twin, maybe that AI agent twin could go do work while they're, like, goofing off, they're doing something that they enjoy, maybe they're watching a movie, they're doing art, they're working out, something like that, and then there's Reid AI doing podcasts while all of this goes on, and, you know, the real Reid Hoffman sort of has ownership over that, and somehow that feels very democratizing. I want to get back to the through line of this conversation, though. When we talk about super agency, your thesis is, we understand
what a super agent or super agency is and how that's similar versus different to superpowers, and you said very emphatically that it's not just the robots that get it, the AIs do get it, but also the humans get it, right? Your view here is that it's going to be humans amplified by AI, that's the real unlock here. But, like, I have a question: within that subset of humans who get it, which humans are we talking about, Reid? Are we talking about the Silicon Valley elite in your thesis? Are we talking about, you know, the 1%, those that control most of the capital in society? Are we talking
about governments or are we talking about individuals because the distribution of this uh seems incredibly relevant to how we actually view whether
this is a good thing or not so I I think the pattern the path we're already on you know with you know hundreds of millions of people using
ChatGPT, and, you know, exposure to agents in other contexts, whether it's Anthropic, Gemini, Copilot, etc. So I think we're already seeing
hundreds of millions of what you're referring to as individuals but you know kind of call it access from you know kind of the a bulk of at least middle
class Western folks. Although, like, one of the things that was very cool, that I'd heard about from a friend who was traveling in Morocco recently, is that the taxi driver was using ChatGPT as the translator for, you know, like, where do you want to go, for the tourists. And so, you know, it's very
broad indeed now that being said I don't want to paper over the fact that we live in a human society that has you know kind of differences of wealth differences of power differences of
position not just between nations but within Nations and you know it that's not going to go away and so it wouldn't surprise me you know if you said well but
actually the kind of AI that the that the that the uh uh the wealthy have access to has some improvements in betterness than you know maybe real-time
you know responsiveness maybe you know number of gpus available etc etc than you know kind of a a lower income percent now that being said part of the reason I'm really optimistic is a little bit
like, you know, kind of smartphones, which is, you know, three-quarters of the world today has mobile phones, but the smartphone that, you know, kind of Tim Cook has or Jeff Bezos has or Sundar Pichai has is the same smartphone that, you know, the Uber driver has. And so I think
that the you know the kind of the the the the natural uh Drive in technology which includes AI is is building it for
the very Mass Market you know the billions um and so I I I think that I can confidently assert that superpowers
will be available very very broadly even if you know there's also some differences in in superpowers based on you know country and wealth and you know
kind of access but but I think democratizing will be uh will be the the name of the game so in your world AI is really a democratizing technology it's pretty like you know of
course, you know, if you're in the early adopter curve maybe you get things a little bit sooner, but generally it's going to take the form of the way cell phones did, where in the 1980s it was a large, you know, big brick that cost thousands of dollars, until the technology democratized, or the way the internet has kind of democratized things. It's not, because there is this fear out there, Reid, that AI is kind of going to be controlled by superpowers, let's say governments, or a small cabal maybe in Silicon Valley, that they're going to have the technology and kind of the rest of us plebs, like, maybe won't. But you're saying it'll be more similar to, I guess, the propagation of the internet or the cell phone, in that
it will be fairly widely distributed and actually be like a a technology that's available to the general public yes in short and part of that's also so because
you know the same you know kind of call it Silicon Valley ecosystem uh that built smartphones that built the internet you know and obviously there not just Silicon Valley
but there's a lot of Silicon Valley contribution um is also very similarly building you know kind of AI uh both in the hyperscalers and the large models
but also you know at this point there's so many thousands of startups um that you know they kind of you know uh you could start mapping uh various uh
cryptocurrencies per startup, you know, there's similar numbers of orders of magnitude. Let's talk about some of the AI religions that exist, because I think this was a fairly fantastic framing in your book and one of my chief takeaways. So you talk about, and I'm using the term religion, you could say ideology, you could say philosophy, but the point is that each of these categories, I think all of them, have an expected outcome or an article of faith, because of course the future is unknown. But anyway, so the four categories in your book of, you know, people with thoughts about AI, and it's useful I think to categorize them to sort of understand the worldview a bit better: one is the doomer, okay, the second is the gloomer, the third is the bloomer, and the fourth is the zoomer. Okay, now these are four different categories,
subsets of of um groups with different perspectives on AI could you just Define those four categories for us Absolutely I'll go
through them in that order which is um uh doomers basically are like AI is the destruction of humanity and you know
it's, you know, very much like the Terminator robot or other kinds of popular Hollywood themes, argued in a way that's kind of like, well, it'll be more intelligent than us, it'll kind of want to run the Earth, you know, it'll look at human beings as either hostile or, you know, kind of ants or equivalent, and so AI should just be stopped. Gloomers are essentially, look, I don't think the AI future is going to be particularly good. I think it'll, you know, take away a whole bunch of jobs and kind of disorder society. It may lead to much more misinformation and kind of unbalanced democracies. It'll have a whole bunch of kind of more information, you know, kind of surveillance, and so their
privacy will be worse and so like I don't think it's stoppable because you know multiple countries and multiple you know companies around the world are building it and you know that's the way
that humanity rolls, and, you know, companies are going to become a lot more productive from this, but I think it'll be an unfortunate outcome. And it's gloomer, by the way, because they only see the gloomy side, if that helps people. Exactly. And actually I'll do zoomers before bloomers because I want to spend a little bit more time on bloomers, since I self-identify there. Zoomers are essentially like, no, no, no, this technology is great. It's the opposite of doomers, and it's like everything we're going to build with it is going to be really amazing. You know, the sky isn't even the limit in terms of what kinds of things could be made, or, you know, maybe AI is going to invent fusion rather than us inventing fusion, and everything that comes out of this is just spectacular. Zoomer, zoom, refers to just hitting the gas pedal, just go forward, go fast. Exactly, yeah. And then bloomers, which I describe myself as, is kind of a zoomer, but as opposed to just, like,
maximally hitting the gas pedal in all circumstances you go well drive intelligently uh like avoid the potholes slow down at the curve you know be looking at kind of like oh look this is
a little bit of a dangerous area let's let's go through this with a little bit more care still accelerationist that that the kinds of things that we can build in the future whether they're
medical outcomes or or climate change outcomes or you know kind of human enablement with with with work and with uh education all of that stuff is super
important to get to but you know let's kind of make sure that we're not enabling Rogue States or terrorists or balancing you know kind of you know crime waves or other kinds of things as
ways of doing this and let's make sure that we don't for example um inadvertently create Terminators um you know because it's a little bit of question of how we drive
it's not it's not inevitable and so um so that's the Bloomer category and that's that's the category I'm in and obviously if you said well you can't pick Bloomer I'd be closer to Zoomer uh
much closer to Zoomer than gloomer or Doomer um but it's also part of the reason why the subtitle of super agency which parallels the podcast is what
could possibly go right is because we always as human beings encounter new technologies with like oh my God the world's coming to an end I mean remember all those discussions around crypto
maybe we're still having them right and and and also you know by the way the internet and by the way cars and by the way the printing press it always starts with oh my God this is the end of
society and then when we start navigating we go oh wait if we do that this way we we make Society a whole lot better and by the way we have in every
technological instance in history so far made that happen and gotten super agency through all of them one can argue whether or not the AI technology is uh it is new and unique whether it's new
and unique in that characteristic or not and that's of course why to write the book and go out and talk to people and so forth to show actually in fact the only way you can create a positive future is by imagining it and steering
towards it and so that's that's what we should be doing let's make sure we understand the the these examples of these four categories you like maybe by way of example actually um so somebody
on the zoomer side of things, and again, we're not referring to Gen Z here, we're talking about zoomers. I was thinking in my head, another term that Bankless listeners might be familiar with is e/acc, if you've heard that term, Reid: effective accelerationists, of which we've had Beff Jezos on the podcast. He's basically like full speed ahead, like let's harness energy, let's harness AI and, like, conquer the universe, full speed ahead. Mark Andreessen, you know, put together a Techno-Optimist Manifesto that has some e/acc characteristics. Zoomer is basically the e/acc group, is that right? Exactly, although I think you might say that zoomers and bloomers are kind of two variants of the e/acc group. One of them, you know, where I would characterize it is, 'cause I also, by the way, I think, you know, I started using the term techno-optimism some number of years ago, like, I'm a techno-optimist, not techno-utopian, which is, you can build great things with technology, doesn't mean everything you do with technology is great, right? So do it, you know, do it with some care. I'd say it's, the zoomers are, hey, anything that anyone's doing with this, it'll end up good, and the bloomer is, hey, most of the stuff is going to end up really good,
let's try to, like, steer a little bit. It's hard for me, too, to actually put people in boxes. Like, somebody like Mark Andreessen, I don't know if he's full, like, kind of everything technological is good, or how much of this is sort of, you know, a personal choice to just amplify this extreme position, maybe plant a flag in order to shift the Overton window, move the Overton window, right? And I think that's part of the meme games that people like Beff Jezos and maybe Andreessen are doing, but that's hard to speculate on. Okay, so that's the zoomer. Now the doomer is pretty easy. I think we've also had guests on Bankless, Eliezer Yudkowsky, and he very much clearly thinks that, like, everything that we're doing right now in AI, like, basically we only have
years uh maybe decades to kind of live before uh AI actually supplants us like he genuinely thinks that that's the that's the uh Doomer category so you don't have to go into more detail there
but how about the gloomer category a little bit more? It seemed to me that this is sort of the mainstream media type of take on things, and it might even be the popular narrative around AI. Like, if you ask the average American what they think about AI, I think in, like, the 2020s, with the current spirit of the age, I think there'd be some cynicism about AI, there'd be some pessimism about AI. It would definitely be the glass-half-empty type of outlook, and I think that's the popular idea. But who are some archetypes for this gloomer category? So I do think that it's kind of generally
speaking the you know kind of the the the discourse because the discourse now just like earlier times in history with earlier Technologies tends to focus around like all the things that could
possibly go wrong and so uh many journalists um definitely the vast majority of people in Hollywood um who are like oh my God
this is the destruction of the content production industry, and, you know, when Sora and Veo are going, you know, all of our jobs are going. A lot of it's
focused around job displacement um so worries and concerns about Job displacement um so you know I think it's it's more or less kind of like if you can't put the person clearly in another
bucket they're probably in the gloomer bucket it's probably the the the and that's a little bit like mainstream media and it's the everyone else uh bucket how about uh from a political landscape perspective would you look at
the axess that way uh here because I think a lot of people listening would be like okay Democrats are a bit more on the gloomer side of things and Republicans are a bit more on the uh
maybe not the Zoomer side of things but the Bloomer side of things do do you think that's an access at play as well well I think it depends right because there's also a lot of modern Republican
that's kind of anti- big Tech um you know thinks that big Tech is you know too big for its britches and and and should conform so I you know I think that that there's kind of as it were
gloomer in both sides I think the Democratic side tends to be a little bit more we should be regulating um and the and the Republican side tends to be the No No we should be allowing you know
industry to do what industry does. So, Reid, I think for the rest of this podcast, we want you to make the case for bloomerism here, like,
why why uh is AI going to go really well for for Humanity this this idea of humans really uh Amplified by artificial intelligence and it kind of leads to to
Really positive outcomes um one of the the early chapters in your book talk about some history I actually wasn't familiar with and maybe this is an analog that will be helpful for some it
was helpful for me. So this is the history of the mainframe computer. You go back to the 1960s, and apparently, I did not know this, maybe some Bankless listeners also don't know this, during the 1960s, when the mainframe computer kind of entered the cultural public scene as a new technology, we had computers that could do incredible things for the time, and there was a media hysteria that broke out, okay? And there were concerns about this new computer that had the ability to recall in a few seconds every pertinent action, including all of your failures, your embarrassments, or incriminating acts from a lifetime of every citizen. There were many comparisons to 1984, the book, of course, that's in the Western canon by George Orwell, just like this Orwellian society that would be built out by these mainframe computers. There were even congressional hearings, guys. So one lawmaker warned of the danger of the computerized man, which is a citizen that would lose all of their individuality, their privacy, basically their agency, and they'd be reduced to magnetic tape, which of course was the technology to program computers at the time, like literal magnetic tape. So give us the history of the mainframe computer and this hysteria. Why do you think this is analogous to what's happening today? So, well, you covered it pretty well. Thank you for actually reading the
book uh that doesn't happen that often these days um and so you know I think that the question is anytime that we encounter a new technology and in this case the Mainframe they were like
looking at like okay what could possibly go wrong and they think about well actually in fact this could um you know track everything make all the decisions
take away the agency of people by putting it in kind of government centralized control um you know little bit of the discussion of what's happening with you know AI in some
circles today and um and then make you know kind of uh you know us as as human beings um essentially powerless and
agency um and that's you know of course you know part of the you know a lot of the and we we talk about this in super agency a bunch because you know a lot of it was the 1984 George Orwell worries
where that kind of centralizing Technology became a control over individuals and individuals through this kind of control of information control
of power become you know almost irrelevant cogs in a machine and you know this is you know if you look at it same like well what you know what's AI doing with my data oh am
I going to be able to make decisions because AI is going to be so um you know persuasive and manipulative and advertising systems and information systems you know um am I going to be
able to you know control my life in work um or is AI going to be doing all the work all of those are very parallel not just obviously to the Mainframe discussions which are you know relevant
and close, and, you know, we at least got through, I think, punch cards to magnetic tape before we started having all the worries, so we're not, we're magnetic tape, not punch cards. That's meant to be a joke. And so, anyway, that was essentially where the dialogue was going,
and people forget it now because it seems absurd looking back on it I mean it's kind of like well yeah I don't know why those people thought that I mean look at all the computers we have now and look at you know the the the
smartphone that everyone has in their pocket is, you know, thousands of times more powerful than those mainframes, right? And, you know, kind of everyone has one, and it's kind of, you know, working throughout the entire place, and by the way, I think everyone's gonna have an agent too, you know, with AI. And so I think that's why the parallel of the
discussion to to to say um look we're all we're going through all this energy to imagine like every possible bad outcome when a lot
more of the energy is better put into what are the good outcomes that we should be steering towards and which specific bad outcomes you know that are that are not ones that are easily correctable as we get into it so for
example you can put a car on the road without bumpers it's good to build bumpers later you can put a car on the road without seat belts it's good to put in seat belts later but you don't try to
imagine all 10,000 things that could go wrong before you put the car on the road. You've got to put the car on the road and start learning as you're going, and that's what the AI thing is. And so for most gloomers, to kind of persuade them to switch from, call it, AI-skeptical to AI-curious within the kind of bloomer category, is to say: start using it. And start using it not just for, hey, you know what, I have these ingredients in my refrigerator, what can I cook, totally good use case, or my relative is having a birthday party and I want to create a sonnet for them, great, but for real things. For things like, for example, and I'll actually give a personal example because I think this might be useful in
particular to the Bankless community. So when I first got access to GPT-4, I sat down and said, how would Reid Hoffman make money by investing in AI, you know, as a proxy for what degree of job replacement do I have with GPT-4, and it gave me back an answer that was powerfully written, compelling, and completely wrong, because it gave me back the answer that a business school professor who was very smart but doesn't understand venture capital would say. It's like, first you'll analyze which markets have the largest TAM, then you analyze, you know, kind of what the substitute products might be, then you go find teams that could possibly build those substitute products and stand them up in order to invest in them. And you're like, yeah, that's not the way any capable venture capitalist who's successful operates. Yeah, it's like business school slop, I guess, right? Yes, exactly. And so I was like, okay, but then you say, well, then is it completely irrelevant to investing? And the answer is no, no. Actually, in fact, one of the things that AI, like, I figured this out by the
next day was, hey, I can feed in the PowerPoint deck or feed in the business plan and say, what are the top questions to answer in due diligence? And while I, as an experienced investor, might have known all those questions and gotten to them all, it helped me go, oh yeah, question number three, I would have figured that out as the right question to ask three days from now, and it's useful to have it now while composing a due diligence plan. And so that kind of acceleration, or that kind of amplification, you know, or that kind of agency, super agency, as part of the kind of human agency. And so all of this is a personal story to go back to, you know, the Bankless community, to say, well, start using it for things that matter to you, and even if the first one, like, how do you invest in, you know, cryptocurrency, doesn't give you anything useful, keep trying different things and you may find something, go, oh, this helps me with how I can operate at speed and with accuracy, and then that gives you a wedge to start learning, you know, kind of how you can be superpower-enabled.
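Hoffman's due-diligence workflow, feed in the deck or business plan and ask for the top questions to answer, is easy to reproduce with any chat-style LLM API. Below is a minimal sketch assuming the OpenAI Python client; the model name, file path, and prompt wording are illustrative assumptions rather than anything Hoffman describes using.

```python
# Sketch of the "what are the top due-diligence questions?" use case.
# Assumes: pip install openai, an OPENAI_API_KEY in the environment,
# and a plain-text business plan at a hypothetical path.
from openai import OpenAI

client = OpenAI()

with open("business_plan.txt") as f:  # hypothetical input file
    plan_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever model you have access to
    messages=[
        {"role": "system",
         "content": "You are an experienced venture investor preparing due diligence."},
        {"role": "user",
         "content": "Here is a business plan:\n\n" + plan_text +
                    "\n\nList the top ten questions I should answer in due diligence, "
                    "ordered by how much each answer would change the investment decision."},
    ],
)

print(response.choices[0].message.content)
```

The point, as Hoffman frames it, is not that the model replaces the investor's judgment (his first "how would Reid invest" prompt produced confident nonsense); it is that it surfaces question number three today instead of three days from now.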
With over $1.5 billion in TVL, the mETH Protocol is home to mETH, the fourth largest ETH liquid staking token, offering one of the highest APRs among the top ten LSTs. And now cmETH takes things even further. This restaked version captures multiple yields across Karak, EigenLayer, Symbiotic, and many more, making cmETH the most efficient and most composable LRT solution on the market. Metamorphosis Season 1 dropped $7.7 million in COOK rewards to mETH holders. Season 2 is currently ongoing, allowing users to earn staking, restaking, and AVS yields, plus rewards in COOK, mETH Protocol's governance token, and more. Don't miss out on the opportunity to stake, restake, and shape the future of the mETH Protocol with COOK. Participate today at meth.mantle.xyz.
Celo is transitioning from a mobile-first, EVM-compatible layer 1 blockchain to a high-performance Ethereum layer 2 built on the OP Stack, with EigenDA and one-block finality, all happening soon with a hard fork. With over 600 million total transactions, 12 million weekly transactions, and 750,000 daily active users, Celo's meteoric rise would place it among the top layer 2s, built for the real world and optimized for fast, low-cost global payments. As the home of stablecoins, Celo hosts 13 native stablecoins across seven different currencies, including native USDT on Opera MiniPay, and with over 4 million users in Africa alone, in November stablecoin volumes hit $6.8 billion. Made for seamless on-chain FX trading, plus users can pay gas with ERC-20 tokens like USDT and USDC and send crypto to phone numbers in seconds. But why should you care about Celo's transition to a layer 2? Layer 2s unify Ethereum; L1s fragment it. By becoming a layer 2, Celo leads the way for other EVM-compatible layer 1s to follow. Follow Celo on X and witness the great Celo happening, where Celo cuts its inflation in half as it enters its layer 2 era, continuing its environmental leadership.
I completely agree, like, my lived
experience of using like you know tools like chat GPT is that it does amplify my productivity when I use it in the right way and I you like have to spend uh like time to figure out how exactly to apply
this to my own amplification of, like, what I do. I guess when I was reading this section about the 1960s mainframe computer, I was sort of putting my head in the minds of
people at that time and you could kind of see at that time the way compute was sort of playing out it was really controlled by a small number of um a small number of companies and
governments it was sort of like a like I mean the computers were the size of buildings right and and so you can sort of take a 1960s mindset and extrapolate that and get very scared what ended up
happening was of course the personal computer revolution, where everybody got those building-sized computers in their own home as an amplifier for their own productivity, and society completely forgot the 1960s hysteria around mainframes. But I can't help but also wonder if some of the criticisms were sort of
right okay you go back to the 1960s and they talked about you know uh surveillance and kind of the the lack of privacy and they weren't completely
wrong and this is the uh you know we didn't get the worst case scenario of what they were projecting but we did get a lot of good and then some bad outcomes and this is why I sort of want to ask
you about your framing of, like, do you actually think the doomers and the gloomers are completely wrong, or do you think that there's some probability of, like, a doomer style of outcome or even a gloomer style outcome, where AI is, like, not so sunshine and rainbows, that it actually is kind of negative for society? Like, what do you think about that from a probability
distribution perspective and do they have a point so I think smart people always have a point um and so I think it's the question's good cuz it's always to listen to what is the thing that
they're thinking about. I think the two answers are very different between doomers and gloomers, so let's start with doomers, who, you know, another thing the Bankless community may be familiar with is, you know, x-risk, and so they tend to be existential-risk predominant, you know, especially Yudkowsky and others. Now, one of the mistakes, so the thinking starts like this: it says, can you guarantee me that killer robots will never be built, either in the hands of humans or autonomously? You say, well, you can't
guarantee that there's lots of things you can't guarantee you say ah so then we have an existential risk that's being added and we should we should we should stop that existential risk because why
should you add any existential risk QED my argument's over like well until you consider the fact that existential risk is not one thing like the only
existential risk for for human beings is not Killer Robots there's pandemics there's asteroids there's nuclear weapons there's climate change and the list kind of goes on and so you have to
look at existential risk as a portfolio namely it's not just one thing it's a set of things and so when you look at any particular intervention you say well how does this affect the
portfolio. Now my very vigorous and strong contention is that AI, even unmodified, and we'll get to why steering is good, but unmodified at all, is net, I
think very positive on the existential risk portfolio because when you get to for example pandemics one of the things we've experienced in in our lifetimes and you know obviously if it was a lot
more fatal and everything else it could have been substantially worse than than the you know many thousands who died the um the question is to say well
how do you uh see it you know detect it how do you analyze it and how do you uh both do uh Therapeutics and preventive vaccines at speed in order to navigate
that and AI is the primary answer to that like none of that can work without the speed of AI and then you get to oh well how about asteroids well identifying which asteroids might get to
us being able to intervene on them early you get to like for example climate change you go well actually in fact whether it's anything from accelerating the invention of fusion to have how do
we manage our electric grids better the there's positive contributions across all this so you go okay given all of that I think AI even like unmodified
just just let the industry know exactly what it's going to do is going to be strongly positive in the existential risk bucket and I'll pause there in case you have a a contention on that before I
get to the gloomer category. No, I'll just say it in another way: you're saying the most fully zoomer, the fastest engine going into the AI revolution, it hits every single pothole, it's on two wheels as it's going around the corners, and even under that situation, the solutions that it provides to all the alternative
existential risks is still net positive in your opinion exactly yeah y so that's the reason why like I'm very far away from doomers right okay well how how about the gloomer do they have a point
yeah well no and by the way I thought the doomers have a point too which is you say hey by the way we should try to minimize the killer robot risk yes that is something we should be doing and we
can get back I guess your answer would be like through use of AI to help us also exactly okay yes exactly that feels a little recursive but um Hey whenever technology is part of the problem it's
almost always the best part of the solution too okay right okay that that's The Optimist that's the eak in in you're talking I think okay how how about how about the gloomer though but I think I have history on my side which is good
and we can get back to the Privacy you know thread from the main frame things as well so on the gloomer side the primary thing where I think I'm very sympathetic to the gloomer is that
if you look at, and we cover this some in Superagency, as you know, if you look at the transitions for human societies in these technologies, we as human beings adopt and adapt to new technologies very painfully, like, the disruption. So you go, ah, the printing press: we could not have anything of the modern world without the printing press, you can't have science, the scientific method, you can't have literacy, you can't have, you know, kind of a robust middle class. Yet there was a century of religious war because of the printing press. When we as human beings come to this, the transition period's almost always very painful, and I think even with AI we're going to have pain in the process. I don't think there's, unfortunately, any way around it. Part of the reason I'm writing Superagency and doing these conversations is to say, well, let's try to be smarter about it than the times we've done it before, let's try to make the transition as easy
and kind of um uh more graceful but it will still be painful like in terms of you even if you say hey most human jobs will be replaced by humans using AI that
process itself is still painful people have to learn AI maybe it's new humans maybe the human who couldn't learn AI feels out of place you know is kind of is is is is suffering because of it and
and and that's the kind of thing that I think the gloomer um are kind of putting as it were an intuitive finger on which is hey look all this kind of transition
they they'll project it to Infinity but all this kind of transition boy this is going to be difficult and you're like yes it is right it's not no it's not and we're going to try to make it as good as
possible and that's Again part of the reason why I'm arguing about we should be intentional here about what could possibly go right is you say well and this get back to the technology solution
it's like, well, okay, so we're going to have some job transitions, we're going to have transformations, we're going to have information-flow and misinformation-flow transformations, and we're going to have some expectations-of-privacy transformations. What should we do? And the answer is, well, I actually think AI can be helpful in all of these cases, and, like, one of them, part of the reason why, you know, Inflection and Pi was, you know, something that I helped get going, as an agent for every human being that's on your side,
that's for you you know and by you is one of the things that can help you then navigate um because it can be like okay how how do you help me navigate this new world and I think it's one of the things
that's really important for us to provision early that goes all the way back to your democratization question and one of the reasons why I think that's an important thing to make sure that there's very broad access to okay
Can we underscore this point? Because I think some of the reason why the gloomers sort of are winning the narrative war right now is because, like, of course fear is a bit more viral, and it's easier to imagine, it's much easier to imagine an Orwellian future in the 1960s or the 2020s than it is to imagine a more optimistic future, and as soon as you start talking about this optimistic future, it sounds too utopian, it just doesn't even sound real, right? But we are limited in terms of our imagination. But that question, that prompt that you just raised, which is like a chapter in your book, is the question of what could go right, and the gloomers rarely ask what could go right, and maybe, to be fair to them, they have some limitations on their imagination. So I want to ask you, as a kind of techno-visionary, like, how would you answer that question? So if every citizen in the United States had an AI agent that amplified what they do, and we had this across society, this technology was widely deployed, what could go right? Like, what are the benefits for the average American here? So, line of sight,
namely no technological innovation it's just a question of how we get it built and deployed a medical assistant that's better than your average doctor that's
available 24/7 in every pocket. So you have a health concern, it's 11 p.m., you have a health concern for your kid, your parent, your grandparent, your, you know, pet, anything that's in your, you know, you can begin to address it, and it can help you, including going, oh, for that you should go to the emergency room right now, right? And so that's buildable. A tutor on every subject for every age, anything from a 2-year-old to an 82-year-old, like,
hey you got you you would like to learn this you'd like to to understand more you'd like to in by the way there's obviously economic implications to that that's I think another thing that's
that's available. Then, to your democratization point, there's a lot of services, not just medical and access to doctors. You know, some people have concierge doctors; most people have to go through, you know, kind of their medical plan, and some people don't even have medical insurance. Even in the US there's a bunch of people who are uninsured. You know, what other
kinds of things could be it's like well actually in fact like I'm I'm I'm reading the lease for my rental like how do I understand that what what's important to know about it well the
agent can help you with that too and that's all line of sight today that's not even getting to hey how can it help you like code better how could it help you you know create marketing plans
better how could it help you sell better how could it help you like all of that stuff is also coming but like those three basics for everybody is you know
life transforming what what about the societal level so when those things for individuals kind of aggregate and compound we have better health care we have better kind of like learning uh capabilities we have better things in
all areas of our life like what does that mount to from uh the United States from a societal perspective do we have like more free time as a society does our happiness increase does our GDP like
double or triple do we get those things as well well I definitely think the equivalent of what GDP is supposed to be measuring should should be increased now
GDP has this challenge that it's measured in kind of an industrial dollars for things uh way so like for example all the benefits you get from
Wikipedia are actually deflationary in GDP and so but the the quality of that um I do think that um you know another thing that people worry about with AIS is oh I'm not going to spend time
talking to people, I'm going to spend all my time talking to agents, and so loneliness will be increased or this decreased. I think that to some degree that's a design choice, and I think what we want to both see, and I hope we'll get, and we want to nudge towards, is, you know, like when you tell, you know, Inflection's Pi, hey, you're my best friend, it says, no, no, I'm not your friend, I'm your AI companion. Let's talk about your friends. Have you seen them recently? How would you like to talk to
them you know maybe you could set up a lunch date you know that kind of thing and I think that um could lead to much great greater happiness um for this and I do think that actually you know part
of what I love about you know the the Bhutan you know Evangelical concept is I actually think measuring you know kind of gross national happiness is also a good thing that we should be you know
aspiring to as a society and I think that could be um you know increased with us but I think that the the place where
we'll see it is in being much more, like, kind of fulfilling lives, and the fulfilling life might be, you know, kind of like, hey, I get more
time to do my hobby you know I love fishing I'm gonna have more time to be fishing because I can do my work in a shorter amount of time or for people who because you know American society tends to like to work it's like oh I can I can
accomplish a lot more in my work I maybe still working the same amount but as opposed to putting a whole bunch of time into form entry I can now do the parts that are not just like form entry and do
the other parts of the work in much more kind of fulfilling and capable and productive ways. Yeah, one way I think about it is, like, 100 years from now, will the average person in the United States, or wherever this technology is deployed, like, have a better quality of life? I think of, you know, the 2020s and I compare that to the 1920s, and I would
like hands down you like prefer to live in the the the 2020s for all of its problems than in the 1920s but before the Advent of antibiotics like you know look at kind of mortality
rates from that time look at kind of the the amount of society that had to basically like do a grueling agrarian type Farm job in order to just to get by
right and it's like much better for most people now than than it was previously and there's lots of stats we could get into on that but let's just pause and go back to kind of the the another gloomer
objection. So they would say, Reid, everything you're saying sounds so amazing, but, like, yeah, we've heard it before, this is another bait and switch from Silicon Valley, okay? They promised, remember the 2000s, the advent of Facebook, not to mention LinkedIn, they promised that we would connect the world, and what ended up
happening Silicon Valley got rich they extracted our attention there's you know the term extraction is you know uh used a lot about this they sold us products they sold us as products to like the
highest bidder. And now I'm thinking about even the time I spend with ChatGPT, and it feels really good right now, like it's amazing. I spend more time with ChatGPT than I do with, like, Google, and, you know, as a result, I think ChatGPT knows me even better than Google. I mean, a lot about me could be revealed by my search history, but even more so with ChatGPT, and I'm getting to the point of daily use where it's like, who knows me better than ChatGPT? Like, maybe my wife, maybe a handful of other individuals, but, like, it knows me, and that all feels good because it's
amplifying what I do okay but what happens if things go dark if we get this kind of like bait and switch if suddenly open AI or whatever in insert your
Silicon Valley like Corporation here start saying oh you know all this AI stuff is pretty expensive we're going to have to start harnessing all this data we know about Ryan to like do something
and sell it. Cambridge Analytica 2.0, yeah. Or I lose, or maybe they sell it out to the government or something, or they control me in all of these subtle ways by recommending things that aren't in my best interest, it's in their best interest or some government's best interest. Okay, this is the crux of the bait and switch, and so address that head-on, Reid: how do we know this isn't a Silicon Valley bait and switch,
because it feels like that's happened previously with social media well I mean a little bit depends on what you mean by bait and switch because let's take your
example with Google and ads and AdWords yes Google gets a bunch of data from you and can advertise to you better and by the way hopefully that means that the products you're seeing are actually things that
might interest you which I actually think is a feature not a bug in terms of things you might want to buy and it has so far the best business model that's been invented
certainly in the media world maybe in any part of the world today and they say well what do you get for your data well you get a panoply of amazing free services
free search free email a bunch of other things and so it's a voluntary transaction something you
participate in because you get a bunch of value and by the way you'd probably rather have them figure out how to monetize off your data than
saying oh in order to get our ARPU you've got to pay us 50 bucks a month right for this like no no no I'd rather you get the advertising right and give me all this stuff for
free and so it's possible that the AI agents will end up in a similar kind of thing where they say well hey look we could charge you 50 bucks a month like Google could for our
search but actually in fact figure out a way that is transparent and voluntary and engages with you because it shouldn't be deceptive it should be with
your awareness in engaging and using it such that this becomes a positive economic transaction for you and
it could be other things too it could be a subscription model it could be integrated into the various productivity apps that you're using it could be any number of things but I
think that the dialogue that's very well captured in this compelling slogan surveillance
capitalism is misleading because like I for one like surveillance medicine I like the fact that my watch is tracking my sleep and health things
because it's for me and it helps me and it's part of that positive thing and a lot of the uses of these data in these internet systems are a way of making it free for you where they have the
economics for expanding and improving the free product and so I think I would
challenge the bait and switch methodology and the last thing I guess I would say is for example you say well whether it's a social network and by the way
I obviously think LinkedIn has handled this the best of all of them whether it's Google whether it's these things these are all voluntary participation questions you might say well it's very hard for me to
participate in modern society without being informed in the way that I could be informed this way and it's like okay yes I myself use
search a lot but I think I can do it in a way that's fine and maybe you would say hey Google
should offer a paid alternative on the other hand for that to be economically viable at least two or three percent of the audience
would have to opt for it and I'm not even sure two or three percent of the people would opt for it I mean you'll get individuals saying I would do it but that might not be economically relevant unless at least two or three percent of the people were doing it so anyway I think it's a challenge to the challenge as it were
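To put that two-or-three-percent threshold in rough numbers, here is a small back-of-envelope sketch; the user count, ad revenue per user, and subscription price in it are all illustrative assumptions, not figures from the conversation:

```python
# Back-of-envelope check on the "at least two or three percent" point.
# All figures below are illustrative assumptions, not real Google numbers.

users = 1_000_000_000          # assumed addressable user base
ad_revenue_per_user = 100.0    # assumed yearly ad revenue per free user ($)
subscription_price = 50 * 12   # the "$50 a month" paid alternative, per year

for adoption in (0.001, 0.01, 0.02, 0.03, 0.10):
    subscribers = users * adoption
    sub_revenue = subscribers * subscription_price
    # ad revenue those subscribers would have generated anyway
    forgone_ads = subscribers * ad_revenue_per_user
    print(f"adoption {adoption:>5.1%}: "
          f"subscription revenue ${sub_revenue/1e9:.1f}B "
          f"vs forgone ad revenue ${forgone_ads/1e9:.1f}B")

# Under these assumptions a fraction of a percent of users opting in yields
# revenue that is small relative to the cost of running a separate paid
# product line, which is the spirit of the two-to-three-percent threshold.
```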
I think the bloomer take on this was sort of interesting right which is
acknowledging that there are some potholes and there's some cost maybe to society and to individuals but also saying that the benefit far exceeds the cost one way you underscored this
was you said this and I want to get you to justify it because it kind of blew my mind when I was reading it even if LLMs get no better that is no better than today the consumer surplus to the average 20-year-old
living today is millions of dollars over their lifetime yes okay so what you're effectively saying is for an actual zoomer so somebody in Gen Z somebody in that age demographic
they're going to be able to harness LLMs and it's going to deliver millions of dollars in value to them and that's not even talking about the LLMs and AI of the future that's talking about the AI
of today how should people think about that like how does that deliver someone millions of dollars in their lifetime well let's just start with something that's really simple which is
legal assistance so you're going to encounter employment contracts rental contracts you're going to have products and services
you might be engaging in and today your average person just basically can't afford to pay a lawyer right because a
lawyer is hundreds of dollars an hour well now even today with GPT-4 you can put it in there and get useful
analysis useful participation so if you just take every single contract that you're potentially engaging in and use that that gets you a lot of dollars
towards your millions of dollars then you say well what about medical stuff right like consulting on medical or other kinds of things especially in the periods where since in this
country we tend to do insurance in kind of challenging ways mostly through employers okay so getting medical advice well
that's another area where you can get a bunch then you say okay well how about amplifying my ability to find and do economic work that's another place
and so when you add all that up and you add it up for what is hopefully even a longer life because if you're getting call it pre-critical medical advice about how to
preventively stay healthy and preventively avoid certain kinds of catastrophic health conditions or navigate early signs in ways that you can act before you're in
critical condition not only is that hugely economic but it should also lead to longer lifespans and so all of that is part of how we get to hey even today it's already worth millions to you
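To see how those buckets could plausibly add up toward the millions described here, a rough back-of-envelope sketch follows; every figure in it (hours of legal help, hourly rates, income uplift, extra years) is an illustrative assumption rather than a number from the book or the conversation:

```python
# Back-of-envelope lifetime consumer surplus from today's LLMs for a 20-year-old.
# Every number here is an illustrative assumption.

working_years = 50                      # assumed adult years, roughly age 20 to 70

# Legal help: contract review hours per year that would otherwise need a lawyer
legal_hours_per_year = 8
lawyer_rate = 400                       # assumed $/hour
legal_value = working_years * legal_hours_per_year * lawyer_rate

# Medical guidance: assumed yearly value of earlier, cheaper advice
medical_value_per_year = 2_000
medical_value = working_years * medical_value_per_year

# Work amplification: assumed productivity/income uplift on a median income
median_income = 60_000
uplift = 0.20                           # assumed 20% income amplification
work_value = working_years * median_income * uplift

# Longevity: assumed extra productive years from preventive care
extra_years = 3
longevity_value = extra_years * median_income

total = legal_value + medical_value + work_value + longevity_value
print(f"legal      ${legal_value:>12,.0f}")
print(f"medical    ${medical_value:>12,.0f}")
print(f"work       ${work_value:>12,.0f}")
print(f"longevity  ${longevity_value:>12,.0f}")
print(f"total      ${total:>12,.0f}")

# Under these assumptions the total lands just over a million dollars;
# more aggressive assumptions about productivity gains or longevity push it
# further into the millions, which is the spirit of the claim.
```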
Reed I'm just doing a quick pause and a time check so do we have five more minutes or do we have ten more
minutes ten's fine okay amazing so we have an opportunity I think to talk about one more of these objections and I want to talk about AI and America and
then I think we're good okay so Reed this gets us to regulations and I think the gloomer camp has one take on how we regulate AI
and the bloomers and the zoomers have a different take on this but generally what I'm seeing coming from
the establishment government is brakes it's no gasoline on this thing it's all brakes they have this precautionary principle which is they think about what could go wrong and
how to prevent all of the things that could go wrong you make a different argument I think your argument is that innovation is actually safety so I want to
hear how this makes sense but you're making the argument I think that actually hitting the accelerator on AI is how we make this thing safe and that feels very
counterintuitive what's that claim based on why do you think innovation is safety so part of the thing for example when you get to how are
modern cars able to go these speeds and able to go them much safer than earlier cars it's that as you iterate and deploy them you realize oh actually in fact
we could put in anti-lock brakes oh we can put in seat belts oh we can have crumple zones oh we can have bumpers and that's an innovative path to making the
car safer and the car can then go faster and navigate circumstances because you've innovated safety into the car as part of the innovation with the
car and the parallel with that is essentially asking well what are the future features of AI and what are the things that we could be
doing that make them much safer in these kinds of alignment scenarios so you say okay can we make the
AI really enable people who are trying to figure out stuff with their health and other kinds of things but also make
any efforts at terrorism much more difficult and much harder and by the way this is of course what red teams and safety and alignment groups are already doing at Microsoft and OpenAI and Anthropic and
others because they're aware of these kinds of safety things but it's that innovation into the future that is the really important thing and the
way you discover that is by iterative deployment by actually making it live and then seeing what things
need to be modified now obviously on really extreme things like okay terrorists who are creating weapons of mass destruction we want to make sure
that that's as close to impossible in any field as it absolutely can be and for example safety groups more or less use as their minimum benchmark
let's make sure that these agents are not any more capable of enabling that than Google search is today right and obviously we want to drive both of
them to the lowest but that's what innovation to safety means and the kind of historical and easy to understand car example and what it means in terms of technological features for
building future software last objection here I think comes up is this idea that AI kind of kills human autonomy like this is a control technology
it's not a freedom technology basically so in this AI world that we're all moving towards I mean where is my agency you titled the book Superagency right it's like
but I feel like I have less agency if the AI is making all of the decisions for me I want you to address that too because it kind of ties into this concept of freedom and I sort of
wonder how much of this is also like boiling the frog like we just kind of get used to it and maybe that's okay but maybe it's not so if you went back to the 1960s and polled those same people
and you told them hey in the 2020s most adults will actually meet their future spouse or mate or partner by computer algorithm it's basically computers deciding and that's actually the
lived experience of how most people meet and get married today they meet via a social network of some sort they meet on Tinder or whatever dating site they subscribe to
it's kind of the algorithms that are almost matching them that sounds dystopian in the 1960s now it's like oh I've kind of gotten used to it I've met many couples in healthy
relationships and they sort of met online by route of a computer anyway back to this argument that AI agents making the decisions and outsourcing that part of our intelligence will
actually restrict our freedoms what do you make of this or do you think that there's some merit to this argument so I think one of the things that I said at the very beginning is agency changes it isn't just new superpowers it also
as you get to superagency changes some things around and so for example we have different
kind of tactile perceptions of what it means to be human and how humans engage in life like when you first make a technology it feels kind of alien and then
fire and agriculture and glasses and computers and phones start feeling like everyday life like our grandparents use phones now too even
though at the beginning of the smartphone era they were like ah this is one of those newfangled things I'd rather just get on the hard line and call my
grandchild or whatever and so I think that it does make changes and part of the iterative deployment and learning about
it is how do you make those changes such that when we get to the future state we go oh yeah this one's better and you say well is our current state
just adapting and was the previous state's judgment actually correct well if you look at it take your 1920s and 2020s do you actually understand what the
world past penicillin and antibiotics and all the rest of the stuff really fully looks like and what the consequence of all that is and why the portfolio of it is so much better
and so you actually have to take that state that you learn into it it's kind of like the judgments that you make as a child versus the judgments you make as an adult
you go look there's a certain innocence to the child's view but we get wiser we learn and we get experience and we use that as the viewpoint for
making good judgments and that's part of the reason why yes you'd say hey you're meeting
your life partner now on an internet service whoa that seems really alienating but actually in fact it's okay how do we make that a lot better
than the lottery of college or the workplace which was very limited yeah and what you had before and so again it's an iterative process it
doesn't mean that there aren't still some things that are broken in internet dating but it's one of the things that we know how to continually improve and that's one of the things
that we continue to work on Reed as we begin to close this out I want to ask a question about the United States and America you know based on your different religious
preferences for AI you might decide to regulate this thing in one direction or another and the question becomes okay how do we implement this technology across America
across society there are some that get to this stage of the conversation and they're like well the doomer take and even the gloomer take is not sustainable because we live in a multipolar world with many different
actors and this is kind of an AI race and so if not us then our adversary doubles their GDP and we kind of stay stagnant and that leads to a world that maybe we don't like so I want to
ask you this question what do you think America should do here like what should our approach to AI be well one of the things that I started doing is
calling artificial intelligence American intelligence for precisely this reason which is it's really important that we embrace this cognitive industrial revolution because the
societies that embraced the industrial revolution had prosperity for their community their children their grandchildren and kind of made the modern world and I
think the same thing is true for the cognitive industrial revolution with artificial intelligence or amplification intelligence or American intelligence and we want the kind of
esprit de corps of American values the American dream the empowerment of individuals the ability to do your best work and to
make progress from wherever you start in the rungs of society to take more economic control over your destiny and I think it's one of the
reasons why it's particularly important that American values are deeply embedded in this and that it's an empowerment of American society
and it's part of the reason why I think that our regulatory stance needs to be much more bloomer zoomer and accelerationist than
putting on the brakes because I think that's part of the future of the world as we can help make it become as we close this out I just
have a final question is there any chance in your mind that all this AI stuff is kind of overhyped that we basically flatline here that we have ChatGPT-4 and the innovation really
slows to a crawl that none of this matters that much because it'll happen very slowly over time I think there's zero chance of
that right so I think that already we see enough in the scale compute and learning systems that are just only beginning to get deployed like
part of what 2025 is going to be is the year where we see the acceleration of what happens in software coding across the board and that software coding is both going to enable
a bunch of other things like all of us as professionals are going to have a coding copilot that helps us do our work in various ways but it's also a template for how you advance a
bunch of other functions of all of this work so even if you say hey GPT-5 is only going to be 10% or 20% better I think it'll be a
lot better than GPT-4 but even if the progress of increased cognitive capabilities slows
down I think the implications are there throughout the cognitive industrial revolution the technology is already visibly present it's just a question of how we build it configure it deploy it
integrate it and I think that's part of the reason why American intelligence there you go guys from Reed Hoffman zero percent chance that all of this
stuff slows down so into the frontier we go and we're going with you bankless nation Reed Hoffman thank you so much for joining us here today it's been a pleasure my pleasure as well I look forward to the next yeah we'll have
to talk about crypto in the next conversation so everyone listening the book is called Superagency it is out now we'll include a link in the show notes a fantastic book with Reed's
entire thesis around this distilled got to let you know of course crypto is risky so is AI you could lose what you put in but we are headed west this is the frontier it's not for everyone but we're glad you're with us on the
bankless journey thanks a lot