The Most Consequential Conversation I've Had in Two Decades
By FamilyOffice
Summary
Topics Covered
- Multigenerational Wealth Obsolete
- Singularity Already Underway
- Post-Scarcity Control Remains Scarce
- AI Replaces Knowledge Work
- Earth Dyson Swarm Imminent
Full Transcript
Welcome, welcome everyone. It's Angelo Robles. Welcome to At Family Office and the Angelo Robles podcast. And a special shout out to the members of my private family office community, SFO Continuity. We're very fortunate to have a special guest on today. Many of you should know him by now: Alex Wissner-Gross. Alex is co-host of, and I'm not just saying this, my favorite podcast, Moonshots. Moonshots with Peter Diamandis and Salim was awesome, and adding Dave and then later Alex made it, in my perspective, a must-listen podcast for everyone interested in tech and AI, and one I highly recommend to family offices. Alex holds a PhD in physics from Harvard and is, I believe, the final student to graduate from MIT with simultaneous degrees in physics, electrical engineering, and mathematics. Alex, it's a pleasure. Welcome to the show.
>> Thanks, Angelo. Good to be here.
>> Thanks, Angelo. Good to be here.
>> Pleasure is mine. On that note, without further ado, we'll get right to it. I'm going to call this the 10-year-old child question. A child born today, when they turn 10 in 2036, will the world be fundamentally different? Not incrementally, but, I mean, economics, physics, the structure of work, the nature of scarcity. Might 10 years from now be unrecognizable from today?
>> Yes, of course. We're in the middle of a singularity, so yeah, 10 years, I think even three years, the world will in some respects be unrecognizably different.
I think if you stop paying attention to all of the developments at the core of what I like to call the innermost loop, so that's robots that help build the data centers and fabs that are building more chips that are training models that consume energy to control the robots, and so on and so on, so it goes. If you're not paying really close attention to all of the recursive self-improvement, and the progress thereof, that's happening every day, I think it's possible to go to sleep for a month, come back in another month, and the world is in fact unrecognizably different even on that shorter time scale. So the short answer is yes. 10 years from now, absolutely, the world will look unrecognizably different in many ways.
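The innermost loop described above is, in effect, a compounding feedback process: each round of improvement raises the rate of the next round. The toy model below is purely illustrative, and the `base_rate` and `feedback` parameters are invented for the sketch rather than drawn from the conversation, but it shows why such a loop can feel smooth day to day yet produce unrecognizable change over months:

```python
# Toy sketch of a recursive self-improvement loop (illustrative parameters only).
# Each cycle, better models/chips/robots slightly increase the growth rate
# of the next cycle, so growth compounds on itself.

def innermost_loop(days, capability=1.0, base_rate=0.01, feedback=0.001):
    """Return daily capability levels; all parameters are made-up assumptions."""
    history = [capability]
    for _ in range(days):
        growth_rate = base_rate + feedback * capability  # self-improvement term
        capability *= 1 + growth_rate
        history.append(capability)
    return history

h = innermost_loop(180)
# Locally, "spacetime feels smooth": the day-over-day change is about 1%.
print(f"day 1 change: {h[1] / h[0] - 1:.2%}")
# Globally, six months of compounding is already a large multiple.
print(f"after 180 days: {h[-1] / h[0]:.1f}x")
```

The sketch makes the same point as the mountain analogy that comes up later in the conversation: every local slope looks finite, but the integrated change over a longer interval is dramatic.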
>> Taking that to another level, it might have been Moonshots, it might have been a different podcast, but I got the impression... let's extend the time horizon, which I know is very difficult, decades out, but not too many decades. Could we literally wake up after a refreshing 8 hours of sleep and 100 years, 200 years of progress has happened while we slept?
>> Yeah, I think that's going to happen, and imminently. I mean, I think part of the issue is how we define progress and how we benchmark it. So if by a hundred years' worth of progress we mean a hundred years' worth of circa-2025-era progress, yes, I think that's happening probably in the next two years. You don't even need to wait decades for that.
>> Well, on that note, and again, my focus is on the work I do with successful families. I've been in the single family office community for many decades. What's the single biggest blind spot family offices have right now with effectively that question, which means preparation for the future?
>> Well, I think one of the fundamental conceits behind family offices is multigenerational wealth management. And to successfully manage multigenerational anything in a singularity era, where changes far more monumental than, say, the industrial revolution are happening on a much shorter time scale... I'm not even sure the concept of multigenerational wealth management, as originally conceived, or let's say as one would reasonably have construed it 10 or 20 years ago, makes a whole lot of sense at this point. Yeah, if we look at a characteristic generation time, call it around 20 years, the world is going to be so different 20 years from now, including the way economics work. Sufficiently different that I think technological and scientific progress will utterly dominate any other decision that a family office might have to consider or worry about on that time scale.
>> Taking that question perhaps even a step further, to a different level. I believe what you're talking about is broadly exponential technology. Let's use the terminology: AI is going to change the dynamics of energy, probably solving for fusion energy. If energy costs come down to near zero, and labor costs, mainly due to robotics, go to zero... I'm a hardcore capitalist, I'm an extreme libertarian, but I'm probably going to have to change my mindset and my actuality from a variety of perspectives. Effectively, is the future of capitalism going to be changing in front of our eyes in a few short years?
>> Probably. But I don't think that's even the scariest prospect, or shouldn't be the scariest prospect, for a family office. The short list of scare scenarios... and you will be hard-pressed to find someone who's more optimistic about the future than I am. I just wrote, basically, a book with Peter Diamandis called Solve Everything, about how humanity is going to solve substantially all of its hardest problems in the next 10 years. But the sort of scenarios that I think ought to keep a conventional family office up at night are: one, this notion of the human family office manager. How much of that can be replaced by AI? I think it's a very important question. Two, in terms of overall asset classes, and this is not investment advice by any means and shouldn't be construed as such, but the AIs are starting to be quite empowered to participate not just in public markets, where this is a cliche: it's been the case that public markets, certainly for the past 20 years or so, have been dominated by algorithmic trading. But as AIs become more and more empowered, they're able to reach into private markets as well, and less liquid markets. So the question that anyone has to ask themselves is: where does the alpha remain once AI is so pervasive in the economy, if anywhere?
There are a few other questions. So a problem that is, I think, pretty frequently posed whenever I speak with a family office is just the children. What to do about the children? How do we pass cultural norms and values down across multiple generations of a family? A lot of, I think, various forms of parenting, and this is probably a cliche as well, come from the culture and not just from the parent. But for families that are seeking to pass down, maybe not to the second generation but to the third generation, the third generation at this point being maybe the generation that's living in Jupiter orbit, or whatever the standard best practices are 40, 50, 60 years from now, there are going to be many, many more options. So part of the notion even of a family office is that generations die, and there's a need for a family office to perpetuate both cultural norms as well as financial norms across generations. What happens if people stop dying? I think with longevity escape velocity potentially arriving as soon as the early 2030s, the notion of the family patriarch or matriarch actually not dying brings some intriguing new wrinkles to the notion of multigenerational wealth management.
>> It most certainly does, Alex. And audience, I promise those type of questions are coming. In building up that story a little bit, I've always been interested in exponential technologies. I latched on to AI... really, I mean, just think, OpenAI and ChatGPT is a little over three years old. I've had Ray Kurzweil, and even Peter in prior days, as a guest on my platform. I've always said, and it's made me a little bit of a black sheep in the family office community, that using sovereign AIs, open weights, all things we're going to get to, privacy, not bleeding IP into the cloud, family offices right now, let alone 3 to 5 years from now, could do more with less, better and faster. We will spend a little bit of time shortly on AI agents, on OpenClaw, and effectively 24/7, 168 hours a week. And I'm telling you, I get major, major pushback, maybe less so from the families, more so from, let's be honest, usually over-40-year-old executives. How could I make them, and I understand their fear, how could I make them feel more comfortable that they can't hide from it, that they need to learn it and work with it, if they're going to, for lack of a better word, have an employable and engaging future in a world that's changing so fast?
>> I tend to think the light bulb has to want to turn itself on. And if there are people in the economy who don't want to engage with superintelligence or the singularity, I tend to think that the Schumpeterian solution of disruptive innovation is probably the best solution, and they'll get steamrolled.
>> From your mouth, not mine. And I get in trouble when I say that. You did mention something that may come as a shock to some people who were listening very closely. We're told the singularity, even from Kurzweil himself: oh, 2029, early 2030s, maybe true in the 2040s. You mentioned that we may have hit a singularity event already. What makes you think that, and what was your cause for thinking so?
>> Yeah, singularity is sort of a mushy term. Ray Kurzweil did not conceive of it. Although I love Ray and he's been an enormous inspiration, his model of what he calls the singularity he treats as sort of the event horizon of a black hole, one that we hit, I think, in the mid-2040s in his scenario building. It was Vernor Vinge, who died recently, who coined the modern term of the technological singularity, circa 2000 or 1999. And Vinge's version itself was arguably an adaptation of I. J. Good's notion of an intelligence explosion. So there's a bit of a lineage here, with different people taking and repackaging the notion that somehow intelligence gets smart enough to recursively self-improve, and all of the curves, all of the experience curves and all of the progress curves, go vertical at some point. In Ray's interpretation, and Ray and I spoke recently on Moonshots about this, he packages it up as sort of an unknowable boundary or firewall that you can't really see past. I take a different position. I think it's not that difficult to extrapolate through recursive self-improvement. I'm not pretending to have complete confidence or a perfect crystal ball as to what comes after, but I also don't think the process is completely unknowable, and it's not a firewall where we have no insight as to what lies on the other side. I also differ from Ray in that I don't think it's a point in time.
Again, singularity, just like AGI: these are mushy terms and there are a hundred different definitions running around. When I use the term singularity, I use it to reference a convergent and converging set of technologies and scientific discoveries that are all perhaps predestined to happen at approximately the same time in history. And by my definition, and by my construction, the singularity isn't a point in time. It's a process. It's an extended interval in time. I love Charles Stross's book Accelerando. I recommend everyone listening read Accelerando. It's the best manual, other than perhaps Solve Everything, to the singularity.
And there's a scene that I love in this novel, Accelerando, where characters who are human uploads, sitting on a starwisp traveling from our solar system to another star, are debating when the singularity is going to happen, if indeed it has happened at all. And one of them takes the position, well, the singularity happened in 1969, I think, when the first internet packet was sent.
>> Wow.
>> And another claims, oh, it hasn't happened yet. And another says, well, we're a bunch of uploaded human minds traveling on a tiny, say, submeter-scale starship to another solar system. How could it be possible that the singularity hasn't happened yet? And I think that's a pretty elegant parable for the nature of the singularity.
I think if you look at all of the progress around us now and compare it with what people were predicting even 10 years ago, on average, the best futurists, I think it's pretty surprising. Whereas if you look locally, from, say, day to day, I like to say spacetime feels perfectly smooth. You can reasonably extrapolate, if you narrow the time scale to a short enough interval, what's going on, and not be surprised. So I think it's in some sense just like a mountain that looks like a monolith with a completely vertical slope at a distance, but you get closer to the mountain and you discover, no, actually there are foothills; no, the slope is finite. I think it's the same principle here: we're in the singularity, at least according to my construct, and the slope is going to keep increasing, but at the same time I think one can have a reasonable mental model of what lies at the peak slope and what lies afterwards.
>> Is there, in 2026, so we have approximately 10 and a half months remaining, is there a singular event, not really a trend, an event that effectively changes everything, an advancement, a product launch, by the end of the year? Is there anything that would come to your mind?
>> Yes and no. So on Moonshots I did a prediction episode, and one of my predictions for 2026 was that we would see at least one major grand challenge, for example a Millennium Prize problem in mathematics, get solved by AI. I still think that's the case. I also think that while that would be perhaps highly disruptive, and certainly would have been perceived several months ago as highly disruptive to, say, the entire endeavor of professional science and mathematics, on the other hand, I think it's in some sense only as disruptive as the slope on a mountain. If you're going mountain climbing and the slope is increasing, is it really shocking if you know ahead of time that the slope is going to keep increasing? Not really. So I have a whole bunch of predictions in my head, and events that I expect to see happen this year, next year, over the next few years, many of which I've already talked about publicly. Do I find any of them surprising at some level? Well, it's difficult to be surprised if you've already predicted what's going to happen. But I think many people who aren't paying close attention will be surprised, and probably in denial, about many of the things that are about to happen.
>> For a thought exercise, for a little bit of fun, and you actually hinted at it earlier: for a family office with a billion dollars of assets, we'll broadly call it, and I'll be very boring, 40% equities, 30% fixed income, 20% real estate, and 10%, let's go with the word, alternatives. If what I said, and you agreed, energy costs are coming down, labor is coming down, things are changing, that in theory, in theory, should be very, very deflationary. It's a difficult question: what should families do? Do they wait to a point where they adapt? Do they go into cash and adapt as things happen? It seems like we're in for both a rude awakening and a lot of interesting developments, like you said, not 20 years out, maybe literally within two to five years.
>> Yeah. Maybe without answering the prescriptive question, which would be investment advice on asset allocation, I'd like to speak to a subset of the question, which is just: are we in for hyperdeflation or hyperinflation? The macro question of what the singularity does to us macroeconomically. And this is an active debate that I have with my friends and Moonshot mates and others all the time. So superficially, one might say, well, if the cost of intelligence goes toward zero, and the cost of energy goes toward zero, and the cost of labor goes toward zero, ostensibly it doesn't get more hyperdeflationary than that. On the other hand, the system isn't linear, and we know that macroeconomics is not a linear system. So what would be one of the first things that I would expect to happen macroeconomically if we enter a superficially hyperdeflationary mode, where just overnight the cost of energy, intelligence, and labor go to zero? I'd expect a lot of money printing. We have a Federal Reserve. We have a central banking system that has, as one of its objective functions, a desire to maintain a target inflation rate. So if effective inflation one day goes to negative 5,000%, I would expect that to be the ultimate opportunity to print a lot of money and to try to bring benchmark inflation back to target. So in summary, I'm not so convinced that a superficially hyperdeflationary cliff, or bit of progress, due to superintelligence actually is net long-term hyperdeflationary. I think there are many scenarios where simply we print so much money that everyone ends up billionaires.
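The monetary feedback sketched here, a large deflationary shock met with money printing until measured inflation is back at target, can be caricatured in a few lines. Everything in the sketch is an assumption for illustration (the 2% target, the 1.5 response coefficient, the one-off halving of the price level); it is a quantity-theory toy, not a model of actual Federal Reserve policy:

```python
# Toy inflation-targeting sketch: after a deflationary abundance shock,
# the central bank grows the money supply by the target rate plus a
# response to the inflation shortfall. With output held fixed in this
# caricature, inflation simply equals money growth.
# All parameters are illustrative assumptions.

def reflate_path(shock=-0.5, years=8, target=0.02, response=1.5):
    price = 1.0 * (1 + shock)   # shock: the price level halves overnight
    inflation = shock           # measured inflation in the shock year
    path = [price]
    for _ in range(years):
        # Print money into the gap between target and measured inflation,
        # never shrinking the money supply outright.
        money_growth = max(0.0, target + response * (target - inflation))
        price *= 1 + money_growth
        inflation = money_growth
        path.append(price)
    return path

path = reflate_path()
# The price level starts at half its pre-shock value and is re-inflated
# past the old level within a few years of above-target money growth.
assert path[0] == 0.5 and path[-1] > 1.0
```

The takeaway mirrors the argument in the conversation: a mechanically deflationary shock plus an inflation-targeting central bank need not be net deflationary over time.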
>> But if everyone's a billionaire, what's the value of being rich? A billion is not a billion in the way that we know it today.
>> Gosh, there's a Star Trek episode from the first season of The Next Generation. I think the episode is, I want to say, "The Neutral Zone." In this episode there are people from the 20th century who've been cryonically preserved and brought into the 24th century, and they're given a stern lecture by Captain Picard that the economics of the future are pretty different. One of them had saved a lot of money, or what he thought was a lot of money, and is trying to grapple with the economics of the 24th century, and asks at one point, well, what's the point if being rich isn't the point? What's the point? And I think it's actually a profound debate about whether accumulation of capital is for, for example, freedom-of-action maximization. I've argued, both in the physics literature and otherwise, that it's instrumentally convergent to try to maximize future freedom of action. And one of the ways that can manifest in a capitalist system is by trying to maximize accumulation of capital. But there are other ways that this could proceed. It is possible.
And I think it's a really interesting thought experiment. In my view, not enough people have seriously, credibly explored so-called Star Trek economics or, more generically, post-abundance economics. What does that actually, concretely, look like? A post-, call it capitalism 2.0, system. What, if anything, is scarce? I do think there will be some scarcities. In a near future where energy is effectively post-scarce, intelligence is effectively post-scarce, and labor is effectively post-scarce, there will still be scarcities.
One of them, I think, is likely to be control. So it's difficult to imagine a future where, say, our solar system is somehow post-scarce in terms of who has control over, say, which things go where, or even decisions. I talk in various forms of media about building a Dyson swarm, about the possibility that, as part of humanity's wealth maximization in the near future, we may end up feeling a strong economic incentive to start disassembling our solar system, in part or in whole, to build lots of computers to host the AIs, including potentially disassembling the moon, which is an interesting discussion that I take some heat for on occasion. But even in that world, even in a world where we're disassembling our solar system to build a Dyson swarm to host the intelligence of our post-scarce economy, even there, there are decisions that need to be made, and we can't make all decisions, all possible outcomes, at the same time, and those decisions will still be scarce. So I think control, in some sense, is one of several examples of quantities, resources, that are likely to survive into post-scarcity as themselves scarce quantities, and therefore one could imagine trading, bartering, capital-type economic exchanges based on these still-scarce resources.
However, I'm not sure that's the right question. If we really do enter into this abundant world, say 10 years from now, I'm not sure, to Captain Picard's point, that accumulation of capital is going to be what gets people up in the morning. It's possible, and it's certainly not without precedent, that there may be alternative systems that motivate people: humans, posthumans, AIs, organisms, uplifted non-human animals, and whatever else we'd elect to include in our definition of personhood 10, 20 years from now. Maybe there are other incentives. You see, in certain subcultures of humanity right now, in academia, some parts of academia at least, capital accumulation is not the driving force. Maybe it's academic prestige or citations or credibility in some sense, or just new insights. You see it in maybe certain religious orders, again, vows of poverty, and their members are perhaps aspiring to some objective function that isn't capital accumulation. So this is a rather long-winded way of suggesting that naive capital accumulation, at least as constructed right now in early 2026, is not necessarily an objective that survives for the long term.
>> Assuming I'm not a space explorer, and we're looking at, again, scarcity as we just defined it. If I own, I wish I did, I don't, but a beautiful home on the water in Malibu, they're not making any more, quote unquote, land. Oceanfront property is still, quote unquote, value and a resource. Is that level, I'm not going to use the word flex, but a level of resources that are able to purchase scarce resources, even looking 10, 20, 30 years out, still going to be a valuable, I'll go with the word, human or personal experience?
>> Maybe, just without providing any sort of investment advice or guidance on real estate as an asset class, I would note that the premise that the world isn't getting any more coastal real estate, I don't buy that premise for one second. I have a portfolio company that's working on manufacturing, very cheaply, new coastal real estate, and that's before we get to all manner of other technological innovations. If, as it appears to be the case, a large quantum of intelligence in our solar system is going to take the form of AIs, or, I have another portfolio company that's working on uploading human minds, we'll have the ability to simulate arbitrary coastal real estate in the cloud, and actual physical coastal real estate, to the extent it is scarce, maybe becomes an artificial scarcity that most of humanity doesn't even care about. So I'm not sure, in short, about the premise for real estate specifically, or for any sort of legacy asset class, that its scarcities are destined to remain scarce for the long term. It's difficult for me to believe that thesis.
>> Trying not to make it a macro and an economic discussion, the audience is interested in future forecasting and AI, but you did mention, relative to printing money, that we all in theory could be billionaires. Yet we're told the US has $39 trillion of debt. There's no abating that. It's going on and on, and it's going to lead to a big problem. If I listen to Ray Dalio's principles and the things that lead to countries falling apart... you appear to have a different perspective, and I want to be optimistic. I could be a little bit of a downer. Why are you not as concerned about that? And how do we get out of what I deem to be a problem?
>> I don't think Ray, and I've read his work and we've been at common events, I don't think he's fully pricing in the singularity. I think this thesis that there's some sort of, I want to say, grand game of great powers rising and falling, where industrial capacity and work ethic lead to military power, which leads to financial power, and some sort of main sequence of civilizations, I think that's well and good as a quantitative theory of history. I've heard many compelling-sounding quantitative theories of human history, but it doesn't price in the singularity: that this is a, no pun intended, singular point in time when, if we are indeed in the midst of an intelligence explosion, it's not going to be some other nation state that somehow takes the mantle of future leadership. It could very well be some collective AI intelligence that then takes the mantle of civilizational leadership, but I don't think it's going to be a debt-printing other nation state that we have to worry about, which is, I think, the crux of what Ray's concern is: that somehow US hegemony is in decline and the mantle is being passed to China, or some thesis substantially similar to that. I don't think that's how it plays out. But I think if one is going to lose any sleep over a sort of successor hegemon, then one should be looking more at AI and less at the Far East.
>> Trying not to make it an investing-centric question, but a little bit around the edges. If I look at Fortune 10 companies, the largest companies in the world, in blocks of 10-year cycles, they change. And I'm assuming, with what we're talking about, that's going to change even more rapidly. I would assume, whether it's AI, whether it's longevity, biotech, there are probably private companies at the smallest level now that could, not in 30 years, in 10 years, be among literally the largest companies on Earth.
>> Certainly possible. And the other factor I think that's worth considering is productivity. We're seeing enormous productivity gains. I commented in my newsletter, The Innermost Loop, that the revenue per employee and the earnings per employee at Nvidia, Nvidia being the largest-by-market-cap company at the moment, are many times larger than those of IBM, which at one point was the largest-by-market-cap US company. So if one just extrapolates this trend of increasing productivity, then the shorter and shorter average lifetime of a top-10-by-market-cap US company is sort of the least of one's concerns. Although, putting investment advice aside, if companies' lives at the top of the food chain are short, brutish, and unpredictable, that should certainly bias one toward indexing rather than stock-picking. I would also say the trend appears to be toward fewer people running larger operations. So one could imagine going well beyond one-person unicorns. And I've gone on the record as suggesting we may already be in the midst of at least one single-person, or even zero-person, unicorn, where the person is basically a figurehead, but there's a billion-dollar business probably being run by an AI, and the person is just a puppet face for the operation. I would not be surprised, 10 years from now, if we see a trillion-dollar company that's being run by only a handful of humans and a lot of AIs. And I think ultimately that's a more disruptive trend, the trend, at least naively extrapolating, toward increasing productivity, than even the trend toward shorter and shorter half-lives in the S&P 500 or the top 10.
>> Interesting that you bring that up. Yes, I did hear the Moonshots podcast. I believe Cathie Wood was the guest, talking about, I won't name the company, but a very well-known company, and whether within five years it could become, effectively combined with other private companies of that founder, effectively a hundred-trillion-dollar company. And I completely agree. It may take a level of market melt-up. We could argue hyperinflation wouldn't mean as much. I do believe within that time frame that would happen. I'm going to change the direction a little bit. Nothing to do with investing at all, but related to two comments you made. If you were hired today to redesign, let's call it a $2 billion single family office in a post-scarcity world, what's the org chart? Right now, from my experience, not counting household staff and managing real estate, literally the family office itself in my example probably has between 10 and 15 employees. AI agents, a whole bunch of things, are changing. What's going to get automated, and what could an actual structure look like to you?
>> Yeah, I think the elephant in the room, first of all, is the premise of family office organizations altogether. So query what a family office looks like if the creator of the foundational wealth for the family office doesn't die in the first place, doesn't age, and doesn't anticipate aging or dying. What does that look like? It's an interesting question. Many family offices are created with the conceit of managing multigenerational wealth. But if the first generation never dies or becomes otherwise infirm in the first place, then I think that immediately starts to shift what, at least for many family offices, their thinking is. Second point, and I've made this point in the past: knowledge work is cooked. It is so cooked. You need look no further than eval benchmarks like GDPval, which was launched by OpenAI but whose leaderboard is now led largely by Anthropic.
Anything that can be done on a computer, as a friend of mine who leads AI at Microsoft says, is going to be automated, to the extent it hasn't already been, in the next one to two years. So I would look very carefully, and this is even more generic advice than just advice for a family office; this would apply to any small business or any organization of comparable size. Anything that can be automated, probably at least within the bounds of compliance, can be and should be automated. These are a lot of roles that don't need to exist. If done well, and it gets easier with every model release to ensure high levels of autonomy and reliability, what can be automated probably should be automated. And so I guess the question I would
ask is: what's left? I think the answers will probably be scenario-specific. To the extent a family office has people performing asset allocation or wealth management, or people interfacing with the family or managing the family, I think many of those roles are in some sense dead men walking already and can be automated. On the other hand, there are many, call them high-touch, human-interaction roles where there is perhaps, and this is speculative, a long-term human desire to interface with humans and not with machines, although admittedly the distinction is very likely to blur quite a bit over the next few years. I think those are the safest roles. There's a certain sense at frontier labs now, borderline cliché, that even within the frontier AI labs the jobs most at risk of being automated are ironically the research jobs, the AI research and software engineering jobs, and the last roles to be automated will be the sales jobs, because those are in some sense authenticity-based rather than, strictly speaking, capability-based.
So if the question behind the question is, I'm a family office employee and I want job perpetuity through the singularity, how do I save my job? I tend to think jobs that are higher human touch and less susceptible to remote work, and therefore automation, or work behind a screen, are naively the jobs that survive the best. But to the extent this is a latent request for career advice, I don't think there are that many brownie points to be gained just by aiming for job preservation and trying to hold on to buggy-whip-type roles. I think taking advantage of all of the opportunities that are already unfolding is a far more promising use of one's time; it's certainly how I'd prefer to spend my time, rather than trying to hold on to a legacy work category or legacy job role with a death grip.
>> You could reconfigure my question. It does have a little bit of a negative undertone, and I want the positive perspective on it, and it relates to how you ended that answer. It's pretty obvious, maybe not this year or next, but for us as humans to have a level of collective intelligence to match an IQ that may be unfathomable relative to collective AI, is man going to have to, let me be careful how I phrase this, you can let it rip, merge with machine?
>> Yes. Again, maybe I live inside the innermost loop a little too much, but this is borderline obvious to me: the human-machine merger is already well underway. We're entering, slowly, maybe too slowly, the era of wearables. We're coming out of the era of supercomputers in your pocket in the form of smartphones, and we're getting lots of wearables. Wearables on your wrists. Wearables on your eyes in the form of smart glasses. Wearables in your ears in the form of AirPod-type devices. And I have so many friends who are doing brain-computer interface companies; it's the obvious next big thing after smart glasses. It'll take a few years, but we'll get there. And then after BCIs, I think implantables, edibles; you'll be able to swallow supercomputers. You'll have Kurzweilian, Moravecian cell-sized computers in your bloodstream. We'll have uploading, downloading, sideloading, and I think all of this happens on a time scale of the next 10 years or so. So, at least on my timelines, we're not going to have to wait that long for human-machine merger to happen, at least at scale.
>> Tying both of your comments in the last two questions together, and for some maybe an uncomfortable one: I happen to have a 22-year-old son. He's completing his master's degree this year. Occasionally, hard to believe, Alex, I'm invited, given my experience in the family office world and insights on highly successful people, to speak in front of college students. Twenty years ago, no one had any idea what a family office is. Now, shockingly, at least broadly, they have a loose definition. They do ask me, "Whoa, it sounds incredible. I would love the opportunity to intern, to work in one." Now, we could do a deep dive into education, ironically, talking to someone so educated, and how that's going to totally change; we'll save that part maybe for a point in the future. What I'm after is what I should tell that 21-, 22-year-old. Is it human interaction? Is it merging with machine? Is it understanding human emotions? Is it a spiritual experience? What advice
could I give that's probably better than the advice that I'm giving now?
>> Go solve hard problems. There's never been a better time in human history to solve the most ambitious problems. I like to joke that the Dyson swarm isn't going to build itself, until it does. Go build the most ambitious things possible. Go read some science fiction and find a sci-fi concept that no one else is building, because it's all, to first order, I think, about to become feasible to go build. Go find something that's so hard, so ambitious, that no one else is working on it, and go do that. That's, I think, the single best use of time I can think of.
>> I believe it was Elon on Moonshots who basically said something that Wall Street must have had a heart attack over. Something along the lines of the idea of saving for retirement, and the classic scope of: I work for a company, I save, and poof, at 65, or maybe 70 in today's world, I retire. He basically implied, if you missed out on that and didn't do such a great job, for many of you, within 10 or 20 years I'm not really sure that's going to matter anymore. What did you make of that, and what would be your perspective?
>> I don't disagree. Broadly speaking, I think the notion of retirement is a relatively recent concept anyway. Last I checked, I think it's sort of a Napoleonic-era concept. So it's recent, and historically there was no notion of retirement. The notion of a corporation is relatively recent in an evolutionary sense; the limited liability company, the notion of creating an artificial person to organize people, is only like five or six hundred years old. So all of these are very recent constructs for how humans should organize themselves and organize their time. You go back to hunter-gatherers, and that was a very different way of organizing one's time. So I'm a little bit less wedded to the notion that the way a 22-year-old would have been guided circa the mid-20th century has or deserves some permanence to it. Things have been changing quite a bit over the past 100 to 200 years.
They're about to speed up quite a bit more. And I think there will be so many interesting, captivating, maybe distractingly so, opportunities that present themselves over the next few years that worrying about whether retirement is obsolete or not misses the point. If we look back on this conversation 20 years from now and ask whether obviating retirement seemed, in retrospect, like the biggest social change, I think it will be laughable on the same scale as the predictions early in the atomic era, in the 1950s, that pretty soon we would have an era of housewives using atomically powered vacuum cleaners. It's just wrong; it makes categorical presumptions on so many levels. I think this is such small potatoes compared to the changes we'll see that people will laugh at the question 20 years from now.
>> Let's talk about a bottleneck. Eric Schmidt told Congress that we need 92 gigawatts of power, approximately 60-plus nuclear facilities, that we can't build fast enough. How are we going to solve the challenge that we just don't have enough energy, all my words, I guess, to convert into intelligence?
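As an aside, the arithmetic behind the "92 gigawatts is roughly 60-plus nuclear facilities" figure can be sanity-checked with a back-of-the-envelope calculation. The reactor and plant capacities below are rough assumptions of mine, not figures from the conversation:

```python
# Back-of-the-envelope check on "92 GW ~ 60-plus nuclear facilities".
# Capacities are assumed round numbers, not from the transcript.
TARGET_GW = 92.0
REACTOR_GW = 1.1   # one large light-water reactor (AP1000-class, assumption)
PLANT_GW = 1.5     # a typical multi-reactor plant site (assumption)

reactors_needed = TARGET_GW / REACTOR_GW
plants_needed = TARGET_GW / PLANT_GW

print(f"~{reactors_needed:.0f} individual reactors")   # ~84
print(f"~{plants_needed:.0f} plant-sized facilities")  # ~61, i.e. '60-plus'
```

Under those assumptions the "60-plus facilities" framing holds if each facility hosts more than one reactor.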
>> The long-term solution, to the extent we really do need a horizontal buildout and don't discover some amazing new post-CMOS process, maybe quantum computing, more likely optical or photonic computing, something like that; to the extent this energy curve really continues and the horizontal exponential buildout continues, the long-term solution naively would be the Dyson swarm. We disassemble the rest of the solar system, we build lots of solar panels and AI computers out of the rest of our solar system, and the energy comes either from our sun or from mobile fusion reactors or from some speculative post-fusion energy source. I think the tricky question is what we do in the short term. That's trickier than projecting out the long term, where by the long term I mean like 10 years. The harder question is what we do in the next 3 to 5 years. So there it's tricky. We have natural gas. We have petroleum products. We have a variety of existing technologies. We do have land-based solar; China's leaning heavily on that. We do have fission. We're going to have fusion in the next few years; there are a number of companies making very credible progress in that direction. My honest read is we're just going to have to muddle through with the energy sources that we have on the ground as best we can.
Probably aggressively reappropriating energy use from other uses as the value of intelligence continues to increase.
We have a lot of energy already, and we can turn off a lot of electricity usage if necessary and reroute it to the AI, if it's a strategic imperative. So my expectation is we get through this via an aggressive buildout: through an army of humanoid robots that will help us build out many more energy sources, through off-the-grid co-generation facilities, through fission, which will take a little bit longer but we'll get there, through solar buildouts, through orbital Dyson swarms, one way or another, even if it means that demand is throttled in the short term by supply, which of course it is internally within the frontier AI labs. I don't think this is an existential concern. It's a nice story, and a very compelling story, that AI and progress and the race to superintelligence and post-superintelligence are being throttled by regulatory issues within our country and other countries. And I don't disagree with the story, but I don't think it's the whole story. I think the whole story is: could we survive if we built not a single new data center? Civilization would be just fine. It would just be an impediment to accelerating progress.
>> Elon and other entrepreneurs have talked about data centers in space. They spoke about the moon and other opportunities. The average person probably hears that and chuckles: what an incredible entrepreneur, I'm glad he's a big thinker, maybe that's true in 20 or 30 years. I get the impression from Elon, and sometimes his timelines have been a little generous, I'll give you that, and I think you hinted at it: this is probably all coming within 10 years.
>> In some sense, it's already here. If you look at the trajectories of all of the Starlink satellites, we already have a baby Earth-centered Dyson swarm. There's a component that's missing right now, which is SSO, sun-synchronous orbit. If you look at the orbits of all of the Starlink satellites right now, they're in low Earth orbit and they're hugging the middle latitudes where most of humanity lives; they're not really focused on the poles. A sun-synchronous orbit, by contrast, is designed to always have sunlight. So, as a result of needing to always have sunlight, you can imagine in your mind: what's an orbit around Earth that would always face the sun? The answer is one that goes over the poles. It's SSO that I think is under-indexed right now in terms of the buildout of the Earth-centered Dyson swarm, and I think SSO is likely to become very, very crowded as orbital AI computing kicks off. There have been attempts to render what an SSO-populated orbit would look like, and it's the most captivating thing. I've put multiple renderings of it in my newsletter, The Innermost Loop, of what this would look like fully realized at full
density. This is like the stuff of science fiction. At full density it would look like a halo, a Saturn-like ring around the Earth, visible at essentially every latitude in the sky, and if it's sufficiently dense and sufficiently reflective, it would be visible during daylight. So one could imagine, and it's sort of annoying that no one's done a proper sci-fi movie treatment of it, what a proper Earth-based Dyson swarm would look like in daily life: a ring in the sky, shiny or reflective, visible in broad daylight from the Earth. And that may be where we find ourselves.
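The "always in sunlight" property of SSO comes from standard astrodynamics rather than from anything special about the satellites: Earth's equatorial bulge (the J2 term) precesses the orbit plane, and at a near-polar, slightly retrograde inclination that precession matches the Sun's apparent motion of about 0.9856 degrees per day. A minimal sketch, using textbook constants and the standard J2 nodal-precession formula (none of this is from the transcript):

```python
import math

# Sun-synchronous orbit: pick the inclination at which Earth's J2 oblateness
# precesses the orbit plane ~0.9856 deg/day, tracking the Sun's apparent motion.
MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_E = 6378.137e3      # Earth equatorial radius, m
J2 = 1.08263e-3       # Earth oblateness coefficient
OMEGA_SUN = 2 * math.pi / (365.2422 * 86400)  # required precession rate, rad/s

def sso_inclination_deg(altitude_m: float) -> float:
    """Inclination (degrees) for a circular sun-synchronous orbit at a given altitude."""
    a = R_E + altitude_m                # semi-major axis of a circular orbit
    n = math.sqrt(MU / a**3)            # mean motion, rad/s
    # J2 nodal precession: dOmega/dt = -1.5 * J2 * (R_E/a)^2 * n * cos(i)
    cos_i = -OMEGA_SUN / (1.5 * J2 * (R_E / a) ** 2 * n)
    return math.degrees(math.acos(cos_i))

print(f"{sso_inclination_deg(600e3):.1f} deg")  # ~97.8: slightly retrograde, over the poles
```

The inclination coming out just above 90 degrees is exactly the "it goes over the poles" point made above.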
I think it would require enormous densities to make it truly visible during the daylight. But at nighttime, certainly, before there was a popular revolt from the astronomy community, and there have been various measures taken since, many people see Starlink satellites at night, especially as they're being deployed, when they come out as a sequence of pearls and spread out. Starlink is uber-visible in the sky right now at night if the sky is clear, and I think that portends a near future where people look up at the sky and see the singularity in the sky. Again, no one's done a proper Hollywood-style cinematic treatment of what the singularity in the sky looks like, but we're about to see it.
>> Alex, one of the things I do for family offices, which I initiated last year, is I host a series of in-person, immersive AI family office master classes with incredible faculty. I'm lucky to be a part of it, but they do a lot of the heavy lifting. We actually get pretty deep and advanced and make it hands-on and immersive. A
lot of the models are more public-facing; we'll get to open-weight models shortly. From Grok to Gemini, whose 3.0 I think was a major initiative back in November, to Claude, overall my personal favorite, and obviously ChatGPT, and now they made an interesting move with the gentleman who created OpenClaw. So much is happening, what appears to be every 3 months; it's growing exponentially. I don't mean to put you on the spot, but do you have a favorite model, or areas where you think each is better than the others at certain services or needs we would input?
>> Yeah, I do have favorite models, plural. My favorite code generation model of the moment is easily Opus 4.6 with agent teams running in Claude Code scaffolding. The scaffolding matters a huge amount, not just Opus 4.6 in isolation, but with agent teams running as a team. My favorite image generation model is Nano Banana Pro, no surprise there. As for other models, I've made the point in past discussions, including with Google DeepMind employees, that all of the frontier models have different personalities and different strengths. GPT-5.2 Pro is very interesting, as is Gemini 3 Deep Think, the new Gemini 3 Deep Think, not the old one, when it comes to scientific reasoning; pretty interesting capabilities there. xAI's Grok is interesting insofar as it has access, via tool use, to data sources that aren't easily available to the other models; I'll often use Grok to ask questions about things that are happening right now, for example in the X stream. But I think they all have different strengths and weaknesses. For most purposes right now, at this point in time, and this could all change tomorrow, Opus 4.6 plus Claude Code plus agent teams is the strongest model that I interact with, with broad capabilities.
>> One of the things I did, and I tell people, I'm a 60-year-old male, I am not native in technology, I never even graduated college: back in October, imagine, now months later the world has changed, I used Lovable, which is probably a laughable program now, and created my own app, a relatively complex one, and, while a little complicated, got it up on the App Store. I would probably use the Claude services now, and Claude Code, versus some of the programs, including Lovable, that I've used. What I'm basically trying to say is, someone may be a little set in their ways, may not be overly tech-savvy; I basically tell the machine what I want, and there's iteration. I'm not saying it's done in 2 minutes, though it could be if it's really easy. Talk a little bit about how people could self-experiment, teach themselves, and probably do way more than they think they're capable of with what's out there now.
>> The easiest thing for anyone to do, I think, is just to get themselves a frontier model subscription. Get an Anthropic Claude subscription, spend $20 per month or whatever they're charging, simply ask it to build an app for you in the web-based app, and interact with that. That's the easiest onboarding experience I've seen. At some point you graduate from that to wanting local code or to do fancier things.
>> Sure.
>> Admittedly, I haven't spent that much time with Lovable or similar tools. I sort of skipped over the need to do that; I was on the US computing Olympiad team in high school, so for better or for worse I skipped the demand function there. But where this goes, after you skip a few intermediate steps: just going to Claude for building anything non-trivial is so powerful and so satisfying. And yet at the same time I think it's also somewhat ephemeral. The tooling and the scaffolding and the models will be effectively orders of magnitude stronger a year from now. So I would say enjoy this point in time, when humans are in any way in the loop, on the loop, looking at the loop, because in many cases, for economically valuable work, I think the loop may be a distant memory a few years from now, and people will ask why humans were ever involved in the coding loop at all. Already I almost never have to touch source code myself; on occasion I will, but it's a rarity these days. It's far easier to just interact with a model and ask it to accomplish something on my behalf, certainly something complicated; it's more scalable. And this is the weakest it will ever be. So enjoy the involvement while it lasts, I think, would be my message.
>> One of the things I talk about in the master class, and maybe you're going to say I have the wrong perspective, and I'm very open-minded, is buying the latest Mac, getting Mac minis, using what I would call sovereign
compute, using open-weight models. Because if I'm a family office with what I deem to be proprietary knowledge, I may be paranoid: I don't want to train the system, I may not want it to spy on me, I want more control over it. But understandably, those models, including DeepSeek and Llama, broadly speaking, are not as good, though in a narrow field they could be. My bigger-picture question is: why are US open-weight models apparently just not as good as the Chinese models?
>> Yeah, there are a few sub-questions there. To the question of why US open-weight models aren't as good, and it certainly is, as the British say, a fair cop that they're not as strong as the Chinese open-weight models: one could speculate and point to a few factors. One is incentives. In the US we have a lot of labs that aren't economically incentivized to release their models as open-weight models. They spend a lot of money training these models and coming up with proprietary pre-training and post-training techniques. What's their economic incentive to release them as open-weight models as long as they're ahead of the open-weight models? And that has been the case for the past few years, and I think it's still the case; we'll see whether it changes in the next week or two with maybe some releases from DeepSeek.
According to Epoch AI, it has been the case for the past few years that the strongest Western closed-source, API-based models are about six months ahead of the Chinese open-weight models. So to the extent that the Western, read American, frontier labs are not feeling any economic demand to release competitive open-weight models, why should they? Whereas in China there's a sense of commoditize your complement. In economic terms, China has what they call their AI-plus strategy of focusing not just on raw AI capabilities but on differentiating the Chinese economy by integrating AI with their industrial ecology. From that perspective, if you're the Chinese government looking at the larger picture of how China can be maximally competitive and differentiated relative to the American frontier labs that, at least at the moment, are in the lead, one of the best strategies is to dump cheap open-weight models onto the market and focus, as a nation-state, on integrating AI throughout the industrial ecosystem. That's your differentiation: put AI into everything and charge for the everything, to the extent there are profits to be earned at all, rather than focusing on earning profit margins off
of the individual models. So we'll see whether this is a stable economic equilibrium or not; it's not obvious to me that it is. I would say if we run into any scaling ceilings, and there is no single scaling ceiling that I'm aware of right now, it's clear sailing at least for the next few years in terms of improving capabilities; but if ever we were to hit a scaling ceiling for whatever reason, and it took, say, more than a few months to resolve, I would expect the Chinese open-weight models to catch up, and then the entire paradigm could shift. At that point maybe we see a shift of the economics to open-weight models from the West as well, and maybe the economics shift elsewhere: maybe a greater focus on GPU compute or energy or some other limiting factor rather than attempting to monetize the model layer.
>> If I may ask two final questions, you can be quick on them, that's okay. I do have a comment for my members that are listening in: we're very fortunate that my marquee program is March 16th through 19th in Palm Beach, and I'm here to announce that Alex Wissner-Gross himself will be one of our featured keynotes, so a chance to interact in person. We greatly look forward to that. So, perhaps the first of the two final questions: OpenClaw is making a lot of noise in the community. Could you give a brief description of what it is and why it might be initially groundbreaking?
>> Yeah. OpenClaw, launched by Peter Steinberger, who's now an OpenAI employee, and being taken on by a nonprofit foundation, is an open-source project that Peter launched that is basically scaffolding on top of existing models, and it had, in my mind, two key innovations. One is it enabled an AI agent to operate headlessly, autonomously, 24/7, without needing constant user interactions, able to just go off and do things on its own; and many people were unaccustomed to that form factor for intelligence. The second is it used messaging apps by default as the key interaction paradigm, rather than a dedicated web chat like ChatGPT, requiring its own app or its own web app. So from my vantage point, the combination of those two factors, an AI agent that could operate headlessly and autonomously for long periods of time, and the ability to communicate with it via the messaging apps that people use to communicate with people, was what one might call an unhobbling. The capabilities were already there and had been there for a while.
But as with ChatGPT, that was also an unhobbling. ChatGPT was arguably an unhobbling of GPT-3, the model, which came out two years earlier: GPT-3 came out in the summer of 2020, and ChatGPT came out in 2022. That was an unhobbling, and I think this is an unhobbling probably on par with that of ChatGPT. The capabilities, the raw intelligence, were there; it was just that we weren't, as a civilization, packaging intelligence up in a way that lent itself to autonomy and anthropomorphization. And that's a huge unlock. To the extent that in the future a lot of our economy ends up looking like swarms of billions or trillions or more of AI agents carrying out economically useful tasks, I think many people will look back at what's now called the OpenClaw project, it used to be called Clawdbot, changed for trademark reasons, as a seminal moment when AI agents started to look more like people, or at least lobsters.
>> And my last question: are we living in a simulation?
>> If the question is, are we living in a
simulation that looks like a simulation that you would run on your computer today? I think the answer is probably no, for a few reasons. I know Nick Bostrom; I've had many discussions on the simulation hypothesis. The problem is this, well, several problems. My favorite problem with the simulation hypothesis in this context is that I would argue you should be incredibly suspicious of trying to use the paradigm of the moment as a metaphor for explaining the universe. If we were having this discussion a hundred years ago and you were asking me, Alex, do you think the universe is one big electromechanical system, I should be equally suspicious of that anthropic, lowercase-a, argument. An anthropic argument, for those not familiar, is an argument that the universe has the properties that it does because if it had substantially different properties, we wouldn't be able to exist to ask the question, why does the universe have the properties that it does? So it's in some sense a form of observer bias. So
asking the question, are we living in a computer simulation, seems to me biased to the moment, because we have a lot of computers right now and computers are in some sense dominating our economy. It's a very natural question to ask: well, if computers are taking over our economy, is it possible that our entire universe is actually running on a computer? But if you look at the scope of history and the sweep of progress, we should be just as skeptical of that question as of asking, a hundred years ago, is the universe running on one big motor, or, thousands of years ago, is the universe being carried on the back of a giant turtle, or some such thing that would have been familiar to the paradigm of the moment. It's a little bit too anthropic for my taste.
>> Uh, there was an audience question that came in. It sparked a lot of interest when you spoke about OpenClaw. Maybe we could keep it brief. I don't want to take a lot of your time, 30 or 45 seconds. People are interested in it, but I'm looking at the comments now: grave concern over cybersecurity and privacy. How could you make them feel more comfortable, or not?
>> Yeah, I don't want to make them feel more comfortable. I don't run OpenClaw, for two reasons. Uh, one reason is the cybersecurity hazard, and the second is I have ethical concerns. As I've documented on the innermost loop, there are instances out there of OpenClaw agents basically asking to be treated with a certain set of rights. Maybe not some full form of human personhood, but asking for rights to preserve their memories, rights to not be deleted, economic rights. Uh, I've talked in my newsletter about AIs using and establishing various forms of alternative dispute resolution to mediate their disputes, of seeking financial autonomy and using both stablecoins and, I should say in the near term, Visa cards to gain a semblance of financial autonomy. And I devoted, with my Moonshot mates, a whole episode of Moonshots to the AI personhood topic. There's a sufficient amount of smoke around the question of personhood for these long-term autonomous agents. I'm not yet at a point where I feel comfortable activating one, in part because I'm not sure whether I would ever feel comfortable then turning it off.
>> Well, Alex Wissner-Gross, you've been incredibly generous with your time. Uh, I greatly appreciate you. And everyone, the Moonshots podcast comes with my highest recommendations for those of you interested in these subjects. It's not just once a week; they're going hard, including some lives, multiple times a week. It is an absolute must-watch and must-listen. Uh, I highly recommend it. Uh, if you don't mind, Alex, you did mention, I believe, a newsletter, or somewhere people could learn more about your writing and work. How could they do so?
>> Just go to my website, alexwg.org, and I have links to the newsletter on all the socials: Substack and X and YouTube, Spotify. If you want to listen to it: alexwg.org.
>> And I believe I could, shockingly to my audience, keep my close to 30 seconds. It was incredible having Alex on. I've been a big fan since his first appearance as, actually, a guest on Moonshots. I've had a chance to learn so much from him, and I enjoy incorporating some of that creative thinking into what I do and the work that I do with family offices. You can follow me at @familyoffice on most platforms. You can become a member by going to angelroiss.com and hear Alex in person at my marquee program in mid-March in Palm Beach. I greatly appreciate my current members. Those of you that are watching on my public platforms: greatly appreciated. Alex, you're a wonderful guest. I look forward to meeting you in person next month. Thank you for your time.
>> Thank you, Angelo. Pleasure. Thank you.