
Matt Fitzpatrick on Who Wins the Data Labelling Race & Why AI Needs Forward-Deployed Engineers?

By 20VC with Harry Stebbings

Summary

Topics Covered

  • Enterprise AI Adoption Lags Model Gains
  • External Builds Twice as Effective
  • $25M Agent Flop Reveals Eval Flaws
  • Prove It Free Before Paying
  • Human Feedback Trumps Synthetic Data

Full Transcript

MIT just released this report that 5% of gen AI deployments are working in any form. You've seen Gartner saying 40% of enterprise projects will likely be canceled by 2027. And I think the reason for that is externally driven builds are 2x as effective as internal team builds.

I don't think that discipline exists in the same way in internal builds. They spent 25 million bucks building an agent, and what ended up happening was, a couple months later, they shut it down and moved back to a deterministic flow. We don't actually sell anything. We meet a customer, we say we will do it for free for eight weeks and prove to you that it works. The minute you had to bring in FDEs in a SaaS context, your economics broke instantly, right?

>> Are there any other big misnomers that you think are pronounced in the industry?

>> Look, I think the biggest one is just the view that synthetic data will take over and you just will not need human feedback. It's interesting: from first principles, that actually doesn't make very much sense if you think it through. In the AI world, at least, strategy is a somewhat overrated concept. And what I mean by that is... Ready to go?

Matt, I am so excited for this, dude. I think Invisible is one of the most incredible, but also, I'm sorry to say this, under-discussed businesses when I look at the incredible achievements that you've had over the last few years. So, thank you so much for joining me.

>> Thank you for having me. I really enjoy the show.

>> Can you just talk to me about how does, like, a 10-year McKinsey, uh, stool warrior become CEO of, like, one of the fastest growing data companies in tech? How does that transition happen?

>> Yeah. So, um, I would say my McKinsey journey was nontraditional. Um, I spent 12 years there. I was a senior partner and I led a group called QuantumBlack Labs, which is the firm's global tech development group. So about 10 years ago, McKinsey actually started hiring engineers, and I was a big part of this in a pretty big quantum. When I started we had about a hundred engineers total in the firm; by the time I left we had 7,000. Uh, I oversaw about a fifth of that group, uh, and all the application development, all of the data warehouse infrastructure, and all of the, uh, gen AI builds globally. And so, uh, that journey was really interesting, and, you know, over the course of it I spent a variety of my time competing with other large enterprise, um, AI businesses. And I got to know the founder, Francis, really well, um, about three, four years ago now. Uh, we actually met totally not work-related, in a, uh, kind of a social context. It was basically a forum called Dialogue, I don't know if you've heard of it, but you basically talk about different ideas. We bonded over...

>> I keep getting invited to this. It's in, like, Hawaii, though.

>> It's in many different locations.

>> Far.

>> It's actually great. I really enjoy it because you actually don't talk about work at all. You're not allowed to talk about your job. You spend... I'm talking about history, politics, technology.

>> What does everyone from San Francisco do?

>> They just don't talk about it for two days, which is...

>> A silent retreat.

>> Exactly. Exactly. Uh, but I actually think it's one of the few, uh, events I've been to where people are not talking their own book. They're not trying to convince you of anything. And I've made a bunch of really good adult friendships out of that. And so Francis and I got to know each other from that, four years ago. Um, and there had been another, uh, CEO in the two years before I joined who was actually based in Australia, interestingly. And so when the business got to a certain scale, it was just time to have a US-based CEO that could help take the business to the next level. And, uh, you know, it was actually Francis who approached me and pretty directly said, "You want to be our CEO?" And that was kind of how it happened.

>> Was it a no-brainer?

>> Look, I think when you walk away from a really stable job that you really enjoy, that's always difficult. Uh, and I think that, um, you know, the sliver of McKinsey that I was doing I found to be one of the most intellectual day-to-day jobs ever. I was working with all the Fortune 1000 on every different AI topic daily. And particularly in the early machine learning days, kind of 10 years ago, I think we built some really interesting stuff. But, um, yeah, I think it was kind of a no-brainer in some ways, because when you think about it, I think this is the most interesting time to run a company on a topic that has probably existed in our lifetime. Maybe the 2000s, but to run a company in AI right now is fascinating. The rate at which you can build, the people you can recruit, the interest of customers in this topic. And so I felt like I'd spent 10 years learning one topic, and now I had a chance to run a business and build it the way I wanted to build it on that topic. And that's just something you can't pass up. And even though, you know, I walked away from, um, a fair amount, I think that, uh, I'm much more excited about building something for the next two decades out of this.

>> When we think about, like, decision-making frameworks, I always have one, which is, like, find someone who you respect and admire. So for me, it's Pat Grady, who's the head of Sequoia. I've known him for like 10 years. He's a great father, investor, and husband. Three things that I care about. And whenever I have a tough decision, I'm like, what would Pat do? And most of the time, I get to the answer by asking that question in that framework. If I were to ask you, what do you ask yourself? How do you find direction when struggling with a decision?

>> I'm not a particularly materialistic person. You know, I think when I was coming out of college, for example, everyone was focused on going into large finance jobs, which at that time were pre-financial crisis, obviously, where a lot of that was. And I think a lot of what I think about is doing work day-to-day that I really enjoy, with people I really enjoy, and then building something. And I do think I really enjoyed the decade I spent building at McKinsey. I think that was an incredibly interesting experience, to stand up something of that scale within an existing institution. Um, and then I do think about... you know, I read a ton about everything from military history to current entrepreneurs to, um, enterprise executives I really admire. And then I have a kind of small group of people whose opinions I ask pretty regularly. And, you know, I think probably the most telling piece of advice: um, my girlfriend and my main mentor, both of them, when I asked, within two minutes were like, absolutely do this. Um, my main mentor is a guy named Sesh Khana, who, um, had been a senior partner at McKinsey for a long time and is on the board of a whole variety of different companies today. And, um, I remember we got lunch. I walked him through the opportunity. I said, "Listen, it's a big risk." And he goes, "The only risk is if you don't take this, and the amount of regret you'll have not giving it a go."

>> I totally agree with that one. I was once given advice that whatever you think you should do, hold that close, and then let your girlfriend tell you what you should do.

>> And that's why you still have a relationship.

>> It's a great piece of advice. That was from someone who's been married for 40 years, and so it's worked well for him. Um, we were chatting before and I said, "Listen, where do we have to go?" And I always think that the best conversations are led by passion.

The first one that you said was there's a gap or a chasm between model performance and adoption. When we break that down, can you explain to me what you meant by that and how we see that in action?

>> Yeah. And let me set the context. I'll go into more detail later, but Invisible is an interesting business in that we both train all the large language models with reinforcement learning from human feedback, and we are, at the core, a modular, um, software platform where, in an enterprise context, we deploy all different enterprise use cases. And I think the cognitive dissonance that has occurred over the last couple years is that model performance has increased exponentially. I don't think anyone would doubt that. If you look at all the public benchmarks, models have increased 40 to 60% in performance over the last two years, and consumer adoption has also been exponential. So, you know, I think, uh, KPMG just released that 60% of consumers use gen AI weekly now, but the enterprise has not. You know, I think in the enterprise, uh, MIT just released this report that 5% of gen AI deployments are working in any form. You know, I think you've seen Gartner saying 40% of, um, enterprise projects will likely be canceled by 2027.

And I think the reason for that is deployment in the enterprise is a lot more than just the models themselves. It's the data infrastructure to support those models. It's the redesign of workflows. It's the, uh, process of figuring out which operational leader takes accountability for that. And most importantly, it's trust. It's observability. All the things that, you know, I spent a decade building, things like, you know, credit models in banking, and in those cases you need to go through model risk management, testing, training, validation. And so I think that whole process is in the first inning in the enterprise. I think it's going to take a decade, not two years. And I do think that is the core mission that we think a lot about. I actually think the evolution of, uh, kind of, deployment of AI will be: what the model builders have done for the last couple years, you'll see banks and healthcare firms start to do the same sort of testing and validation over this period, and then the rest of the enterprise will be over the next five, six years after that. And that's the journey that we're focused on.

>> So I was speaking at one of the largest banks in the world. It's an absolute joke that they get a university dropout like me to speak at their, like, largest retreats. I find it very fun. Um, but I left and I messaged the team and I just said, "Oh my god, they're toast." And they're toast because I talked about the amazing tool they should implement internally. And the CTO laughed at me. He was like, "Dude, there's no way that we can ever adopt your off-the-shelf, you know, search engine optimization for your LLM tool, because of data, because of security, because of permissions." And I was like, "Wow, everything that you just said there, I listened to." Yeah. But that was once you got in the door. Are enterprises even open for business? You see Goldman Sachs is developing a huge amount of their own tools. Are they open for AI business?

>> Yeah, it's a great question. Um, I think it depends a bit on the sector. I think there are sectors like banking that are very focused on building this internally. I think that is a reality.

>> Do you think that will work, the internal build, for them?

>> So it's interesting. If you look at the MIT report, uh, which is the one I mentioned that says 5% of models are making it to production right now, they actually cite a stat that externally driven builds are 2x as effective as internal team builds. I actually think there's an interesting kind of 10-year pattern on this, which is: 10 years ago, everyone bought software, right? Like, your tech team did not try and build anything, and you started to buy, and, you know, often you bought way too many apps, but you bought 15 different apps, and that was what the technology team did. And then I think with the advent of cloud, you started to have a world where the technology functions started to think about building things, like maybe they started to have some custom applications that wrapped around that. I think gen AI has 5x'd that, where now an internal team is given this enormous budget and told, kind of, go have at it. And I think that's complicated, because when you hire somebody to build, any vendor of any kind, you're pretty disciplined about what are you delivering, on what timeline, what's the ROI of it, what are the milestones. And I don't think that discipline exists in the same way in internal builds. I also think that the talent levels the internal teams often have are challenging, and so...

>> When you say the internal team builds are challenging, there are some things that you can't say, but I can: the perception from external or general kind of tech crowds is that internal teams at, I don't know, you name your boring large enterprise, are just really low quality. You're not getting the top-tier AI engineers, you're not getting top-tier devs. Is that true?

>> Look, I think the amount of talent that knows how to do this well is not large, right? And so that finite group mostly works in AI startups of various forms, right, and in large tech companies. And so I do think there's real risk to the process of figuring this out from first principles in enterprises, right? And I think that's part of the cycle that we're going through right now, is, uh, a lot of internal groups have gone through the process of saying we must do this all internally. But the reality is, if you think about it, this is an open-architecture ecosystem, and you're going to adopt things like MCP or, you know, all the new tech, the new voice agent that comes out. You actually want a modular open architecture where you can use all the best tech available and figure out how to link it together. And I think the desire to shape that all internally has been challenged. Like, I'll give you one of the more interesting examples I can discuss. I was talking to an e-commerce retailer that had built an agent to handle their returns process, and they spent 25 million bucks building this agent. And at the end of it, I said, well, how did you define... this was after I met them, after they built it, and I said, at the end of it, how did you define whether this agent worked or not? And they're like, well, we built our own eval tool, it's not a joke, uh, and we basically analyzed a mix of speed of call resolution and sentiment. The problem with that is, what if the agent hallucinates and says, "Here's $2 million"? That actually gets resolved quickly and the person's happy. And so they built this entire system from first principles, and what ended up happening was, a couple months later, they shut it down and moved back to a deterministic flow. And that's not surprising to me at all.
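To make that eval gap concrete, here is a minimal, hypothetical sketch of why an eval that only scores resolution speed and customer sentiment can reward a hallucinating returns agent. The field names, thresholds, and scoring functions are illustrative assumptions, not Invisible's or the retailer's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    resolution_seconds: float   # how fast the ticket was closed
    sentiment: float            # 0.0 (angry) .. 1.0 (delighted)
    refund_issued: float        # what the agent actually refunded
    refund_owed: float          # ground-truth amount per returns policy

def naive_score(i: Interaction) -> float:
    """Speed + sentiment only: the kind of eval described above."""
    speed = 1.0 if i.resolution_seconds < 120 else 0.0
    return 0.5 * speed + 0.5 * i.sentiment

def grounded_score(i: Interaction) -> float:
    """Same metrics, but gated on whether the refund matched policy."""
    correct = abs(i.refund_issued - i.refund_owed) < 1.00  # $1 tolerance
    return naive_score(i) if correct else 0.0

# A hallucinated $2M refund closes instantly and delights the customer...
bad = Interaction(resolution_seconds=30, sentiment=1.0,
                  refund_issued=2_000_000, refund_owed=49.99)
print(naive_score(bad))     # 1.0 -> looks like a perfect outcome
print(grounded_score(bad))  # 0.0 -> flagged once correctness is checked
```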

And so I do think that's a little bit of the adoption curve we're in. I think over the next two years you're going to see the CFO function put different guardrails on how this stuff is built and say: what is the ROI? What are you investing in? What's the metric? What's the return? And that will change the adoption curve. But right now there have been a lot of science projects. I think that is realistic.

>> Okay. You know, we have hundreds of thousands of listeners, and many of them are CEOs. If you are a CEO thinking about your CFO being equipped to buy and to manage in this new environment, what should they be thinking about? And do we have the right CFO talent pool to manage this new environment?

>> Yeah, so I think one misconception is that that leader has to be highly technical to make that decision, and I would actually argue they don't at all.

They just need the same muscle memory they've used in the past, which would be to go through it: what do you need to get a gen AI initiative working? You need good data that you can work off of for that specific initiative. Clear milestones and outputs, clear line ownership of the initiative. And then, probably most importantly, you want to actually anchor it in milestones and outcomes where you pay as it works.

So I think the other interesting context for a lot of this is what I would call the Accenture paradigm of the last 20 years, right? Which is, a lot of times, if you think about the wrapper that's been around software for the last 20 years... you know, our founder Francis Pedraza had the founding principle of Invisible: if there's an app for everything, how come nothing works? And it's an interesting concept, right? Because what ended up happening is you bought 50 apps, you had Accenture come in, and you paid them $200 million over two years to try and layer them all together. And often you ended up a couple years in with no working data, no linkages between them. And that kind of layers-of-sediment approach has been how the tech paradigm worked in the enterprise for the last five years.

And I think what's different now is, if you're thinking about a specific gen AI initiative, like a contact center, let's say, you don't need to operate that way. You can think about what the operational metrics are that you want in your contact center. You want, uh, you want to think about call resolution, call performance, cost per call, uh, routing logic, and, you know, you can then look at both internal options and a set of vendors who will deliver those metrics and make an evaluation. And if the vendor doesn't work, you fire them. And I actually think there's a very clear way to get ROI in this, which is: figure out the list of three to four things that move the needle for your business. Focus on those three to four. Don't spend money on a thousand science projects. Take your best four operational leaders and put them on those four things. Don't locate it in the tech function. That's the main advice I give people: your gen AI initiative should be led by the business. And figure out... that could be your head of call center, that could be your head of operations, but each of those people, with clear operational KPIs, will get the stuff working. And there are a bunch of companies that have. But it's just a very different approach than "I'm building gen AI," as an example.

>> It's really interesting you said don't invest in a bunch of science projects. Do three to four initiatives. Okay, let's do three to four initiatives. Again, let's put on that CEO hat. Contact center, it's just a big one that is homogeneous across everything.

>> Matt, there's so many players in the contact center space. I'm a CEO. I'm not a Silicon Valley guy. How am I meant to understand whether we go for Sierra or Decagon or Zendesk of old or Intercom or any of the other players that we've seen in the space? How do you advise the bigger CEOs on buying in a wave of new innovation?

>> I think this is the other big challenge of gen AI adoption: you're an average CTO, COO, you've got 250 vendors a week pitching you, and all of them sound pretty similar. In fact, I was with a customer yesterday who literally started the meeting by saying, how are you different than the other 250 people that have pitched me this week? So this is the dynamic: we have an oversaturation of companies that all sound relatively similar. On agents, I think, to make your question even more, um, pointed, a lot of them don't work. You know, I think you've got a fair number of the enterprise agent companies that... like, Salesforce, um, AI research released this report that if you test a lot of the out-of-the-box agents on single-turn and multi-turn workflows, they're about 58% accurate on single-turn and 33% accurate on multi-turn workflows, which means they don't really work. And so you've got this challenge of 250 agents, 250 companies a week pitching you. Um, you don't really know how to select, and you're worried you're going to pick someone that's effectively a charlatan and it won't work. And the more you have a market where there's a lot of excitement, the more you do have that risk, right? Um, so I think the simplest advice I give, and by the way, this is how we sell, quote unquote, is: start with proof of concepts, start with, um, what we call solution sprints. Don't pay a dollar until you prove the tech works. So, like, we don't actually sell anything. We meet a customer, we say we will do it for free for eight weeks and prove to you the tech works. And that's a very simple way. If your tech works, you'll show it.
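As a rough, hypothetical illustration of why the multi-turn numbers quoted above fall so far below the single-turn numbers: if each turn of a workflow succeeded independently at the single-turn rate, end-to-end success would compound per turn. The independence assumption is mine, not the Salesforce report's; the 33% multi-turn figure above is measured, not derived.

```python
# If each turn independently succeeds at the single-turn rate p,
# an n-turn workflow succeeds roughly p**n of the time.
single_turn_accuracy = 0.58

for turns in (1, 2, 3, 5):
    workflow_success = single_turn_accuracy ** turns
    print(f"{turns}-turn workflow: ~{workflow_success:.0%} end-to-end success")

# Prints roughly 58%, 34%, 20%, 7%: per-step reliability has to get much
# higher before multi-turn agents "really work" on full workflows.
```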

>> It's an expensive way to do business.

>> It is and it's not. But let me give you an example of, like, how one of our deployments works, because, um, fair enough if the answer is that, you know, it takes you two years to build anything. But, like, I'll give you an example. So, our AI software platform is effectively five modular components. Neuron, which is our data platform, brings together structured and unstructured data. Um, Axon, which is our AI agent builder. Atomic, which is effectively a process builder; we can build any custom software workflow. And then we have our Meridial expert marketplace, where we have 1.3 million experts a year, on any topic you can imagine, that we bring into those workflows. And then Synapse, which is our evaluation platform on all of it. Now, we can take those five things and configure them to almost any different enterprise context. So, just an example, we serve food and beverage, public sector, asset management, um, agriculture, sports, uh, oil and gas, and a whole host of different sectors using that same modular architecture. I think we end up scaling pretty materially once we show the tech works.

We're working with a company called, um, Lifespan MD, which is a concierge medicine business, uh, across the US and internationally. And, you know, what we're doing for them is we're building them an entire, uh, tech backbone, where they have an enormous amount of fragmented data across, um, EHRs, CRM, uh, ERP systems, notes, everything else. All of their data sits in a pretty fragmented format. And so we're using Neuron to bring all that data together. Uh, we do that very, very fast. So where Accenture would take two years, we can usually do it in two to three months. Um, we're then, on the back of that, building a lot of different intelligence and reporting so they can look at things like patient journeys over time, labs, genomics data... um, uh, I don't know how much you use, like, the Oura ring or anything else like that, but they want to look at wearables, how all that content is looking. So they have a lot of detail on what any patient is doing at one time. And then on top of that, we layer things like, uh, conversational ability to interrogate the data and ask lots of different questions, like: let me look at who's used peptides, male, between 36 and 50, and what have been the results. So we're using Axon to build all that, and to fine-tune the model to do that, and then we also, on top of that, build lots of specific custom agents for things like scheduling. So what you get at the end of that is a transformed, tech-enabled business with all of those different components. Now, that does take us a little while to stand up, but once that is there, it's effectively hyperpersonalized software. And that is my view on where this whole industry goes: you move from SaaS, out-of-the-box SaaS, to much more hyperpersonalization using the specific data of an individual customer. And that is what we do.

>> Do you think you can work with enterprise today, with gen AI and with AI implementation, without an intense forward-deployed engineer mechanism?

>> I don't think you can. So we've doubled down; a huge part of what we do is forward-deployed engineers. So we now have eight offices, kind of eight cities, um, 450 people. We're fully focused on forward-deployed engineering, and I can tell you from a decade in my prior life, you just cannot do this with out-of-the-box SaaS. It does not work.

>> What do the economics of FDEs look like? Obviously, Palantir has made it the most sexy thing ever. I love the way tech crowds work, where it's like we all just kind of get super excited by, like, an acronym. It's like, this is the coolest thing. But what do the economics look like?

>> Well, one thing I'll say is forward-deployed engineering has come to mean a lot of different things. So, uh, a lot of forward-deployed engineering, I think, you know, across the broader market, is more like kind of solutions engineering, where the people kind of answer your questions and show up at your office, but they don't build. I think forward-deployed engineering done well is executing a very specific workflow build. So you're effectively configuring a set of core platforms to build something hyperspecific for that customer. Um, and usually, one of the questions is, it depends on how good your platform is, because, for example, you could argue Accenture is forward-deployed engineering, right? But that build may take three years. And in our case, I think we've built modularity and built a lot of the new software development workflows into what we do, and so usually our forward-deployed engineering motions are about three months. So we will come on board, customize everything to the hyperspecific way a customer wants it, and then hand it over, and then what's built works on an ongoing basis, and it does require ongoing fine-tuning. So that's the other big difference that people should acknowledge, right? You can't, um, you can't fine-tune a model in an enterprise context and just leave it for four years and hope it continues to work. I could give you 100 examples, but take healthcare: GLP-1s launch. You do need to fine-tune the model for the new context of the market. And so we do view it that way, but...

>> I'm very naive, so forgive me on this.

So do they pay additional for, like, FDEs to come? Do you pay additional in terms of ongoing maintenance, just on the economics of it?

>> For many of our competitors, they do charge. Uh, we do not charge anything for FDEs.

>> Why not?

>> Uh, I think it goes back to my general premise that the best way to differentiate in this market is to prove that your tech works. And so the way that we do this is we say you will pay when the software is up and running. And we're able to do a lot with one-to-two-person small FDE teams. And so once that's stood up and running, then, you know, we do have ongoing software. You know, I think the paradigm that we're evolving from is, um, over the last 20 years you had, kind of, the system-of-record layer was where a lot of the value sat, and what we're building is hyperpersonalized system-of-agility layers, kind of what sits atop that. You know, I think the Accenture paradigm is what people are afraid of, and it's very hard to convince somebody you're going to pay time and materials until it gets working. And so that is: I spend less on sellers and more on forward-deployed engineers. That's my simple math.

>> If I'm a... I always think, you know, the biggest mistake that people have is they don't put the hat on of their customer. Uh, and yeah, I think the reason the show's been successful is because I put the hat on of different customers. A lot of the customers that we have are startup founders who create amazing products, and everyone wants to sell into enterprise. It's where the money is. Yeah. If I'm a startup founder thinking, huh, do we need FDEs? How do we do FDEs? How do we move into an FDE model? What would you say to them that they should know if they're thinking about starting that model, or potentially needing that model, knowing all that you know?

>> I think it depends a lot on the nature of the business and what you're trying to build. So, um, you know, if you're trying to build a, uh, knowledge management system of public filings for finance, for example, you don't need FDEs, because what you're building there is, um, a repository of information that people can access. You've got similar things in healthcare, for example. If you're trying to change workflows, you do need FDEs. I think that's the simple paradigm difference in my mind: if you're building something where the hardest part is getting adoption and workflow embedding, and you need to actually change the way a company works, then yes, forward-deployed engineers are the only way to do it. It's interesting. There aren't that many, uh, folks in this area that have expertise doing that. So it's a hard thing to train and learn, but I do think it is the only way to get the enterprise working.

>> You've said several times, hey, don't pay until you prove that it works. And you said earlier, pay as it works. That's not the SaaS business that we've been trained on, Matt. And I'm a SaaS investor. How does the pricing model of the future look in this very new environment?

>> Yeah. So let me step back for a second. I think an interesting thing, if you look at the economics of SaaS and enterprise 5 to 10 years ago: look at any large public enterprise software business, and then look at how much of their revenue is actually services, and I think you could kind of argue that out-of-the-box software has always been a lie to some degree. Uh, it's a weird thing to say, but they always had a ton of configuration and they just dressed it up to some degree. And I think SaaS was even more challenging than that, because often, with the unit economics of SaaS, you're selling at a much smaller, uh, kind of, um, a much smaller cost per customer. And so the SaaS business that worked was actually about selling something where the out-of-the-box setup was quick enough that you could make it work with the sales team, where you didn't have to do lots of configuration, because the minute you had to bring in FDEs in a SaaS context, your economics broke instantly, right?

And what I'd say then on the enterprise side, the way people made it work, that's why Accenture grew so much, that's why Cognizant, that's why TCS grew so much. I'll give an example: if you take insurtechs, right, every one of the major insurtechs, like a Duck Creek, what they have is a set of core data schemas, a series of analytical logic, and a front end, and the ones that did really well had momentum and push from the SIs that got them going. And so their economics were geared by having somebody else do all your, um, services around what you did, and you got something standing up at the end that worked. I think the challenge with gen AI is that that motion doesn't really work, because what ends up being built at the end of the day is something that is hyperspecific to that customer. Like, if you actually think about the nature of fine-tuning an LLM or creating a knowledge management system, it's not a box. It is something that uses a lot of consistent tooling, but it has to be customized. And so, um, the way we do that is we stand that up. We get it working, and at the end of it, usually two to three months in, the payment happens when we pass user acceptance testing and validation and it works. And here's the other thing I'll say: we use SaaS as a paradigm because that's how software has worked. But machine learning has been around the enterprise for... I was building machine learning models 10 years ago. That's always been a motion that looked like this. So what's happening now is we're starting to realize that the gen AI adoption paradigm in the enterprise works the same way that ML did.

>> Totally get that. And so when we look at the different products that we have today, the expert platform is one I think that gets a lot of attention. How much of the business today is the expert platform? But I find companies are lumped into categories, it's easier, and you have your Mercors, your Surges, your Invisibles, and you're all kind of put in this, like, are you all just talent marketplaces? And no one wants to be a talent marketplace, it seems. And I'm like, how much of your revenue is the talent marketplace, and why does no one want to be a talent marketplace?

>> Yeah. So let me think about that in a couple different ways. So, I actually think the AI training space has many different players that have many different business models within it. There's four to five, but actually they're all quite different. Um, I think of us much more as an AI training platform than just a talent marketplace. Meaning, we have 1.3 million experts that come through the marketplace, but a lot of the expertise we've built over the last 10 years is the ability to... here's the simplistic question I think that AI training asks: you have to be able to source any expert in the world on 24 hours' notice. You have to be able to source a PhD in astrophysics from Oxford, put them into a digital assembly line, and four days later generate perfect, statistically validated data that will be compared head-to-head with somebody else's data, and make sure that it is perfect at the end. That is an incredibly difficult thing to do. And so actually, a lot of what I saw when I took over Invisible was that that motion was incredibly applicable to the next phase of the enterprise as well, which is, um, the fine-tuning motions, the training, the ability to statistically validate for an enterprise use case like claims processing. It's the same motion. Like, I actually think AI training will be used next in banking and healthcare, and then after that in many other different enterprise contexts. And so the historical business I took over in 2024 was pretty materially weighted to the, um, AI training side of the house. But I came in with a thesis that, uh, enterprise would be a huge source of growth, and I think as you see next year evolve... you know, I think we've confirmed 12 enterprise deals in the last 45 days. So we see pretty good momentum on that side of the business, and I think that's where we will evolve, is to doing both. I think the five core platforms we have allow us to serve a whole host of different end markets, and I do think that's very different than the other AI training players you mentioned. I think we're the only player that spans that broad-based view in the same way.

>> Can I push on the talent marketplace side? How much of the business is that today, then?

>> I won't say an exact number, but it was a pretty material percentage of 2024.

>> Okay, got you. So it's a pretty material percentage. The one thing that's also striking is the concentration of revenue in a couple of core players. When you look at other providers, yeah, it's like two players that make up more than 50% of revenues for pretty much every provider. Is that the same for you? And how do you think about what that revenue makeup will be, given the enterprise diversification that you're talking about?

>> Yeah. Um, I do think this is a space where there are not that many players that are actually building LLMs. So by definition the whole space has concentration. I think I would not, uh, disagree with that. I do think that's one of the really interesting things for us on the enterprise side: we have materially more diversification now in the number of customers we serve, on a whole different range of topics. Um, I also think you're seeing more, uh, kind of early-stage model builders as well that are building on hyperspecific topics. Um, and so that's the other part of where we see expansion in the total customer base.

>> When you come to negotiations with a client, given the revenue concentration, how do you play that staring contest? Because essentially they go, we know that we are one of your core customers and we will squeeze you on price, and you go, I know I'm one of your core data providers, I will stand firm. How do you handle that negotiation? Because it is a staring contest of sorts.

>> I think people are willing to pay for good data. That's my simple frame. If you think about the importance of these models, if you think about the cost of compute, that is actually a huge chunk of the cost base. If you think about how one week of bad data burns a lot of compute... um, I think what we've seen, the reason it's been the same four to five players in this market for a couple of years now, is it's really hard to do well. And so people are willing to pay for good data. And so I think we have a very collaborative dynamic with all of our customers on that front. And, um, you know, I think that, uh, when you provide a service that's helpful, people are willing to pay for it. And if you provide a service that doesn't work, people don't pay for it. And so the interesting thing I would say on that front is, a lot of the time, the discussion topics anchor around, again, proven value. So we'll get a topic that'll come in, like a multimodal audio model, for example, and we'll go head-to-head with somebody on that that week, and at the end of it we win or we lose. And so if you win and your data is way better, people are willing to pay for that.

>> Totally get you. I had a chat last night with a board member of another of the companies in the space, and he said two things that really stood out to me. He said, "I'm just drastically shocked at the lack of price sensitivity of the core customers. Like, they're willing to pay pretty much anything." Is that the case, or is that a bit of an exaggeration?

>> I think that's an exaggeration. I mean, look, I think that there is a fair price. If you think about, like, classic economics, people are willing to pay a fair price for good data. And so I don't think we, um, operate in a model of trying to charge anything unreasonable. I think there's actually fairly standard price bounds across all the players here.

>> Is data commoditized? When I think about, like, pricing power, I'm a massive fan of Hamilton Helmer's Seven Powers. Amazing book. Yeah. When you think about, like, pricing premiums, you get that through not being a commodity, through owning supply of a rare asset. Is there commoditization of data, and are we kind of in a race to the bottom on the pricing of that data? Or do you own the supply of, say, vet workflow data for surgeons in Oklahoma? That's very...

>> Yeah. So let me take that. I'll actually start with the market context, and then I'll actually use Seven Powers. It is a great book, and I'll use one of his frameworks for that. Like, I think the market context that is somewhat misunderstood here is the way that human data becomes more and more important over the next decade. And I think the reason for that is, if you thought of, um, the different types of things you could train off of: synthetic data gets mentioned a lot, but most of the time synthetic data is useful for things like, let's say, ground-truth information, like math, where there is a clear output that is right or wrong. Now let's take all of the different reasoning tasks, like a multi-step reasoning task, I mean even a simple one, like: what movie would I select based on, you know, these five preferences?

>> Legally Blonde.

>> Exactly. Um, well, and then let's take that question and add into it audio, video, multimodal, language, the ability to do it in 45 language contexts. So the ability to think about computational biology in Hindi versus French versus English, versus English with a Southern accent. That paradigm is actually incredibly hard to train on, and we're still in the first inning of a lot of those permutations of complexity, is what I would say. And so for a multi-stage reasoning task that requires a PhD, in multiple different languages, human feedback is going to be important for the next decade. I have a strong belief on that, and that was actually, when I chose to take this job, one of my core convictions: the enterprise is going to need that too. Because actually, if you take legal services, for example, a lot of the way you're going to need to validate that is with legal expertise. There's no corpus of information you can train from. So I would start with the idea that, for the market tailwind for the next 10 years, we're actually in the first inning, because there's the LLMs, then there's the more sophisticated enterprises, and then there's everyone else that needs to train, validate, and move to fine-tuning.

So again, contrasting: there's, like, the pre-training and LLM work, but then, to fine-tune a model to a specific context, uh, most companies don't even know what that is in the enterprise yet, and that whole process we're in the first inning of. I think the market demand is going to continue to grow pretty materially for a decade or more. Um, I think that the Hamilton Helmer framework is an interesting one, because my favorite example is, uh, he talks a little bit about what he calls institutional memory. So, uh, he mentions the Toyota production system as an example, right? Where Toyota would literally say to people, this is exactly how our factories are set up, and nobody could replicate it, right? I think the interesting thing about this space, and why you've had a consistent set of folks doing it for a while, is the process of every week having to spin up... We have 1.3 million active agents, or kind of, uh, experts, that come into the pool in any given week. We have 26,000 of those that we've selected that have to start in 24 hours and produce perfect data. Think about the challenge of scaling an organization that for five years can do that at really high quality, and consistently turn and evolve to the different permutations of the market, new ideas of training. It's really hard to do. And I think that was what got me most excited when I took the Invisible job: the question of, can you make AI work in a really complicated context? Very few companies know how to do that on the enterprise side, or on the training side, for that matter. So I thought that was a really unique institutional memory context. It is a digital assembly line no different than an auto factory, and I think that is a hard thing to replicate.

>> The other really interesting area this board member raised with me, and he very much agreed with you, he said exactly the same words as you in terms of first innings of data, in terms of just how much market size will increase. He said the other thing that I really didn't understand when I made the investment was the specialization of data, and how we are moving into the acquisition of these kind of insanely niche data supply pools, where it's not, like, cat, hedge, zebra crossing... zebra crossings, what do you guys call it, a pedestrian pathway or something. I did not see the specialization and the unbundling. Is that something that you see too, in terms of these very micro-niche, specialized data requirements?

>> Absolutely. I think, you know, five years ago this space was what I would call cat-dog commodity labeling. And I think there was a lot of Google Sheets in that era, and you've seen some comments on that. This sector has evolved the same way most technology sectors do, where it started with Google Sheets and cat-dog labeling, and it's evolved to real digital assembly lines, huge velocity of expertise, and incredibly specific expertise. So, you know, to give a funny example: we have to be able to validate, um, uh, an architectural expert on 17th-century French architecture who speaks French. I mean, that is a complex thing to do on 24 hours' notice, right? And so the ability to source, assess, validate... And I think one of the advantages for us is, because we have five years of data on who's been good at what task, there's real institutional data memory in how you do that selection and assessment. I think that's one of the core advantages we have from that.

>> How important is pay? You know, I think a couple of other providers, you know, kind of have said that bluntly it's about how much you pay. You pay more than the others, you'll get the good talent.

>> You know, look, a weird analogy: I think of our business like Uber. And what I mean by that is, um, we source talent at the price at which people will do the work that is asked of them, right? The same way, if you're standing on a street corner, your question is, can I find a ride that will pick me up at this moment, within three minutes? And that's a different price if it's raining, that's a different price if you're in, you know, Rio de Janeiro versus London, right? The price depends on the market context and the specific place you are. I think expert pay is the same dynamic. Really, a lot of what we're doing is what I call price discovery. And so the nuance I would add to what you're saying is, you can overpay a really bad expert, and that is a total waste of everyone's time. And so what I think our customers appreciate is we can tell you, between a $150 expert and a $130 expert, the difference in expertise you get.

>> Do you think you have control of a finite supply of data, uh, providers? If you look at the seven powers in Hamilton Helmer, one of them is, like, acquiring finite supply.

>> Um, so I actually don't think finite supply matters. Uh, and what I mean by that is, I think the expertise needed varies so much month to month that if you tried to do a world where you bottled up whatever supply it is, it would change in three months. And we actually, uh, relish that concept. I actually think the dynamic, again, why I would use Uber and Lyft, you could use Airbnb and VRBO in the same context, is I don't think experts go on five platforms, right? I think actually what you want to be is, this is a two-way marketplace, where you need enough demand for people to be interested, and you need enough expertise across many experts. And I think the reason we get 1.3 million inbounds is because of that kind of supply-demand balance. So I don't think this moves to a world, and I actually would never say it moves to a world, where there is one player coming out of this. I think there's benefit to everyone in having numerous players that do AI training, and so it's a question of being one of the players that has that balance.

>> You said there, about kind of the switching of preference, like, oh, three months ago it was this that you want, now it's something completely different. Switching cost is another one. For data providers, in this way, are there inherent barriers to switching? Is there any loyalty?

>> Yeah. No, I think that if you've learned how to do a certain data task really well, there's incredible value in that. And let's take the enterprise context again, because I do think it's a good one. So, um, you know, I'll give you an example. We're doing a lot of fine-tuning on some pretty interesting topics. So, um, one example: we worked with, um, SAIC, Vantor, and the US Navy on fine-tuning a model for underwater drone swarms, just to give you an example. And so the question on that, if you think about...

>> Niche.

>> Very niche. This is why I use it as an example to answer your question. So, if you thought of, in that context, you've got a bunch of underwater unmanned vehicles, and they're getting in all the drone and sensor data from the interaction patterns of those vehicles. And what they want to know is, you know, an object is in the water near them. What do they do? Do they react? Do they pull back? Do they alert another drone? Do they engage? What are the topics of that? So fine-tuning a model to take in all that complex sensor data, fine-tune it, train it, and build a decision-making framework for those drones, there's a lot of logic built into that. And I think that's why it's been a great partnership with SAIC and Vantor, because we built logic on how to do that. And, you know, I think that there is real, um, sustainability in the expertise you build up. And so the way I think about, like, our enterprise motion, for example, is every sector is led by somebody with deep sector expertise, and we do build real logic on those topics. And I think the same is true for multimodal video and audio, it's true for legal. Um, I actually think a lot of the training work, even on the model builder side now... one interesting view I have is people talk a lot about the public benchmarks. One question you get a lot is, like, are we reaching a point where models are not improving? I actually think about it very differently, which is: the models are now all moving down hyperspecific things where there's not a public benchmark for them, by definition, right? Like, they're moving to more very specific tasks that are, you know, very different and not something you can publicly benchmark in the same way. And that's where we do see more and more model improvement every day, both in model builders and enterprises, on these specific tasks.

>> You said about kind of the benchmarks, I'm just so interested: Gemini 3 killed it, it's the best ever, and then yesterday Opus 4.5 killed it, it's the best ever, and next week Sam's going to release one. Does it matter? Like, are we in a world of such transience and flux that really we should detach ourselves from these, bluntly, updates that last for days?

>> Look, I think the benchmarks are a useful framework for society to gauge progress on this topic, and it's a very often discussed topic. So people want a way to answer the question of how the models are improving, and I can tell you, like, unequivocally, the answer is yes. I mean, I think by every measure you look at, um, they are, and, you know, they're not only improving on the benchmarks but even on specific tasks, like research for investments, for example, you can see the models are much better at doing certain tasks. And I think what you're seeing start to happen is people, and we're doing this as well, are building very specific work-based benchmarks to calibrate certain things, like how well does the model do on building an LBO model, for example, and you're going to see more and more benchmarks cited. Now, the complexity then becomes, if you move from five main benchmarks, like SWE-bench and others, to 600 benchmarks, then you kind of lose track of who's doing well on which things.
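To make the idea of a work-based benchmark concrete, here is a minimal, hypothetical sketch: a handful of firm-specific tasks, each with its own pass/fail check, scored as a private benchmark. The task names, prompts, checks, and the model callable are illustrative assumptions, not any vendor's actual tooling.

```python
from typing import Callable, Dict

# A "work-based benchmark": a few firm-specific tasks, each with its own
# pass/fail check, instead of a generic public leaderboard.
BENCHMARK: Dict[str, dict] = {
    "lbo_model_outline": {
        "prompt": "Outline the steps to build a simple LBO model for a $500M buyout.",
        "check": lambda out: all(k in out.lower() for k in ("debt", "exit", "irr")),
    },
    "ic_memo_summary": {
        "prompt": "Summarize this deal in the firm's investment-committee memo format.",
        "check": lambda out: "recommendation" in out.lower() and len(out.split()) < 400,
    },
}

def score(model: Callable[[str], str]) -> float:
    """Fraction of firm-specific tasks a given model passes."""
    results = [task["check"](model(task["prompt"])) for task in BENCHMARK.values()]
    return sum(results) / len(results)

# Usage, with any callable mapping prompt -> answer:
#   print(score(my_model))   # e.g. 0.5 means it passed 1 of the 2 firm tasks
```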

But my interesting view on that would be: I'm not sure the benchmark progress is what determines enterprise adoption. What I mean by that is, take the fact that the models have improved exponentially over the last couple of years and that consumer adoption has been massive, right? KPMG had a report that 60% of consumers use this on a weekly basis. The adoption curve on enterprise is not going to be a question of generalizability; it's going to be a question of hyper-specific performance on a specific task, and there isn't actually a benchmark for that. Take an investment summary document for a private equity firm: there's no benchmark to say, for firm one, this is how you write investment committee memos, does this generate something that looks with 99% precision like something you would roll out. There's no benchmark for that. So what I see as the adoption curve is actually the fine-tuning and inference layer, actually testing it and getting to a place where that firm can say: this looks good, I'm okay with this, you've tested it. Machine learning has a context for this. I don't know if you've heard, but banks do this thing called model risk management, where they do a whole host of validation and testing, on things like redlining, before they roll a model out. That's what the enterprise is going to have to do. So it's not that model improvement doesn't matter. I actually think the benchmarks are a good way to get some sense of model improvement, but they're almost orthogonal to enterprise uptake. I think enterprise uptake depends on trust and precision on specific tasks at 99% accuracy, not generalizability.
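
(A minimal, hypothetical sketch of the kind of task-specific validation gate described above; the names, rubric, and 99% threshold are illustrative assumptions, not Invisible's or any firm's actual tooling.)

```python
# Hypothetical sketch of a "model risk management"-style rollout gate for one
# firm's task. generate_draft and the rubric stand in for whatever model and
# review criteria a given firm would actually use.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    prompt: str                   # e.g. deal facts for an investment-committee memo
    required_phrases: List[str]   # firm-specific conventions the memo must contain


def passes_rubric(draft: str, case: EvalCase) -> bool:
    """Stand-in for a human or model grader checking firm conventions."""
    return all(p.lower() in draft.lower() for p in case.required_phrases)


def rollout_gate(generate_draft: Callable[[str], str],
                 cases: List[EvalCase],
                 required_precision: float = 0.99) -> bool:
    """Approve the workflow only if it clears the precision bar on this firm's tasks."""
    passed = sum(passes_rubric(generate_draft(c.prompt), c) for c in cases)
    precision = passed / len(cases)
    print(f"{passed}/{len(cases)} cases passed ({precision:.1%})")
    return precision >= required_precision


if __name__ == "__main__":
    cases = [EvalCase("Summarize Project Alpha for the IC.",
                      ["recommendation", "key risks"])]
    # Toy generator; in practice this would call the firm's fine-tuned model.
    demo = lambda prompt: "Recommendation: proceed. Key risks: customer concentration."
    print("Approved for rollout:", rollout_gate(demo, cases))
```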

>> If those specific tasks are removed in the way that you said, like summary docs for investments, often that work is done by more junior people in the earlier stages of their career, when they are building and scaling those skills. Do you think we will have a talent pipeline problem if we remove a lot of those junior roles, which we are seeing in certain cases already and I think we'll continue to see, where we won't actually have the graduation pathways that lead to the leaders we have today, because we've removed those junior roles?

>> I don't, actually. I think one of the challenges is that the adoption curve of this stuff is going to take a lot longer than people expect. I said this to you earlier: on enterprise, this is a 5-to-10-year adoption journey, not one to two. So you have a dynamic where people will have a lot of time to react and to think about what's useful. In addition to that, I actually find a lot of the people coming out of college right now are some of the highest adopters of this and the most useful with these kinds of tools, so we're hiring more and more people of that profile, not less. But on the usage curve of that group of people: certain tasks will not be done, but there will be many more. I'll give an example: accounting. If you worked at a bank or any accounting firm in the 1980s, this is absurd to think about, but you literally calculated revenue and financial statements with a slide rule. People would sit there and generate a financial statement manually on paper with a slide rule, and that was how people did accounting. Now, Excel comes around and becomes the main tool everyone uses to do accounting. In theory, you'd have fewer accountants, because you went from manual generation with slide rules to Excel, which makes it way easier. You look today, we have about the exact same number of accountants, and about the same number of junior accountants. What's happened is way more people do way more sophisticated accounting scenarios with the tools they have. It's the old idea of Jevons paradox: you increase consumption with advanced technology. So the number of accountants didn't go down; you actually had way more accounting. In fact, every FP&A function is probably larger now than it was 25 years ago because the work people do is more sophisticated.

>> Totally get that. I do want to go back to what we said about market composition and how we see the different players. Is this a market like Uber and Lyft, where there's a number one and number two that take the dominant market share and then there's everyone else? Is it a cloud market where it's much more evenly distributed? How do you project that out on, say, a 10-year horizon?

>> In both AI training and in enterprise, I don't think the answer is one player. Interestingly, in the enterprise, historically it's been Palantir and not many others. That's part of why you've seen more people want alternative options, and I think that's part of the reason you've seen so much excitement on enterprise AI recently. I think most of these markets end up with three, four, or five players; I don't actually think it's even two. Markets tend to create that choice for customers, and that's a good thing, right? I think you'll have some specialization on certain topics: maybe some better at coding, some better at specialist tasks, some better at PhD-level work, but I think it'll stay with a fair amount of choice.

>> When you look at the landscape, who do you most respect and what do you learn from them?

>> I would say Palantir is the company I probably respect the most in enterprise AI.

>> It's really interesting. You see them as a competitor more than, say, Surge or Mercor, Turing, or any of the others?

>> They are all competitors in different ways to different parts of our business. I call out Palantir because I think they realized, 10 years before the rest of the tech market, that forward-deployed engineering customization would be important. That was a very countercultural leap at the time, because, look, I spent a lot of time running forward-deployed engineering teams, and most of what I saw was players like Accenture. What was called tech services back then was not a place anyone wanted to play in. Palantir spent a decade, before anyone realized it was important, building good tech, and I have a ton of respect for that and the culture they built out of it. On the AI training side, I won't comment on anyone specific. I think all the players in the space are good, and they all do different things well.

>> There are large revenue numbers thrown out.

>> Yeah.

>> Are they revenue? Because I've done shows before with them and I got battered, bluntly, when people were like, "Oh, it's not revenue, Harry, and you can't categorize it as revenue." Is it GMV, not revenue? Are we playing fast and loose with the truth on revenue versus bookings?

>> I think it is revenue. The rate you get on every project is different; the margin you make on every project is different. So I do think it is revenue.

>> Can you help me understand? Sorry, I'm very naive. If I'm acquiring amazing talent and I get paid for that, and then I have to pay them and I get my take at the end of that, how is that different from a booking on Airbnb, where I get my take from a location but I have to pay out to the owner?

>> Oh, good question. Well, Airbnb has one consistent fee; that's the difference. There's actually a fair amount of variation based on the skill set of the expert. You don't have a consistent rate relative to the booking amount; that's the biggest difference. There's huge variety depending on the project, the expertise type, and the expert type of what you book.

>> Are there any other big misnomers that you think are pronounced in the industry, where you consistently think: I wish people would change the way they think about this?

>> Look, I think the biggest one, and when I first started this job the main pushback I always got, is the view that synthetic data will take over and you just will not need human feedback two to three years from now. It's interesting: from first principles that actually doesn't make very much sense if you think it through. If you think about the diversity of tasks that exist in the world and how long it would take you to get comfortable with the accuracy, it doesn't make any sense. I'll take legal services because it's a really interesting one: a lot of the legal data in the world sits with big law firms; it doesn't even exist in public. The corpus of publicly available information has been commoditized for years at this point. Most of the logic is incredibly contextual to language, culture, multimodal context, and the information stored in individual companies, as an example. So the only way to actually do the fine-tuning process consistently and get it accurate for any specific context is RLHF. In my McKinsey days, my QuantumBlack days, that was the thing I realized was different about traditional ML models versus genAI. In machine learning, you can backtest; you can get to a really clear, statistically validated outcome without any human intervention. On the genAI side, you are going to need humans in the loop for decades to come, and I think that is something most people are starting to realize. It's always confusing to people when they hear: oh, that's how models are trained on the back end; I didn't realize that's how the statistical validation works. So I think that's been an interesting evolution.
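
(To make that backtest-versus-human-feedback contrast concrete, here is a small illustrative sketch; every name and data point in it is a stand-in assumption, not anything from Invisible or McKinsey. A classical model can be scored automatically against held-out labels, while generative output is routed to a human reviewer whose ratings become preference data for fine-tuning.)

```python
# Illustrative contrast: automatic backtesting of a classical model vs.
# collecting human feedback on generative outputs.
from statistics import mean


def backtest(predict, holdout):
    """Classical ML: score predictions against known labels; no human needed."""
    return mean(1.0 if predict(x) == y else 0.0 for x, y in holdout)


def collect_human_feedback(generate, prompts, ask_reviewer):
    """GenAI: a human rates each output; the ratings become preference data."""
    records = []
    for p in prompts:
        out = generate(p)
        records.append({"prompt": p, "output": out, "score": ask_reviewer(p, out)})
    return records


if __name__ == "__main__":
    # Toy stand-ins for a model, a holdout set, and an expert reviewer.
    holdout = [(1, "high"), (2, "high"), (3, "low")]
    print("backtest accuracy:", backtest(lambda x: "high" if x < 3 else "low", holdout))

    feedback = collect_human_feedback(
        generate=lambda p: f"Draft answer for: {p}",
        prompts=["Summarize contract X"],
        ask_reviewer=lambda p, o: 4,   # in reality a trained expert scores 1-5
    )
    print(feedback)   # preference data that would feed RLHF / fine-tuning
```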

>> You're profitable, correct?

>> This year we have started to invest a lot more. One of the big differences: historically, Invisible had only raised $7 million of primary capital in its entire nine-year journey. We initially announced $100 million and have actually now raised $130 million, and I'm investing very heavily in technology. So we will not be profitable this year. No.

>> Good. Can you take me through that decision? Because this was going to be my question: that was a very clear decision to be profitable, and profitability often comes at the expense of growth, naturally. Can you take me through that decision-making and how you thought about it?

>> Yeah, look, to me it was a simple one. If you think about the dynamics of return on capital, you can either harvest capital or invest capital, and your decision to invest depends on the growth you see as a result of that investment. I think we're in the greatest environment for growth that has ever existed, and I think Invisible is really uniquely positioned to capitalize on that growth. So I think of our four or five core platforms, I think of the growth vectors across both AI training and enterprise, and there were just way too many different things I thought were interesting to invest in; it was the clear best use of capital. Look, I'm trying to build this for the next 10 to 20 years, and if you want to build enterprise value for 10 to 20 years, now is the time to invest and build. I hope we never get to the harvest stage, but it's definitely not now.

>> Where are you not investing that you want to be investing?

>> I think the simplest answer is actual physical-world interactions. What I mean by that is, a lot of the most interesting data that we don't even really have access to yet is things that exist in the physical world that are more complicated to acquire and organize. I'll give you an example. We're serving one of the largest agricultural conglomerates in the US on herd safety, so actually monitoring risk factors: when should you send a vet for their herd of cows, basically. That whole process relies on us actually sending forward-deployed engineers to farms, dropping Starlink terminals into those farms, and building out custom computer vision models in those contexts. I think there are so many different physical-world contexts that become really interesting, but it does take cost and capital to build them out. Oil and gas, oil rigs, are an interesting one as an example. So I think physical-world interaction patterns are some of the most interesting growth vectors for this, but they do take time and money to invest in, robotics being another big part of that.

>> One area of investment that I think is interesting is brand. How do you think about Invisible's brand today?

>> Well, it was interesting. When I took over, if you looked at the entire public internet, I think there was one article available. So we've definitely spent a lot more time this year thinking about it.

>> Was that a deliberate decision?

>> I think so, to some degree. Invisible has a culture of, you know, we believe in doing great work for customers, and we were not really focused on telling the whole world about that.

>> Does that become detrimental to the business at some point though?

>> Yeah, look, I do think branding matters a lot. My view now is that it's been very helpful for us to spend time on it. I spend about 70% of my time on the road, I go to a lot of conferences, things like that, and I think building a brand is really important for trust, for awareness, for engagement. I also think how you tell that story is really important. I'm very much a believer in one of my favorite quotes: Marc Andreessen has this idea that when the private and public narratives diverge, that is the risk or the opportunity. Meaning, if you say things you don't believe to be true, or if everyone's saying things they don't believe are true, then what is the actual private narrative? So it's been very important to me to make...

>> Can you just help me understand that?

>> Yeah.

>> What do you mean by that?

>> Hypothetically, if I was going around saying we have an out-of-the-box agent that does everything, and that wasn't actually true, that's what either creates opportunity for others or risk for us. That's how I think about it. And so what's been very important for me is how...

>> Is that not our industry? I'm sorry, I don't mean to pick a fight with Marc Andreessen, but, like, hello Marc. Our job is to sell and then deliver later. I'm looking at this thinking, well...

>> Well, you know, I guess it's all a question of degrees. In my mind, I want to say things where the narratives are the same to the public, to what our team thinks, and to what our customers experience. I think that's part of why I have focused on naming some of the nuances of what's not working and not claiming everything works out of the box. That is a different approach, but it's been core to how we've thought about building the brand: we are building this around trust, where I want a company we work with to know that if I say this will work, it will work. And I think you only get one chance to do that, right?

>> Do you agree with fake it till you make it?

>> Oh, that's such an interesting question.

Fake it. I think it depends on what faking it means, right? One of the things I think is really complicated about genAI is that it's non-deterministic. If you've never built a machine learning model to do pricing in industrial manufacturing, you can still understand what data is available, understand how the price is being set today, and get pretty comfortable that what you say you will build will work. I think that is okay. The challenge of non-deterministic systems is that there is more risk to faking it till you make it. Meaning, you can go out and say your agent will do anything, and then you actually have to deliver an agent that works, right? And I think that's part of the interesting dynamic, you were asking about accounting dynamics, of a lot of the contracts people will sign right now: I'll sign for 50 agents to be delivered, but then the question is, do you deliver the agents? Do they work? That is a different thing than SaaS, to go back to your earlier question. If I deliver a SaaS box, I know it will work. If I deliver an agent in the current world... There was actually a report AWS came out with today; it's interesting that something like 70% of agents are actually not even AI agents as you'd think of them: most of the agentic processes today are traditional script-writing and traditional automation. And I think that's why I don't self-identify as an agent company at all. We do AI agents, and AI workflows are a core part of what we do, but we do data, we do training and fine-tuning, and agents are one tool in the toolkit, because with too much emphasis on agents, a lot of the time it won't work.
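
(A rough, hypothetical illustration of that distinction, not the AWS report's methodology or Invisible's stack: the first function is the kind of deterministic scripting that often gets labeled "agentic," while the second shows what an actual model-in-the-loop step looks like, including the validation guardrail a non-deterministic output needs.)

```python
# Hypothetical illustration: the "agent" label vs. what the code actually does.

ALLOWED_ACTIONS = {"route_to_manager", "auto_approve", "queue_for_review"}


def scripted_invoice_workflow(invoice: dict) -> str:
    """Deterministic automation: fixed rules, same input always gives same output."""
    if invoice["amount"] > 10_000:
        return "route_to_manager"
    if invoice["vendor"] in {"acme", "globex"}:
        return "auto_approve"
    return "queue_for_review"


def agentic_invoice_workflow(invoice: dict, call_model) -> str:
    """Model-in-the-loop: a non-deterministic model picks the next action,
    so its output has to be validated before it is trusted."""
    decision = call_model(
        f"Given this invoice {invoice}, reply with exactly one of: "
        + ", ".join(sorted(ALLOWED_ACTIONS))
    ).strip()
    return decision if decision in ALLOWED_ACTIONS else "queue_for_review"  # guardrail


if __name__ == "__main__":
    invoice = {"vendor": "acme", "amount": 1_200}
    print(scripted_invoice_workflow(invoice))
    # A stub in place of a real LLM call, just to show the shape of the loop.
    print(agentic_invoice_workflow(invoice, call_model=lambda prompt: "auto_approve"))
```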

>> Did you see the video of the robot going around the house recently, and it was like the worst thing ever? It took like 11 minutes to take a glass out of the dishwasher, and at the end it was like, "and this was controlled by Simon in the back room," and you're like, the shittest robot ever was being controlled by some weird dude in your back bedroom. Like, this is so...

>> Yeah, I did see that. And look, I think robotics is another one that will take longer but will be really interesting when it works. But by the way, even in that case, you'll need more task-specific robotics, not just broad-based.

>> Have you ever faked it till you make it and been caught out?

>> And did you learn anything from it?

>> When I first started working, it wasn't even called AI back then; data analytics was what it was actually called. This is probably 12 or 13 years ago now, in my McKinsey days. The firm gave me a pretty interesting purview to try and explore where I could build out AI offerings across different sectors and customer bases. And I don't think I knew what I was going to build, candidly. The interesting dynamic was that I had a lot of conviction, partly because of some of the things I'd done before, that AI could be really useful on a whole host of things, from inventory forecasting to pricing to credit underwriting. You just thought intuitively about the sources of data: the fact that 70% of the software in America is over 20 years old, most of that data is massively fragmented and not clean, and so a lot of the decisioning that happens in the enterprise is done in a really fragmented way. And this is what I did know: take your average sales rep making a call. Most of the time they're googling some stuff to try and figure out what information they have. Not now, but this was 12 years ago; they had very little information on the script about the customer or what they might sell. So I had a lot of conviction that that would work. I did not know what would be most interesting. In fact, there were areas I thought would be really interesting, like banking, that were actually much harder to do this in consistently. You mentioned banks earlier. The average bank spends 93% of its tech cost on maintenance initiatives.

>> Oh.

>> 7% goes into building new things.

>> It's my favorite thing. I just had one of the CEOs of a big vibe-coding platform on, and he was like, "SaaS is dead. We're going to build our own products."

>> Yeah.

>> And I'm just like: maintaining, provisioning, updating. Are you high?

>> Yeah. If you've never gone through infosec and approval at a bank... The banks are banks, and look, for very good reason. Banks are much more complicated to do a build like that in, right? And so I think what was...

>> This event I was at last week was at a bank. They have six and a half thousand people in KYC alone. Six and a half thousand people.

>> It's a great example. When I was doing that in the early days, partly because there was very little media coverage or interest in it, I was kind of figuring stuff out from first principles. And so the degree to which I faked it till I made it was that I had to find other people I worked with, and customers, who trusted me enough to allow me to co-create and develop stuff with them, and I had to figure out a way to recruit really good people. Actually, if you take any business very simplistically, it's a question of: can you build trust with customers and co-create to develop and make things work, and then can you recruit unbelievable people to deliver that? It really comes down to recruiting in a lot of ways. I think that's actually the number one thing we focus on; I think of us as a talent company as much as anything else. You could argue, not to use a sports analogy, but Nick Saban did not build Alabama football with the process; he built it by recruiting the best football players in the country. I think about it the same way: you have to recruit great people. So to some degree, in the early days of that, 10 or 12 years ago, I was setting a vision, trying to figure stuff out, and iterating a lot. And I do think we ended up building a lot of things that really worked, but it took time and it took iteration as much as anything else. It took iteration and trust. So I would say the counterintuitive thing is I didn't fake it, and I never told people it would definitely work. My entire approach was to say: I think this will work, this is my reasoning why, let's build it. And actually a lot of people were very comfortable with that. If you go in and say, I have an out-of-the-box AI that solves all your problems, people are pretty skeptical.

>> I do just want to stay on recruiting, because I think the show is successful because you put your hand up and you're like...

>> As a startup CEO, one of your biggest jobs is to recruit great people.

>> Yeah. Having recruited people across different companies now, both McKinsey and now obviously Invisible, what would you advise startup CEOs in the earlier stages, knowing all you know now, on what it takes to be great at recruiting, acquiring, and retaining great talent?

>> It's probably the topic I spend an enormous amount of time focused on; it's probably the topic I think about the most, because I actually do think if you get amazing people, everything else will follow from that.

>> So you agree with the moniker of hire great people and let them do the work? Because people kind of push back on that now.

>> Yeah, but not just hire: hire, retain, and evolve great people, because I actually think you have to give them a platform that they enjoy day-to-day. The two things I believe that are somewhat counterintuitive: one, when you recruit a great person, I don't think about the role most of the time. People are very role-focused, like, I will hire this person and they will only do oil and gas, as an example. The reality is that really good people will run five to six different roles; they'll run seven to eight different products. Particularly on the business side, you may have somebody that does everything from delivery to sales to account lead, and you can be comfortable with that if you hire great all-around athletes. And the second thing is it has to be fun. My view on one of the narratives that has gotten a bit lost in the last couple of years is: if you have a culture that is brutal to work at, people will leave. They might stay around as long as your stock's high, but they're not going to stay. You have to create an environment where people really enjoy going to work every day, where they're intellectually challenged, and where they feel like they can unleash creativity. I spend a ton of time thinking about that.

>> I don't want to argue back, but I want to build great companies myself. I'm trying to with 20VC, and I try to build good cultures. Revolut is a brutal culture to work at, famous for it. But Nik has famously always told me: winning is what matters. When people win, they learn more, they earn more, and they grow.

>> Yeah.

>> And that really is culture. Brutality in bounds.

>> Yeah.

>> Drives humans.

>> Yeah.

>> Is that wrong?

>> No, no, I think it's actually right. Let me caveat what I said: I think it's also the nature of the business I'm in, being AI. I actually think that's a very true statement if what you're trying to do is scale a relatively consistent business model to do one or two things; then it is a function of execution and of hiring people to go into very specific roles and do very specific things well. So, sorry, let me caveat my prior comments: I think the difference is that a lot of what we do is research and exploration, fundamentally. In the AI world it is a different dynamic, in that you're trying to figure out very specific problems to solve with customers and build really unique tech. In that world you do have a different cultural dynamic. It is a research culture as much as it is an implementation culture.

>> Is that difficult then? We do a show every Thursday which has blown up, which is incredibly nice for us as a business, but essentially we have Jason Lemkin and Rory O'Driscoll, two VCs, and we talk about news, and we talked about Sam Altman and war mode. Can you do a war mode in a culture of research and AI, where it's maybe more thoughtful? Does that work?

>> Yeah, there are definitely parts of ours. If you take our delivery and operations team, they're in war mode quite a bit of the time. Again, I'm more describing general, I think, countercultural beliefs I have on how to hire certain sets of great people. I don't think it applies to every single function of the company; I would agree with that. There are definitely places where you have to be able to push really hard to deliver certain outputs, and I think we do a great job of that. But I also think there have been ideas like: every great engineer should be able to spend 30% of their time on new projects as well as sprinting on the existing ones. I think it's paradigms like that that are important.

>> What decision are you scared to make, but you think about it often?

>> Yeah, I think the simplest answer I'd have to that is that growth in this industry relates to the amount of capital you raise, and to your earlier question about investment. I do think there's a world in which you pursue hyperscale growth; it is possible, but you have to invest a lot more to do that. Every new customer you onboard costs money for the forward-deployed engineering work, and you invest more in your tech. So there is an interesting question: do you run a business for consistent, steady growth for 20 years, or do you try and build something that gets to 50 to 100 billion dollars and becomes game-changing? We have very much tried to operate in a way where I think we have a path to profitability and everything else, but we are going to invest in the near term, because I think it is a very interesting time to do that.

>> I know you don't like to name names, but I can. When Mercor raises like $2 billion, are you like, "Fuck, we need to raise more money"?

>> It's interesting, if you look at the players in our space, that there have been very different levels of capital raised, and people have had more and less success. I actually think a lot of our investment is in different areas than many of our peer set in training are focused on: a lot of it's in things like the enterprise, in core software platforms that are maybe a little bit different from what others are focused on. You can raise a lot of money, and the question is where you spend it. Again, I actually think most of the capital we need in the next five years is more enterprise focused. I think we've actually built something on the IT side I feel very, very good about.

>> We were talking about recruiting before I went off on a tangent there. You now have offices despite being a remote company for several years. Does remote not work, bluntly?

>> Yeah, so we were a fully remote company for nine years until I took over, and we've now gone largely in person. We do have some folks who work remote, but we now have offices in New York, we took the old Pinterest space in San Francisco, London, Paris, Poland, DC, and we're just opening Austin, Texas now. The interesting thing I've experienced since then is that with remote you really struggle to build culture in the same way. What I've experienced since we were remote is a way stronger, positive culture of colocation, where people enjoy their work and get to know their co-workers a lot better as a result of it. I think it gives us a lot more depth with customers to be colocated in cities where we spend time with them. If I take London and Paris, we need to be colocated with the customers there. It can't just be someone on a Zoom screen in New York.

>> Do you see productivity increase?

>> Exponentially? Yes. If you take engineering as an example, I think you can execute engineering tasks remotely, but the process of working through really thorny problems... I've tripled the size of the engineering team this year, just as an example, and the interesting thing is the vast majority of those people wanted to be in person. Now, I'm not saying that's true of all engineers, but it was interesting how many people, particularly the younger tenures, were like: I want to be colocated, I want to work through things. And I don't even mandate office attendance; I just make it available in those offices, and we have huge appetite. We have 40 people in our London office; I was with many of them last night, and they were all commenting on how many of them come in voluntarily, even on a Friday when they might not need to, because they like being around their peers.

Look, I would actually bifurcate two separate things, and I don't think they're related. One is the hours you work, seven days a week, very flexibly, depending on when client needs exist; the other is physical colocation. I actually don't think they're related. Meaning, the benefit of integration is that if stuff comes up on a Saturday, or you're pushing on a new product build, you will work on that Saturday; but if you do that from your home, that's totally fine. Office culture to me is, if you took a hypothetical thought experiment over a year, I think there is a diminishing return from being in the office all the time, where you lose flexibility. As an example: if I said we were remote 100% of the time, that would not work at all. If I said we were physically in the office six days a week, I think that is overkill and you lose great people; particularly senior enterprise folks don't want to be in the office on Saturdays. What I think we've found a nice balance between is: people come to the office most days, people really enjoy being with their colleagues, they work most days, but they can do it from their own home on the weekends. And I think that sort of flexibility is good.

>> Final one before we do a quick-fire.

What did you believe about management that you now no longer believe?

>> There are two things I would highlight. One is that I think control is a bit of a fallacy, depending on the volume of things you have going on. Meaning, to the earlier question on hiring great people: if you're serving, let's say a couple of years from now, a couple hundred customers on different topics, you actually need to have values, consistent tooling, consistent approaches, but you need to empower all those teams at the edge to operate and do what they will. So one of the big focuses I've had over the course of the year is to reduce a lot of our hierarchy and make the organization way more flat, so that people at the edge serving customers are empowered to make decisions. They have decision-making frameworks, they have consistent tooling, but they are empowered. Trying to control that centrally maybe works in a manufacturing business, but you add a lot of latency to decision-making. Interestingly, there's a lot of military history that would say the same thing: if you look at the function of an army, at some point it moves to people in the field making the decisions, so you have to have the training, the strategy, and the recruiting to do that, and then you have to empower your teams to work. I think about a lot of that very similarly. Where's my other one?

Oh, the second thing I think a lot about is that in the AI world, at least, strategy is a somewhat overrated concept. What I mean by that is: I was talking to a CEO in the biotech space, and he was saying that strategy is very important to them because every time they make a capital decision, it's a seven-year capital cycle, right? In that case, strategy makes a lot of sense. But in the AI world, one thing that's been interesting to me is that every three months the entire world changes, and I've just had to get very comfortable with that dynamic. So you have to think about your investment life cycle as core beliefs you have, plus 30 to 40% of things that you iterate constantly based on new tech. There is tech that you're going to build, say a new voice agent comes out, that will become obsolete, and you have to be very comfortable that you're building an interoperable set of frameworks that you can integrate the new tech into. That has to be a core function of the business. Five-year strategic planning is not a useful exercise right now in a lot of ways. You want to think about five years in terms of the cultural context you build, the organizational, the institutional memory, to use the Seven Powers framework, but the actual iteration cycles are much, much faster, and if you don't react to the market, if you don't react quickly, that does not sustain. Now, the interesting flip side of that is that enterprise sales cycles, for example, are much longer, so it's not like you can't survive unless you're making decisions instantly. But I do think the big thing is that a lot of the tech being developed changes every two to three months, and you need to be constantly incorporating that into what you build.

>> Final one, I promise, before the quick-fire. You talked about always traveling, and you mentioned your girlfriend earlier. How do you make that work, and what would you advise me, like, tips and tricks to not have a severely pissed off girlfriend most of the time?

>> I think the first thing is to find a great girl who understands that you are really passionate and is supportive of that. My girlfriend Claudia has been great on that front; I'm very appreciative of that. But look, it's tough. I'm on the road probably 60% of the time. If you look at my last four or five weeks: Riyadh, Geneva, Paris, Berlin, London, San Francisco, Boston, Singapore, now London again. So, I mean, that's a...

>> You enjoy this.

>> I do, in some ways. I feel very lucky to be building something at this particular time and with a group of people I love working with. This happens to be what I spent my last decade doing, and it happens to suddenly now be what a lot of people want to do, which is great. So I feel very lucky because of that, and every day I wake up and see what else I can do to push that forward. So I do kind of live on the road. But look, some of the things I've tried: you figure out things like FaceTime, you make sure you keep the cadence of interaction high, because being on the road is tough. But I also don't think it's forever. I think I'm at that fun stage of trying to take something, like, we kind of went zero to one and now we're trying to go one to n, but we're not yet a fully mature public company or anything like that. I think she's been very understanding throughout that process.

>> Are you ready for a quick fire round?

>> Yeah.

>> Okay.

I think I should name it the discomfort round. OpenAI at $500 billion or Anthropic at $360 billion: which would you rather invest in?

>> I do not comment on any players in the model builder space, for the right reasons.

>> You can see why it's the discomfort round. What's the most underrated infra company today?

>> You know, I'm going to go with Databricks, which, you're going to say, well, they're very highly rated. But look, I think their tech is great, and it's interesting that in a lot of ways the most useful foundation for AI is really good data and Databricks infrastructure. When I hear a customer has them, I'm always very happy.

>> What's the best advice that you've been given that you most frequently go back to?

>> We talked about this a little bit earlier, but a former CEO that I respect a lot, when I took the role, I asked him his advice, like, "What's the best way to think about a team?" And he said, "Look, your job as a CEO is to do three things really well: recruit great people, create a culture where they love working together and build great things, and try and make them all extremely rich." I think it's a funny framework, but it's an interesting way to think about my responsibility to employees. I want to find great people, help them enjoy each other, and then build something that becomes big and helps all of them achieve their dreams.

>> What's one widely held belief about AI that you think is completely wrong?

>> That out-of-the-box agents will solve everything with the push of a button. I think that is the biggest misconception now. Many people were hoping the adoption curve would be: I buy something, I just push it into my business, and it takes a whole process and fixes it. I think they're realizing it requires training, fine-tuning, and a whole host of process redesign and business ownership.

>> You are me today. You have a new $400 million fund.

>> Yeah.

>> And you're a partner in the fund with me. Where should we be investing where most people are not? Because everyone is investing in agents out of the box.

>> Yeah, well, look, I think it's an interesting question, because a lot of the reason people are investing in agents out of the box is that they're trying to apply a SaaS paradigm of what's worked historically to AI, which is challenging. The model-building layer is clearly producing amazing returns. The AI agent layer is more complicated. And where it's also complicated is that the application layer is tricky too; you hear a lot of commentary on this, like, many of the applications may or may not work, they're not really getting full workflow embedding, they're more of a nice-to-have in a workflow context. So my counterintuitive take would be: one interesting question of the paradigm now is whether new companies built around AI get distribution faster than big companies figure out how to adopt AI. I think that's the interesting paradigm for our society. Some of the most interesting new businesses are actual businesses using AI in the physical world that are AI native, and that will be highly disruptive. You mentioned Revolut and banking, for example, or you could go into loan servicing. There are many different areas where people are standing up new businesses. One of the most interesting stats I've heard recently is, if you look at Y Combinator's recent class, I think it's the largest, it's 2x the revenue of any prior class, and many of those are businesses that are actually serving a customer need, not selling that customer software, if that makes sense. So from an adoption standpoint, one way to do this is to bet on AI agents, which are more of a SaaS paradigm, who will sell stuff to customers; the other way to think about it is, what are the business models that will change because of this? I think there's a whole host of genAI-native services businesses, you know, tax, accountancies, etc., that are really interesting examples of that.

>> Again, you're a partner with me in the fund. Do we just get used to a world of lower margins? Is that how this business plays out? Is the world of 70-80% software margins over?

>> First of all, I'd challenge that 70-80% software margins actually ever existed. What I mean by that is, there's the gross margin, and then if you look at profitability and public software multiples, it's fascinating: in the last two years you've seen public software multiples go from 20x to 10x, partly because of growth changes and partly because, as they've tried to move to profitability, their growth slows materially. I'd actually take the flip side of this, which is that the integrated units will be very, very profitable, because of the way they grow: they'll be able to acquire customers faster and build them things that are good, faster. They won't have the box stickiness, but I would also argue a lot of those software companies, below the line, were not that profitable.

>> When you look forward to the next 10 years, final one, what are you most excited about? For me, my mother's got MS; I look at potential advancements in MS, drug discovery, treatment pathways. What are you most excited for? I like to end on a tone of optimism.

>> Yeah. You know, despite some of what I call my realism on enterprise adoption, I actually am an AI optimist. I think the current narrative on some of the risks is far outweighed by some of the benefits, and just to give a couple of examples, and I'll go through a few, including healthcare. If you take energy as an example, there's a lot of question around data center implications for energy, but do the math: right now data centers are about 1% of total global electricity usage, and AI data centers are 0.25 to 0.5% of that, so actually really small. Cooling, air conditioners, is 14 to 20% of global electricity usage. AI has so many different ways, with grid optimization and cooling, where, I mean, the World Economic Forum just came out and said it's going to be massively net positive from an environmental impact standpoint. So I think energy is one where, if you think about all the energy needs we're going to have and the investment now going into clean energy because of all this, we'll actually be in a much better place 10 years from now. I think healthcare is another interesting one.

If you look at US healthcare, we spend $14,000 per capita per year on patients in the US; that's a rough spend. That's two and a half to 3x what Germany and Canada spend, as an example. Now, if you break down the context of that: roughly 9% of that is administrative, something like 25% of it is waste, and then the actual cost of care is really challenging. Johns Hopkins just released this stat that 250,000 deaths a year happen because of avoidable errors. And you see things in AI like 20% better identification of breast cancer risk, for example. So I think healthcare is another one where the cost framework has not been good over the last 20 years, and the cost-of-care improvements will be really material if AI works well. So that's another one. The one I'm probably most excited about is education.

I think, actually, if you're a kid growing up in any socioeconomically disadvantaged city in the world, your ability to learn about any topic on earth incredibly quickly is better now than it has ever been at any point in history. You can take any topic on earth and, with just an internet connection, learn it; you can go through and pick your topic. One of the reasons that's particularly important is that the educational system we've had for the last 50 years doesn't really work. We have massive K-through-12 challenges with STEM topics in the US, for example. We have huge learning gaps, largely by sociodemographic context, and most of our educational system is based around teaching people biology, English, and history, and not teaching them basic things like FICO scores or how to code. To add to all of that, the college system has created the student debt crisis, where way too many people are going to colleges that are not worth going to and taking on enormous amounts of debt to do it. So I think the way our educational system functions will shift materially. We're a talent assessment company; an enormous number of the people we bring in did not go to college, and we assess them on cognitive aptitude and skill. So the really positive note I would leave on is that I think the way people learn, the topics they learn, and the way we look at resumes and screen and assess people will move in a really positive direction, and a very different one than we've had for the last hundred years.

>> Absolutely thrilled to hear that there is value in non-college folks or dropouts. As a dropout myself, this has been so much fun to do, Matt. Thank you so much for being so flexible with the topic type. You've been fantastic, dude.

>> Thank you for having me.
