Are LLMs Dead?

By Opaque Systems, Inc.

Summary

Topics Covered

  • LLMs Hit Diminishing Returns
  • Hype Cycle Entering Trough
  • Metadata Leaks Secret Sauce
  • Proprietary Data Unlocks Mastery
  • Confidential AI Prevents Leaks

Full Transcript

All of the public data has already been consumed. It's already been gobbled up. So the way to get additional improvements is around proprietary data. Companies are very reluctant to use that, with good reason, because once it leaks, it's gone forever. It gets consumed into somebody else's model, and there goes your strategic advantage, your competitive advantage.

[Music] Hi, welcome to AI Confidential, a show about AI and how to benefit from it responsibly. I'm Aaron Fulkerson, the CEO of Opaque.

>> And I'm Mark Hinkle, the founder of the Artificially Intelligent Enterprise Network.

All right, today we are doing something a little different. Instead of interviewing a tech leader about AI, we're going to break down the debate that's going on in the AI world right now: are LLMs dead or not?

We should have had Rube on this, because our friend Ruben Cohen wrote about this in a LinkedIn post. He writes that the tech world's lost its mind, that LLMs plateaued more than a year ago, and that everything since has been incremental at best. Which, yeah, there's some truth to, and he points out that the SWE-bench Verified numbers point to this as well. There's been a lot of discussion about this around the launch of ChatGPT-5. And obviously I don't think they're dead, but I think what we've seen illustrated with ChatGPT-5 is that we're reaching the upper bounds of the possibilities with LLMs. When we see these models drop, we're not seeing these huge step-function improvements in quality.

>> Yeah, I agree. When we first saw ChatGPT, and I guess that was GPT-3, most of the world had no comparison. Now we see new ones coming out from Claude and Gemini. GPT-5 just came out, and compared to three years ago it's amazing tech, but compared to where it was six months ago it's just a meh, incremental upgrade. And I think that's what everyone's saying: we had this huge exponential leap over and over again, and now it's just 5-10% better. I guess it's hard to say for sure how much better each version was, but they don't seem to be the huge upgrades they had been in the past.

>> Yeah. And I think there's something happening around AI right now which we're familiar with in technology: the hype cycle. We are tipping over the peak of hype and sliding into the trough of disillusionment. It's a common framing of how a particular technology plays out. It captures the attention of a community, in this case the global community with AI, and people's expectations continue to grow and grow, and then there's a point where it tips into a downward trend of criticism. That's exactly what we're seeing, because we're not just seeing people expressing their disappointment with GPT-5; we're seeing it across a variety of different spaces within AI, which we're going to get into as well.

>> This is a technological phenomenon that we see all the time: we come up with a new tech, we maximize its potential, we keep doubling down and doubling down, and we get to the point of diminishing returns. I think a good example could be CPUs and GPUs. CPUs, when it comes to artificial intelligence and neural networks, aren't very efficient, but we kept pushing them, and then you had Nvidia come along with CUDA and basically turn graphics cards into supercomputers. Initially you saw a 1600x improvement in performance on the data crunching, and that's why everybody wants GPUs right now. But there are other technologies and techniques that we might want to look at to get to the next level, just like the transformers and neural networks that actually work, like AlexNet, which I think was Geoff Hinton, sort of started the whole ball rolling again after an AI winter. We're having a cold, cloudy weekend in San Francisco right now with the LLM controversy, but that probably too will clear with some additional updates. But what are you thinking?

>> Well, there's a bunch to unpack here. One of the things I've observed, and you see this a lot online, and I've actually talked to friends about it, is that people were very frustrated by the launch OpenAI rolled out with ChatGPT-5 because they'd become so attached to the interactions they were having with 4o. You'll see a lot of people who have written articles where they do analysis across multiple vectors for specific capabilities, like writing or analysis. I've read a few of these, and generally people are claiming that they feel GPT-4o is superior to ChatGPT-5. I don't know if that's true. I looked at the comparisons. My experience is that ChatGPT-5 is incrementally better than 4o. But it's worth noting that what I think we're experiencing is people developing a personal affinity for the actual personality of the way the model communicates with them. You even see people writing about this, like "I lost a friend." I don't do that personally; I don't anthropomorphize the models. I do have a bad habit of being really polite, because that's just how I communicate, which is another funny factor Sam Altman talked about a couple of months ago: the tens of millions, maybe hundreds of millions, of dollars that saying please and thank you to the models costs the company. But I don't feel a sense of personal connection, an emotional connection, with the way these things communicate with me. And people do.

>> Everybody likes a certain tool for a certain job. If you're a carpenter carving wood, you might like a different kind of chisel for different things. I think that's what was happening, even for me, with GPT. I would normally start with GPT-4o, and then when I needed quick, good reasoning I would use o3 and o3-mini, and then eventually, I think, o4-mini-high. It's so funny, because I knew exactly which one, and I was switching all the time based on my task. Now what they've done, essentially, is use an LLM router to decide which model, and which expert if it's a mixture of experts, is best for your use case. And obviously they didn't have the training data on how we used it.

And you mentioned something I always think about, and maybe I'm just jaded, but Sam Altman said, and I think the number was $50 million a year, that's what it's costing them to handle the pleases and thank-yous in ChatGPT. Well, I think what's happening now is that the LLM router portion of ChatGPT is probably learning from us. And I'm a little cynical, in that if it's learning, is there also some calculus in there to say, hey, let's optimize for power consumption and processor consumption and everything else? Because every single one of our queries today is essentially operating at a loss. Until they get to the point where compute and processing is cheaper, they're losing money on every query. So, what do you think on that?
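[Editor's note: a minimal Python sketch of the routing idea Mark describes above. The model-tier names and keyword heuristics are illustrative assumptions, not OpenAI's actual router logic.]

```python
# A toy LLM router: classify the incoming query, then dispatch it to
# the model tier that seems best suited. Heuristics are placeholders.

REASONING_HINTS = ("prove", "step by step", "debug", "why does")
CHEAP_HINTS = ("summarize", "translate", "rewrite")

def route(query: str) -> str:
    """Pick a model tier for a query using crude keyword heuristics."""
    q = query.lower()
    if any(hint in q for hint in REASONING_HINTS):
        return "reasoning-model"   # slower, more compute per token
    if any(hint in q for hint in CHEAP_HINTS):
        return "small-fast-model"  # cheap, optimized for throughput
    return "general-model"         # default chat tier

if __name__ == "__main__":
    for q in ("Summarize this memo", "Why does my loop never terminate?"):
        print(q, "->", route(q))
```

A production router would learn these dispatch decisions from interaction data rather than fixed keywords, which is exactly the metadata concern discussed next.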

>> The point you just made, about OpenAI using our interaction data with ChatGPT to optimize which model it's using, is super interesting and important to note. I had a conversation with a financial services company that does wealth and asset management, and they shared with me that they believe OpenAI isn't using the proprietary data they put into OpenAI to train the actual models. But what they are absolutely certain of is that across their different financial analysts, even when they're creating something like an evergreen account, over the last year they've observed that the ways and means in which the analysts interact with the data, the actual user experience, the behaviors with which they interrogate the data, the questions they ask, how they do that, is clearly training the system and creating better outcomes.

So why is this significant? You might say that's wonderful, because then I get the benefit of other people's interactions with ChatGPT models to deliver me better outcomes. Well, if your business is financial services, and you're in the business of having the best financial analysts, who take data and interrogate data, and the system is learning their specific ways and means, it's a regression to the mean. Everybody's going to have the same ability, because the system is learning how to get to a specific, desirable outcome through their interactions with the data. Now, that's the intellectual property of that financial services firm. So it's a huge issue for them, because they're saying, wait a second, through the metadata created as part of our interactions with the system, we're training away our secret sauce, our financial-analysis differentiation.

So I think you're right that the way they're using it, kind of like an LLM router to select models, and then intuiting based off a broad corpus of people's interactions with the system, is improving the outcomes. But of course, in the enterprise context, that's a really significant problem, because you're leaking your secret sauce.

>> Yeah, 100%. So when it comes to that secret sauce, I think we will definitely see more and more small language models, and I hate the term "language models" here, because really we're going to see small models that are very, very specific. And the question will be whether you come together as an industry and say, we're going to agree on owning this, like a trade organization. Let's say, in your finance example, that the JPMorgans and the Bank of Americas and the Goldman Sachses of the world join a trade organization, not unlike with Linux, and say, we're going to have a foundational model that we will train and we will all use, so the best practices of the industry benefit us all, and then our data will be the addition on top. And that's a little controversial, because we don't know how closely you can draw the line between process knowledge, specific financial data, and maybe even other kinds of data on special projects within a company. But I think you're almost going to have to see a level of specificity beyond the foundational models, because otherwise they will just own us all.

>> I totally track what you're saying. I'm going to tie it back to the original concept. There's the data used to train the models. There's the data that people are using as context, the proprietary data used in conjunction with the models. And then there's this metadata that's part of the interaction stream with these systems. And these systems aren't a single model; they're a collection of models that interoperate with each other and get routed. So in the example I gave, a financial analyst's specific processes for leveraging the model generate this metadata, which is actually their secret sauce. Think about it from a financial analyst's standpoint. Mostly they're operating on publicly available data, but they have a specific process, their intellectual property, that differentiates them in the market; it's why their customers work with them: they get better returns because they have a specific process for interrogating that data. Well, if that metadata is being collected and used to benefit everybody who interacts with the system, you're actually eroding your differentiating value through what's commonly known as regression to the mean, meaning the AI regresses to the average, so everybody has basically the same output. Well, that means your secret sauce is now part of the soup that's being served to everybody.

>> Yeah. And I think the architectural pattern enterprises will have will be not unlike what Altman is doing with OpenAI. So I think you'll have an LLM router. LiteLLM is a popular one, or, we had Marco Palladino on earlier this season from Kong, where based on the context of the query it routes to an LLM. So I think you'll have almost the equivalent: a general-purpose one, say, "I want to learn about a general concept in the world," and I'll go to ChatGPT or one of the OpenAI models, or Gemini, or maybe Meta someday. But then you'll have other, specific models, so you will have the financial models, and this is the part where I'm not sure how it's going to play out. So you're basically saying the process could be my secret sauce if I'm one of the large financial vendors. I'm always going under the assumption for AI that eventually we're all pretty much going to have access to the top AI; like electricity and the internet, it'll commoditize. So I'm trying to figure out what part of the model, or part of the AI layer, is your specific competitive advantage. Is it all of your data and all of your processes, or some of your processes and some of your data? That's the question that remains unanswered to me.

>> So let's go back to the original concept: the innovation from LLMs seems to be moving from a step function to a more incremental curve. That doesn't mean we're going to see a slowdown in the AI landscape, and the reason is really twofold. One, the data. As we've already explained, all of the public data used in AI models has already been consumed. It's already been gobbled up. So the way to get additional improvements is around proprietary data. And I'm using that term broadly, because that could be your actual proprietary data related to your customers, your processes, et cetera. It could also mean the metadata of how you interact with these systems as a human: the processes you execute, the workflows you execute. So I'm just going to say proprietary data. Companies are very reluctant to use that, with good reason, because once it leaks, it's gone forever. It gets consumed into somebody else's model, and there goes your strategic advantage, your competitive advantage. Totally separately, there's the underlying architecture, which is currently transformer-based, and there's a bunch of innovation that can happen there. All right, so let's address the whole data question, which is one of the vectors for significant improvement, before we get into the architecture.

Imagine AI models like ChatGPT as culinary students trying to become master chefs, and on their journey they're consuming all of the publicly available cookbooks. Well, now this particular culinary student has consumed all of the cookbooks there are on the planet, all of the publicly available ones. In order for them to get better, there's a whole wealth of culinary information sitting around: secret family recipes that have been passed down for generations, for example, or the actual techniques for executing the steps within the cookbooks. For those of you who've cooked, you've seen this: you can follow a recipe, and somebody who's more experienced as a chef can follow exactly the same recipe, and theirs tastes orders of magnitude better than what you cooked, and you don't know why. So there are specific techniques that are coveted and kept private by professional kitchens. And then there are trade secrets around selecting your ingredients: how do you select the ingredients that go into a dish? So if you think about the data used within these models, it's a lot like culinary students trying to become master chefs who have already exhausted all of the public data. But there are troves and troves of super valuable information that would help them become master chefs, which they have not been able to tap into. And the reason why is that companies have tons of private data, emails, documents, customer interactions, metadata about workflows, that could make AI much better and smarter, but it's locked away behind corporate walls. So that's one vector that's super critical to understand.

Now the second one is around the architecture. First, you can use a stupid model with just the publicly trained data, or you can use your data in an unsafe way with these models. And it would be unwise to use your data without some kind of verifiable guarantee around data privacy, so that you retain ownership and sovereignty, because once you leak it, it's gone forever. It'll be the regression to the mean we've discussed: you'll just be the average of everybody else, and all of your differentiation goes away. But here's the point. Either way, let's say you do want to use this data with the current architectures. Well, transformer-based architectures have what we call quadratic complexity. I'll use another analogy for this. What that means is, you'll hear about "blowing out the context window." The way these work is that anytime you want to call upon something, you have to load everything into memory. So let's say you're a college student studying math, and somebody asks you, "What's 2 plus 2?" In order to answer that question using the current architecture, you would have to load your entire math curriculum into memory. You would have to understand everything from basic arithmetic all the way up to theoretical calculus and combinatorics, and you'd have to load it all into memory just to answer a simple question like what is 2 plus 2. That is what we mean by quadratic complexity.

Now there are new approaches that are what's called linear. There's a variety of these; we'll touch on them. What linear means is that you don't have to load every piece of information on the planet about math into memory in order to answer a very simple arithmetic question. You can just go immediately to the folder on addition and answer the question very quickly. Now, the significance of this is that it's not just a performance improvement in terms of speed. It's an improvement in how deeply you can reason, because you can string together answers much more quickly, meaning you can process much larger data sets. So that means, hey, if I'm a chef, I don't have to load every single public cookbook, every single trade secret within a corporate kitchen, and every single recipe from everybody's grandmother into memory just to figure out how to make cocoa. Instead, I can just load the steps for cocoa and execute on those.
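[Editor's note: a rough back-of-the-envelope sketch of the quadratic-versus-linear point above. The cost models below are illustrative assumptions, not measurements of any real architecture.]

```python
# Full self-attention scores every token against every other token,
# so work grows with n^2. A retrieval-style approach scans compact
# chunk summaries once and attends only within a few relevant chunks.

def attention_ops(n_tokens: int) -> int:
    """Pairwise token-to-token scores: n * n."""
    return n_tokens * n_tokens

def retrieval_ops(n_tokens: int, chunk: int = 512, top_k: int = 4) -> int:
    """Hypothetical cost: one pass over chunk summaries, then
    attention only inside the top-k retrieved chunks."""
    n_chunks = max(1, n_tokens // chunk)
    return n_chunks + top_k * chunk * chunk

if __name__ == "__main__":
    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} tokens: attention ~{attention_ops(n):>14,} ops,"
              f" retrieval ~{retrieval_ops(n):>12,} ops")
```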

>> I think what you're getting at, then, is dividing that data between, let's call it, common knowledge versus experiential data. That's the 30 years you put in to become a master chef. That's the 10,000 hours it took the Beatles to become the Beatles. That part, the stuff you can only come up with from experience, is what people want to hold back; but if they hold it back, will that limit the model? The second thing is, I think what people want is almost like in The Matrix, where I can just download all of that stuff and immediately become a kung fu master in 60 seconds. The third part is quadratic versus linear. When we first got GPUs, we got such a big performance jump that what they were basically doing, and I think this is what Ilya Sutskever, the co-founder of OpenAI, was talking about, is that when we suddenly had so much power, instead of giving you the directions to the nearest Starbucks, we gave you the directions to all the Starbucks in the world and let you pick. But now that we're pushing the limits, giving you the path to every Starbucks is sort of the quadratic approach, and the linear approach is: just give me the directions to the closest Starbucks. And that might scale better, because it's not as data- and compute-intensive. Is that what you're saying?

>> For sure. And you've seen, RV talks about this, a lot of people are talking about it: they're kind of mimicking these new architectures, like the state-space and recurrent families, by breaking the problem up into a subset of agents. That's where you get these multi-agent systems. They're hacking around the constraint of "I can't load the entire map of every Starbucks on the planet just to find a Starbucks near me." So what they'll do is break it up into subsystems, or sub-agents as part of a multi-agent system, to work around the problem, whereas other architectures for AI models, whether neurosymbolic or the state-space and recurrent families, take entirely different approaches. The point I'm making here is that there are two primary vectors where we're going to continue to see innovation. The real low-hanging fruit is unlocking your proprietary data, and obviously Opaque, being confidential AI, we do this all the time, including the specific LLM switchboard you're talking about. One of our customers that's public is ServiceNow. We run their confidential LLM switchboard in an Opaque environment, which allows them to provide guarantees so that when they're pulling in data from across all these disparate systems, which span trust boundaries, there are guarantees around data-policy compliance and proof that it's not leaking information. So that's one low-hanging fruit for innovation to drive acceleration in value from AI models. The other is really at the architectural level, and we leave that up to the frontier model labs; they all have interesting work going on to provide new engines for these AI models.

But in order for us to achieve real ROI, return on investment, inside the enterprise, really the only way to create a defensible moat or any kind of competitive advantage is around that low-hanging fruit of your enterprise data.

>> And I think one of the reasons we're having this discussion is also that MIT report that came out and said 95% of generative AI pilots at companies are failing, which I believe 100%, because it's not unlike any new technology trend. Everybody invests on the front end of the curve as it starts to fill up the headlines. We're in that hype cycle, or that trough of disillusionment, right now before we see the big...

>> It's going to get worse. We're just cresting the peak, and over the next, I'd say, six to twelve months, we're going to drop down into the deep trough of disillusionment.

I want to go back to that MIT research that was published. They did an analysis of enterprise generative AI deployments: 95% of them are delivering zero measurable return on investment. And that wasn't a typo or a misstatement. It was 95% delivering zero ROI. They further found that companies that buy AI tools from vendors succeed 67% of the time, more than those who try to build their own, and that the biggest return on investment seems to be in the back office, whereas the preponderance of AI investment has been front office, like sales and marketing.

So there's a lot here that we should dig into. One of the important points is that there's been $30 to $40 billion invested in generative AI in the enterprise, and only 5% of pilot programs achieve meaningful business impact. The bottom line is that companies are using off-the-shelf models or trying to train their own, and the key thing I keep coming back to is that in this age of AI, where AI is the new platform, your proprietary data isn't just valuable, it's your only moat. Once you leak it, it's game over. And in reality, these enterprises aren't using their proprietary data and aren't integrating it into their workflows; that came out of the research too, where they talked about integration. Another key point is that 90% of workers, I hope it's 90%, are using some kind of AI tool. They're using ChatGPT or Claude, and if you aren't using those, I probably don't want to work with you. So I'm just setting the table with all of the facts from the article so we can discuss them more clearly. As you said, this doesn't surprise me.

>> Yeah, no, it doesn't surprise me at all. I'm guessing that the large corpus of data is from existing, older, probably larger companies, because when you look at companies like Cursor and Windsurf, companies growing at tens of millions of dollars of revenue month over month, you have to say, well, those companies' pilots are all killing it. But if you have an existing workflow and existing unstructured data, and, as we previously discussed, you're not sharing the data you do have, you're probably not getting the advantage, and you don't know how to move forward. Is it a compute problem? Is it a data problem? Is it both? Is it an architecture problem? Because we've covered all three of those things. You're probably just stuck trying to use the wrong tool, or not using the tool at all because you're scared you'll cut off a finger.

>> So, the research was very specifically focused on enterprises, so these are big companies. You're faced with three options. One: stay stupid, where I'm just going to use AI generically. That was another callout in the paper: the people they spoke with were using it for sales and marketing, and you receive those cold outreach emails and LinkedIn messages; they're really bad, just ChatGPT-generated or some other model's generated outreach. I would call that staying stupid.

Option two: be crazy. Go unsafe and just roll your data into these foundation models without any kind of verifiable guarantees. Well, they're not doing that. I speak with enterprises every day, and I'm not alone; there's all kinds of third-party research. Less than 11% of CIOs in the enterprise have adopted generative AI, and when they do, they're not integrating their data. The number one factor blocking ROI from generative AI and agents consistently comes up as data security: data privacy is the number one thing keeping them out of production. So that really leaves the third door: you've got to go confidential. There's no other way to provide verifiable guarantees that your data is being kept private unless you're doing it using confidential AI.

And there are a bunch of people doing this. I had a call with the Azure team yesterday, and all of their critical AI and internal processes are converting to confidential. And it's not just them; it's Meta, it's Apple, it's Google. They're all rolling out confidential guarantees, because the nature of these non-deterministic systems requires it. In order to get to scale, you have to have a trust layer that's verifiable.

>> What you're hitting on is actually a technology we've had for a while now, but I'm going to back up, just because of what I do: I train people on how to adopt AI for their companies. I think when we talk about these pilots failing, people don't know about confidential computing as an approach, or that trusted execution environments already exist and are accessible in their hardware.
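[Editor's note: a minimal sketch of probing for a trusted execution environment on a Linux guest. The device paths below are common hints and are assumptions that vary by kernel and platform; real confidential computing rests on verifying a signed attestation report, not a device check.]

```python
# Rough heuristic: some confidential-computing guests expose device
# nodes that hint a TEE is present. This is a probe, not attestation.

from pathlib import Path

TEE_DEVICE_HINTS = {
    "/dev/sev-guest": "AMD SEV-SNP guest",
    "/dev/tdx_guest": "Intel TDX guest",
    "/dev/sgx_enclave": "Intel SGX",
}

def detect_tee_hints() -> list[str]:
    """Return names of TEE device nodes found on this host."""
    return [name for dev, name in TEE_DEVICE_HINTS.items()
            if Path(dev).exists()]

if __name__ == "__main__":
    hints = detect_tee_hints()
    print("TEE hints:", ", ".join(hints) if hints else "none found")
```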

But that leads to the bigger question. I recently trained 100 people who were part of Women in STEM here in the Triangle. I'm sure over 50% of them had PhDs, and the majority had master's degrees, so they were super smart researchers in biotech and that kind of thing. Every one of them wanted to know how AI works. And this is my hypothesis: enterprises are very willing to invest in the technology, but they made no investment, or a disproportionately small investment, in the education of their teams, whether it's their engineers using confidential computing, data pipelines, LLM routing, and the things we just talked about. I think educating people on how to solve problems in their business, their organization, their nonprofit, their government agency, et cetera, is as much of a gap as the technological wall we discussed earlier today.

>> Yeah. Here's what I've observed about your training for business and enterprise users: people still think AI is for automating writing tasks. They don't even know the basics, like: you write something, then you give it to your copilot, Claude, and say, "Give me critical feedback; my audience is XYZ; where do I improve this?" People are trying to automate things, which creates AI slop, instead of using it intelligently. And that's the thing you've done so well, educating folks on how to use AI effectively.

>> I love having AI as a thought partner, but sometimes you almost have to tell it, you've got to be super critical. We talked about a lot today. We talked about the technology and the failure of these pilots, but I think it comes down to the data. And you touched on it a little bit, but if you were going to be prescriptive to an enterprise, one of the 95% that has seen no discernible ROI on their investment in AI, and you wanted to go help them, what would your prescription be today?

>> How do you utilize your proprietary data? You use it as part of grounding with your models, and you'll hear about RAG, retrieval-augmented generation; there are other techniques too, but I use RAG broadly for the concept of grounding. What that means is you're taking all of your grandma's secret recipes, and everybody's grandma's secret recipes, and all of the culinary wizards who understand the techniques deeply, that experiential data, which exists within your enterprise, and you're using it in your AI. That's how you create differentiated value with AI. It's not just using the generic foundation models; that way you get AI slop. You need to ground it with your own proprietary data, and you have to be careful, because if that gets out, well, it's in the model, and your competitors have access to it.
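[Editor's note: a minimal sketch of the grounding/RAG pattern Aaron describes. The bag-of-words scoring is a stand-in for a real embedding index, and no actual model call is made; everything here is an illustrative assumption.]

```python
# Grounding: retrieve the most relevant private snippets for a question
# and build a prompt around them, so answers draw on proprietary data.

def score(question: str, doc: str) -> int:
    """Crude relevance: count of shared lowercase words."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def ground(question: str, private_docs: list[str], top_k: int = 2) -> str:
    """Build a prompt grounded in the top-k proprietary snippets."""
    ranked = sorted(private_docs, key=lambda d: score(question, d),
                    reverse=True)
    context = "\n".join(ranked[:top_k])
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}")

if __name__ == "__main__":
    docs = ["Q3 churn fell 4% after the pricing change.",
            "Grandma's rub: smoked paprika, brown sugar, overnight rest."]
    print(ground("What happened to churn after the pricing change?", docs))
```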

So what we focus on is what we call confidential agents. These confidential agents do grounding, with RAG or other techniques, over your enterprise data, and nobody can see it: not Opaque, not a hyperscaler, not even the model itself; it can process the data but can't train on it, and you have verifiable proof of that, including for the metadata that's part of your interactions. So number one: if you're inside an enterprise, this totally makes sense. If you're inside a business, this makes sense. If you're an individual, it probably doesn't make sense yet. But people who are using ChatGPT as a therapist, that's really frightening, the way people are interacting with these systems where they think it's their friend, because they don't fundamentally understand that this is not a conscious being. They just don't. People are creating a personal, emotional connection with these systems, and I'm sorry, it's just not healthy.

I use it to understand specific approaches. A particular one I was exploring recently: there's a concept in cognitive behavioral therapy, an acronym called RAIN, about recognizing an emotion so that you can process it rather than react emotionally, so you're much more thoughtful. And I was trying to reconcile how that relates to Buddhist detachment. So I was having a conversation with Claude about the concept of detachment, a few related concepts in Buddhism, and how they relate to cognitive behavioral therapy. That's healthy, right? I'm getting a deeper understanding of both spiritual concepts and how they apply to living in a healthy way alongside modern psychological concepts. I'm not talking to it to get advice.

The point I'm making here is that we have to put in controls to protect people's data, and the metadata around how they're interacting with these systems. So there's the enterprise approach around confidential AI, and then there's the personal approach. Here's one other thing that's super important. Everybody's talking about AI observability tools, Mark, right? You and I have talked a lot about AI observability tools.

If you're using AI observability tools to protect your data, you only detect a leak after it's happened. It's only after something's happened that you can detect it. And here's the thing: AI observability is incredibly costly, actually more expensive than the AI inferencing itself. This came up at the summit; people were talking about it in your workshop. So I'm not saying you don't need AI observability, but let's be real. The problem is, again, your proprietary data is the only way to create a moat with AI, a defensible competitive moat, and AI observability will only identify the problem after you've leaked, when it's already too late. Furthermore, the costs are higher than the AI inference itself, which means it's not scalable. You've got to protect your data, and there are ways to do that through confidential AI. AI observability will only get you so far, and it's going to be difficult to scale just because of cost considerations.
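[Editor's note: a minimal sketch of the "prevent before you send" contrast with observability: scrub obvious secrets from a prompt before it leaves your boundary, rather than detecting the leak afterward in logs. The patterns are illustrative, not a real DLP policy, and no substitute for confidential execution.]

```python
# Redact secret-shaped strings from a prompt before sending it out.

import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API_KEY]"),
]

def scrub(prompt: str) -> str:
    """Replace matches of each secret pattern before the prompt is sent."""
    for pattern, label in REDACTIONS:
        prompt = pattern.sub(label, prompt)
    return prompt

if __name__ == "__main__":
    print(scrub("Contact jane@corp.com, api_key=sk-123, SSN 123-45-6789"))
```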

And people need to think about these systems for what they are: very fancy autocompletes. And yeah, I love them; they're wonderful. And there are new techniques, already in production today, that will overcome the current limitations, so we'll continue to see exponential improvement in how these systems work. But that doesn't mean you should think they're your friend.

>> I'm the history guy, and I worked for the first international ISP, PSINet, back in the mid-90s.

>> I always thought...

>> Yeah, well...

>> I thought it was like psionics, because, you know, I like D&D. So I was like, oh, it's like the...

>> People called it PSI, but I think it was short for Performance Systems International, that was the original name, and then "Net." But we got everyone online, and we had the same sort of thing: everybody's online, and then they were like, well, we can't use email because we have no security for email. And then PGP, Pretty Good Privacy, became something a lot of companies adopted so that their emails stayed encrypted between two points; today that's evolved into something more like Signal or Telegram. Then, very shortly after we started to get people online, people needed secure access, and that's when VPNs started to rise. I think the analogy is the same: we start by getting everyone to use AI, and then it surfaces the very real problem of how to keep privacy. In those days it was getting people on the web and then having a VPN; today, we've started to use AI and then realized we need to keep our data private. So the security layer becomes confidential, and confidential computing is becoming a standard that everybody has access to and should probably consider adopting sooner rather than later.

>> Here's a couple of important points. You said VPN, and there's news that just broke about Zscaler: during their investor day, they talked about the trillions of transaction records from their customers' logs and how they're training models on that. They were talking about it as "our data," and it's actually their customers' data.

>> Oh, yeah.

>> So it's interesting. Here again, if I'm a large enterprise and Zscaler has all of my network data as part of their VPN logs, I would feel very uncomfortable with them using that to build AI models or do AI insight mining, unless they were using confidential AI. They could give me insights about my data without seeing my underlying data.

>> We have shared our data and our behaviors with all these SaaS companies. We've given up all this data in the past, and the stakes were pretty symmetric: we gave up our website data and we got something for it. I don't know that it's going to be an equal exchange of value between the vendor and the AI companies.

>> It's not. It's asymmetric: the enterprise vendor extracts the value, unless you have some kind of ability, through confidential AI, to guarantee that, okay, you can give me insights about my data, but you can't see my data and you can't use my data elsewhere. Otherwise it's complete asymmetric value capture on the part of the enterprise vendor. So, we should wrap it up. I guess the original question is: are LLMs dead, Mark?

>> I don't think anything we're using today is dead, but I think all of it is subject to change, which is the real point: things are changing. But they're definitely not dead.

>> I completely agree. If you want to learn more about how to use AI responsibly, check out the website opaque.co.

If you like the podcast, please subscribe wherever you find podcasts and leave us a review. This episode of AI Confidential was produced by Wolverton Media Solutions, Places Media, Kodiak Media, the AIE.net, and Opaque, the leading confidential AI platform. Music is by my buddy Matt Malamuka. I'm Aaron Fulkerson.

>> And I'm Mark Hinkle. And we'll see you next time.

Unless the evil data robots come and take us away.

Take your enterprise data and ground your AI. Oh, a bug flew onto my head. I got it. I don't know where that came from. Get up on the mic.

>> How about now?

>> Ooh, that sounds sexy.

>> Unless the evil data robots come and take us away...

>> ...to El Salvador, or wherever we disappear people in the United States. Now, where do we disappear them to?

>> Bellingham, Washington. We feed them to the Bellingham serial killer. The evil robots.

There's my penc coming out.
