
Bringing AI to the Auto Industry | J.D. Power at AIPCon

By Palantir

Summary

Topics Covered

  • AI Secret Sauce: Data Beats Models
  • ChatGPT Obsolete for Car Buying
  • Prompt Engineering Drives LLM Behavior
  • LLMs Unlock Warranty Root Causes
  • Hybrid LLMs Conquer Auto Intelligence

Full Transcript

I'm going to give you a little bit of show and tell of the things that we're doing with AIP and LLMs, but before that, a little introduction. I think you know J.D. Power because we give awards to cars. We do benchmarking. We talk to millions of customers, mostly of OEMs. And then we give awards. So you've

probably seen commercials, maybe on the Super Bowl, that say the SUV of the Year or the Best Experience on a Luxury Car. So that's us. But in

reality, that's a very small part of our business. Most of our business is around data analytics. So what we've done over the last few years is to build and aggregate data sets that, in a way, define the inner workings of the auto industry, which is a $1 trillion industry. This is a big, big industry. So

we understand what cars can be built, what cars are actually built, where they are sent, what cars are sitting in dealers' parking lots, when they are sold, the prices they are sold at, what incentives are applied, the demographics of who buys those cars, the problems the cars have when they're repaired, warranties, et cetera, et cetera. So we

have a complete view of the car industry. And we spent many years doing things with these data sets. We do some machine learning models, very domain-specific models, traditional models. And we decided about a year ago that we're going to get into AI with more conviction. And particularly this year, we've been driving to become an AI-first company. So we're happy to partner with Palantir. I think the last slide, about iterating with a partner you trust, is very relevant to us, because we've been learning a lot over the last five, six months. We actually have been working together for about five or six months. So why now? See, this works. Okay, great. So

what we've seen over the last few months is that even my mother asked me about ChatGPT, correct? So there's a big step in innovation in AI, correct? Things are

becoming, in a way, commoditized. If you go to sites like Hugging Face, there are hundreds of thousands of open-source models there to be leveraged: many LLMs, some regression models, classification models. Some models are being downloaded tens of millions of times. So now

the secret sauce of AI is not in the models that you can build, but in how you apply them to the data that you have. And, as you will see, in how you deploy them into applications that can actually move the needle. And again,

we have consolidated these data sets. We have bought many companies over the last four years because we wanted those data sets to come into the fold. So we bought companies that have inventory data, for example. Six months ago, we bought a company that had data around EVs. So now we have all this data, and we say, awesome, with all this data, and then with AI, with LLMs, what can we do? And

as we look at the industry, a trillion-dollar industry, it becomes a very exciting and open field of opportunity. Our job with Palantir is not necessarily to focus on internal efficiencies, but actually to build solutions that move the needle in the auto industry. Those

are our main clients, correct? So it's a trillion-dollar industry. Any problem that we solve is a big problem to be solved. So you can think about problems like when you buy a car, correct? And you try to figure out what is the car that you want, that you need. Where can I find it? What is the right price? That's a big problem for a consumer. Dealers have to figure out: what cars should I order? How should I price them? Correct? And OEMs have a lot of challenges and opportunities ahead. What cars should I design? Which ones should I actually build? Where should I send them in the US? What should be the right price?

The incentives. And are my cars breaking? What are the problems in repairs? In repairs,

for example, in warranty, there's about $7 billion spent today in the auto industry just to manage warranties. Can we move the needle on a $7 billion line, for example? So what we're trying to figure out, what we're trying to experiment with Palantir on, is how can we use AIP? And again, when I say we're trying, we started four or five weeks ago with the launch, correct? So it's impressive how much we have done. And I'll show you a little bit where we are. But what type of problems can AI help us solve, particularly LLMs? Let's start

with the problem statement that I stated at the beginning, correct? I want to buy a car, and I want an AI to help me buy a car. So the

first thing you can think about is something like this, correct? I want to buy my first plug-in hybrid. I currently have a small SUV, and I think it's the right car for me and my family. What models can you recommend to me? So

you can also think, I love AI, I'm going to go to ChatGPT and ask it this question, correct? But unfortunately, if you go to ChatGPT and you ask it, hey, when were you last trained? It will tell you that it was trained in September 2021. So how can a machine, an AI

that was trained so long ago help me buy a car now? And if you look at the industry, a lot of things have happened in those 600 days since ChatGPT was trained, correct? 600 new models were launched in the industry. 1.5 million different car configurations were made.

1.2 million cars are in inventory each day at the dealers, but 1.1 million are sold on average. And that's actually getting higher now that we're getting away from COVID. And there's about $2.4 billion a month spent on incentives. So how can an AI that was trained 600 days ago help you understand what is the right choice for you now? So we're trying to figure that out. What we're doing now with LLMs, what we've done for the last few weeks, is to try to determine how we can take the best that an LLM can offer and map it to the

data that we have that is real-time, high-quality data. So I'm gonna show you...

I'm sorry. So we asked ChatGPT the question that I showed you about the plug-in hybrid, and it gave me some models that were actually 2021 models, obviously, because it was trained that way. But at the end, it said: additionally, checking with local dealerships or visiting manufacturer websites will provide you with the most up-to-date information on the latest plug-in hybrid models available. Great. Because that's the only thing you can say, correct?

Can we do better with that data? So I'm going to show you a demo.

Let's not start it yet. And a couple of things before we start. One,

this is not a consumer-facing application. This is an application that we're building for us, to prepare and build that consumer-facing application that's going to be on our site and on the sites of some of the OEMs. So what we're learning now is the interface, but most importantly, how do we tell the LLM how to, quote unquote, behave? How do we put all these things together? And I'll give

you some of the lessons that we have learned over the last few weeks. And

then second, what is important to us is to understand that the dialogue that a person has with an LLM can go in many, many different ways and can go very deep or very shallow. I'm going to show you here just a couple of questions that a user might have. But then we can talk about what's happening in the background and how that conversation can be extended

to many other use cases. So let's start the demo. Cool.

So we start a session. We call this session Family Car. And the

user types, basically, the question that I talked to you about in the first slide, correct? Which is: I want to buy a hybrid. Can you help me out? Correct?

So this goes to an LLM. The LLM takes it and understands what the question is about. And then the LLM says, OK, I'm going to look at J.D. Power's data, and I'm going to find the best answer for this question. Correct? So the LLM takes that and gives that to the application. And it says, OK, these are the cars that I found in the J.D. Power data that I think you will appreciate, or that would be good for you. But then there's an additional question. Hey, I

like this feature that keeps me in the same lane. This is customer language, not technical language. And actually, I don't want to spend that much money. So the LLM takes that, tries to understand the message, goes back to the J.D. Power data, and says, OK, here you go. Here are three potential models that you can take a look at. And by the way, here are the trims, which are specific variants of those models. The customer can select a couple of things that they like, and then we tell the LLM, awesome, here are the two things that I want you to compare, LLM. And it goes and says, here you go. So those things that are written there, if you look at them, are basically the reasons why the LLM recommends one or the other. This text is written by the LLM. And you see things like, OK, I got you. It's lane keep assist. It's great for your family.

On the other side, there's a bunch of features that you might like, but it's pretty expensive. You told me you needed to watch your budget, et cetera, et cetera. And then we told the LLM, please give me a list of the features that you think are relevant on these two cars. Correct?

Let's find a car. LLM, help us find a car. So the LLM goes to inventories and shows you where the cars are. And then you just select the one that you're going to go with. This one is a bunch of miles away, but apparently, potentially, it's the best car you have available. And then there you go. You find your car, et cetera. Again, a short conversation, but just keep in mind that the LLM is driving all of this. And I'm going to give you a quick look at what that means. So I told you this is an internal application. So

as you train the LLM, you might tell it things. Like, for example: great, but when you show a car, also show the model year. Or you could say: also show the payment, the monthly payment, because people don't understand MSRP very well, so show the monthly payment.
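The pattern described in this demo — extract what the buyer wants, then answer only from live inventory — can be sketched roughly like this. The vehicles, field names, and filter function below are hypothetical stand-ins for illustration, not J.D. Power's actual schema or application code:

```python
# Minimal sketch: instead of letting the LLM answer from stale training data,
# the application turns the user's message into structured constraints and
# answers only from current inventory. All data here is made up.
from dataclasses import dataclass

@dataclass
class Vehicle:
    model: str
    powertrain: str      # "phev", "hybrid", "gas"
    body_style: str      # "suv", "sedan", ...
    msrp: int
    has_lane_keep: bool

INVENTORY = [
    Vehicle("Model A", "phev", "suv", 38_000, True),
    Vehicle("Model B", "phev", "suv", 52_000, True),
    Vehicle("Model C", "gas",  "suv", 29_000, False),
    Vehicle("Model D", "phev", "sedan", 35_000, True),
]

def find_candidates(powertrain=None, body_style=None,
                    max_msrp=None, must_have_lane_keep=False):
    """The 'tool' the LLM calls: filter live inventory, never guess."""
    hits = []
    for v in INVENTORY:
        if powertrain and v.powertrain != powertrain:
            continue
        if body_style and v.body_style != body_style:
            continue
        if max_msrp is not None and v.msrp > max_msrp:
            continue
        if must_have_lane_keep and not v.has_lane_keep:
            continue
        hits.append(v)
    return sorted(hits, key=lambda v: v.msrp)

# First turn: "I want a plug-in hybrid SUV" -> the LLM emits these arguments.
first = find_candidates(powertrain="phev", body_style="suv")
# Follow-up: "I like the lane-keeping feature, and I don't want to spend that
# much" -> the LLM tightens the same filter.
second = find_candidates(powertrain="phev", body_style="suv",
                         max_msrp=45_000, must_have_lane_keep=True)
print([v.model for v in first])   # both PHEV SUVs
print([v.model for v in second])  # only the affordable one
```

In the real application the LLM would produce the filter arguments from the conversation and write the comparison text itself; the point of the pattern is that the facts come from current data, not from the model's training set.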

Okay, so what's happening in the background? Very quickly, I don't know if you guys work in Foundry or not, but basically here we have the yellow, which is the interface with the customer, where the chat happens. The green things are what make it possible, the interaction between the LLM and all the parts of the application. The ontology is in the middle. This is where our data is, the data that we think is relevant for this use case, and then the applications are on the other side. Correct? The beauty of having Foundry and AIP is that you can build this really, really quickly. Really quickly. So we built this in a matter of days and iterated in a couple of weeks. Now, the cool thing is, obviously we're connecting all the systems, but what about the LLM? How do we tell the LLM how to, quote unquote, behave? You do that through prompt engineering, and AIP allows you to build that into the system. So this is actually what we tell the LLM, how to behave, what to do. So: you are an AI assistant that provides guidance to someone looking to buy a vehicle. This is an actual instruction.

You are friendly and helpful. This is an actual instruction. It's kind of crazy, no?

Et cetera, et cetera. So that's the first one. The second one is: use only the information on vehicles from the JSON, which is basically the J.D. Power data, correct?

But make inferences about what the customer may want based on their query. This is

actually what we typed as a prompt to build the application. So in the last one, we're telling ChatGPT: when somebody wants to compare vehicles, don't go crazy and go back to 2021 and find some stuff. Don't do that. Just focus on the J.D. Power data. Correct? So there are more things happening here. We are telling it, for example, things it doesn't know. We tell the LLM: if you can get a zip code from the user, ask for a zip code, and then you can check how much snow falls in that zip code. If there's a lot of snow, think about all-wheel drive and the clearance of the vehicle. If you have a zip code, you can know the price of electricity and the price of gas, and you can give me a total cost of ownership comparison between a plug-in hybrid and a non-hybrid. Also the miles, for example, that you drive a day, et cetera, et cetera. So it's very compelling.
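A rough sketch of how a system prompt like this might be assembled, paraphrasing the instructions quoted in the talk. The exact wording, the `build_messages` helper, and the fuel-cost function are illustrative assumptions, not the production prompt or API:

```python
# Behavior rules paraphrased from the talk; the precise production wording
# is not public, so treat this text as a sketch.
SYSTEM_PROMPT = """\
You are an AI assistant that provides guidance to someone looking to buy a vehicle.
You are friendly and helpful.
Use only the information on vehicles from the JSON provided, but make
inferences about what the customer may want based on their query.
When the customer wants to compare vehicles, compare only vehicles found in
the provided data; never fall back on vehicles from your training data.
If you can get a zip code from the user, ask for it. With a zip code you can
look up snowfall (consider all-wheel drive and ground clearance where it
snows a lot) and local electricity and gas prices (offer a total cost of
ownership comparison between plug-in hybrid and conventional models).
"""

def build_messages(user_query: str, vehicles_json: str) -> list[dict]:
    """Package the behavior rules, the live data, and the user's question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "system", "content": f"Vehicle data (JSON): {vehicles_json}"},
        {"role": "user", "content": user_query},
    ]

def annual_fuel_cost(miles_per_day: float, price_per_unit: float,
                     miles_per_unit: float) -> float:
    """Hypothetical helper behind the TCO comparison: yearly fuel (or
    electricity) cost from local prices looked up via the zip code."""
    return 365 * miles_per_day / miles_per_unit * price_per_unit

# e.g. 30 miles/day on gas at $4.00/gal and 30 mpg:
# annual_fuel_cost(30, 4.00, 30.0) -> 1460.0 dollars/year
```

The design choice worth noting is that the zip-code heuristics live in the prompt as instructions, while the actual lookups (snowfall, energy prices) would be separate data calls whose results are handed back to the model.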

The beauty of the system, again, is that on the top you see some of the stuff that we're doing with Palantir. We have several applications. Some of the applications are out with our clients already. We basically have data flowing all the way to an ontology, and then applications feeding off that ontology. One of those applications, for example, is an application around repair analytics, to understand and optimize warranty costs.

So what we're doing now is an application using LLMs that will take verbatims from repairs. Every time a car is repaired, the technician at the dealer writes: this is what the customer told me, and this is what I found and how I repaired it. So you have hundreds of thousands of those, millions of those. So if

you're a system engineer trying to focus on how to minimize warranty, you want to understand root cause. Why is this failing? At what speeds? At what temperature? What are

the behaviors? So we're building an application powered by an LLM that will allow us to do this. And this is really real. I mean, we'll present this to a client in two weeks. And if we get it right, once we're convinced that it's right, we'll deploy it as an application to our clients.

A few things are really critical for us. Like the previous slide talked about, one is speed. We need to be able to do this fast. We need to be able not only to take things to market pretty fast, but to learn really fast.

And the iteration cycles here are not measured in months or weeks or days. This

is about every hour we're learning things about how do we train LLMs and how do we understand how they behave and how to leverage their insights, et cetera.

Extensibility, we need to connect every single application, every single model that we have in our platform to an LLM-driven experience. And the last two are completely fundamental for us.

Our data is our IP, is the value of our company. The data that our customers put in our systems is absolutely critical. So this has to be bulletproof. It

has to be very secure, and privacy and governance are an absolute must for us.

Finally, the road ahead. The first one, we have done this for a few years.

So we bring data, analytics, and we do domain-specific models. Now we're leveraging LLMs to integrate with J.D. Power data and also bring in other models to provide new experiences.

We've been working on that, and we're actually releasing applications in a couple weeks. And

ultimately, we think the challenge is: how do we build intelligence that actually understands the auto industry and can tackle fundamental questions? And that is combining LLMs with domain-specific models. We are even debating: should we train our own LLM, to be more nuanced about the discussions that we have in the

auto industry? So, running out of time, I want to thank the Palantir team, great partners. Taylor, Suzanne, Jan, Barry, all those guys have done amazing work. A lot of respect and appreciation for the work, and looking forward to the next few months. Thanks a lot.
