
From Data to Intelligence, How Snowflake Powers Our Digital and AI Strategy

By Big Data LDN

Summary

Topics Covered

  • Silos Breed Inconsistent Truths
  • Snowflake Enables Real-Time Scale
  • Replatform Demands Real-Time Sources
  • Medallion Lakehouse Powers Certified Products
  • Strong Foundation Unlocks AI ROI

Full Transcript

Again, thank you very much, all, for coming to see my presentation. My name is Tom Prior. I'm the principal data engineer at the RAC. I've been in data for around 12 to 14 years now. I started my data journey at Argos as a senior developer, went through a couple of replatforms there, did some customer data work, and worked on large transformation projects at Argos, then joined the RAC as a principal data engineer. Originally we were a SQL Server stack, but we've replatformed onto a more cloud-native infrastructure, which is what I'm going to talk about today: the journey we've been through over the last four years replatforming our data stack onto Snowflake, what that's unlocked, and the key things we're now looking at in the future, now that we've got a very strong data foundation.

So, after this brief introduction of me, I'll give you an introduction to the RAC. Some of you may not have heard of the RAC — I hope you all have; we're a big UK brand. Then a bit about where we started: when I joined, the RAC was a full SQL Server stack, so I'll go through where we started, the pitfalls and troubles we had on SQL Server, then the steps we took to replatform onto Snowflake — what decisions we made, what architectural decisions, what software we needed to look at — and then, overall, the benefits, including some real-world benefits we've had now that we're on Snowflake using a full cloud infrastructure.

So, the RAC. We're the biggest breakdown and motoring services brand in the UK. We're about a 125-year-old company, maybe even older than that. We have over 15 million breakdown members, covering both our corporate partners and individual members. We also have around 700,000 insurance customers, 1.2 million breakdowns serviced so far this year, and 3.4 million myRAC accounts — I'll go into detail about that application and the digital apps we've got. And we're on 14 years of continuous growth, so we're constantly seeing more members, more breakdowns, more RAC accounts. With that comes the need for a scalable and robust data architecture, because as we increase our portfolio of customers, the appetite for that data grows significantly.

We also released, early last year, our service, maintenance and repair offering — some of you hopefully might have used it — using our RAC mobile mechanics, where we can come out, service your car, and do repairs and investigations on top of the breakdown services we currently offer to customers.

And at the heart of it is myRAC. This is the digital app that all of our customers can download. One of the big things we did last year was our Fuel Finder, so you can find the cheapest fuel around you. But we're starting to add more and more data products to this app, and that requires us as a data team to provide that data in a more consistent format. So I'll talk a little bit about that and what Snowflake and the other tools we're using enable us as a business to serve: what we're calling our single pane of glass, our simplified, certified data products that allow all our apps to look at the same version of a customer, the same version of a policy, the same version of a breakdown. There's none of the siloed infrastructure that we had when we were on on-prem SQL Server.

So, before Snowflake, we were primarily one data team. We managed a full SQL Server stack — it might be echoing around here — on-prem, full SQL Server, the standard way of maybe 10 or 15 years ago. As the estate grew, as we had more customers and more breakdowns, the appetite for data and its complexity and quantity grew significantly. What that meant was we were unable to do the things the business wanted, like look at real-time breakdown data or real-time policy data. We just couldn't do that. We didn't have the infrastructure or the data engineering pipelines to be able to do that using SQL Server.

So, as I say, we had one data team. We provided all the reports, all the exports, whatever you would need. And what would happen is that box would get full. We'd try to add more compute, we'd try to add more things, but it would get to a point where we said, "No, you can't access this. We need to replicate it somewhere else." So we built out and ended up replicating onto multiple different SQL Server boxes around the business: one each for reporting, insight, analysts, operations. They all had their own box. We would then replicate our curated data through our ETL processes to those databases, and they would do their own reporting. Now, that introduced a very significant issue: people would report one thing on one side and a different number on the other side, and it became a big problem.

What we needed to do was look at how to centralize our data platform, so that we could give a consistent reporting output to all of our customers, but also look to the future: if we're going to stand up digital applications, if we're going to advance our data estate, how do we make sure that all of our data is well maintained, well curated and well governed? Especially given that we've got lots of PII — customers, breakdowns, insurance claims; it's a lot of PII. We needed a consistent and well-thought-out way of governing that data, as well as the ability to share it with whoever we needed to.

So we looked, maybe around four or five years ago, at the cloud ecosystem at the time. We looked at Amazon; we looked at Azure. Azure didn't have Synapse at that point, so we looked at ADF, at hosted Azure SQL. We looked at whether we could just move our SQL Servers into the cloud and take advantage of more VM-style hosting rather than having it on-prem. But then we looked at Snowflake, and Snowflake gave us that tick box of being cloud-agnostic and region-agnostic, so we could put it anywhere we wanted, and it gave us that foundation layer to be able to say, right, okay, what can we do with this? That came with some different challenges, which I'll go through in a bit, but effectively we looked at some core things we wanted Snowflake to give us. The big one was performance and scale. We wanted to make sure, looking forward, that if we were to go from, say, 12 million to 15 million customers, we could take on all the data around those customers — their policies, their breakdowns — and scale up to handle that sort of bandwidth in terms of data.

Another thing was speed. We were seeing hours, if not tens of hours, of latency between ingesting the data and being able to process it. We were full ETL; we weren't ELT, we weren't in that new space of transforming data in the warehouse. So we needed something robust and speedy that could handle analytical queries not just across years of data but also very narrow ones — what was this customer's policy at that point in time? We needed both.

Concurrency is another one: to get rid of that fragmented ecosystem and our siloed approach, where we had multiple replicated SQL Server instances, we wanted one platform that could deal with concurrency, so that our digital apps didn't impact our analytical workloads, and our analytical workloads didn't impact the ingestion and creation of our pipelines into the data.

There's an element of self-serve as well. With Snowflake, we wanted people to be able to go in and query the data themselves, not have to rely on a data team creating a report or an analytical team spinning up a query for them to run. And being cloud-first, security came with the system itself; we had the ability to bring people straight into the environment.

Another big one: a single source of truth. Again, going back to the fragmentation, we wanted one table with breakdowns in, one table with policies in. We just wanted anything from digital apps to analytical workloads to be able to see: this is the customer's policy, they have these allowances, they have this entitlement, they've had this many breakdowns.

We couldn't do that with a fragmented estate, because our insight teams and our data science teams would look at their version of the truth, which might be two or three hours behind ours, or they might amend it or run stored procedures on top of their copy, which would change the figures, and then they'd report back going, "Oh, well, I think it's this," and it just gets into lots of arguments.

Simplify database admin — again, a big one. Snowflake's pretty much hands-off. You just tell it a clustering key and it goes away and does it. You don't have index management, you don't have backups, you don't have resilience to manage — though there is an element of failover you can introduce cross-region. In terms of simplifying database admin, we went from having to maintain incremental backups every day and full backups over the weekend, which took hours, to nothing. We don't really manage it. We just set a cluster key and it automatically handles it behind the scenes. We add more data; it reclusters it.

This was a big one. We had a big digital revolution about three years ago, just before we did this, and we were challenged with the idea that it's fine doing what you do now — but what are you going to do in three, five, seven years? What's the digital landscape going to look like? What's the data landscape going to look like? We needed to think bigger than what we were doing. Are we getting an overnight batch from one system? Can we challenge them to give us real-time data? We can do it now — we can pull in real-time data. There are tools in Snowflake that allow us to ingest and process near-real-time, if not real-time, data.

And cost saving: licenses are expensive. Replacing the cost of all those boxes, all that infrastructure, all that licensing with one bill that we get a month, shared across our whole ecosystem — that was a big one for us.

So, we chose Snowflake. We went with Snowflake, started off with a few proofs of concept, and then it snowballed. The business saw the benefits that Snowflake gave us; they saw the future of what it could give us. So what we did was set about looking at the architecture we needed to underpin Snowflake. Snowflake's great at data warehousing — when we looked at it, it didn't yet have any of the newer engineering elements, the AI, the Cortex that Snowflake has now. So we had it right in the center of our RAC architecture: Snowflake as our data warehouse, with all the relevant tools, security and best practices around the edge.

As you can imagine, a business our size has hundreds of sources of data, from third parties and corporate partners to all of our source systems, our digital apps, our digital infrastructure. We needed something able to ingest these either in real time or to convert them into something we could ingest into Snowflake.

One of the benefits we had with Snowflake was that we approached this as a data transformation project, not a copy-and-paste. We started off by saying we're not going to take what we've got on-prem and move it to Snowflake — that's not what we're trying to do here. We're trying to replatform onto Snowflake. So we were able to challenge some of our source system providers and say: at the minute you give us an Excel spreadsheet every two or three hours — can you give us a real-time feed in JSON? Can you be a little bit more future-proof, now that we can handle real-time ingestion? Some of them said yes. It was one of the key things: we approached a lot of our source systems saying, we are moving to a more real-time application process — how can you provide us with real-time data, with APIs or webhooks or ingestion patterns that will allow us to get your data into our ecosystem faster?

We also looked at the other elements around the edge. Azure Data Factory — which hasn't come out very well on this slide — we are an Azure house, so we use ADF because it's cloud-native and it talks to a lot of things; we use it heavily for our data ingestion, our copy into Snowflake. And then we chose dbt as our main engineering transformation tool for moving data within Snowflake.

Has any of you used dbt before? If you're using it — yep, cool. It's very powerful. It's a SQL-plus-Jinja templating-style approach to data engineering. You give it a SELECT statement and a little bit of Jinja, and it goes away and creates an update-and-insert, a delete-and-insert, or a merge statement off the back of you telling it certain elements: what table and alias you want to give it, what type of incremental strategy you want. That simplified how our engineers model data in Snowflake. They only really need to know what SELECT statement moves the data to the next layer, and dbt does the rest.
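To make that concrete, here's a toy sketch — in Python, not dbt itself, and with made-up table and column names — of the kind of MERGE statement dbt's "merge" incremental strategy renders from a model's SELECT plus a unique key:

```python
# Toy sketch of what a dbt incremental model does behind the scenes:
# take the model's SELECT, a unique key, and the column list, and render
# the Snowflake MERGE that upserts into the target table.

def render_merge(target: str, select_sql: str, unique_key: str, columns: list) -> str:
    """Render a MERGE statement in the shape dbt generates for Snowflake."""
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in columns)
    insert_cols = ", ".join(columns)
    insert_vals = ", ".join(f"s.{c}" for c in columns)
    return (
        f"merge into {target} as t\n"
        f"using ({select_sql}) as s\n"
        f"on t.{unique_key} = s.{unique_key}\n"
        f"when matched then update set {set_clause}\n"
        f"when not matched then insert ({insert_cols}) values ({insert_vals})"
    )

sql = render_merge(
    target="integrated.breakdown",
    select_sql="select breakdown_id, status from cleansed.breakdown",
    unique_key="breakdown_id",
    columns=["breakdown_id", "status"],
)
print(sql)
```

In real dbt the engineer writes only the SELECT plus a small config block (`materialized='incremental'`, `unique_key=...`); the templating shown here is what the tool handles for you.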

It was a bit of an uphill climb going from pure SSIS and stored procedures to dbt, but again, this was a bigger project, and we were happy to upskill in dbt. For reporting, being an Azure house, we use Power BI quite a lot, for tabular and visual reporting. We still have a lot of exports as well, so we needed ADF on top of that. And we've recently started looking at Sigma as our Excel replacement — trying to wrestle Excel away from our end users and move them to something a bit more cloud-based.

On top of that, something we didn't have before is application endpoints. The digital team can create API endpoints on top of our Snowflake environment. It's very simple: Snowflake's got connectors in most languages. They were creating APIs so they could embed, in their digital infrastructure, lookups to our data in Snowflake. We didn't have to worry about security or connectivity or how they get into our on-prem system. We just whitelist their IP, put a network policy in place, give them the relevant access they need, and they can connect.
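As a rough sketch of that pattern — the IP range, schema and field names are all hypothetical, and the query function here is an in-memory stand-in for a real Snowflake connector call made under a network policy:

```python
# Sketch of a digital-team lookup endpoint over the warehouse: admit only
# whitelisted IPs (mirroring the Snowflake network policy), then run a
# parameterized lookup against a certified data product.
from ipaddress import ip_address, ip_network

ALLOWED = [ip_network("203.0.113.0/24")]  # hypothetical whitelisted range

def ip_allowed(caller_ip: str) -> bool:
    """Mirror of the network policy: is this caller in a whitelisted range?"""
    return any(ip_address(caller_ip) in net for net in ALLOWED)

def lookup_policy(caller_ip: str, customer_id: str, run_query) -> dict:
    """Look up a customer's policy; run_query stands in for a connector call."""
    if not ip_allowed(caller_ip):
        raise PermissionError("caller IP not in network policy")
    return run_query(
        "select policy_id, entitlement from policy.policy where customer_id = %s",
        (customer_id,),
    )

# Stand-in for the warehouse round trip:
fake_query = lambda sql, params: {"policy_id": "P1", "entitlement": "roadside"}
print(lookup_policy("203.0.113.7", "C42", fake_query))
```

In production the policy check lives in Snowflake's network policy rather than application code; the point is that the digital team only needs a connector and the certified table name.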

We also looked, across the whole estate, at how we orchestrate this. We don't have SQL Server Agent anymore; we needed to orchestrate in a modern, bleeding-edge way. So we chose Apache Airflow. It's the standard for orchestration, and it talks to everything we had in the ecosystem above, with Azure DevOps as our CI/CD pipeline.
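At its core, what an orchestrator like Airflow gives you is dependency-ordered execution. A minimal sketch of that idea — the task names are illustrative, not our real DAG:

```python
# Sketch of orchestration: declare which task depends on which, and let a
# topological sort produce a valid execution order (what an Airflow DAG
# resolves before scheduling tasks).
from graphlib import TopologicalSorter

deps = {
    "ingest_sources": set(),
    "dbt_cleansed": {"ingest_sources"},
    "dbt_integrated": {"dbt_cleansed"},
    "dbt_conformed": {"dbt_integrated"},
    "publish_reports": {"dbt_conformed"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # each task appears after everything it depends on
```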

So one of the things we were able to introduce with this new infrastructure is the ability to do CI/CD across all of our data estate: creating objects in Snowflake, deploying our dbt pipelines, and in the future publishing new Power BI reports, new ADF pipelines and so on. Azure DevOps has been a game changer for us in getting to a point where we're treating data as software engineering. It's simple: I want to create a new model, a new view for somebody — I just whip up a branch, it's approved, and it's in production in about an hour, maybe even less. We've gone from a very rigid, very archaic way of deploying changes, going through CAB, to a very quick CI/CD pipeline for all of our data in Snowflake.

What we then looked to do within Snowflake was reimagine how we model that data, because the way we were doing it before — pulling it in, doing the transformation, applying all the business logic right up front — doesn't work in a cloud-based environment.

So we started with the idea of having a lakehouse in Snowflake. All of our data is loaded as semi-structured JSON. It's schemaless at that point, technically. That gives us the ability to derive the schema as we look at the data. So if a file changes and they add a new column, we don't care — we'll just add a new column. If it's null, it's null; if it's not, it's not. It simplifies our infrastructure: we don't have to do schema detection or anything clever like that. All we have to do up front is copy, convert everything to JSON, and import it into a VARIANT field. Querying JSON is just as fast as querying structured data in Snowflake — it's very quick. We see minimal to zero extra latency unpacking and flattening JSON versus querying against a schema.

That becomes our lakehouse: all of our source data is there, untouched and unschemaed, ready for modeling. Then we have our cleansed layer, where we type the data and apply data governance. We PII-tag. We've now got a layer in our data warehouse where we can, you know, tick it to say: yes, this is a number, this is a boolean, this is what it should be, here's a description, whether it's PII, what source it came from. We can start to catalog where the data comes from.

Then we've got our integration layer, where we start to combine some of those sources together — this is very much a medallion architecture, just stretched out a little bit. We type-2 a lot of the data: we're getting CDC feeds now from Azure, and we need to type-2 that data to make it meaningful for people to use. We create rough dimensions and fact tables. The line is now drawn at cleansed: that's our source — every source.
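The type-2 history kept in the integration layer can be sketched like this — each change from a CDC feed closes the current row and opens a new one, so every version is preserved (field names are illustrative):

```python
# Slowly-changing-dimension type 2 sketch: on each change, close the open
# version of the record and append the new one with an open validity window.
from datetime import date

def apply_change(history: list, key: str, new_row: dict, as_of: date) -> None:
    """Close the open version for this key (if any) and append the new one."""
    for row in history:
        if row["policy_id"] == key and row["valid_to"] is None:
            row["valid_to"] = as_of                      # close current version
    history.append({**new_row, "valid_from": as_of, "valid_to": None})

history = []
apply_change(history, "P1", {"policy_id": "P1", "cover": "basic"}, date(2024, 1, 1))
apply_change(history, "P1", {"policy_id": "P1", "cover": "full"}, date(2024, 6, 1))
print(history)  # two versions: the first closed on 2024-06-01, the second open
```

This is why "what was this customer's policy at that point in time" stays answerable: the integrated layer never overwrites, it versions.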

From integrated onwards, we start to bring the data into its relevant business areas. So, for example, we've got multiple systems that deal with a customer's breakdown, but that's one breakdown to a customer, so it goes in the breakdown schema. The business then know that breakdowns are in the breakdown schema and policies are in the policy schema; they don't need to care where the data came from. This is the work we're doing as a data engineering team: we're the ones who create the well-defined data model at the end that the business can see. They don't need to care where it comes from.

Then we've got two layers: conformed and derived. The conformed layer is where we take the latest version — integrated is where every version of a policy lives; conformed is where the latest version lives. And derived is the key bit here: the OBTs, the one-big-tables. What we found with Snowflake is that because it's columnar storage, we can create tables 400 columns wide and get sub-second latency on lookups against them, even with millions of records, while at the same time running really big analytical queries — say, what's the average number of times we attended this customer across years of data. Those two queries are fundamentally different, but those tables easily handle both. So now we've got the ability to have that single pane of glass.

In terms of what we call our certified data products, we also have a bit of a configuration layer. Everyone has configuration tables — lookups, bits and pieces like that. These sit outside those layers, all managed within dbt. So we have a full stack managed in dbt. Each of those layers is well defined and well documented, and it gives the business a clear delineation of what are my business rules, what are my business attributes or entities, and what are my source systems.

So it's been a fun three or four years doing this; it's been a very big project. The benefit of having Snowflake in the middle of this really is our centralized data capture. We've been able to simplify data ingestion and centralize it all through one pipeline, so we know, in one place, what data is going where and where it comes from. Again, simplifying our data engineering from the source upwards just means we've now got one version of the truth of what a breakdown is.

This one was key with InfoSec. They were very, very happy with us having a very consistent governance and security model. We've got what we call just-in-time permissions in Snowflake: people can request access to certain data items for an hour, two hours, three hours, all managed within an application, so that we have a clear audit of who has access to what, what they are doing with it, and how long they've had access to it. And any PII access goes straight to InfoSec; they have to approve it. Because all the data is in Snowflake now, we can have that clear governance and security: we know who got access to what data and why they needed it.

Cross-team collaboration is another big thing. We've had data science teams and digital teams in our source control, adding to it. They're contributing to our source control by adding their own views, so we don't have to rely on a data engineer to create a view for a digital output. They're very comfortable doing SQL, so they can come into our ecosystem and go: right, okay, this is very similar to how I do software engineering. They can come in and collaborate with us.

Digital apps using consistent data: all of our digital apps now use our certified data products — which is the next one along — to validate who's a customer, what policy they've got, how many breakdowns they've had. The same data is used for our trade reporting, the same data for analytical purposes, the same data to train some of our data science models. It's all the same. So it's a consistent approach, a consistent message to the customer, whatever digital app they're using. And we're doing this via certified data products: tables, views, buckets of data that we have certified to say they're well documented, well defined, well managed — and here's how to use them to get the right answer.

As well as that, data science are bringing their models into Snowflake — I've got a separate bit about this, but we've seen data science bring their inference models into Snowflake so they can be used by our data engineering pipelines.

And another big thing is Snowflake data sharing. We've been able to share data directly with our third parties without any extra data infrastructure at all. We just create a share in Snowflake, replicate it to their cloud region, and they've got it — they can see it in their own Snowflake environment. So as long as we keep it updated in near real time, they can see near-real-time data about their breakdowns, where we've got a B2B2C relationship with those people.

So again, these are the kind of nice benefits that I would say are benefits to the business, but they don't really generate ROI on their own. They do, softly, but there's nothing concrete. You know, it's difficult when you're approaching some of these data transformation projects to say up front, right, okay, you're going to get this much money — especially when you're moving away from on-prem to something that costs the same while you're running both, until you decommission the old one. But we've now seen some shoots from the seeds we planted two or three years ago.

We've empowered self-serve with this. The business have all been on training — we've done internal training, SQL training, Snowflake training — and they're now using our just-in-time permission system to go into Snowflake and run queries themselves. They're not asking data engineers, "How many have we had of this?" or "What's this?" They can actively go and look at things and self-serve. So it keeps them happy.

Another big thing — you may have seen it, and if not, I can share it — is that we did a case study with Snowflake around Mavis, our new member validation application. This has saved us millions of pounds in operational costs because it's reduced the AHT (average handle time) for an adviser to find a customer, all based on the data we've got in Snowflake now being in a consistent pattern. We can show advisers that this customer has this entitlement and these allowances, which reduces AHT from two or three minutes of searching in each source system — "sorry, I'm trying to find you" — to immediate: based on the number they call the contact centre from, or when they log in, the application prompts, oh, it's this customer. The adviser selects them and sees how many claims they've had, what policies they've got, their beneficiaries, things like that. We're looking at doing that on all of our other applications: our myRAC app, where people can see their policy, and our Rescue Me application, which is our digital breakdown reporting tool. All these apps are going to look at the same policy data and know the same thing about a customer.

And these two are linked together. The fact that we're now getting near-real-time data into Snowflake means we can start to give operations more information about the customer, and data science have put together an inference model that we use on top of our breakdown data to give a better indication of when operations can intervene, if a particular breakdown needs more help than another. Based on characteristics of a breakdown, this model can give us a bit of a steer: right, we need to go to this customer because we need to do something. So again, these are real-world benefits where we've seen real ROI on our investment in Snowflake.

And it's all about how we've set up that single pane of glass, that bedrock of infrastructure, that well-maintained data engineering pipeline, such that now we're looking at additional things. Can we put something like Cortex, Snowflake's LLM tooling, on top of our data, so that people can integrate it in Teams and write, "Oh, how many breakdowns did we have yesterday?" — and it'll come back with a query, run it, and say, "We had this many breakdowns." All based on the fact that we've now got a well-governed data model for breakdowns. We can put a semantic layer on top of that, give it some key questions, train it a little bit, let it free, integrate it in Teams, and we can have a data chatbot, basically, for the RAC, where people can ask questions about policies, breakdowns and things like that.
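A very rough sketch of what that semantic layer buys you — here a trivial keyword matcher stands in for the LLM, and the table and column names are invented; the point is that the governed model, not the assistant, decides which table and timestamp a business term maps to:

```python
# Semantic-layer sketch: map business terms onto governed tables, and render
# the query an assistant would run for a question like "how many breakdowns
# did we have yesterday?". A keyword match stands in for the LLM here.
SEMANTIC_MODEL = {
    "breakdowns": {"table": "breakdown.breakdown", "timestamp": "reported_at"},
    "policies":   {"table": "policy.policy",       "timestamp": "created_at"},
}

def question_to_sql(question: str) -> str:
    q = question.lower()
    for term, meta in SEMANTIC_MODEL.items():
        if term in q:
            where = ""
            if "yesterday" in q:
                where = (f" where to_date({meta['timestamp']})"
                         f" = dateadd(day, -1, current_date)")
            return f"select count(*) from {meta['table']}{where}"
    raise ValueError("question not covered by the semantic model")

print(question_to_sql("How many breakdowns did we have yesterday?"))
```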

So, a couple of takeaways. Again, a strong data foundation is key. As long as your data foundation is strong, everything else you build on top of it should be very well defined. As much as it's been a pain to set up that strong data foundation — which is completely separate from how we did SQL Server — we're now able to stand up all of our digital apps on top of it and get the performance and benefit they want, as well as all the analytical processes.

Keep things simple. We've tried to simplify our data models so that we've got one table for breakdowns, one table for policies, so that when digital teams or people are coming into Snowflake and asking, "Where do I see breakdowns?" — it's simple: it's the breakdown table in the breakdown schema. Everything's aligned across all the columns they would ever think of, and we've named everything very simply: when did we receive it, when did we get to the customer. These kinds of timestamps are just very simple for people to go in and quickly understand.

Play to the strengths of your infrastructure. We have understood that we are never going to be 100-millisecond-latency, bleeding-edge real time. That's not what our Snowflake infrastructure is about. It's about the capability to be a lookup for our apps as well as our analytical store, all in one place. There are other technologies out there that are better at the bleeding-edge real-time stuff.

Make sure you clearly understand your scope. I think this is a good one. We understood that we wanted to do a data transformation project; we were not doing a copy-and-paste. We understood the scope was to get rid of our on-prem system and create a data platform that would be beneficial to what we do in the future. That's a big one, because what happens is people see the cloud and AI and go, "I want X, Y, Z." You could do all that, but you need to have the foundation in place to be able to step up and go: right, okay, let's do an LLM, let's integrate AI into how we do this, let's look at whether we can do some clever stuff in terms of searching transcripts and things like that.

So yeah, thank you very much, all. If there are any questions, I can take some. I think we've got time — we've got a mic. I think we've got a bit of time. One minute.

Hi, thank you for the talk. A quick question. You were speaking about costs.

>> I can't hear you.

>> You cannot?

>> It's not on.

>> I can hear myself.

>> Let me put some headphones on and I might be able to hear you. There you go.

>> Sorry.

>> Okay. So, you were speaking about costs. And this presentation you made — very clearly, to me, it is about giving a single version of the truth. Yes. So that part is convincing. I'm not convinced about costs. Did you actually save on costs with this, or was it the same as before?

>> It was difficult, because whilst we had our on-prem costs and our infrastructure and licensing costs, we also had all the support around that. What we had was, at the same time that we introduced Snowflake, a big blow-up of "oh, well, we can do X, Y, Z," so our use of Snowflake grew exponentially while we still had our on-prem. But the ROI that we've had in terms of benefits — you've seen that million-pound saving — was much more than the cost we've incurred since we started Snowflake. So what I'm saying is, at the start it looked like it's all cost — and it's difficult when you do these kinds of data transformation projects, because it's all cost — but if you've got a clear scope of what you're trying to get rid of, with a clear understanding of "I'm going to take this cost away," and as long as you're careful with Snowflake, the cost saving is definitely there. The problem is you'll see a lot more appetite from people wanting to do lots more on Snowflake, and then that blows your cost up.

>> That's why it's not credible that costs are reduced.

>> Yeah.

>> But if you have an ROI, it's fine.

>> Yeah.

>> Awesome. Thank you.
