
Gaining buy-in: A guide to successful AI integration in development

By Google Cloud Events

Summary

Topics Covered

  • AI Elevates Developers to Architects
  • AI Speeds Coding, Bottlenecks Reviews
  • More AI Code Cuts Delivery Performance
  • Agents Transform Entire SDLC
  • Test-Driven Development Tames AI

Full Transcript

[Music] Hello, I'm Nate Avery. I'm an outbound product manager here at Google Cloud. I've been a Googler for over three years now, and it's my pleasure to introduce my colleague Cedric.

>> Thanks, Nate. Cedric Yao here. I head up app innovation and modernization programs on the go-to-market side at Google Cloud. I've been a Googler for almost four years now, so I'm excited to chat with you about AI assistants and what that world looks like in the future. Back to you, Nate.

>> We're going to do a brief demo and overview of Code Assist. I'll then hand over to Cedric again, and he can go a little deeper in his presentation about some of the things people don't always discuss when we talk about making this move into a new agentic workflow world, a world where we're asking AI to help in ways we haven't really asked it to before in software development. So we'll go ahead and start that journey: Code Assist 101.

We have a slide here that says quite a lot, but essentially Code Assist provides developers with an assistant they can chat with in regular human language about programming topics. A developer can ask it to propose an architecture. They can ask it for help on a new feature. They can ask it to explain existing code, add documentation to their product, and so much more. This is super helpful because now you're able to onboard new developers quicker. You can have the mysteries of your code unraveled and really get in there and start looking at the things you want to do, which is to help make that code better for the business.

You've probably been hearing a lot about this. I know the word agent has been used a lot recently, and you're going to hear it again here. But we're going to do what we can to keep things simple and straightforward, help guide you through the process, and let you know why we're excited about this and how you can become more involved.

So one thing we've learned is that AI-assisted development is just the new norm. It's how things go now. It is no longer something far off in the future; it's here in the present. In some form, almost every developer has had some level of involvement and experimentation with it.

But with that we have some questions. Things start to get a little murky, and we want to know: how can a developer truly benefit from it? What should they expect? How can they deliver for their customers? How can they make their software better? At the same time, there are other things that weigh on their mind, like: will this software replace me? If everything is working perfectly and agents can write code, if you can just tell it to go write a program, what is the role of the developer? Well, we found that fortunately developers are still very much needed.

What happens is that the role changes slightly. You go from being deep in the code and figuring things out to looking higher up than you were before. You're looking at the business problem you're trying to solve. You're asking: how does the architecture of this application achieve that goal? Does the application scale properly? Does the application make sense financially for the customer? You look at things in a slightly different way, but developers are still very much needed. So we have a couple of different products.

The first is Gemini Code Assist, and Code Assist has an array of features. It's an IDE plugin that works with the more popular IDEs out there, like VS Code and the JetBrains line of editors, and it provides a lot of what people would think of as a base layer of features. Code completion, where it completes the code as your developers type. Code generation, where you can generate new snippets of code. Explanations for any unfamiliar code. Problem resolution, where you can have it dig into an error message and figure out how to resolve it. Document generation: one of the things that first opened my eyes was when I could ask it to just make a README file. It can do so much more than that, but that was one of those small examples that was a light-bulb moment for me. You can have it create tests and assist in unit test creation.

You can also use AI-augmented code reviews. This is super helpful because we're finding that as more and more code is generated by AI, bottlenecks appear in parts of the existing SDLC, and it's often at the code review stage that things get a little stuck. Using AI to help with those code reviews can really free up that flow. A lot of folks have pointed to this statement by Sundar Pichai. Now, this was a little while ago, but it says that a quarter of new code at Google is generated by AI. The current stats are a bit higher than that because things have evolved rapidly in this industry, but it goes to show that we use this stuff ourselves, if that helps.

So where there's great speed, there's also great responsibility, to paraphrase. When it comes to understanding how high performers work, we look at a report called the DORA report, from DevOps Research and Assessment. It's been around for about 10 years, and some of you might remember it as the State of DevOps report. Our data suggests that improving the development process does not automatically improve software delivery, at least not without proper adherence to the basics of successful software delivery, like using smaller batch sizes and having robust testing mechanisms. The reason for this, at least what we hypothesize, is that we're just able to develop more and more code faster. The AI is doing its job; it's assisting folks and helping them be productive, but the true productivity gains come when the entire SDLC is reviewed. And so you get into these odd states where you see a stat that says there's a 7.2% reduction in software delivery performance for every 25% increase in AI adoption. It seems like it's throwing things out of whack, but it gets back to the previous slide we looked at: where does AI work across the SDLC?

How do we inject AI properly to balance out the equation throughout the SDLC? Through that, we're able to transform developer workflows. The way we worked in the past doesn't necessarily work the same way now, because again we're able to do more. So what we want to do is make sure that a product like Gemini Code Assist is able to integrate at multiple points within the SDLC.

So we're going to help with design, building, testing, maintaining, reviewing, deploying, troubleshooting, and pretty much everything in between. We're going to provide codebase awareness and integrations, which give a really broad context, so the answers you're getting and the code that's generated aren't just generic. They're going to be quality answers because they're relevant to you, because they have the context of your code and your environment while it's making that determination.

We're going to help you with code reviews to streamline the review process. When you have that running, it does something as simple as reviewing the code and dropping in a comment, and in that comment it lists all the things it has found. You have the option to accept, reject, or move on, but it's that extra pair of eyes working on your behalf. We also have another product called Cloud Assist, which helps with more of the operational tasks. For instance, you're looking through a set of logs and you find an error message; there's an investigate button you can use, and that helps you use AI to get to the root cause analysis.

Here's that software development life cycle we talked about before, and now we're also looking at integrating agents into it. These agents work on your behalf to do various tasks, and this is interesting because these tasks can run in the background. So when you start thinking about productivity, that's a huge boost, because you're taking these AI processes, where it used to be like a pair programmer, and now you're actually dishing the ball off to it to have it handle some of those things for you. We can see here how those agents are going to impact the software development life cycle at different stages. You could have one running at migration, you could have it do agentic coding, which we talked about, you could have it do security. Pretty much everywhere along the way, you can see there's a spot where we can inject an agent to help push the code from one stage to the next and provide some level of assistance or, in some cases, flat-out do the task. So we're really excited about what this means for us, what this means for developers, and how it can help developers really look at the architecture of their applications as a whole.

Recently we released Gemini CLI, and it's released as open source. It uses the Gemini 2.5 Pro models in the background, and we also offer very generous amounts of usage. This really allows developers to get in there, try it themselves, and really get their hands dirty with it. It's open source, so you can look under the hood. You can remove a lot of the doubt that sometimes comes in when a developer is bringing in a new tool; now you can see for yourself what makes it tick, and you're able to contribute to it as well. So we think this is a really big boost when it comes to helping our developers.

You might ask, well, why did you release a CLI tool? It's an interesting thing we've noticed: each developer has their own preferences and work styles. Some of them like to work within an IDE, some of them prefer the CLI, and one of the things we want to do is make sure we're there to help developers wherever they are. We're going to meet you where you are and provide you with the best tools to help you accomplish your development goals. The CLI is just another one of those surfaces. It's ubiquitous; essentially everyone has a terminal, which makes it a lot easier. It makes it super easy to jump in and just get started. I think that lowers the threshold to entry, as well as some of the fear people have as they go into these new areas.

With the CLI itself, when you start looking at where it's helpful, ultimately it's helpful for anyone who considers themselves a creative. If you're a developer working on a hobby, if you're a power user, if you're working on really big projects, Gemini CLI is great because it's essentially a Swiss Army knife running on your machine, and you can use standard protocols like MCP to increase the number of tools it has at its disposal and make you more productive.

We've come to this part where hopefully it shows a little about where we're at and where we're headed, and we know that the future is agentic. We've talked about things like mixture-of-agents approaches, where software and agents talk to each other over different tools. It all sounds far-fetched, it really does, but it's real, it works, and it's happening. We can see this not just in our tools, but in tools from other places as well. It's really transforming the way developers work.

Gemini CLI is also on that path, even though it's recently released, and there are other things you can do. You can establish policies for what's acceptable in terms of generative AI use. You should frame generative AI as a learning activity in your environment. And you want to help foster trust by providing a vision for your developers, who have these evolving roles.

Essentially, if you see the picture there and you get it, you'll realize it's a reference to Do Androids Dream of Electric Sheep?, otherwise known as Blade Runner to the rest of the world. And with that, I'm going to hand things over to Cedric, who will talk a bit more about the trust that's needed and how that trust can be earned in an environment.

>> Yeah, thanks, Nate. I really appreciate the walkthrough of Gemini Code Assist and some of our Gemini products there. Having worked with hundreds of customers over the past year or so, the thing that really comes to the forefront for me in terms of AI adoption is this trust statement here. There's a lot that folks have seen in the enterprise space. How many people have seen these kinds of numbers dropped on them? It's like, yeah, you'll get 40% faster, and you look at it and wonder, is that true? The reality is that studies have shown time and time again that developers do actually get faster at some of their tasks: the ability to understand code, the ability to write new code. What doesn't translate in the enterprise is this: we often think that if we just drop an AI tool into an enterprise development team, suddenly we're going to see all these productivity metrics start to skyrocket, and that's typically not the case from what I've seen. I think the biggest barrier is that DORA research has shown time and time again that almost 40% of engineers have little to no trust in AI. The important thing is that there are ways we can help you accelerate your adoption and have a really great outcome overall.

So the first thing people think about when you ask about trust in AI is this picture here: this is what happens if you trust AI, the overlords are going to take over. That's not the reality of it. To Nate's earlier point, there will always be a human in the loop; that's a critical piece of this journey. Learning how to work with AI effectively and gaining that trust in the models themselves, I think, is a critical piece of that journey. What we're actually seeing the research say is that it takes time. It takes time for developers to get used to it.

They're effectively getting used to a new language. When we think of prompt engineering, we're no longer writing code fully in a programming language. We're writing in natural language, and we have to interact with a model. It's like having a pair programmer, so it's getting used to another person, another AI assistant, in your development life cycle. That's one of the things we've seen here in terms of peak AI use. If it takes this long, how do we start to accelerate it? That's the question we get asked all the time.

Remember that slide Nate put up there? Whenever I get onto customer calls, this is always the question that's asked: how does Google get such high adoption rates overall? One of the ways I think about it is that AI as an industry is moving faster than ever before, changing things a lot. Today, when folks in the enterprise use AI assistants in their software tools, it's largely around autocomplete assistance; they're looking at how correct the autocompletion is. What I would posit is that there are actually different stages of adoption. What you're seeing here are adoption curves, and we're starting to move into a major adoption curve around chat assistance. Chat, you'll find, is actually a lot more accurate in terms of getting you what you want, and the ability to interact with the code from a chat perspective is going to get you the results you're looking for. But there's a ton of learning that happens in the chat space: you're going to learn how to interact with these models, how to make better prompts, and how to get the outcome you're looking for.

Now what you're seeing with Gemini CLI, for example, is this world of what agent assistance looks like, and we're now on the leading edge of that. We've got trailblazers over here starting to use Gemini CLI and trying to figure out the best use cases and how to use it inside their enterprises. Can we do things like migrations at scale? Can we do things like translation to different languages at scale? There are a lot of questions that still have to be answered and best practices that have to be figured out, and maybe there's some code that will have to get submitted to the repository and incorporated into Gemini CLI in the future. A lot of questions are still unknown, but this is the next stage: early trailblazer adoption.

The final thing is going to be this idea of a mixture of agents, a fleet of agents helping: the ability to have many different agents doing different tasks over the course of your development life cycle, all interacting with each other. Now, the critical thing is that you're going to learn very quickly that the lessons you learn from chat assistance and autocompletion pay dividends in how well you can accelerate your AI adoption journey. So the key is getting started as soon as possible. And when you ask how to do that, if there's anything you leave here with, it should be this one slide.

There are three important things to think about. The first is really around investing in training. What we've seen is that teams that invest in training at scale, teaching their developers how to use AI assistant tools, the best use cases for them, walking them through prompt engineering, and giving them some space to learn how to use these tools effectively, see huge dividends over the long run. That's a great segue into the next thing, which is really around empowering the teams and fostering a learning culture. That can look like a lot of different things. Maybe it's a hackathon, maybe it's mandatory learning, maybe it's a use-case-specific workshop around how to migrate or modernize a legacy .NET codebase or a legacy Java codebase. Any of these activities that gives developers space and a playground where they can work with these tools will get better results.

One of the things I've been doing with customers a lot lately is this world of prototyping. What does prototyping look like with an AI assistant tool? We don't have to worry about security, and we don't have to worry about deploying these things to production. So this is a great use case for vibe coding, the big buzzword going on today. Let's use AI assistants to go build us an AI agent and see what that looks like in our environment. We can learn, validate various hypotheses, and understand whether or not this makes sense for us as a business, and you can do that very rapidly.

The final thing, really at a developer level, is: how do we learn more effectively? The answer is fast feedback loops, and there are many different ways to get them. I highlighted one around building a prototype, user testing, and learning whether or not that hypothesis is validated. Can we, for example, use an agent to automate some of our workflows in our business environment? Another one is: can we use test-driven development? This is a different way of thinking about getting a fast feedback loop. Rather than worrying about reading through the code and figuring out whether what the AI models generated for us is valid, well, nobody ever loves reviewing other people's code, and reviewing the code of an agent or an AI model becomes even less enjoyable as an engineer. Instead, we can flip that on its head: we can write test code and have it validate the agent's output. Have the model generate code, drop that in, run the tests, and we know very quickly whether or not that code passed. Then we can start to refactor and go from there. So I would say the fast feedback loop is probably one of the more critical things that you as an individual developer or enterprise developer can do to really scale up your usage of AI assistants.

All right, so let's talk a little bit about that fast feedback loop with a TDD demo. Test-driven development is a practice I highly advocate for, especially with AI assistants. I think this is one of the patterns where, if teams use it heavily, their usage of AI is just going to ramp up very quickly, because you're able to get that fast feedback. For those who may not know what test-driven development is, it's the idea of red, green, refactor. The first step is you write a broken test. The second step is you make that test pass. And the final step is you go and refactor that code. This feedback loop provides guardrails: the red, broken test is basically the premise on which everything else gets written. Now you've got guardrails for the AI assistants to work within, and you know very quickly whether things broke, whether you wrote some bad code, or whether the models wrote some bad code, and you get that feedback really quickly. So with that, let's switch gears. I'm going to pop open Visual Studio and dive right in.

All right, wonderful. So here I've got Visual Studio up. In a past life, I did a lot of consulting with organizations, helping them learn how to use TDD effectively. What you're seeing here is a codebase from a long time ago that I built with a team, where we were building a bowling scoring calculator, and we did this using test-driven development as an iterative way to build it out. What you see here is the scoring calculator class. It is very poorly written; notice the number of if-else statements here, but that's totally okay. We also have a test file with many different use cases around various scoring scenarios. The first thing you want to do when you're doing TDD is make sure everything runs. So I'm going to run dotnet test, and I should see 10 tests pass.

Yep, perfect. We're seeing a bunch of tests passing. The next thing is to go build that next broken test; we want to go to the red side of TDD. One way I think about scaling AI adoption is prompt templates. For every codebase, you've got a set of libraries, patterns, and practices, and one of the patterns and practices I've written up here is that we're using XUnit and the InlineData attribute to write acceptance tests for user stories. What you're seeing here is a user story in a Gherkin-style format: given a bowling game with a 5 spare in every frame except the 10th, and a 5, spare, 5 in the 10th frame, when you calculate the score, it should be 150.

Now, for those who may not be familiar with bowling, there's this concept of a bonus that you get in the 10th frame. In the 10th frame, if you score a strike, where you knock down all 10 pins in the first roll, or a spare, as in this case, where you knock down the remaining pins on the second roll, you actually get a bonus roll. Right now the codebase doesn't account for that, so what we want to do is write a test that covers it. So, using XUnit, here's the user story. Oh, sorry, clicked on the wrong thing there. I'm going to add the user story here and point the prompt at the test file, which is where all the unit tests are. I'm going to run this, and we're going to let Gemini give us what that user story's test could look like.
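
To make that concrete, an acceptance test in that style might look roughly like the sketch below. This is illustrative only, not the code generated in the demo; the ScoringCalculator class and CalculateScore method names, and the roll-array input, are assumptions based on how the demo describes the codebase.

using Xunit;

public class ScoringCalculatorAcceptanceTests
{
    // Gherkin-style user story: given a bowling game with a 5-5 spare in every
    // frame and a bonus roll of 5 in the 10th frame (rolls of 5, 5, 5), when the
    // score is calculated, then it should be 150.
    [Theory]
    [InlineData(new int[] { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }, 150)]
    public void CalculateScore_SpareInEveryFrame_CountsTenthFrameBonus(int[] rolls, int expectedScore)
    {
        // Class and method names are assumed from the demo's description.
        var calculator = new ScoringCalculator();

        var actualScore = calculator.CalculateScore(rolls);

        Assert.Equal(expectedScore, actualScore);
    }
}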

We're going to go ahead and accept that change and save it. Now what we want to do is validate. It added this line of code here; let's go validate and make sure we're getting the expected result. If we know anything about bowling calculations, if we add all of this up but don't count that final bonus roll, the score we should be getting is 145, not 150.
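
To spell that arithmetic out: frames 1 through 9 are each a 5-5 spare worth 10 plus the next roll of 5, so 9 × 15 = 135. A correctly scored 10th frame adds 5 + 5 + 5 = 15, for a total of 150. If the bonus roll isn't counted, the 10th frame only contributes 5 + 5 = 10, which gives 135 + 10 = 145.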

Let's run that test.

We should have one broken test, which we do, and we're happy. We found that instead of 150, it gave us 145. Great. So the code is operating the way we expect it to operate; we're good here. The next step is we need to go make this test pass.

In order to do that, we could start by saying something very simple, like "@ScoringCalculator, modify the code to make the test pass." Let's write that. Now, the reason this is a bad prompt is that it's very ambiguous, but this is typically where engineers and developers start. They make these simple prompts that ask the models to do a bit too much. The thing you start to learn over time is that you have to be very explicit about how you want these models to handle the code itself.
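
As an illustration only (this is not the prompt used in the demo), a more explicit version might read: "@ScoringCalculator: update CalculateScore so that a spare or strike in the 10th frame adds the bonus roll(s) to the total. Keep the existing method signatures, don't change any other scoring behavior, and make the failing acceptance test pass without editing the test file."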

So we'll run the simple prompt anyway and see if the test passes. Give it a little more time. All right, I'm just going to go ahead and accept all the changes. Everything is under test; we know where things are. Let's save this. I'm going to clear this and run the tests. Let's see what happens. Look at that: Gemini actually solved it for us, and we now have 11 passing tests. It looks like what it did is check for the last frame; if there was a bonus roll, it parsed that and calculated it in. This is great. We were able to get the model to do what we wanted on the first pass, which means the models are improving over time, but we still have an important step to go through, and that's the refactoring step.

There are multiple ways we can go about this. For example, we can ask Gemini: on a scale of 1 to 10, where 1 is unreadable and 10 is very well designed and elegant code, rate the bowling scoring calculator. What I want to do is just go ahead and start the refactor process here, and we're going to ask Gemini to assess the codebase and rate it for us. As we expected, it rated it pretty poorly; it's not very well designed. Some of the things it highlighted are that the logic is very if-else driven, which makes it really hard to follow. So maybe what we can do is try to take some of the changes that Gemini has made. Let's see if we can accept them. It gave me a warning that it couldn't accept the changes.

So what we'll do instead is copy the areas for improvement and ask it to make some of these changes ourselves. Let's come in here and ask it to refactor the code to make it more readable and address the following issues. I'm just going to drop the existing issues in here and ask Gemini to go ahead and refactor that code. Then we can apply those changes and know very quickly whether or not the refactoring worked.
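
For a sense of what that kind of refactor typically produces, here is a rough sketch of the shape the result often takes. This is not the actual output from the demo; the class and method names are assumptions carried over from earlier, as is the roll-array representation.

// Sketch of a typical refactor: replace the chain of if/else statements with a
// frame loop and small, well-named helpers, without changing behavior.
public class ScoringCalculator
{
    public int CalculateScore(int[] rolls)
    {
        var score = 0;
        var roll = 0; // index of the first roll in the current frame

        for (var frame = 0; frame < 10; frame++)
        {
            if (IsStrike(rolls, roll))
            {
                score += 10 + rolls[roll + 1] + rolls[roll + 2];
                roll += 1;
            }
            else if (IsSpare(rolls, roll))
            {
                score += 10 + rolls[roll + 2];
                roll += 2;
            }
            else
            {
                score += rolls[roll] + rolls[roll + 1];
                roll += 2;
            }
        }

        return score;
    }

    private static bool IsStrike(int[] rolls, int roll) => rolls[roll] == 10;

    private static bool IsSpare(int[] rolls, int roll) => rolls[roll] + rolls[roll + 1] == 10;
}

Because the acceptance tests are already in place, you can apply a change like this and immediately rerun the suite to confirm the behavior didn't move.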

All right. Now, let's go ahead and accept all the changes that Gemini suggested.

And let's come in here and rerun the test.

Now that we've run these tests, we know that the code Gemini created for us, all these changes, are valid and that the functionality has not changed. So we can gain trust in what the models are generating for us. Now we can come back and ask it again: on a scale from 1 to 10, where 1 is highly unreadable and 10 is well-written, elegant code, rate the scoring calculator. What we should see now is a much better score. Wow, it took us from a four to a nine. We have confidence in the code that was written. Why? Because we had built several acceptance tests, and we were able to rapidly iterate through various prompts and make sure we got the right output. So with that, this is a great way to think about how you can use these fast feedback loops to gain more trust and more confidence in AI and its usage on your enterprise codebase.

[Music]
