Google’s New Stack: Gemini On-Prem, ADK, Open Models -- Interview
By Prompt Engineering
Summary
## Key takeaways

- **ADK Enables Purpose-Driven Agents**: With agents, you're actually going to go with a purpose. You're going to build software and say, "I need you to go do X," like booking an airline ticket, buying something in retail, or, in B2B, getting components from 500 suppliers proactively based on inventory levels. [02:15], [03:14]
- **Google Cloud: Most Intelligent Platform**: The platform is getting more intelligent. Imagine if you didn't have to even query the platform for what your cost is; it just told you once a week, once a day, once a month, so models don't run out of control. [04:00], [04:45]
- **Gemini On-Prem via Distributed Cloud**: Gemini's price performance through API calls is the best in the industry, so why wouldn't you want to make it available to more people on-prem or on other infrastructure? There's a play to make Gemini available to more developers beyond just the platform. [09:46], [10:24]
- **Full Stack Economy Enables Free Models**: Google is the only hyperscaler that can go from chip all the way to the full stack, giving it an economy of scale nobody else has, with TPUs, Nvidia, and AMD. This scale business means serving the largest model to the most developers is cost-effective. [11:24], [12:34]
- **Open Models Fuel Research Innovation**: DeepMind releases open weight models like Gemma because research arms need unfettered access to change weights and distillation. Innovation happens elsewhere, so enable all those other smart people to do interesting things. [19:03], [20:13]
- **AI Studio for Learning, Vertex for Production**: Students and beginners start with AI Studio to learn prompting and have Gemini do something as a learning exercise. For deploying applications that serve others, use Vertex or Cloud Run; AI Studio is not the platform for that. [14:44], [15:23]
Topics Covered
- Agents Shift from Query to Purpose
- Google Cloud Becomes Most Intelligent Platform
- Secure ADK Prevents Rogue Agents
- Gemini Goes On-Prem for Price Performance
- Innovation Happens Elsewhere via Open Models
Full Transcript
Hey everyone, this is Muhammad. This video is going to be very different than my normal videos. In fact, this is the first time I am talking on camera on this specific channel. So I'm trying something new, and I hope you all are going to like this.
So in this new series, we're going to be talking to industry experts: those who are building with LLMs and those who are enabling developers to build on top of these systems. And the goal is very simple: I want to bring industry insights to developers and enable developers to build thoughtful systems without all the hype that we are seeing
all over the place. So, a few weeks ago I was at Google Next and I had the opportunity to sit down with Matt Thompson, who is the director of developer advocacy at Google Cloud. We talked about Google Cloud, Gemini on-prem, which for some reason nobody's talking about, but I think it's one of the most fascinating ideas that we have seen in a while. We also discuss ADK and a lot more. So, I hope you're going to like this conversation. Hopefully this is going to be the first of many conversations that I'm going to have with industry experts, and I would love your feedback on that. Also, if you have any specific guest in mind, please do let me know. Anyways, I hope you enjoy this conversation and I'll see you all on the other side. Thanks.

How are you doing, Matt? I'm doing really well. It's been a great week. I've got to admit, it's Las Vegas. There are a lot of people here. I'm getting a little tired of people. I will always love developers, but I'm a little tired of all the other people that are here. It's a lot of walking. We've had some good food. We've had some good weather. Had some great demos. So I've enjoyed it, but I'm tired. Okay. Well, it's the last day, right? So you get to go home after this. That's exactly right. Yeah.
So there was a lot of stuff released or announced, right? When it comes to developers, what do you think developers should be excited about? There are a lot of things that developers should be excited about, but because the list is very long, I'll pick two to start with. Okay. So, the Agent Development Kit, ADK, is actually a huge opportunity for everyone to get started with agents. And I say this because it's going to change so much of what we do. So if you think of enterprise apps today, the really old model was you put up an XML schema, somebody goes and queries it, figures out what you do as a service, and then queries it to figure out what they're going to call you for. That's old school. Then we added in, you know, we started using the web and REST APIs for that. But it's still the same notion of: I've got to go figure out what you do and then be able to do it.
With agents, you're actually going to go with a purpose. Yeah. So, you're going to build software and say, "I need you to go do X." And the classic ones that we've been talking about are: you're going to book an airline ticket, you're going to go buy something in retail, or, interestingly, the B2B scenario: if I get components from 500 different suppliers, I no longer have to query 500 different things. I can actually build an agent smart enough to say, here's the manifest of what you need to go get for us, on this kind of timing, once a week, once a month, whatever, when a certain inventory level gets to a certain amount, and build an agent that could go do that proactively for us. Uh-huh. That changes things tremendously. And then turn it around to the supplier: why wouldn't you build an agent to then allow agent-to-agent interaction? So the reason why I think ADK is so interesting is that it's the very basis of that. It allows people to start playing with agents. I can't tell you what the greatest winning combination of agents is going to be. Nobody can right now. Yeah. But we're allowing developers to start playing with it. So that's number one.
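The B2B scenario Matt describes, an agent that reorders proactively when inventory drops instead of being queried, can be sketched in plain Python. This is a minimal illustration of the pattern only, not the actual ADK API; the part names, thresholds, and function shape are all made up:

```python
# Hypothetical sketch of a "purpose-driven" agent: watch inventory levels
# and proactively emit purchase orders when stock falls to a reorder point.

def reorder_agent(inventory, reorder_points, order_qty):
    """Return the purchase orders the agent would send out proactively."""
    orders = []
    for part, on_hand in inventory.items():
        threshold = reorder_points.get(part, 0)
        if on_hand <= threshold:
            orders.append({"part": part, "quantity": order_qty.get(part, threshold)})
    return orders

inventory = {"bolt-m4": 120, "panel-a": 8, "chip-x1": 3}
reorder_points = {"bolt-m4": 50, "panel-a": 25, "chip-x1": 10}
order_qty = {"bolt-m4": 500, "panel-a": 100, "chip-x1": 40}

print(reorder_agent(inventory, reorder_points, order_qty))
# → [{'part': 'panel-a', 'quantity': 100}, {'part': 'chip-x1', 'quantity': 40}]
```

In a real ADK deployment, the decision logic would sit behind an LLM-driven agent and the "send order" step would be an agent-to-agent call to the supplier, per the interaction Matt sketches next.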
The second thing, and this is less of a product announcement and more of what we signaled, is that the platform's getting more intelligent. So the notion is we actually want people to be building applications without having to worry about a bunch of the things the platform is going to take care of for you. One of those that we actually talked about here, which I think is really interesting, is: imagine if you didn't have to even query the platform for what your cost is. It just told you once a week, once a day, once a month. That way, if you're using models, it's not going to run out of control for you. Like you can say, I want to know when I'm at 90% of my capacity, or 90% of whatever I put in for cost.
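The "tell me when I'm at 90%" idea can be sketched as a simple threshold check the platform would run on a schedule. The function and field names below are illustrative only; on Google Cloud this role is played by billing budget alerts, not this code:

```python
# Minimal sketch of proactive cost alerting: report which budget thresholds
# the current spend has crossed, rather than making the developer ask.

def budget_alerts(spend, budget, thresholds=(0.5, 0.9, 1.0)):
    """Return the threshold fractions (e.g. 0.9 = 90%) that spend has crossed."""
    return [t for t in thresholds if spend >= t * budget]

print(budget_alerts(spend=92.0, budget=100.0))
# → [0.5, 0.9]
```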
That to me is the signal that's most important for developers to listen to, because when we think about the hyperscalers, everybody can kind of figure out in their minds what AWS is going to be. Everybody can kind of figure out in their minds what Azure is going to be. What Google Cloud's going to be is the most intelligent cloud platform. And there's just nothing else that's going to be close to that. And that's what we signal. That's what I'm super excited about.

Coming back to the Agent Development Kit, what's the philosophy behind making it open source? Is it more so that it's widely adoptable, or getting feedback from the community? So, you're a friend, so I'm going to be honest with you here. Sure, we want adoption, we want feedback, all those things are true. The reality is there will probably be a thousand and one different agent frameworks and different ways for agents to engage. What you're seeing Google do is go out early with what we think is right and try to get momentum around one way of doing this. We think it's the right way. There could be other ways. One of the things that we've said on stage here is that we expect to interact with a lot of different agent frameworks. Yeah. So, part of the thinking behind ADK was: we're going to give you not an abstract but a fairly generalized view of how agents can be built, how they interact, how they work, how they're deployed and managed, and allow that to then work with any other framework that people come up with. So, one of the reasons for putting it out in open source is to enable that early adoption, get people playing. The most important piece for us right now is to get people thinking about what this world's going to look like. I can name 500 different scenarios on how agents are going to work. I don't know if any of them are right. Yeah. So, what's going to happen is we'd like the developer ecosystem to go play. Then we'll start seeing what's winning.
Okay. But is it going to be more of an opinionated implementation at this point? I think there are pieces where we're opinionated. So, agent interaction, how they communicate, how they can be managed. Security is a big one. We haven't talked about it a lot here, but one of the things to understand is, and this is where we start getting into kind of the ethereal stuff: rogue agents. Yeah. Like how does a system manage a rogue agent? Can a rogue agent be built such that it can go destroy other services? Of course not. And so the notion is, at least on Google Cloud, agents will interact with agents in a well-defined and secure way. And so part of the ADK going out there is starting to get developers to think about, one, we don't want anybody building malicious agents. That's not the goal, right? But we also don't want people to have to suffer from that. So we're putting constraints around agent capabilities. Agents can't destroy a pod, for example, right? Things like that. And so that's part of the thinking behind this. Okay. Yeah.
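"Constraints around agent capabilities" usually means a policy layer that checks every proposed action before it runs. The sketch below is hypothetical, not a real ADK interface; the action names and policy shape are invented to illustrate the idea:

```python
# Hypothetical capability guard: an agent may only perform actions from its
# declared allow-list, so a rogue or confused agent can't destroy services.

ALLOWED_ACTIONS = {"read_inventory", "create_order", "send_notification"}

def authorize(agent_id, action):
    """Reject any action outside the agent's declared capabilities."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"{agent_id} may not perform {action!r}")
    return True

assert authorize("supplier-agent", "create_order")
try:
    authorize("rogue-agent", "delete_service")
except PermissionError as e:
    print(e)  # rogue-agent may not perform 'delete_service'
```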
No, it's actually really good that you brought up security, because I think some of the other frameworks probably haven't really paid attention to it, right? And that's a really big concern. Now, something which everybody is discussing, or was asking when you guys announced agent-to-agent: how does it play with MCPs? So, this is a great question. In the developer keynote we talked about two different levels of engagement. There's MCP and there's A2A, agent-to-agent. I would love to give you all the greatest thinking that we've got on this, but the reality is we're still thinking some of this through. So, is there a time in the future, and this is not an announcement by Google, is there a time in the future where we see agent-to-MCP engagement or MCP-to-agent engagement? Possibly. Right now we see those as kind of, I don't want to say different, but maybe two different layers of engagement. So to answer your question, I would say hold tight. Let's see what happens. Okay.
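At the A2A layer Matt describes, an agent advertises what it can do in a small JSON "agent card" that other agents discover. The field names below follow published A2A examples but should be checked against the current spec, and the values are invented:

```python
import json

# Sketch of an A2A-style agent card: a discoverable JSON description of an
# agent's endpoint and skills, so other agents can engage it with a purpose.

agent_card = {
    "name": "supplier-order-agent",
    "description": "Accepts purchase orders and reports fulfillment status.",
    "url": "https://supplier.example.com/a2a",
    "skills": [
        {"id": "create_order", "description": "Place a purchase order"},
        {"id": "order_status", "description": "Check an existing order"},
    ],
}

print(json.dumps(agent_card, indent=2))
```

MCP, by contrast, describes tools a model can call; the card above is about one agent finding and tasking another, which is the "different layers of engagement" point.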
I want to switch gears towards the cloud. One of the announcements which I think is very important, but hasn't really got a lot of traction or notice, is Google Distributed Cloud, specifically Gemini on-prem. Are there any concerns about leakage of the model weights and things like that? Right? Because I think that's a huge thing to enable. Yeah. So, one, I'm going to say this is a great question. I don't know how you've listened so well this week, but you've already internalized what we announced and now are asking really good questions about this. And I'm going to give you, I think, some interesting answers here. I'm also going to tell you there's a lot of this we have to go figure out, right? But let's talk a little bit about the philosophy. So, if we go back a little bit, we had a product called Anthos, which is now something we use for something else. But the notion was: could we deploy Google Cloud on other infrastructure, your own infrastructure, on-prem, etc.? The reason why we did that and the reason why we had that program was because we see the need to take Google Cloud services and either take them to the edge or take them to other infrastructure. Uh-huh. Okay. So that's a thought that's now five years old with us. Okay. So when we start thinking about Gemini, and again it has nothing to do with this other infrastructure, but when we start thinking about Gemini, that thought was already native to us. And so the question was: is Gemini, let's say, popular enough, or is the price performance good enough, that we'd want to take it to other platforms? And the reality is, it is now. So we have a number of, I'll say, leading indicators. It's not statistically significant, but we have a number of people out in the community that are now telling us Gemini's price performance through API calls is the best in the industry. Yeah. Okay. So, if you have that, why wouldn't you want to make it available to more people? And so, what you're seeing is clearly we have an opportunity to take Gemini to other platforms, on-prem, etc. There's a whole bunch of stuff we still have to work out there. But that's the philosophy behind it. I also think, and we're talking among friends here, there's a play here where we want Gemini to be available to more developers. Uh-huh. So why limit it to just the platform? Yeah. Right. And so how popular is it going to be on some of these other platforms? I don't know. And the security piece is a really interesting thing, right? Because if you have your own model weights for these things, you don't want that leaking. Can we secure it on another platform, for example? I think some of those questions, and how we're going to address them, are still open. I would say, from a philosophy standpoint, we don't want to put a model out there that is easily mishandled. Yeah. So, yeah, makes sense. Makes sense.
sense. And another question which I think like everybody's curious about, uh most of the other uh frontier labs are releasing like more more and more expensive packages, right? Like per
month, right? How is uh Google able to uh serve some of these models for free?
This is a great question. So, um, I'm going to give you I'm going to give you a developer answer in a minute. I want
to give you the Google answer first. So
the Google answer is and and for all of you that have heard our answer 10 times before, I apologize, but we're really the only hyperscaler at this moment that can go from chip all the way to the full
stack. And that gives us an economy of
stack. And that gives us an economy of scale in places that nobody else has.
Now, we have a number of partners whether it's, you know, Nvidia, whether it's AMD, etc., as well as our own TPUs that allow us to build infrastructure at the core that nobody else can touch. So
the first thing is we know and it's it's a math equation, right? We know our economy of scale if we build the stack right will be better than everybody else's. It has to be. Yeah. Because
else's. It has to be. Yeah. Because
we're the only ones actually building things at the bottom, building things at the top. Now that's not a great answer
the top. Now that's not a great answer for a developer because okay, so that that makes sense. That's how we're going to do how we manage the cost. The
reality is um this is very much a scale business. The more developers that we
business. The more developers that we have using these these resources, the better price performance we can get out of this. Clearly serving the largest
of this. Clearly serving the largest model to one customer is going to be the single the single most expensive thing you can do. Yeah. Yeah. Serving the
largest model to the most developers out there. There's a cost effective measure
there. There's a cost effective measure that we can get to. Um some of it you're seeing us being very aggressive as well.
So, can we do it cost- effectively for the next 10 years? I think there's economy of scale that we have to get there. One thing, you know, just as an
there. One thing, you know, just as an example, we we talked about our new TPUs here. Mhm.
here. Mhm.
Um the Nvidia chips, the AMD chips these are all amazing things. We'd
probably like to see more developers building workloads for TPUs as well.
Yeah. Because then there's another economy of scale that we can deliver.
What does um what do some of these open source models on TPUs look like from performance perspective? That becomes a
performance perspective? That becomes a really interesting point. Um, we have the ability to build hardware specific to the model that you want to run. Now
we're not going to do that. It's not
like pick your pick your GPU, TPU, pick your model to that it works, but certainly we can start looking at patterns of this. Um, so I think you're seeing, excuse me, Google being very
aggressive in this space. Okay.
Interesting. Yeah. And I think one of the announcements was vLLM support for TPUs, right? So that kind of gives us an indication that Google does want to support some of the open weight models, right? Yep. Absolutely true. And you're hosting some of those models in Google Cloud. Yes. Already, right? Yep. So, and I see where you're going with this. The answer for us is: choice matters, right? I don't think you're going to hear Google start saying the model is the choice, but the notion is we're trying to provide what we think will be the most open, most complete, and most performant cloud in the industry. Okay. And that means you do have to offer up solutions for others. As great as Gemini is and as great as Gemma is, there are other models that people want to use. Yeah.
Yeah, that's a good point. I think a source of confusion for a lot of people is: when do you use AI Studio versus when do you use Vertex? So, for everybody listening to this, that's a brilliant question. Thank you for asking. The answer I'm going to give you is going to leave everybody wishing I'd say more. Let's start with, we think there's a great way to get started, and I'm a developer guy, so for me, let's start with persona. If I'm a student, in the US it would be K through 12 or even going into college, I'm going to start with AI Studio. I want to get the knowledge of playing with and learning prompts, having Gemini do something for me as a learning thing. And I'll share: I have two kids in college in the US right now, both studying computer science. Both started their AI journey with AI Studio. And now there's a point, we heard it in the developer keynote here yesterday, where if you want to deploy that, if you want to build an application out of it that you then want to serve up to other people, AI Studio is not the platform for that. Okay. Yeah. So there is a point where you are going to want to deploy this. And I sit in Silicon Valley, I'm in San Francisco all the time, and part of my job is actually to go talk to startups and listen to what they're doing. Mhm. And without naming a certain startup, one of the startups that we've been working with, like five weeks ago, literally wrote on X, "We hate Vertex, we hate Gemini." Okay. And okay, that woke us up. We went and talked to them. We worked with them. They've now published a number of YouTube videos that say they love Gemini. Vertex is not their first choice. Yeah.
And so what we want to do is be able to give people choice. The one that you didn't put in there is: what about running Gemini on Cloud Run? Uh-huh. Yeah. Right. So, again from a startup perspective, if I was talking to a startup founder and they were just starting out, but they knew they were going to build an application that they had to serve to people, and usually startups don't have time to play with something for three months and then go build something, they're starting from day zero and they have to be getting something out, they need to get to an MVP within weeks or months. Starting with Vertex is not a bad answer, because they're going to be able to scale up really easily. But it's relatively hard to get that infrastructure set up. Running that stuff on Cloud Run is a really interesting opportunity for them. Yet, if I'm in an enterprise and I know I need that infrastructure, I need that managed process. Uh-huh. I'd start with Vertex and just take the hit up front. We're making it much easier. We know, through work that we've done recently, that people can get up and running in Vertex in a few days. Okay. But it's not a few hours. Yeah.
So the reason is, a lot of people, especially developers, will start at AI Studio. Yes. Because that's, I think, the simplest interface, right? And then I don't see a simple transition to Vertex from there. So let's talk about one of the things that we've done. There is a path forward that we're trying to implement. The first thing was the common SDK. Oh yeah, yeah. So realize, if we had been talking a year ago, you actually could have said, why do you have two SDKs, right? Two platforms, two SDKs. So what you saw from us is that we published a common SDK that allows you to get started with an API that's common across both. So that's the first piece. Now, how do we actually enable migrating what you might start constructing, in the loose sense, in something like AI Studio, to easily deploy into Vertex? Mhm. Not announcing anything here, but you should assume that we understand the problem and that's something that we have in our future as well. Okay. But it's a great question, and for all your listeners: yes, we know this is an issue. Yeah, that's good. That's good.
And thanks for bringing up the common SDK. Yes, I think that makes the transition relatively smoother. It makes it smoother. We know there's still work to be done. I also wanted to ask you about Google being in a very interesting position, because you're not only releasing state-of-the-art frontier models, but you're also releasing open weight models like Gemma, right? So I'm curious, what is the philosophy there? That's great. So I've got to step out of my job here for a second, because if I look at what Google's doing, I need to bring in DeepMind. Okay. So, for all of you that know, DeepMind is the research side of our business, and as a research institute they are phenomenal. Many people are aware that Demis just won a Nobel Prize for chemistry, right? I mean, there aren't many tech companies that have a leader with a Nobel Prize for chemistry, right? But that came out of work they were doing to forward research in biology. Mhm. And so, having a research arm that is working with, you name it, the top thousand, the top 10,000 researchers in the world, they have different needs than a developer working in an enterprise or a developer working at a startup. And so when you start seeing some of the open weight models, or things like Gemma, there are many, many research arms at companies and universities that need to have unfettered access to the model itself, changing weights, maybe even changing some of the distillation that happens. And so from our research side, we want to enable that, because, and this is not their philosophy, it's my philosophy, I've always had a belief that innovation happens elsewhere. Mhm. And so what that means is you have to have a strategy. Not all the smart people are going to work for you. Yeah. So how do you then enable all those other smart people to go do interesting things? And so that's the strategy you're seeing from DeepMind: they want to create the infrastructure, tooling, and capability for all those other smart people to go do interesting things. Yeah. And so that is at the foundation of why we do closed models, open models, open weights, etc.
And how does that play out? On the, I'd say, enterprise commercial side, we benefit from that. Why? Because there are lots of open source developers and lots of other companies that want to play with that stuff as well. Uh-huh. So that's probably the best answer I can give. That is my personal philosophy. If you talk to the DeepMind folks, they'll tell you that their primary audience is those researchers. Yeah. And if you talk to people in Google Cloud, our primary audience is obviously developers and customers. So yeah, makes sense. Makes sense.
And I don't know if you have an answer for this or not, but do you think Google is going to continue these two parallel paths of smaller open weight models versus these state-of-the-art closed source models? Or might there be some sort of, okay, you start open sourcing some of the previous generations? Yeah, it's really interesting. So, I will say, I do a set of predictions in the IT space every year, and I'm really bad at it. So I don't know that I can predict what the future of Google will be for what we're going to do with the larger models. I do think you'll continue to see us serving the research institutes, universities, students, etc. with open models, because there's so much value there. I don't know what happens with the commercial side, other than to say, and this is common knowledge now, that one of the most interesting things that we've learned as a company, and this was a year ago, is that you can actually derive a smaller model from the larger model to gain price performance. Yeah. With all the knowledge of the larger model. Yeah. The distillation. Yeah. And thinking about that from a philosophy standpoint means that there's value in continuing to push the boundaries on that larger model and then distilling what you want for each market, each opportunity, and if you open those things, that might be okay. Uh-huh. Yeah. So I'll stop there. I can't predict what we're going to do in the future space. Yeah. No, makes sense. Makes sense.
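The "derive a smaller model from the larger model" idea is knowledge distillation: the student is trained to match the teacher's full output distribution, not just its top answer. A toy illustration with invented numbers, in the spirit of temperature-scaled distillation (Hinton et al.):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens them."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """How far the student distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [4.0, 1.0, 0.2]
student_logits = [3.5, 1.2, 0.4]

# Softening both distributions exposes the teacher's relative confidence in
# the non-top answers, which is much of the knowledge being transferred.
p = softmax(teacher_logits, temperature=2.0)
q = softmax(student_logits, temperature=2.0)
loss = kl_divergence(p, q)
print(f"distillation loss: {loss:.4f}")  # small here, since the student is already close
```

Minimizing this loss over a large corpus is what lets the smaller model inherit much of the larger model's behavior at far better price performance.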
This is great. Now, coming back to the developer point of view: one question I would ask you is, there are a number of different cloud providers available, right? As a developer, why should I even care about Google? What's different? Yeah, so I love the question. I've got to be careful, because I think some of my arrogance will come in. So let me just say, and people can look this up if they want, I'm not going to say the name of the company: I worked for another hyperscaler for 10 years. I chose to come to Google because of Google Cloud. The other hyperscaler that I worked for was either number one or number two in the market, depending how you want to look at it. I chose Google Cloud five years ago because I thought it had the best opportunity specifically for innovation. So I already mentioned that I have this philosophy that you have to be able to enable innovation happening elsewhere. That happens at Google too. So, let me take you on a little story. I like telling stories. No worries. I came to Google in part because Google has many intelligent people. Super intelligent people. It's one of the smartest companies I've ever worked for. But not everybody who's smart works for Google. Matter of fact, I would argue we're a very small percentage of a very large ecosystem. So, how do we enable all those people to do interesting things? That's my philosophy for developers. My goal is to allow them to innovate on a platform doing whatever they're excited about doing. It could be gaming. It could be classic IT, it could be SaaS applications, it could be AI-based stuff. Both my kids are big into social, and they're like, what's the new AI social engagement? I'm like, yeah, right. So for me, if we are winning, the reason why we're winning is because we've got the best balance between performance, capability, and openness. Some are more open, some might even be a little more performant, some make you, like Legos, build every little piece together. Our goal is really simple: enable customers to be really effective on our platform, and then, for me personally, enable developers to innovate on whatever they want to do. Everybody in your audience is going to laugh when I say this, but our goal is actually to remove every hurdle that developers have developing on our platform. Okay. And we're not there yet. I think this also comes down to the tooling as well, right? Oh yeah. Yeah. Like what type of tools do you build for developers? Thank you. Thank you for the prompt. I really appreciate that.
Yeah, so we believe one way you enable developers to be more innovative is by allowing them to abstract away some of the complexity. My kids tell me all the time that when I was studying CS it was easy compared to what they're doing. I'm like, that's not true. You guys are doing Python; I helped deliver Java to the world. You get to use all these high-level languages where the runtimes are easy to use and memory is managed for you and everything else. That's easy. No, no, no, we're building much more complex applications than you were building when you began. That's what they say to me. Tooling is the way we abstract that complexity, right? So, what you heard me say earlier is that we're trying to build the intelligence into the platform. That's a form of tooling, right? But on the tool side for developers specifically, I'm going to be a little bombastic here. Uh-huh. I actually believe we're moving towards a world, and I'm not going to put a date on this, where the IDE is going to fade away into the background. Interesting. So, I don't think we're going to be living in an IDE per se, but I think the services that an IDE provides will be available to you from other tools. So, if you're using GitLab or any of the management tools for your code, there are services that come from there. Matter of fact, you could argue that many of the software development life cycle services will come from where your source is, right?
You also want code assistance. I keep asking my kids, "How much of your work at university is being done by using code assistance versus your brain?" And they go, "Oh, we only use it a little bit." And I'm like, "I don't know about that." Vibe coding. That's right. Actually, without going too far into this, one of the big things in universities in the US right now is hackathons; they do student-led hackathons. My son helps organize the hackathon at his university. They ran a vibe coding bracket for people who wanted to just take ideas and go off on them. And I was like, wow. So, whether you believe it or not, it's being used, especially by people getting started. But tooling, I think, is really important. So for us, code assistance means a couple of different things. We announced Firebase Studio here, right? For developers that want to get started in a more complete environment, including deployment, Firebase Studio is brilliant, and code assistance is built in for you. Yeah. And then if you're working in more of the traditional space, we have our code assistance built in, and we can plug it into most of the IDEs that you're going to want to use, even though I say in the future we may not be living in IDEs. The other thing we've talked about here is that by building intelligence into the platform, we're going to be able to help you at runtime, at deployment time, and later at management time.
Okay. Now, this is really insightful. Any tips or tricks for upcoming developers in the vibe coding era?
I was going to say, do it with friends. Do it with people you like, because, more stories here, but collaboration is really important. We all know this in this industry. My team, we collaborate on everything. We measure the ability and the interest that people have in collaboration. The lone wolf model has disappeared. So vibe coding works really well if you get along with the people you're doing it with. If you don't like the people you're doing it with, I don't think it's vibe coding; it might be something else. That is true. I think we're going to wrap it up on that. Thanks a lot, Matt, for the opportunity. It was a great discussion, and I hope everybody enjoyed it. Thanks for watching, and see you in the next one. Thank you very much.