
Google AI Studio Tutorial - Beginner to Expert in 10 Prompts

By Santrel Media

Summary

Topics Covered

  • Live AI Copilot Beats Solo Work
  • Nano Banana Edits Photoshop-Level
  • System Instructions Reshape AI Persona
  • Vibe Code Full Apps Instantly

Full Transcript

Google AI Studio is so much more than just a basic chatbot. For example, you can vibe code an entire app in here. You can turn text into speech. You can generate images or videos. You can even do Photoshop-level editing of images without ever opening Photoshop. And all of this is completely free to use.

So, this is going to be a full tutorial on how to use Google AI Studio to make sure that your job doesn't get replaced by AI, and even more importantly, that your job doesn't get replaced by somebody who knows AI better than you. This tutorial will show you 10 incredibly powerful ways you can use Google AI Studio to improve your life, for work, for travel, for anything you could imagine. So with that being said, let's start off by going on my laptop. You can get started for free on Google AI Studio by going to aistudio.google.com.

I'll put a link in the description as well, and it should be fairly easy to find. Now, once you go to Google AI Studio, you will have to sign into your Google account. I already signed into mine, and it looks like this. Don't be alarmed if yours looks slightly different; Google AI Studio recently had an update, so if you used it in the past, a lot of things have moved around. I will show you how to access what used to be the Stream feature and different things like that later on in the video.

But let's start off with the basic layout of the land. On the bottom left, you'll see your Google account. If you have multiple Google accounts, maybe a work one and a personal one, you can toggle between them on the bottom. Above that, you have settings, which we can dive into later in the video, as well as API keys. But the main things you'll see on the left side are Home, Chat, Build, Dashboard, and Documentation.

Home: you're not really going to spend a lot of time here, but this is basically the quarterback of Google AI Studio. It shows you some different directions you can go, some quick ways to get into specific models, as well as some new features they've launched. I like going here every now and then just to see what's the latest stuff that has been launched, Google Veo 3.1 to generate videos, for example, and different things like that. Below that, Chat is where you're probably going to spend most of your time, unless you're vibe coding apps, in which case I'll show you the Build section in just a minute.

But within Chat, you'll see we have this main window here with many different suggestions. Google does make this a little bit complicated, but I'm going to demystify it for you. This may look a little different based on which model you're using. So, Google uses Gemini; that is their AI brain, if you want to think of it that way. But within Gemini, there are many different models. You've got some that make images, some that make videos. It's not just one thing, so you have to select which model you want to use based on what you're trying to accomplish, and you can do that on the top right.

So if I click on that box right there, you'll see if we go to All, there are a ton of models we can choose from. Each one has a little bit of text below it, telling you its other name. So Nano Banana is what everyone calls it, but it's also known as Gemini 2.5 Flash Image. That doesn't really matter. What really matters is below that, where a little icon tells you essentially what the model is. Now, I don't like just scrolling through all of them here. You could search if you really know what you want, but typically I'll use these other tabs right here.

Gemini: if we just click on Gemini, those are usually going to be the more chat-focused models. Pro, for example, is going to take more time, but it's going to give you more advanced reasoning. If you're trying to diagnose what's hurting on your foot (obviously, consult a medical professional), maybe you need some advanced reasoning to talk through what you were doing, what angle your foot was at, and why it might be hurting. The Pro model would make more sense there. The Flash model is going to be a little bit faster, and Flash-Lite is going to be even faster yet. That's more beneficial if you're just asking a lot of questions that are a little more basic. For example, "What do sloths eat?" doesn't need a huge Pro model to figure out. You could use something a little lighter.

Then we have Images. I'll come back to Live in a minute; that one's really cool. But Images, again, is set up in a slightly complicated way. You have four models right here. Imagen 4: that's going to be your main image generation. You've got Ultra, which takes a little more time but does a much better job rendering text. So if you have an image with signs in the background, the text should not look jumbled with Ultra. The Fast model is a little lighter, a little less precise, but can crank out images quite quickly. 3.0 is an older one. And then, of course, we have Nano Banana. Now, Nano Banana is not so much for generating images; it's a lot more for editing images, and we can talk more about that later in the video. We have video generation. Here we have Audio, which turns text into speech. And if I go over, that's the last one.

But Live is really quite cool. So before we go into any of the other ones, Live is one of the 10 really cool features I wanted to show you on here. If I click on this Gemini 2.5 Flash Native Audio Preview (not the best name, like I said), it gives us three options. Now, if you saw Google AI Studio in the past, they had a feature called Stream on the left side; that was a way you could share your screen and work in real time with Gemini. This is actually easy to do right here by going to the Live model, as I said, and then you can click on Talk and speak to it in real time, ask it questions, have a conversation. I'll show you. Hey, Gemini, what's a good recipe to make cookies?

>> And everyone seems to love them. Is that the type of cookie you're thinking of?

>> Okay, so you can see you're able to just talk with it back and forth without having to keep typing prompts. That would be good if you're cooking in the kitchen and your hands are dirty; you can just talk to it like that.

The other option: if I go back and want to make a new chat, I'll just click on Chat again, click on this model again, and now we can go to Webcam, where you can show it actual things. Or, the one that I think is really interesting is Share Screen. If I say share screen, I can then select the PowerPoint presentation I'm working on. Now it's working. Hey Gemini, I have a question. How do I change the background of this slide in PowerPoint?

>> To change the background, first go to the Design tab. Then you should see an option called Format Background on the far right. Click on that, and a panel will appear on the right side of your screen where you can choose various fill options. Does that make sense?

>> Yeah. What do you think of this one? Does it really fit the vibe?

>> Well, the texture is interesting, but it might not be the best fit for a presentation about sloths. It might be a little too busy. Maybe try a solid color or a more subtle pattern that doesn't distract from the content.

>> There you go. So, that is a really cool feature when you're trying to work on something: you essentially have a little copilot. If you're doing any kind of coding, any design, anything you have questions about where you need some more advanced help, this is a great way to manage that, by live streaming with Gemini and showing it whatever you're working on. Obviously, be careful if you're working with sensitive data; you don't necessarily want to show that. But it's still a really cool feature I wanted to show.

Next up, if I click on Chat again, I can go to a different model. This time, let's talk about generating images. If I go to Images, let's say we want Imagen 4 Ultra, which, like I said, is going to be significantly better at rendering text. Here I could say: show me a sloth at the DMV working very slowly. With Google AI Studio, you can't just hit Enter; that'll give you a new line. You have to hit Ctrl+Enter on Windows or Cmd+Enter on Mac. And again, this is going to take a long time. Actually, that didn't take that long at all.

On the right side, we have some settings, and this is true for all of the models we're working with. This one controls how many results it makes; you can have up to four. You can choose the aspect ratio, which is really beneficial; if you're using ChatGPT, or just Gemini on its own website, you're not really able to do that. You can also choose the resolution. So, if I want a 2K resolution in 16:9, let's try this. Going to generate that again, and it's going to give us higher resolution and a wider aspect ratio, which maybe is what I want for the PowerPoint I'm generating. It also shows you how long it takes to make this; you can see right there. There we go. So, I can click on this, and from here you can copy it if you wanted to, add it to Google Drive just by exporting like that, or download it.

Now, number three: while we're talking about images, let's talk about Nano Banana. I actually made a full video just using Nano Banana, but on the right side, you can see we're able to search for the model. Use Nano Banana. And Nano Banana, like I said, is really good for editing images. So, if I click on the plus button on the bottom (this is normally where we type in our text), I could select something from Google Drive, take a photo, or use sample media. I'm going to upload a file: a profile photo I used on one of my other channels, my tech review channel. And now I can ask it: please give me Ray-Ban style glasses that would look good.

Before I hit enter, I want to show you some settings on the right side. Everything's going to be a little different per model, but on this one, we've got temperature, which controls how creative it is. You could have something more straightforward, which is exactly what you ask for, or higher creativity, if you want it to play around and try something a little different. Aspect ratio, I would leave as Auto; it's going to match whatever image you give it. And there are some other, more advanced settings down here; I wouldn't worry about those nearly as much.

So here you can see it did a really good job of maintaining the image, which looks essentially identical, but it added the glasses. And if I click on that, you can see it gave me the Ray-Ban logo on the glasses. I don't really want that on the bottom right. So we are actually able to edit this image even further. When you have an output like this, you can either click on this, which reruns it; click on this, which deletes it; or branch off from here and have two different lines of conversation based on this output. So I'm going to try rerunning it first.
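If you ever wanted to do this same kind of edit from code, the Gemini API accepts an image inline next to a text instruction in a single generateContent request. A sketch of the request body, with field names following the REST API's inline_data convention as I understand it (the image bytes here are a stand-in, not a real photo):

```python
import base64

# Sketch: a Nano Banana-style edit as a generateContent request body.
# The image travels base64-encoded inline alongside the text instruction.

def image_edit_request(image_bytes: bytes, instruction: str,
                       mime: str = "image/png") -> dict:
    """Build a request body pairing an inline image with an edit instruction."""
    return {
        "contents": [{
            "parts": [
                {"inline_data": {
                    "mime_type": mime,
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                {"text": instruction},
            ]
        }]
    }

# Placeholder bytes standing in for an uploaded photo.
req = image_edit_request(b"\x89PNG-placeholder",
                         "Remove the logo from the glasses")
```

The response would carry the edited image back as inline data as well; AI Studio's plus button is doing the equivalent of this behind the scenes.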

Actually no, we don't need to rerun it. Let's go and ask it the next question. Let's say: please remove the Ray-Ban logo from the bottom of the glasses. And again, it should keep the glasses and maintain the image of me without changing anything else, but hopefully remove the logo at the bottom of the glasses. And there we go. It still has the glasses, it removed the logo from the bottom right, and we still have the logo on the other side up top. I think that looks pretty good. That's actually really consistent, and that's what Nano Banana is really quite good at.

I think we're on to number four right now. This is what Google used to call Gems, but here they call them system instructions. So let's go to All, or just Gemini. Let's go with Gemini Flash, just a fast model; maybe you're asking it more text-based questions. Now I'm going to say: how do I bake cookies? But before I hit enter, I want to go over to system instructions. Within system instructions, you can create a new set of instructions. I'm going to call this one "angry football coach," and I'm going to say: you're an angry football coach. Make everything I talk to you about about football, turn it all into life lessons, and never be impressed by me. So, we're going to do that. And now, I can say: how do I bake cookies? Ctrl+Enter, or Cmd+Enter on Mac, as I mentioned, and it should think; let's see what it comes up with. Yeah, there we go. Looks like it did exactly what I wanted it to do. Now, this is kind of a silly example, having it act like a football coach, and you can obviously continue this conversation and it'll keep doing that.

But you could very realistically do this for anything else. You could say, "You are my Spanish teacher and you're helping me learn a language." You could tell it, "You are my supervisor and you're analyzing all the work I do with a very analytical eye." Or you could say, "You're an audience member. I'm going to practice my comedy on you; let me know what the feedback is." That way, you don't have to keep asking it every single time, "What's the feedback? What's the feedback?" Instead, you talk to it in this certain light. You give it the perspective that you want Gemini to have, which is, in my opinion, a really cool feature when you're trying to shape how you're using Gemini and Google AI Studio.

Now, I forget what number we're at, but I want to show you two different ways you can generate videos on here: the free, fast way, and the much more advanced way. If we go to Home on the left side, you can see Veo 3.1 pops up right here. By the time you're watching this video, perhaps it's 3.2. I can click on that, and it'll bring us into Veo Studio, a different way to generate videos and work on this, but you will have to get an API key to do that. So, you'll have to create a new key.
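Once you have a key, requests authenticate by attaching it to each call. A small sketch; the header name follows the Gemini API documentation as I understand it, and the key is read from an environment variable so it never gets hard-coded into source:

```python
import os

# Sketch: attaching a Gemini API key to a request. Keep keys out of source
# code; read them from the environment (or a secret manager) instead.

def auth_headers() -> dict:
    """Build request headers carrying the API key."""
    key = os.environ.get("GEMINI_API_KEY", "<your-key-here>")
    return {
        "x-goog-api-key": key,
        "Content-Type": "application/json",
    }

headers = auth_headers()
```

Anything billable, like Veo generation, is metered against whatever key you send here, which is why usage and billing live next to key management in the UI.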

That's something for another video, but you can manage all of your API keys there. If I go back to the start: Get API Key, on the bottom right here, will bring you into this screen, and you can set that up and purchase credits as you need to under Usage and Billing, because something like Veo 3.1 uses a lot of power on the back end, and Google just doesn't give that away for free right now.

But if you want a free video, we can go down to Chat, select the model on the right side, go to Video, and select Veo 2. Now, Veo 2 videos are going to look decent enough; you can see some examples right here. They're not super advanced. I wouldn't expect any text or anything like that to look good, but some basic physics does apply. So let's just try it. Let's generate a video of an aloe plant growing in the desert as a time lapse. And we can say it's going to be an 8-second video. Sure, it's going to be 16:9, or maybe 9:16; make it vertical. 24 frames per second, and that's pretty much all we can set. You can add a negative prompt there, something you don't want it to be. You can even add an image, by the way, as a kind of addition to this prompt, maybe the specific landscape you want it in. But I'm going to run this.

So it looks like it generated it. I can click on play and see what it does. All right, so it looks like not quite what I wanted. Oh, okay, now it's nighttime. That's not what I was looking for, necessarily. Veo 2 is the older model and definitely not quite as good as Veo 3.
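The settings in that panel (duration, orientation, negative prompt) correspond to parameters on a Veo API request, which runs as a long-running operation you poll rather than an instant response. A hedged sketch; the parameter names are assumptions to check against the current Veo API reference:

```python
# Sketch: the vertical aloe-plant time lapse as a Veo request body.
# Veo generation is long-running, so a real call returns an operation to poll.
# Parameter names (aspectRatio, durationSeconds, negativePrompt) are assumptions.

def veo_request(prompt: str, vertical: bool = True, seconds: int = 8,
                negative=None) -> dict:
    """Build a Veo video-generation request body."""
    params = {
        "aspectRatio": "9:16" if vertical else "16:9",
        "durationSeconds": seconds,
    }
    if negative:
        params["negativePrompt"] = negative  # things you do NOT want in the video
    return {"instances": [{"prompt": prompt}], "parameters": params}

req = veo_request("Time lapse of an aloe plant growing in the desert",
                  vertical=True, negative="nighttime")
```

Adding "nighttime" as the negative prompt is one way to steer away from the unwanted result in the demo above.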

Veo 3 even has sound as well, so that's definitely a lot more advanced. But I think that's all I wanted to show you in the chat section, and that's nowhere near the extent of what Google AI Studio can do. The next tab, Build, is incredibly advanced. If I click on Build, we can vibe code all types of things using Gemini. They've got a lot of example prompts down here, so you can analyze images and do all this different stuff, but I'm just going to start off by describing my idea. I'm going to say: create a snake game. I don't know, let's start off with something really basic, just "create snake game," and see what it's able to do.

And on the left side, you can see it did that and named it all here. From the right side, we're able to actually test it out. So, let's say Start Game, and I can use my keyboard and play this. Oh, okay. Game over. Now, we can also go more advanced and say: rename it to "worm game" and make it full screen. Things like that, just some edits you want to make. Let's try it and see what it's able to do. This is the kind of stuff I've done a little bit in the past with tools like Lovable or Hostinger Horizons; there are a lot of them using this kind of setup, where you have the chatbot on the left side and the output on the right side, and it's able to vibe code some pretty advanced things. So this, again, is just one of the many things you can do using Gemini, but now baked in very natively to Google AI Studio.

So, that's how you can use some of the basic features in the Build section. But if we go back to Build, we can also go to Gallery and see some of the apps that have already been built on here, and we can work off of those. Chat with Maps Live, that's a pretty cool feature that can actually be used on, I think, the Galaxy XR; that's essentially the Apple Vision Pro made by Samsung. You're able to interact with Google Maps, and that's a cool feature that is obviously using Gemini on the back end. Otherwise, you can scroll through these and see a gallery of a ton of other apps. You've got your own apps down here, the ones that you have made or experimented with, and you can click on any one of those and experiment more. Now, I think there were two more things I wanted to show you.

One of them, as you can see right here, is the URL context tool. When you're asking questions in Gemini, say I'm using the Flash model as you can see in the top right, you can include links and ask it for certain things within a page. You can say: use this page for research, whether that's a scholarly article or anything else. You can also obviously upload a lot of files and images. But with this, you can copy a URL, have it use that for research, and it's able to answer a question based on information only from that URL. So that's going to be very beneficial.

The next thing I want to show you is all the different ways you can interact with Gemini. Obviously, I showed you the real-time screen sharing and things like that. But if you are uploading, for example, multiple PDFs for research, maybe multiple contracts you want it to read, or multiple images, you can do that down here by clicking on the plus and adding all of those in.

And then the very last thing I wanted to show you in this video is the last model. If we go over to Audio, we are able to go from text to speech. Here you can do either single-speaker or multi-speaker audio, and you can tell it what it's supposed to read just by adding in dialogue right here. So you can go generate that somewhere else, paste it in here, and it can read it. You can also choose what the tone is going to be, again written in very natural language. So you can say: read aloud in a warm, welcoming tone.

You could say an angry tone. You can choose what the speaker's voice is, and you can also name the speakers over here. So for the speaker's voice, we can go with, I don't know, maybe this one right here, and like I said, you can name them on the right side. That's a great way you can generate something like an AI podcast. Another option would be, maybe if you're making a video and you want a voiceover over a screen recording of a slideshow or something, that's something you could do just by putting this in here. Some people don't want to record their own voice, and this is a great way to use other voices. Or maybe these voices are just more natural and easier to understand than your own.

So, those are the fundamentals of how to use Google AI Studio. I hope you found this video helpful, but that is not the limit of what Google AI is able to do. The next video I highly recommend you watch is on Google's NotebookLM; I'll have that video linked right here. In NotebookLM, you can actually have it automatically generate an entire podcast without having to paste things in like this. It's an incredible feature. Go on over there to learn how to do that. Thanks for watching, guys, and I'll see you over there.
