
5 Ways to Use Gemini 3 Right Now (Free + Pro Features Explained)

By Kevin Stratvert

Summary

## Key takeaways

- **Free Access to Gemini 3 Models**: A free Google account gives unlimited access to Gemini 3 Flash and limited access to the Gemini 3 Pro and Thinking models, around 5 to 10 times per day. Upgrade to Google AI Pro for regular use of the more powerful models. [00:49], [01:10]
- **Switch Models in Gemini Chat**: In the main Gemini chat at gemini.google.com, choose Fast mode (Gemini 3 Flash) for quick answers, Pro for analyzing documents or vibe coding, and Thinking for mission-critical math, engineering, or strategy work, where the model self-corrects. [01:34], [02:32]
- **Zapier Workflow for Social Posts**: Build a Zapier workflow using Google AI Studio and URL to text: pull the article text from an RSS feed, clean it with Gemini 3, draft a social media post, explain the technical details, generate an infographic with Imagen, and store it all in a Google Doc. [04:45], [09:09]
- **Nano Banana Pro Renders Text**: In Thinking mode, Nano Banana Pro generates images with realistic small text, better camera angles and focal depth, higher resolutions, and multi-language support, outperforming the original Nano Banana. [10:12], [10:47]
- **AI Search Thinking Mode**: In AI Mode on google.com with Thinking (Gemini 3 Pro), search complex engineering topics like the cantilever bridge for detailed explanations, interactive diagrams, and sources. This requires going to google.com and clicking AI Mode. [12:17], [12:52]
- **Antigravity Free IDE**: Antigravity, at antigravity.google, is a free agentic IDE built around Gemini 3 Pro for front-end and full stack development, with quota limits that can be increased via the Google AI Pro or Ultra subscriptions. [13:12], [14:06]

Topics Covered

  • Flash for Speed, Pro for Depth
  • Automate Content with Gemini-Zapier
  • Nano Banana Pro Masters Text
  • AI Search Powers Complex Queries
  • Antigravity Enables Agentic Coding

Full Transcript

Google's full family of Gemini 3 AI models is out now. Of course, the tech world is excited about Gemini 3 because of how much more intuitive it is at both understanding natural language and replying in a realistic way, and because the advanced reasoning makes Gemini 3 great for complex math and engineering problems and a more powerful assistant for application development. And these

models are now accessible to anybody, even if you're using a free Google account. I'm Nick, and in this video I'm going to show you five ways you can put Gemini 3 to work today across Google's AI tools. We're partnering with Zapier to show how to link different

apps together in a custom workflow with the advanced reasoning in Gemini 3. You

might want to take a quick look at the Gemini subscription options. A free

account gives you unlimited access to Gemini 3 Flash and some limited access to Gemini 3 Pro. So, you can use the Pro or even the Thinking models somewhere

around 5 to 10 times per day. But if you need those more powerful models on a regular basis, you may want to upgrade to the Google AI Pro subscription. But

wait, how do you know if you need more than the Flash model? Well, we can clarify that by taking a look at the first way you can use Gemini 3 today.

Option one is the main Gemini chat tool.

You can get to this by going to gemini.google.com or by downloading the Gemini mobile app for iPhone or Android. This is the core Gemini experience, where you can ask questions and have conversations.

But before you click the send button in the chat field, there's a menu where we can choose from the different models from the Gemini 3 family. For a quick, simple answer, I might choose the fast

mode, which uses the Gemini 3 Flash model. I get a quick answer and even a link to a video.

Starting over, you might choose the Pro model if you need a thought partner to help you analyze something in more detail. You can upload documents, images, or code snippets no matter which model you choose, but Gemini 3 Pro is tuned to analyze lots of data across

several files or resources and work with you to help you understand it. The Pro

model is also a great partner for vibe coding or building agents. Or you might want to work with the Thinking model, which takes the most time, but it's really the one to use for mission-critical analysis, for issues related to math, engineering, or even planning strategies. It takes the time to research and compare multiple resources, so it can self-correct and give you the most accurate, detailed information. Of

course, all three models take advantage of the advancements in Gemini 3. If you

find that you need to use the Pro or Thinking model on a regular basis, that's when you might want to upgrade to the Google AI Pro subscription. Our

second option is to put Gemini 3's advanced reasoning to work in Zapier.

Zapier is the system where you can build powerful automated workflows, which can even connect over 7,000 different apps and tools, even if they're not designed to work together. You can include Gemini

3 in a workflow by connecting to the Google AI Studio app. Some apps, like Google AI Studio, require an API key to connect.

You can go to the Google AI Studio website and sign in with your Google account. Then click the option on the left for Get API key. You will need to set up billing, but you can just activate the free trial, which gives you 90 days or $300 worth of credits, which is plenty to set up a Zapier integration. Once that's set up, you'll be able to copy your API key. Now, I'll

also be using an app in my workflow called URL to text. That one also has a free trial, and you can get the API key for that as well from the website. So,

now let me walk you through how I use these apps in a Zapier workflow that helps me research and write social media posts. From the Zapier web page, I'll create a new Zap. You can build your workflow manually, or you can describe what you want to the AI Copilot and

Zapier will help you build it. I'll put

in the first three steps of my workflow.

I'll make sure it's set to auto, then tell it to start building.

When the AI copilot asks me to sign into the URL to text app, I'll just need to go over to the website for URL to text, copy the API key, go back to Zapier,

click the button to sign in, and paste the API key there. And when it asks me to sign into Google AI Studio, I'll need to do the same thing.

It will set up and test those steps.

Now, this workflow gets a link to the most recent article from an RSS feed.

The URL to text app pulls the full text of the article from that link. And the

Google AI Studio step uses Gemini to clean up that full article text and make it easier to read.
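The three steps described so far can be sketched outside of Zapier as a plain Python pipeline. Everything here is illustrative: the inlined feed is a toy example, and the extraction and cleanup functions are stubs standing in for the URL to text app and the Gemini call in Google AI Studio.

```python
import xml.etree.ElementTree as ET

# A tiny stand-in RSS feed (real feeds list the newest item first).
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Newest post</title><link>https://example.com/newest</link></item>
  <item><title>Older post</title><link>https://example.com/older</link></item>
</channel></rss>"""

def latest_article_link(feed_xml: str) -> str:
    """Step 1: get the link of the most recent article from the RSS feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    return channel.find("item").find("link").text

def fetch_full_text(url: str) -> str:
    """Step 2 (stub): the URL to text app would return the article body here."""
    return f"<full article text fetched from {url}>"

def clean_with_gemini(raw_text: str) -> str:
    """Step 3 (stub): a Gemini call would rewrite raw_text into readable prose."""
    return f"<cleaned version of: {raw_text}>"

link = latest_article_link(SAMPLE_FEED)   # -> "https://example.com/newest"
article = fetch_full_text(link)
readable = clean_with_gemini(article)
```

The point of the sketch is the data flow: each step consumes exactly one output of the previous step, which is the same contract Zapier enforces between steps.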

So, what's next? I'll go back to the AI Copilot and describe my next steps. I want to use Gemini to analyze that article text and draft a social media post, then use Google Drive to make a Google Doc to store the draft.

When that's all set up and tested, there's an important thing to check.

When you have a Google AI Studio step, you should select it, then take a look at the model field. As I'm recording this, Gemini 3 models are still rolling out, so Zapier is defaulting to the Gemini 2.5 model, which is still defined as the newest stable model, but Gemini 3 Pro preview is in that menu along with the model ID. You could select that manually, but I like to tell the AI Copilot to make that change, which can be more reliable. Of course, at some point in the future, Gemini 3 will be out of preview and it will be selected by default, so you may not need to make this change. Now the other important thing is to check to make sure that the right output from each step is connecting to the next step. For

example, the URL to text app is supposed to pass on the full text of an article.

After that step has finished testing, I'll select that step, then click the test tab, then scroll through these different data fields until I find the one I'm looking for. In this case, this field called data content seems to contain the full article text that I want. So, I'll remember that field name, then select the next step in the workflow, go to the configure tab, and take a look at the prompt that it's sending to Gemini, and make sure it's

referring to that same data field. And

it is. Great. So, then I'll go to the test output tab on step three. And this

should output a clean readable version of the article. So, I'll find the field that has that readable text. I believe

that's it right here. So, I'll remember that field name. Then I'll go to the next step to the configure tab to the prompt and make sure the prompt is referencing the right data field from

the previous step. And in this case, I see that it's not. But that's fine; it's easy to fix. I'm just going to remove the reference that's there, make sure to hit the return key to start a new line, then hit the forward slash key, then locate the field from step three that I do want to feed into step four. And there it is. I'll select that, and that's set. So now this step in the workflow will run this prompt referencing that input, which will generate a draft of a social media post

based on the full text of the article.
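The wiring being checked here — each step exposing named output fields that the next step's prompt references — can be sketched as plain data passing. The field names below (`cleaned_text`) are my illustration, not Zapier's actual identifiers.

```python
# Each workflow step returns a dict of named output fields; the next step's
# prompt template must reference a field the previous step actually produced.
step3_output = {"cleaned_text": "A readable version of the article..."}

# The prompt for step 4 references step 3's output field by name, the way
# the forward-slash picker inserts a field reference in Zapier.
step4_prompt_template = "Draft a social media post based on this article:\n{cleaned_text}"

def render_prompt(template: str, upstream_output: dict) -> str:
    """Fail loudly if the template references a field the previous step
    did not produce -- the mis-wiring the walkthrough above fixes."""
    try:
        return template.format(**upstream_output)
    except KeyError as missing:
        raise ValueError(f"Upstream step has no field named {missing}")

prompt = render_prompt(step4_prompt_template, step3_output)
```

A template pointing at a nonexistent field (say `{full_text}`) would raise immediately here, whereas in Zapier the same mistake shows up only as a prompt with missing input, which is why checking each step's test output matters.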

Now since I made a change, I'm going to tell the AI Copilot to test step four. And when that's done, I can find the output I want from the test results in step four and make sure those are feeding into step five properly.

Then test that.

And now that's a full workflow. But I

decided to take this even further. I

added a step that uses Gemini 3 to explain all of the technical details from the article. Gemini 3 is great at translating complex information into something you can understand. And this

is a fun one. I told it to use Gemini to make an infographic to go with the article. Now it's using Google's Imagen engine to create that infographic. Nano Banana Pro is the much more powerful image generation tool that is part of Gemini 3, but it's still in preview and not set up to work with Zapier quite yet. But when you're watching this, if Nano Banana Pro doesn't say preview next to it, I suggest you try it, because it can generate a more detailed infographic. For now, I'm going to keep Imagen. And I've already fixed the input from previous steps leading into these new steps. But on our final step, I want to look at this field where we can add an inline image. I'm going to

click on that, hit the forward slash key, and make sure to choose the image URL generated by step six. So that will also add the new infographic to my Google Doc. I'll tell it to test all the new steps that I've modified.

And when everything is done, I can do a test run of this full workflow.

When that's finished, I can go to my Google Drive and check out the Google doc that the Zapier workflow created, which has the infographic, the link to the full article text, the draft of the social media post, and the technical

explanation.

And when I'm ready, I can go back to Zapier and I can publish the workflow so it will automatically work whenever new articles are published on that RSS feed.
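A published Zap of this kind behaves like a poller that remembers which feed items it has already processed, so each new article triggers the workflow exactly once. The GUID-tracking below is my illustration of that idea, not Zapier's actual implementation.

```python
def new_items(feed_items, seen_guids):
    """Return only the feed items this workflow hasn't processed yet,
    and record their GUIDs so the next poll skips them."""
    fresh = [item for item in feed_items if item["guid"] not in seen_guids]
    seen_guids.update(item["guid"] for item in fresh)
    return fresh

seen = set()
first_poll = new_items([{"guid": "a", "title": "Post A"}], seen)
# Next poll: "a" is skipped; only the newly published "b" triggers the workflow.
second_poll = new_items([{"guid": "a", "title": "Post A"},
                         {"guid": "b", "title": "Post B"}], seen)
```

This is why publishing the Zap is safe to leave running: already-seen articles never re-trigger the social post draft.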

And this workflow is just one simple scenario. Do you use apps like Trello, Confluence, Slack, or something else that you'd like to supercharge with a connection to Gemini 3? You might want to give Zapier a try. But speaking of

Nano Banana, option three is where we'll work with the new version of Google's image generator, Nano Banana Pro, which was introduced along with Gemini 3.

You'll be able to use it in Zapier soon, but you can use it right now in the Gemini app on the web, which we saw a moment ago. First, in the tools menu, I'll select the option for Create images. It has the banana icon next to it. With that enabled, you can choose the Fast mode, but that will give you the original version of Nano Banana. Or you can switch to the Thinking mode, which gives you Nano Banana Pro. Again, you would need the Google AI Pro subscription to use this without daily limits. I'll ask it to generate a picture of a storybook with the text visible.

It uses Nano Banana Pro. It takes some time, and we get a picture. In the past, AI image generation tools have had a difficult time with small text, but this looks pretty good. In fact, let's take a close look at two pictures: one generated by the original Nano Banana and one from the new Pro version. We see how in most cases the Pro version renders text in the image much more realistically. But what else can this do? Well, Nano Banana Pro has better control over camera angles and focal depth, supports higher resolutions, and has a better understanding of different locations and languages. So, I can ask

it to translate the text in the image to Italian, and that translated text comes through just as clearly. I can ask it to give me a wider camera angle. And these are

things that would not work with the Gemini fast mode. And of course, Nano Banana Pro can generate other types of pictures, including infographics with much more technical detail compared to

other image generation models. And once

you have a picture you're happy with, you can point at it and click the button to download it. And the resolution of those downloaded pictures will be significantly higher compared to the

original Nano Banana. Now, on to option four: using Gemini 3 in Google's AI search mode. When Google announced Gemini 3, they showed off some pretty impressive scenarios. But I've seen some people get confused because they haven't been able to reproduce some of those demos. And I think the reason might be that some of those capabilities work in the AI mode in Google search, but people were trying to use them in the Gemini chat. And there is a difference. When

you go to google.com to do a simple Google search, your search results have an AI summary at the top that expands on your search topic and gives you more insights. This is the AI mode, and it's not a new feature. But with Gemini 3, the AI mode takes a leap forward. Let's

go back to the Google.com search page.

And you do want to go to google.com, not just the default start page in the Chrome browser. And instead of doing the search directly, I'll click the AI Mode button in the search field. And there's a menu at the top where you can switch from the default mode to Thinking with Gemini 3 Pro. And the same limits apply: the Thinking mode is available with daily limits for free, but is unlimited if you have the Google AI Pro subscription. So, set that, then type in your search terms. And this is

particularly impressive if you search for a more complex subject like an engineering or math topic. The search

process takes a little more time, but it will be worth it. Not only does this tell me everything I might need to know about a cantilever bridge, it even gives me an interactive diagram I can

use to understand the concept. It also

includes sources on the right so you can go and do more research on any of these sites. On to option five. Remember, the

advanced reasoning in Gemini 3 makes it a great model for app development, and Google is supporting this by providing a brand new, completely free IDE built around Gemini 3. This new IDE is called Antigravity. Antigravity is not just a coding assistant. Google describes it as an agentic development platform designed to be your AI partner for front-end and full stack development. Just go to antigravity.google to download the application. You'll find that you do have the option to use models from Anthropic or OpenAI, but Antigravity is built around the capabilities of Gemini 3. So, Gemini 3 Pro is the default agentic model. The full Antigravity IDE is free, but because the system is designed to support consistent multi-step AI tasks, it is subject to AI

quota limits, which you may need to increase as you scale up. Higher limits

can be enabled if you subscribe to the Google AI Pro or Google AI Ultra subscriptions, depending on your needs.

So, if you're looking for a way to use Gemini 3's advanced reasoning to simplify app development, why not take a look at Google's Antigravity, which

doesn't just use Gemini 3, it's designed around Gemini 3. So, conversations,

research, image generation, application development, and even automated workflows that connect several apps together through Zapier. Gemini 3 is here in full force, and there's a lot

you can do with it. For more videos like this and to keep learning, be sure to subscribe to this channel. And I'll see you in the next one.
