AI News: Insane Week - Gemini 3 Was Just The Start
By Matt Wolfe
Summary
## Key takeaways
- **Gemini 3 Tops Benchmarks**: Gemini 3 is Google's new flagship thinking model with major jumps in reasoning, coding, multimodal understanding, and long-context performance, crushing the benchmarks to be the top dog across the board. It's available everywhere now, including the Gemini web app, AI Mode in Google Search, AI Studio, the API, and the CLI. [01:01], [01:23]
- **Gemini Agent Takes Actions**: Google launched the Gemini agent, a new experimental mode in the Gemini web app that can browse the web, check Gmail, analyze your calendar, pull Drive documents, build slide decks, and plan multi-step tasks all on its own. [01:54], [02:03]
- **Nano Banana Pro Excels at Text**: Nano Banana Pro is Google DeepMind's new state-of-the-art image generation model built on Gemini 3 Pro, delivering studio quality with standout accurate text rendering (even small text is readable, not gibberish), plus researched infographics. [04:10], [04:27]
- **Microsoft Funds Anthropic $5B**: Microsoft, Nvidia, and Anthropic teamed up, with Microsoft committing up to $5 billion into Anthropic despite its huge ownership stake in competitor OpenAI, and Anthropic committing to purchase $30 billion of Azure compute capacity. [10:38], [10:51]
- **SAM 3 Segments Everything**: Meta's SAM 3 lets you click or type on any video or image to highlight and segment objects like people, fish, forks, or soccer balls across entire videos, enabling effects like glows, fire, or magnification for video editors. [13:39], [14:42]
- **GPT-5.1 Codex Max Handles Millions**: OpenAI's GPT-5.1 Codex Max is a frontier agentic coding model natively trained to work across multiple context windows via compaction, operating over millions of tokens for project-scale refactors, deep debugging, and multi-hour agent loops. [18:52], [19:11]
Topics Covered
- Google Ships Flagship Reasoning Model
- Nano Banana Masters Text Infographics
- Microsoft Funds Anthropic Rival
- SAM 3 Tracks Objects in Video
- OpenAI Unlocks Infinite Context Coding
Full Transcript
Hey, how's it going? What have you been up to?
>> Did you miss me?
>> This week's been an absolutely insane week in AI news. We had Microsoft Ignite out in San Francisco. Meta launched SAM
3 and SAM 3D. xAI launched Grok 4.1.
OpenAI released a crazy good new coding model. And so much more. But first,
we've got to talk about how Google launched Gemini 3 and Nano Banana 2 Pro extra sequel, The Reckoning Tokyo Drift.
So, without further ado, let's jump right in.
Starting with the new Gemini 3 launch.
Now, I'm not going to go super deep on this one because I did make a full breakdown video on this one called Gemini 3 Rumors Are Confirmed. It's very
good. And that breaks down all the details and does a ton of demos and testing with it. However, I'm not going to leave you hanging in this video. So,
here's the TL;DR. Gemini 3 is Google's new flagship thinking model, and it comes with major jumps in reasoning, coding, multimodal understanding, and long-context performance, meaning you
can give it a ton of text input and also get a ton of text output from it. And it
basically crushed all the benchmarks to pretty much be the top dog across the board. And the days of Google actually
announcing products but not shipping are seemingly over, because this is available everywhere as of this week.
You can use this inside the Gemini web app. You can switch to Gemini 3 right now. It's also already powering AI Mode in Google Search for Google AI Pro and Ultra subscribers in the US. Gemini 3 is live in Google's AI Studio, where you can actually test it for free as far as I can tell. It's also available in the Gemini API and the Gemini CLI, so you can build apps and agents with it as well. On top of the model itself, Google also launched the Gemini agent, which is a new experimental mode in the Gemini web app. It's not just chatting. It can actually take action: browse the web, check your Gmail, analyze your calendar, pull documents from Drive, build slide decks, and plan multi-step tasks all on its own. And finally,
Google announced Antigravity, a brand-new AI-powered cross-platform IDE for Mac, Windows, and Linux. Now, if you're not a coder and you don't know what that means, don't worry about it. It's probably not super relevant to you. But this new IDE integrates directly with Gemini 3 and the Gemini CLI, and it's designed to help you build software faster with AI-driven coding, refactoring, debugging, and agentic workflows. Bottom line,
Gemini 3 is here. It's smarter. It's
more capable across every modality. And
starting today, it's baked directly into Search, the Gemini app, agent mode, AI Studio, the API, the CLI, and the new Antigravity IDE. That was a lot of acronyms. And here are some of the coolest things I've seen people doing with it. Dane JW here made this visualization of a neural network training right in front of him and then even got it to represent itself in 3D space. That's super cool. Vortex was able to make a full 3D RTS game. Pro
here was able to give it one image, and it converted that image into a pixel-perfect website. So, like, everything that was in the image ended up on the website perfectly. Zara here
made an app that asks her questions, and then she can just sit there and talk to her computer answering those questions, and it exports a video for her at the end. It's like a sort of automated video journaling app. People have been creating some absolutely wild stuff with Gemini 3. So, make sure you check out my other video breakdown if you want to see even more examples, as well as some of the best prompts that I've used with it. And then we've got Nano Banana Pro, which came out this
week from Google as well and just blew everybody's minds. This is another one that I did a deep-dive breakdown on. The video is called Nano Banana Pro is Here, New Features Unlocked. So check that one out if you want all of the details, plus a ton of demos and example prompts. But again, here's the TL;DR if you happened to miss it. Nano Banana Pro is Google DeepMind's new state-of-the-art image generation and editing model built on top of Gemini 3 Pro. It delivers studio-quality designs with improved control, accurate text rendering in multiple languages, and enhanced real-world knowledge. For me, the standout feature was the text rendering. It is actually really, really good at rendering text on images. Even small text and background text is readable, not the sort of gibberish we're used to. It also uses live information from Gemini 3. So when it generates infographics and things like that, the text it overlays is text that it researched for you. You can ask it to create an infographic on a specific topic; it will research that topic and then use the information it found in that infographic. Like, really cool. Wow.
Number three, you can blend up to 14 images. In my original video, I mentioned you can blend up to six, but that turned out to be pre-launch information. The actual launch version allows 14 images.
However, they do still recommend using five or six images for the best results.
You can also change the aspect ratio of any image to any other aspect ratio without it distorting the image. Now,
this seems like such a small thing, but it's super useful to take, like, a 16:9 image and then reconfigure it as a 9:16 image for a different social platform.
Old models would have compressed or distorted the image. Nano Banana does it perfectly. And number five, there are even more advanced camera controls, so you can do things like tweak the lighting and camera angles and match the color grading from other images. And it's got
style transfer. So you can give it one image that you like the style of and another image that you want to restyle, and it will apply the style of the first image to the second. Nano Banana
Pro is going to be people's new go-to platform for things like infographics, flyers, banners, and converting images from one size to another for sharing across multiple social platforms. It can
also output in both 2K and 4K resolution. And it's available right now in the Gemini app if you turn on thinking mode and the Nano Banana mode.
Once both those are on, you're using Nano Banana Pro. Free users get a limited quota of generations before getting bumped down to the OG Nano Banana. And I don't know how many
generations you get. They didn't tell me. If I had to guess, it's probably a sliding scale based on demand, but I don't know for sure. It's also being made available in the Gemini API, Google AI Studio, Vertex AI, and the new Antigravity IDE for coders. So, coders and developers are getting it built into all the Google stuff as well. Third-party platforms like Adobe, Figma, and Leonardo are also getting it baked in.
Now, if you are going to use it in the API, it is a little bit pricey. It costs
about 13.5 cents per image or 24 cents per image if you want to generate in 4K.
Compare that to less than 4 cents per image on the original Nano Banana. So,
this quality bump does also come with a cost bump if you're using it in the API.
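To put those per-image prices in perspective, here's a quick back-of-the-envelope cost comparison. This is my own illustration using the rough figures mentioned above (the "less than 4 cents" figure for the original Nano Banana is treated as a 4-cent upper bound); check Google's current pricing page before budgeting anything real.

```python
# Rough per-image prices from the discussion above (USD), as illustration only.
PRICES_USD = {
    "nano_banana_original": 0.04,   # upper bound: "less than 4 cents"
    "nano_banana_pro": 0.135,       # about 13.5 cents per image
    "nano_banana_pro_4k": 0.24,     # 24 cents per image in 4K
}

def batch_cost(model: str, images: int) -> float:
    """Dollar cost of generating `images` images with the given model tier."""
    return round(PRICES_USD[model] * images, 2)

# 100 images: roughly $4 on the original vs. $13.50 / $24.00 on Pro.
print(batch_cost("nano_banana_original", 100))  # 4.0
print(batch_cost("nano_banana_pro", 100))       # 13.5
print(batch_cost("nano_banana_pro_4k", 100))    # 24.0
```

So the Pro tier is roughly 3x to 6x the cost of the original per image, which only matters if you're generating at volume through the API.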
But here's some other cool examples of what I've seen others do with Nano Banana Pro. Here's some cool images from
Matt Vid Pro. He made like a Calvin and Hobbes comic book, and it's all legible and looks like a legit comic. Here's a temporal displacement device that he designed, and a pixel-art Better Call Saul. I think Google's CEO posted this deconstructed hamburger. And keep
in mind all of the text you're seeing on this is based on research that Gemini did and then implemented into the image.
Here's some examples of some really cool annotated diagrams from Sahil. Apollo 11
lunar landing. Look, it sort of drew all over the image with details that, again, it researched. Here's another one of this rocket ship here, and one from a McLaren MCL60 F1 car. I mean, if you need any infographics, you've got the tool. Here's
some of my favorite generations from my other video. I took a watercolor
painting and a photo of me and some of my buddies here, took the style from the watercolor, and applied it to the photo. I took this image of this group here and had it turned into an '80s synth-pop band poster. Here's an
image where I sketched a giant person in the background and told the model what to do with it. And here was the resulting image. I made this menu with
both English and Japanese text. And
what's so impressive is even this small text here is legible and not gibberish.
So, Nano Banana again, super super impressive. I did a full breakdown video
on it. You definitely need to check that out. I love customizing everything I can, and my MacBook setup here is no exception. So, I want to show you guys a shortcut I found. My first thought was, >> Can an AI product actually do this for me?
>> And it turns out the answer is yes. A
lot of people think Warp is an AI vibe coding tool, but it's really more of a vibe computing tool. There's so much you can do with it right in your computer terminal. For example, I can ask Warp's AI to remove all the app icons in my Dock that I haven't opened in the past 30 days. And I can actually see the AI thinking through the process inside my Warp terminal, and instead of having to go through each app myself, it just does it for me. I can also ask it to scan my home directory, find duplicate files
older than 90 days, and move them into an archive folder. And I didn't have to go into my files once. And this is one of my favorites: download all of the embedded images from this Google Doc in high resolution. If you've ever tried to download an image from a Google Doc before, you know why this is actually a pretty killer shortcut. And there's so much more that you can do with Warp, both in terms of helping you code and
also just helping you become the biggest power user of your own computer. And
while other AI tools try to one-shot the task, Warp keeps you in the loop so you and the agent can iterate together until it's exactly how you want it. It's a
great example of a product that's not trying to replace developers, but just trying to make their lives easier. So,
if you're interested in an AI product that's built into your computer, try it out for free at the link in the description. Even though Google pretty
much dominated the news cycle this week, there was a lot more that happened. This
week was also Microsoft's annual Ignite event where they typically make a bunch of announcements and well, this year's announcements were very focused on AI.
One of the super interesting things to me that happened was that Microsoft, Nvidia, and Anthropic all sort of teamed up together. And the reason this is
interesting is because Microsoft has a huge percentage ownership in OpenAI, which is Anthropic's biggest competitor.
Microsoft is committing up to $5 billion into Anthropic, and Anthropic is committed to purchasing $30 billion of Azure compute capacity. So despite
Microsoft and OpenAI being super tied together, Microsoft is branching out into Anthropic now too. Now there were over 70 announcements at Microsoft Ignite. So, I'm not going to get into
all of them, because many of them were geared towards enterprise, but here are the ones I found the most interesting. Microsoft is integrating AI
interesting. Microsoft is integrating AI agents directly into the Windows 11 taskbar. You'll be able to invoke
Copilot and other agents from the taskbar or Start menu to automate or perform tasks on your PC, get file summaries, and more. And you'll see the
agent status bar in real time in your taskbar. You can interact with the
floating windows and get notifications as agents work in the background. These
taskbar features are going to be opt-in, so you have control over whether or not you actually want to use them with your computer. Copilot is now going to be embedded in File Explorer, letting you summarize documents, answer
questions, and draft emails with one click. We're also getting dedicated AI
agents for Word, Excel, and PowerPoint, so you can create documents, spreadsheets, and presentations from simple text prompts. You can then ask follow-up questions to further tailor
the content that it created. Now, these
aren't available yet for most consumers, but they claim they're coming soon. Also, across the Copilot ecosystem, you're getting access to the Anthropic Claude models as a result of that partnership we just talked about.
So, in my opinion, those were the real standout features of Microsoft Ignite.
But again, there were over 70 announcements. So, if you work in enterprise or you're a developer and you want to know specifically what they announced for you, make sure you check out the futuretools.io/news
page because I shared all of the updates as they were coming out over on that website. On Monday of this week, we got
a new model out of xAI with Grok 4.1, and for a moment it was pretty much the best model on the market. Gemini 3 came
out the very next day, but Grok 4.1 was a pretty big upgrade from the previous Grok model. We can see that from the LMArena text leaderboard: Grok 4.1 Thinking pretty much won the day. And by
day, I mean like literally a day because the very next day, Gemini 3 Pro came out and well, knocked it out of the lead.
Apparently, it's a leader in emotional intelligence. There's an EQ-Bench, and Grok 4.1 Thinking is leading the pack there. In creative writing, it performed
there. In creative writing, it performed just slightly below GPT 5.1. And I think one of the real breakthroughs of this model was the much lower hallucination rate from previous models. Meta also
released a couple of new models this week that are pretty dang impressive.
Quite honestly, had it not been such an insane week with Google and Microsoft and everything else, these probably would have been the top story of the week. So, the first one was Segment Anything Model 3, or SAM 3. And this is a model where you can give it any sort of video or image, and you can either click on a person or an object and it will
highlight it. Or you could type in
something like "people" and it will highlight the people, or highlight the fish in the image. And then you could even separate them out. So you can see three penguins and it found them all
separately. So look at this image that
they uploaded and look at all of the different things that it was able to segment out inside of this image. Like,
it was able to pick out every little thing, every fork, every bowl, every wine or champagne glass, even the lights on the ceiling. It was able to pick each one of them out individually. If you go
to aidemos.meta.com/segment-anything, you can actually play with this one as well. I had early access to this one, so I have spent a little bit of time with it already, but you can do some really cool stuff. Like, let me just pick one of their demo videos here.
We've got this video of somebody playing soccer. Well, let's go ahead and search
for an object. Let's search for "soccer ball." It's searching for the soccer ball. It found it. Now, let's tell it to search the entire video. It's going to scan the entire video and find the soccer ball throughout the entire thing.
We can see it's following along. If
you're a video editor, this type of thing saves so much time. Now that we've got the soccer ball tracked, I can continue to Effects, and we can do things like put a contour around the soccer
ball. And now it follows the soccer ball
around with these giant lines. Let's put
a glow on the soccer ball. Now we have a yellow glowing soccer ball. Let's make
it orange so it looks like the soccer ball's on fire. I could change the radius on the soccer ball as well and make it look like just a small orange outline or make it look like a big glow.
Or I can magnify the soccer ball. Let's
make the soccer ball bigger on the screen. Let's scale it up even more. Now
we have a giant soccer ball that they're kicking around. I could also mess with
everything but the soccer ball and change the background so that the background's blurred and the soccer ball is the only thing in focus. So really,
really cool, fun stuff that you can do with video editing. But that wasn't the only model they released. They also
released this one. Now, don't confuse this with SAM 3. This one is SAM 3D.
Kind of the same idea where you can give it images and videos and select anything in it, and it will actually turn the thing you selected into a 3D object. So,
here's an image of a chair, and somebody selected that chair, and then they were able to visualize the 3D version of it in their room using an augmented reality app on their phone. Here's them
highlighting an accordion and pulling just the accordion from this image. So,
I think I misspoke. I think I said it works with video and images. I'm
thinking it's just images, but pretty impressive. I mean, look at all these
things. They selected multiple objects.
It went through SAM 3D, did all of its, you know, AI stuff in the background and pulled all of those in as a 3D object.
I'm imagining in the future once this gets really good, you'll be able to take pictures of something, convert it to 3D, and then 3D print versions of what you just took a picture of. And this one's also available to play with for free
over in the Segment Anything Playground.
You can create 3D scenes with SAM 3D or create 3D bodies. Let's try 3D scenes first. We have an image of some people
sitting around, like, coding, and I can just click on what I see in the scene.
Let's look at this like little planter thing in the background. We can see it highlighted that. Let's also make sure
it's got the plants coming out of it as well. And click Generate 3D. And we can actually see it forming in 3D. And that
actually happened pretty quickly. And
look at how coherent that is from this image in the background here. Let's
remove this stuff. And let's get this dude and his chair. Just going to click until the whole thing's highlighted. And
let's generate that.
I mean, not perfect, but still pretty impressive, especially for how fast that was. Let's do create 3D bodies here.
This one's probably a little bit better for people. Here's some people dancing.
And it actually already found all the people, and I didn't actually have to prompt anything or click on anything.
It just found the people for me. So,
let's select this dude here. And it's
just automatically generating the 3D version. Oh, it's even got like the
little skeleton inside. So it's definitely
much better at getting people when you use this version of the model. I can
also change this to a people reference view and see the 3D model among the rest of the people. Pretty, pretty cool. Like I said, this is probably something a lot more people would be talking about if Google hadn't dropped so much new stuff this week. Now for a quick recap of the AI drama from this
week.
Let's start with this super spicy tweet from Sam Altman. Hey, what the— But actual drama: Larry Summers, one of the board members at OpenAI, decided to step down this week due to some dealings with some person named Epstein. I don't know what that's all about. All right, continuing on with some more OpenAI news. They
actually released a new model this week called GPT-5.1 Codex Max. Yeah, they're still pretty good at naming things. This is their new frontier agentic coding model, and it's available inside of Codex. So this is more relevant if you're a coder, but it's a huge leap for coders because of the additional context length of this new model. This is their first model natively trained to operate
across multiple context windows through a process called compaction, coherently working over millions of tokens in a single task. This unlocks project-scale refactors, deep debugging sessions, and multi-hour agent loops. So you can set their agent off on a task and let it
just go do the work for like 24 hours.
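The video doesn't go into how compaction works under the hood, but the basic idea can be sketched as a toy function: once the running message history blows past a token budget, older messages get collapsed into a short summary while the most recent context is kept verbatim. Everything here (the word-count tokenizer, the budget numbers, the summary string) is my own illustrative assumption, not OpenAI's actual implementation:

```python
# Toy sketch of context compaction (illustration only, NOT OpenAI's algorithm).

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def compact(history: list[str], budget: int, keep_recent: int = 3) -> list[str]:
    """Collapse old messages into a summary once the token budget is exceeded."""
    total = sum(count_tokens(message) for message in history)
    if total <= budget or len(history) <= keep_recent:
        return history  # still within budget: nothing to prune
    old, recent = history[:-keep_recent], history[-keep_recent:]
    # A real agent would ask the model to summarize `old`; here we just note
    # how many messages were pruned so the agent keeps a trace of its past.
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent

history = [f"step {i}: edited file_{i}.py" for i in range(100)]
compacted = compact(history, budget=50)
print(compacted[0])    # [summary of 97 earlier messages]
print(len(compacted))  # 4: the summary plus the 3 most recent steps
```

The point is that the agent's effective memory keeps rolling forward instead of hitting a hard wall, which is what makes those multi-hour loops possible.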
Compaction enables GPT-5.1 Codex Max to complete tasks that would have previously failed due to context-window limits, such as complex refactors and long-running agent loops, by pruning its history while preserving the most important context over long horizons. So
if you're a ChatGPT Plus, Pro, Business, Edu, or Enterprise user, you have access to GPT-5.1 Codex Max. I haven't tested this model myself yet, but from what I'm
hearing, it's a pretty dang impressive coding model. Hey, just had to jump in
real quick, because there's something I forgot to mention that OpenAI also rolled out. They actually rolled out a group chat feature. So, when you're inside of your ChatGPT account, up in the top right, you'll see this little Start group chat button. If you click it and then select Start group chat, it'll
give you an invite link that you can share with anybody. And now you can collaboratively chat inside of ChatGPT with multiple people in the same conversation. Here's what it looks like in practice. I went into ChatGPT and said, "Give me 10 ideas for cool YouTube videos about AI." Gave me these, and then my producer Dave jumped in and said, "Now incorporate Gary Busey into all of
these ideas." And well, it did that. So,
a pretty cool new feature. I didn't want this video to go live without me mentioning it because I think a lot of people would be interested in it. But
all right, let me get back to the rest of the video. Now, this is a cool little thing OpenAI is doing as well. There's a
free version of ChatGPT built just for teachers. It's a secure ChatGPT workspace that supports teachers in their everyday work so they can focus on what matters. Teachers get free access to this through June of 2027. Here's
what it comes with. Education-grade
security and compliance. So, anything
you share with this model is not used to train their models. And it's built to protect student data. It's got
personalized teaching support. So you
can tell it to remember details like grade level, curriculum, preferred format so that the responses are tailored to your teaching style. You can
connect it to tools like Canva, Google Drive, and Microsoft 365. It's got ready-to-use ideas and prompts for teachers in there. You can create templates with other teachers and share them around.
And it's got special admin controls for schools and district leaders. So if
you're a teacher, I'd hit that up. And
in the final bit of OpenAI news that I'm going to share today, Intuit signed a $100 million deal with OpenAI to bring its apps to ChatGPT, which I actually think might be kind of cool. Like, if all of your accounting software is connected to ChatGPT, maybe you'll just be able to say, "ChatGPT, go do my taxes," in the future.
>> Yes.
>> Taxman.
>> Taxman. Taxman.
>> That'll be fun. And I'm not done yet.
There's a few more quick things that I need to share with you this week. So,
let's jump into a rapid fire.
Starting with the fact that Replit announced a new design feature. So, if you're a Replit user, it actually now leverages Gemini 3 to help you create really, really good designs. One of
the problems with AI coding is if you're trying to build websites or apps, they all kind of look the same. I've heard
people talk about how the purple background and buttons are like the equivalent of the em dash for coders.
It's the dead giveaway that you used AI for coding. And this new design feature actually creates unique and good-looking designs. Now, a quick disclaimer: I do have a little bit of an investment in Replit, but I would have shared the news regardless, because I do think it actually makes pretty cool-looking designs. In an interesting move,
looking designs. In an interesting move, 11 Labs introduced image and video into their platform. With 11 Labs, you can
their platform. With 11 Labs, you can now bring ideas to life in one complete creative workflow. Use models like Vio,
creative workflow. Use models like Vio, Sora, Clling, Juan, and Seed Dance to create high-quality visuals, then bring them to life with the best voices, music, and sound effects from 11 Labs.
So, ElevenLabs is kind of pulling in all the features from all of the platforms and is almost trying to be what Leonardo and Krea and Higgsfield and some of these other platforms are. It seems like ElevenLabs wants to be a one-stop shop for everything creative AI, but in most people's minds, they're pretty much cemented as the audio generation
platform. Manus this week introduced the Manus browser operator. This is an extension that works on any of the Chromium browsers, including of course Chrome. You install this extension and
it can do agentic things right inside of your browser: take control of your screen, click around for you, do searches for you, book restaurants for you. A lot of the stuff that we've seen a lot of these other agentic browsers do, well, now you can do with the Manus browser operator extension inside of Chrome as well. Also, Midjourney is back in the news this week.
>> Now, that's a name I've not heard in a long time.
>> They just rolled out a new profiles feature. I haven't talked about
Midjourney in a long time. So, I wanted to point this out to show you that I am still paying attention to Midjourney. I just haven't felt like a lot of the announcements they've made recently have been exciting enough to share. But this one's actually a pretty cool upgrade. They're sort of building in community features inside of Midjourney where you can have your own profile, people can view the images you generated, and you can share your various social media links. My profile is midjourney.com/misterflow.
As you can see, I haven't finished doing anything with it, like setting profile images or banner images, but I did put my social media links there. Now, I've
got one last thing I want to share with you that I really think you're going to enjoy. But before I do, I do want to
confess that I detoured a little bit on this channel from sharing all of the AI news. However, I quickly realized that I
news. However, I quickly realized that I absolutely love staying informed on the AI news, reading the articles, talking to the people, playing with the tools, demoing stuff. Like, that's what I live
for. I absolutely love it. And I pretty quickly started to miss sharing the news. So, I finished my detour. I'm back
on track. I'm going to start releasing these weekly news videos again and giving you the breakdowns as well as making individual news videos for the big events like I did this week. I'm
getting this channel back on track, doing the thing that probably got you to follow my channel in the first place. I
apologize for the detour, but I just like sharing the news too much. It's too
fun to me. I don't know why I even decided to pause it in the first place.
With that being said, I want to end this video with something that I came across online that is now probably my favorite video on the internet. I introduce to you: Russia unveiling their first humanoid autonomous robot.
Thank you so much for nerding out with me today. If you like videos like this,
make sure to give it a thumbs up and subscribe to this channel. I'll make sure more videos like this show up in your YouTube feed. And if you haven't already, check out futuretools.io, where I share all the coolest AI tools and all the latest AI news, and there's an awesome free newsletter. Thanks again. Really appreciate you. See you in the next one.