Sam Altman Announces a Massive Breakthrough! OpenAI’s Giant Project, Microsoft’s New AIs.
By AI research
Summary
## Key takeaways
- **Honor Phone: A Robot Camera in Your Pocket**: Honor's new smartphone features a pop-out robot camera powered by AI that can track subjects and stabilize footage autonomously, redefining the phone as a personal, thinking cameraman. [00:48]
- **CapCut: From Editor to AI E-commerce Platform**: CapCut has evolved into a full AI platform for brands, enabling users to create professional product photos, CGI ads, and virtual influencers, streamlining campaign preparation for businesses. [01:57]
- **AI in Medicine: Digital Stethoscope for Diagnosis**: The Keikku 2.0 digital stethoscope uses AI to analyze heart and lung sounds in real time, assisting doctors in diagnosis and reducing administrative burdens by automatically generating clinical notes. [03:13]
- **OpenAI's $1 Trillion 'Stargate' Project**: OpenAI is building a massive computing infrastructure called Stargate, projected to cost $1 trillion, to power future AI generations and potentially become a cloud-based AI energy exchange. [05:30]
- **Ohio Considers Banning AI Marriages**: A proposed bill in Ohio seeks to legally prohibit artificial intelligence from being recognized as a person and ban humans from marrying AI, setting boundaries between humans and machines. [07:36]
- **AI Writes 90% of Code, Engineers Focus on Complex Tasks**: Anthropic's AI now writes 90% of its code, but CEO Dario Amodei states engineers are more crucial than ever, focusing on the challenging 10% of logic and AI coordination, leading to tenfold productivity gains. [09:36]
Topics Covered
- Gadgets are now autonomous participants, not just tools.
- Stargate: OpenAI's $1 Trillion Bet on AI Infrastructure.
- AI writes 90% of code; engineers are more vital.
- Windows 11: Your PC becomes an AI agent.
- Sora 2 sparks Hollywood copyright and job fears.
Full Transcript
Hey everyone. In this episode, we're
diving into the biggest and most
exciting updates from the world of
technology and artificial intelligence.
OpenAI is launching a $1 trillion project, the future computing backbone
for the next generation of AI. Microsoft
is turning Windows into a fully AI-powered PC. Anthropic is teaching AI to
write 90% of the code without replacing
human engineers. CapCut is no longer
just an editor. It's now a complete AI
platform for online brands. And Google
is firing back at OpenAI. After the
massive debut of Sora 2, the company
unveiled its own answer in the AI video
race. All that and more in this episode.
Stick around till the end so you don't
miss a thing.
Honor has unveiled a new smartphone that
feels like a step into the future. From
its body, a tiny robot camera literally
pops out. It can rotate, track subjects,
stabilize footage, and record while in
motion, all automatically without user
input. The camera reacts to what's
happening in front of it, almost as if
it has a mind of its own. This concept
redefines what a mobile phone can be.
If cameras once just captured moments,
now they can act like a real operator,
assessing movement, choosing the right
angle, and keeping the subject perfectly
in frame. Honor says the system is
powered by artificial intelligence that
analyzes the image in real time. It
looks both fascinating and a little
eerie. When a small robot with lens-like
eyes emerges from a phone, it feels like
the device has come to life. But that
seems to be where modern tech is
heading. Gadgets that are not just
tools, but autonomous participants. The
phone is no longer just a camera in your
pocket. It's slowly becoming your
personal cameraman. One that can see,
think, and move on its own. CapCut has transformed from a simple video editor into a full-fledged e-commerce tool. Now,
it's not just for editing videos. Users
can create professional product photos,
CGI ads, and even virtual influencers.
The company has launched a new desktop
version of the platform that brings all
these features together in one place,
powered by state-of-the-art AI models.
The key innovation is the AI design
agent. It can analyze a product, choose
the right background and style, and turn
an ordinary photo into a brand-level
image. Essentially,
business owners no longer need to hire
designers or photographers. CapCut
handles everything from visuals to
finished ads. This update arrives just
in time for the holiday sales season
with developers hinting that brands can
now prepare their Black Friday campaigns
in minutes. New users also get 40% off
the pro plan and those who retweet the
official post can receive a free guide
and one month of pro access. Step by
step, CapCut is evolving into an
all-in-one AI platform for business.
Fast, easy to use, and capable of
producing visuals that once required an
entire creative team. Lapsi Health has unveiled the new version of its digital stethoscope, Keikku 2.0.
This device doesn't just listen to and
record heart and lung sounds, it also
helps doctors diagnose conditions with
the assistance of artificial
intelligence in real time. The company
describes Keikku 2.0 as the first multimodal
tool that combines clinical,
administrative, and AI capabilities in
one compact device. According to CEO
Jhonatan Bringas Dimitriades, doctors
can now listen, document, and analyze
patient data simultaneously without
wasting time on paperwork. This saves
valuable minutes and allows physicians
to focus more on patients rather than
forms. The American Medical Association
estimates that US primary care doctors
spend an average of 36 minutes per visit
managing electronic health records,
often extending into their personal
time. Keikku 2.0 aims to streamline that
process by automatically generating
clinical notes and sending them directly
into electronic medical record systems.
It's compatible with all major platforms
and secures data in compliance with
HIPAA standards. Beyond documentation,
the device also assists in
decision-making. Its built-in sensors
and AI-based acoustic analysis can
detect abnormal heart or lung sounds
that may signal a condition. By
converting sound into structured data,
Keikku helps doctors make faster and more accurate diagnoses and treatment plans. Its high-quality microphones capture clinical-grade sound even in noisy
environments. And the portable design
makes it easy to carry between exam
rooms or during hospital rounds. The
stethoscope connects via Bluetooth and
future versions will be able to work
completely autonomously. Keikku 2.0 is classified as a Class II medical device
and ready for use in professional
healthcare settings. The digital
diagnostics market is growing rapidly.
Earlier this year, California-based
Eko Health began deploying an AI-powered
screening system in the UK that can
detect or rule out three heart
conditions in just 15 seconds. Now,
Lapsi Health is joining the race, taking
another step toward a future where
artificial intelligence becomes a
trusted partner in medicine. OpenAI is
gearing up for the most ambitious
project in its history, the construction
of a massive computing infrastructure
called Stargate. Its power output will
be comparable to 20 nuclear reactors and
its purpose is to build the foundation
for future generations of artificial
intelligence from multimodal agents to
self-learning systems capable of operating
in real time. The total cost of the
project is estimated at $1 trillion, making it the largest technological
initiative in the company's history. To
make it happen, OpenAI plans to combine
its own revenue, debt financing, and
strategic partnerships. Today, the
company earns around $13 billion a year, with most of that coming from ChatGPT's paid subscriptions. Only about 5% of its
800 million users currently pay, and
OpenAI aims to double that share through
new pricing tiers, including more
affordable plans. The rest of the
funding is expected to come from bonds
and loans, helping OpenAI raise capital
while remaining independent, as well as
from corporate partners who will
co-finance parts of the infrastructure in
exchange for access to computing power.
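For a rough sense of scale, here is a back-of-envelope sketch in Python built only from the rounded figures quoted above; the per-subscriber and payback numbers it derives are illustrative inferences, not figures from the episode.

```python
# Back-of-envelope check of the Stargate financing figures quoted above
# (rounded numbers from the episode; derived values are illustrative only).

total_users = 800_000_000          # ~800 million ChatGPT users
paying_share = 0.05                # ~5% currently on paid plans
annual_revenue = 13_000_000_000    # ~$13B per year, mostly subscriptions
stargate_cost = 1_000_000_000_000  # ~$1 trillion projected cost

paying_users = total_users * paying_share           # about 40 million
revenue_per_payer = annual_revenue / paying_users   # roughly $325 per year

# Even doubling the paying share only roughly doubles subscription revenue,
# which is why the plan leans on debt and co-investing partners as well.
years_at_doubled_revenue = stargate_cost / (annual_revenue * 2)

print(f"Paying users: {paying_users:,.0f}")
print(f"Revenue per paying user: ${revenue_per_payer:,.0f} per year")
print(f"Years of doubled revenue to match $1T: {years_at_doubled_revenue:.0f}")
```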
Stargate isn't just an internal project.
It's also designed to become a revenue
generating platform. OpenAI plans to
sell part of its computing capacity back
to the market, effectively creating a
cloud-based AI energy exchange. This
approach could offset the company's
massive expenses, two-thirds of which are
tied to semiconductors, primarily
supplied by Nvidia and Broadcom. Despite
reporting $8 billion in losses in the
first half of the year, OpenAI's
leadership remains confident that the
project will pay off. As hardware costs
fall and demand for AI continues to
rise, the company expects to not only
reach self-sustainability, but also
transform into an infrastructure hub for
the global AI industry. For OpenAI,
Stargate isn't just an investment in
technology. It's a bet on a future where
computing power becomes the new oil of
the digital age. An unusual bill has
been introduced in the state of Ohio.
Lawmakers want to officially prohibit
recognizing artificial intelligence as a
legal person and ban humans from
marrying it. The proposal, authored by Republican Representative Thaddeus Claggett, aims to draw a clear line between
humans and machines before technology
goes too far. The legislation known as
House Bill 469 defines AI systems as
non-sentient entities and denies them any
legal human rights. This means that AI
would not be allowed to own property,
open bank accounts, run companies, or
hold executive positions. The bill also
explicitly prohibits marriages between
humans and AI, as well as between two AI
systems. According to Claggett, the goal
isn't to fight technology, but to
prevent situations where AI begins to
take on roles that should belong only to
people. He emphasized that no one is
talking about actual wedding ceremonies
with robots yet. But it's important to
set legal boundaries before such
scenarios become reality.
Claggett pointed out that AI is already
capable of performing many human tasks
from writing and data analysis to
managing finances and warned against
allowing it to make decisions that
directly affect people's lives. The
issue gained attention after a recent
survey by marketing firm Fractal found
that 22% of respondents had developed
emotional attachments to chatbots,
while 3% considered them romantic
partners. Another 16% said they had
wondered if AI might be sentient after
extended conversations.
Claggett noted that artificial
intelligence is now broader and smarter
than any one human being, which is why
clear legal limits are needed. Similar
measures have already been adopted in
Utah and proposed in Missouri. For now,
House Bill 469 is awaiting its first
hearing in the Ohio House of
Representatives.
But its introduction shows how the
question of where to draw the line
between human and machine is rapidly
moving from science fiction into real
world politics. Artificial intelligence
is taking over more of the coding
process, but engineers remain essential.
That was the message from Anthropic CEO
Dario Amodei during a conversation with Salesforce founder Marc Benioff at the Dreamforce conference. According to Amodei, Anthropic's Claude model now
writes up to 90% of the code used by the
company's teams. Yet, instead of
replacing engineers, this shift has made
their work even more important. Amodei
noted that his earlier prediction that
90% of code would be written by AI within six months has already become a reality
for Anthropic and several of its
partners. Still, he cautioned against
taking that number too literally. AI
handles routine work while humans focus
on the most challenging tasks. If Claude
writes 90% of the code, it doesn't mean
we need fewer engineers. On the
contrary, they can now do more. They
concentrate on the hardest 10%: editing logic or coordinating groups of AI models. As a result, productivity has increased tenfold, he explained.
According to Amodei, this isn't about
replacing people, but redistributing
roles. Machines take over the repetitive
work while humans guide the process,
make creative decisions, and ensure
quality. This partnership, he believes,
marks the beginning of a new era in
software development. Anthropic isn't
the only company moving in this
direction. Data from the startup
accelerator Y Combinator shows that
about a quarter of startups in its
winter 2025 batch are generating up to
95% of their code using AI. But the
rapid adoption of such tools is already
reshaping the job market. A Stanford
University study found that entry-level
developer positions have dropped by
nearly 20% since late 2022, the time
when ChatGPT was first released.
Experienced engineers, however, are less
affected. Their ability to understand
complex systems remains in high demand.
For younger developers, the path has
become more difficult as competition
with AI grows. Still, experts say those
who learn to work with AI rather than
against it will have a major advantage.
AI isn't just changing how code is
written. It's redefining what it means
to be a programmer. The job is no longer
about typing lines of code, but about
directing the intelligent tools that do
it for you. Microsoft has rolled out a
major update that turns every Windows 11
PC into a true AI-powered device.
Copilot is no longer just a built-in
assistant. It's now the central part of
the entire system, integrated deeply
into the interface and everyday apps.
The company says this marks a shift
toward a computer that understands,
sees, and acts, not just responds to
commands. The goal is to make
interaction with your PC as natural as
talking to another person. With the new
wake phrase, "Hey, Copilot," you can
simply speak to your computer, and it
will respond instantly, showing you
where to click, explaining how to use an
app, or helping improve a document. The
new Copilot Vision feature analyzes
what's on your screen and guides you
through tasks. Whether it's editing
photos, improving presentations, or
learning new tools. One of the biggest
changes is the arrival of Agentic
features. Copilot can now act on its
own. It can sort files, find documents,
or even build a complete website using
local materials without uploading
anything online. A built-in agent called
Manus handles these tasks automatically.
You just describe what you want done and
Manus completes it while you focus on
something else. Copilot also got smarter
at handling your personal content.
Through new connectors, it links to
services like OneDrive, Outlook, Gmail,
and Google Drive, letting you ask
natural questions such as "Find my dentist appointment" or "Show my school paper from Econ 2011." It retrieves
exactly what you need in seconds. You
can even export Copilot's responses
directly into Word, Excel, or
PowerPoint. The system automatically
formats the result into a document,
spreadsheet, or presentation.
AI is also expanding into gaming on ROG
Xbox Ally devices. The new Gaming Copilot acts as a personal in-game assistant, giving tips, walkthroughs,
and real-time insights without leaving
the game screen. Microsoft emphasizes
that privacy and control remain top
priorities. Copilot only acts with your
permission and you can see what it's
doing at any time. Every action can be
paused or stopped instantly. At the same
time, hardware partners are releasing a
new generation of Copilot Plus PCs
equipped with neural processing units
that let them handle AI tasks locally
without relying on the cloud. Acer,
ASUS, Dell, HP, Lenovo, Samsung, and Surface are among the first to launch
models that promise faster performance,
longer battery life, and powerful
creative tools. In short, Windows 11 is
evolving from a simple operating system
into a true AI platform, one where
artificial intelligence is not just an
app, but an integral part of the
experience. Your computer can now
listen, see, and act on your behalf,
helping you work, learn, play, and
create more naturally than ever before.
A year after the release of the first
version of Sora, Sam Altman has unveiled
Sora 2, an upgraded tool that can now do
much more than generate short clips. The
new model can insert real people into
artificially created scenes, add sound
effects, and even generate dialogue.
While users were impressed by the
technology, Hollywood reacted with
alarm. According to the Los Angeles
Times, studios and unions fear that this
innovation could undermine copyright
laws and threaten the livelihoods of
actors whose likenesses can now be
recreated without their consent. The
main concern centers on who owns the
rights to digital replicas of actors and
how they should be compensated when
their image is used. The Motion Picture
Association has demanded clarification
from OpenAI, while the actors' union SAG-AFTRA called the company's approach a dangerous precedent. The Beverly Hills talent agency WME, which represents
stars like Michael Jordan and Oprah
Winfrey, said many of its clients plan
to reject any projects involving Sora 2.
Sources told the Los Angeles Times that
OpenAI held talks with major studios and
talent agencies, asking them to compile
a list of characters and actors whose
likenesses cannot be used in
Sora-generated videos. But SAG-AFTRA pushed back, saying this opt-out approach, where everything is allowed unless explicitly forbidden, is unacceptable.
Actor Sean Astin, best known for playing Samwise Gamgee in The Lord of the Rings,
warned that such a system threatens the
economic foundation of our industry and
could lead to costly lawsuits. Under
mounting pressure, Sam Altman promised in a blog post that OpenAI will give
rights holders new tools to control how
their images are used, receive
compensation, and request the removal of
unwanted content. The company also noted
that its technology already includes
mechanisms to block the generation of
copyrighted characters. With Sora 2,
OpenAI has taken a bold step forward
in generative video, but also ignited
one of the biggest clashes yet between
artificial intelligence and Hollywood's
creative establishment. In this new era,
it's becoming harder than ever to tell
who's real and who was made by a
machine. Artificial intelligence just
took another step toward understanding
us without words. A new tool called
Chaplin can read lips and turn silent
video into text in real time. In the
demo, the creator simply speaks to the
camera and the AI instantly types out
every word, even with the sound
completely off. Chaplin runs fully
offline and doesn't require an internet
connection. Everything happens locally
on your computer, making it not only
convenient, but also secure. Your data
never leaves the device. The system
opens new possibilities for
communication, especially for people
with hearing impairments or in
situations where sound isn't an option,
and it's completely free. Neural
networks can now truly read lips faster
than we can say a word. ChatGPT can now
transcribe videos, and you don't even
need to come up with a complicated
prompt. Just drop a clip into the chat,
ask it to turn it into text, and within
a few seconds, you'll have the full
transcription. The AI automatically
recognizes speech, processes the audio,
and delivers clean, readable text. It's
another example of how quickly OpenAI
is turning advanced technology into
something ordinary users can rely on.
What once required separate apps for
transcription and translation is now
built right into ChatGPT. The new
feature will be especially useful for
journalists, researchers, and content
creators, anyone working with
interviews, lectures, or video notes.
All it takes is a single upload, and the AI handles the rest.
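The drag-and-drop flow described here lives in the ChatGPT app itself. For anyone who wants a similar result programmatically, a minimal sketch using the OpenAI Python SDK's audio transcription endpoint might look like this; the file name is a placeholder and the model choice is an assumption, not something stated in the episode.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The transcription endpoint accepts common audio/video containers
# (mp3, mp4, m4a, wav, webm), subject to file-size limits.
with open("interview.mp4", "rb") as media:  # placeholder file name
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=media,
    )

print(transcript.text)  # clean, readable text of the spoken audio
```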
Google has unveiled Veo 3.1, the latest version of its video generation model, and one of the most significant updates in AI filmmaking to date. Videos created with Veo now look
like real short films. Movements are
smoother, lighting and shadows more
natural, and object physics finally feel
realistic. But the biggest upgrade is
full audio integration. With Veo 3.1,
you can not only generate a video, but
also add complete soundtracks,
dialogues, footsteps, ambient city
noise, wind, or even music. All
automatically synced. No external
editing required. The model is built into Flow, Google's creative playground for AI filmmaking, and is available through the Gemini API. In the coming months, Veo will also roll out to Vertex AI, allowing companies to use it for ads, visualizations, and presentations. Pricing remains the same: 40 cents per second for the standard version and 15 cents per second for the fast one. There's no free tier; you only pay for successful generations.
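Since the episode notes that the model is exposed through the Gemini API, here is a rough sketch of what a request through the google-genai Python SDK could look like; the model identifier, prompt, and response handling are assumptions for illustration, so check the current Veo documentation before relying on them.

```python
import time
from google import genai

client = genai.Client()  # expects GEMINI_API_KEY in the environment

# Start an asynchronous video generation job (model id assumed here;
# verify the current Veo 3.1 model name in the Gemini API docs).
operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",
    prompt="Rainy city street at night, slow dolly shot, ambient traffic noise",
)

# Video generation is long-running, so poll until the job completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the first generated clip.
generated = operation.response.generated_videos[0]
client.files.download(file=generated.video)
generated.video.save("veo_clip.mp4")
print("Saved veo_clip.mp4")
```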
Veo 3.1 supports 720p and 1080p video at 24 frames per second, up to two and a half minutes long. It
features an extend function that
literally continues a clip from the last
frame of the previous one, allowing for
seamless storytelling, like filming in a
single take. This level of continuity
and precision gives directors and
designers full narrative control without
cutting between scenes. Google has also
reworked the visual workflow. Users can
lock in a style based on one or several
reference images, and Veo 3.1 will
maintain that look consistently across
every frame. New tools allow creators to
add or remove objects midscene and
interpolate between the first and last
frames to achieve smooth transitions,
effectively editing inside the AI
itself. For professionals, this is no
longer just a video generator, but a
full-scale post-production environment.
Safety and transparency are built in.
Every Veo 3.1 clip carries an invisible SynthID watermark, a digital signature
embedded at the pixel level that doesn't
affect quality, but verifies origin and
deters misuse. The system also
automatically checks content for
potential copyright or privacy issues.
User feedback has been largely positive.
Many say the new version is far more
convenient. Scenes can now be edited
without rebuilding the entire video.
Artists and marketers praise its
flexibility and speed, which make it
ideal for both advertising and creative
work. However, opinions remain split.
Some professionals still believe Sora 2
from OpenAI looks more lifelike,
especially in emotional expression and
motion realism. Even so, critics agree
that Veo 3.1 excels in control and
integration, turning video creation into
a precise, streamlined process. In the
end, Google has taken a major step
toward merging artificial intelligence
with real filmmaking. While Sora focuses on cinematic spectacle, Veo 3.1 gives creators a tool for fine-grained
direction. It's not just an AI that
draws video anymore. It's a digital
cinematographer, director, and sound
engineer, all in one. That's all from me
for today. Hope you enjoyed it. I'll see
you in the next videos here on the
channel.