
CES 2026 Keynote with AMD's Dr. Lisa Su

By CES

Summary

Topics Covered

  • AI Demands 10,000x Compute Surge
  • Helios Rack Delivers 2.9 Exaflops
  • Ryzen AI 400 Powers Local Agents
  • Spatial AI Generates Navigable Worlds
  • AI Accelerates Drug Discovery 50%

Full Transcript

At CES 2026, innovators from every corner of the globe show up to the most powerful tech event in the world. We'll rally together, imagining what's next. And we'll make it real, solving the greatest challenges facing our world and proving that technology isn't just moving us forward. It's creating new possibilities.

Here, AI isn't an idea. It's intelligence in motion.

The innovators, the storytellers, and the game changers come together to redefine the landscape of content, creativity, and culture.

Pioneers will push healthcare beyond what we ever thought possible.

From connected roads to autonomous flight, from next-gen marine tech to innovations that feed the planet. This is what drives us forward. In the halls of CES, technology doesn't just compute, it collaborates.

Quantum thinkers, cyber security leaders, fintech visionaries, and robotics engineers are rewriting the rules of enterprise itself.

Because when the world's most determined minds come together, we don't just predict the future, we build it.

This is where bold ideas meet a global force.

Where industries converge, partnerships ignite, and breakthroughs take center stage.

Innovation isn't a solo act. It's a shared pursuit.

CES 2026 innovators show up.

CEO of the Consumer Technology Association, Gary Shapiro.

Thank you very much. Good evening, everyone. Welcome to Las Vegas, CES 2026, with our first keynote of the year.

[Music] Every year, CES gives us a front row seat to the ideas and breakthroughs that shape the next decade. But it's actually the people behind those breakthroughs, the leaders challenging what's possible, who truly define the moment. Tonight we get a head start to CES 2026 with one such leader.

You know, CES has always been a platform for bold vision and ambition. It's where ideas become industries and where the next era of human progress first takes shape. And this year, as AI accelerates change across every sector, it's more important than ever that we hear directly from the architects building the systems and breakthroughs that will define our very

future. And that's why it is a tremendous honor to introduce to you a leader, quite frankly one of the bastions of CES, whose vision and impact can be felt across the entire technology landscape: Dr. Lisa Su. Lisa is no stranger to this stage. She has keynoted CES before, and each time she's done so, she's helped set the tone for the industry and the year ahead. Since

becoming CEO in 2014, Lisa has led the company through one of the most remarkable transformations in modern technology. Under her leadership, AMD reinvented itself through relentless innovation in high-performance and AI computing, delivering products that now power AI training and inference, scientific research, enterprise workloads, cloud infrastructure, and the devices and experiences that millions of people rely on every single day.

Today, AMD is a central force in the global AI transformation. Its CPUs,

GPUs, and adaptive computing solutions help unlock new capabilities across cloud, enterprise, edge, and PC. And

while the industry has spent years discussing what AI could become, Lisa has been focused on building the computing foundation that makes AI real

and accessible at scale. But what truly distinguishes Lisa is her leadership.

She's analytical and deeply technical, yet always grounded in purpose. She

brings a rare combination of scientific brilliance, strategic clarity, and human-centered thinking. She believes

in partnering deeply to engineer solutions that matter, solutions that advance society, strengthen industries, and expand opportunity. That's the kind of leadership CES is designed to elevate

and that's the kind of leadership we all need as we navigate a world of accelerated innovation. Tonight, Lisa will share AMD's vision for how high performance computing and advanced AI

architectures will transform every part of our digital and physical world from research, healthcare, and space exploration to education and

productivity. She'll speak to the extraordinary pace of AI, the breakthroughs happening now, including the opportunities and responsibilities

that come with building the future.

[Music] Hello and welcome to this unique moment in human history.

A moment where what's possible might soon forget what's impossible.

Where any game you play now has the power to play by your rules.

where AI not only helps model the possibilities of what a city can be, but makes sure our kids never forget what our cities used to be.

[Music] A moment where no hope meets the treatment plans of AIC genomes.

where no driver officially has better reflexes than any driver who's ever lived. And where no signal can no longer stop you from sharing. >> Show me, show me, show me.

[Music] A moment where AI is helping design a renewable energy source as powerful as the sun itself and helping make travel time across the

Atlantic just another puddle jumper.

But as fast as everything's changing, there's one thing that won't.

We're working tirelessly to create a world where the most advanced AI capabilities end up in the right hands.

Yours.

[Music] So just keep walking.

So, ladies and gentlemen, it is my privilege to welcome to the stage a globally respected technologist, an industry-defining CEO, and a leader whose work continues to shape the very trajectory of modern computing. Ladies and gentlemen, please join me in welcoming to the stage chair and CEO of AMD, Dr. Lisa Su.

Thank you.

Thank you.

>> All right. What an audience. How are you guys doing tonight?

That sounds wonderful. First of all, thank you, Gary, and welcome to everyone here in Las Vegas and joining us online.

It's great to be here with all of you to kick off CES 2026. And I have to say, every year, I love coming to CES to see all the latest and greatest tech and

catch up with so many friends and partners. But this year, I'm especially honored to be here with all of you to open CES. Now, we have a completely packed show for you tonight, and it will come as no surprise that tonight is all about AI.

Although the rate and pace of AI innovation has been incredible over the last few years, my theme for tonight is you ain't seen nothing yet.

We are just starting to realize the power of AI. And tonight I'm going to show you a number of examples of where we're headed. And I'll be joined by some of the leading experts in the world, from industry giants to breakthrough startups. And together we are working to bring AI everywhere and for everyone. So

let's get started.

At AMD, our mission is to push the boundaries of high performance and AI computing to help solve the world's most important challenges. Today, I'm incredibly proud to say that AMD technology touches the lives of billions of people every day. From the largest

cloud data centers to the world's fastest supercomputers to 5G networks, transportation and gaming, every one of these areas is being

transformed by AI.

AI is the most important technology of the last 50 years, and I can say it's absolutely our number one priority at AMD.

It's already touching every major industry, whether you're talking about health care or science or manufacturing or commerce, and we're just scratching the surface. AI is going to be everywhere over the next few years. And most importantly, AI is for everyone. It makes us smarter. It makes us more capable. It enables each one of us to be a more productive version of ourselves.

And at AMD, we're building the compute foundation to make that future real for every company and for every person.

Now, since the launch of ChatGPT a few years ago, and I'm sure we all remember the first time we tried it, we've gone from a million people using AI to now more than a billion active users. This is just an incredible ramp. Look, it took the internet decades to reach that same milestone.

Now what we are projecting is even more amazing. We see the adoption of AI growing to over five billion active users as AI truly becomes indispensable to every part of our lives. Just like

the cell phone and the internet of today.

Now the foundation of AI is compute.

With all of that user growth, we have seen a huge surge in demand in the global compute infrastructure, growing from about one zettaflop in 2022 to more than 100 zettaflops in 2025.

Now, that sounds big. That's actually a hundred times in just a few years. But what you're going to hear tonight from everyone is that we don't have nearly enough compute for everything that we can possibly do. We have

incredible innovation happening. Models

are becoming much more capable. They're

thinking and reasoning. They're making

better decisions. And that goes even further when we extend that to agents overall.

So to enable AI everywhere, we need to increase the world's compute capacity another hundred times over the next few years, to more than 10 yottaflops. Now let me take a survey. How many of you know what a yottaflop is?

Raise your hand, please.

A yottaflop is a one followed by 24 zeros.

So 10 yottaflops is 10,000 times more compute than we had in 2022.

There's just never ever been anything like this in the history of computing.

And that's really because there's never been a technology like AI.
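For scale: these prefixes climb by factors of a thousand, and the figures quoted on stage are self-consistent. A quick Python sketch of the arithmetic, using only the numbers given in the talk:

```python
# Back-of-the-envelope check of the compute figures quoted above.
ZETTA = 1e21  # 1 zettaflop = 10^21 FLOPs
YOTTA = 1e24  # 1 yottaflop = 10^24 FLOPs (a one followed by 24 zeros)

compute_2022 = 1 * ZETTA    # ~1 zettaflop of global AI compute in 2022
compute_2025 = 100 * ZETTA  # ~100 zettaflops in 2025
compute_goal = 10 * YOTTA   # ~10 yottaflops projected for the next five years

print(compute_2025 / compute_2022)  # 100.0   -> "a hundred times in a few years"
print(compute_goal / compute_2022)  # 10000.0 -> "10,000 times more than 2022"
```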

Now, to enable this, you need AI in every compute platform. So what we're going to talk about tonight is the whole gamut.

You know we're going to talk about the cloud where it runs continuously delivering intelligence globally. We're

going to talk about PCs where it helps us work smarter and personalize every experience that we have. And we're going to talk about the edge where it powers machines that can make real-time

decisions in the real world.

AMD is the only company that has the full range of compute engines to make this vision a reality. You really need to have the right compute for each

workload. And that means GPUs, that means CPUs, that means NPUs, that means custom accelerators. We have them all.

And each of them can be tuned for the application to give you the best performance as well as the most cost-effective solution.

So tonight we're going to go on a journey. You're going to go with me through several chapters as we showcase the latest AI innovations across cloud, PCs, healthcare, and much more. So let's

go ahead and start with the first chapter, which is the cloud.

The cloud is really where the largest models are trained and where intelligence is delivered to billions of users in real time. For developers, the cloud gives them instant access to

massive compute, the latest tools, and the ability to deploy and scale as use cases take off. The cloud is also where most of us experience AI today. So

whether you're using ChatGPT or Gemini or Grok, or you're coding with copilots, all of these powerful models are running in the cloud. Now today, AMD is powering AI at every level of the cloud. Every

major cloud provider runs on AMD EPYC CPUs, and eight of the top 10 AI companies use Instinct accelerators to power their most advanced models, and the

demand for more compute is just continuing to go up. Let me just show you a few graphs.

Over the past decade, the compute needed to train the leading AI models has increased more than four times every year. And that trend is just continuing.

That's how we're getting today's models that are dramatically smarter and more useful.
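As a rough illustration of what "more than four times every year" compounds to over a decade (a sketch of the stated trend, not a figure quoted on stage):

```python
# Compounding the ">4x per year" training-compute trend over ten years.
growth_per_year = 4
years = 10
print(growth_per_year ** years)  # 1048576 -> roughly a million-fold increase
```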

At the same time, as more people are using AI, we've seen an explosion of inference over the last two years, growing the number of tokens a hundred times and really hitting an inflection point. You can just see how much that inference is really taking off. And to keep up with this compute demand, you really need the entire ecosystem to come together. So

what we like to say is the real challenge is how do we put AI infrastructure at yottascale, and that requires more than just raw performance.

It starts with leadership compute, CPUs, GPUs, networking coming together. It

takes an open modular rack design that can evolve over product generations. It

requires high-speed networking to connect thousands of accelerators into a single unified system and it has to be really easy to deploy. So we want full

turnkey solutions.

That's exactly why we built Helios, our next-generation rack-scale platform for the yottascale AI era.

Helios requires innovation at every single level, hardware, software and systems. It starts with our engineering teams who designed our next generation

Instinct MI455 accelerators to deliver the largest generational performance increase we've ever achieved.

MI455 GPUs are built using leading-edge 2-nanometer and 3-nanometer process technologies and advanced 3D chiplet packaging with ultra-fast, high-bandwidth HBM4 memory.

This is integrated into a compute tray with our EPYC CPUs and Pensando networking chips to create a tightly integrated platform.

Each tray is then connected with the high-speed Ultra Accelerator Link protocol tunneled over Ethernet, which enables the 72 GPUs in the rack to

function as a single compute unit. And

then from there we can connect thousands of Helios racks to build powerful AI clusters using industry standard ultra ethernet nicks and pensando programmable

DPUs that can accelerate AI performance even more by offloading some of the tasks from the GPUs.

Now, we are at CES. It is a little bit about show-and-tell. So I am proud to show you Helios right here in Vegas. The

world's best AI rack.

[Music] Is that beautiful or what?

[Music] >> Now, for those of you who have not seen a rack before, let me tell you, Helios is a monster of a rack. This is no regular rack, okay? This is a double-wide design based on the OCP Open Rack Wide standard developed in collaboration with Meta, and

it weighs nearly 7,000 pounds.

So Gary, it took us a bit to get it up here, just so you know. But we wanted to show you what is really powering all this AI. It is actually more than two compact cars. Now, the way we designed Helios was really by working closely with our lead customers, and we chose this design so that we could optimize serviceability, manufacturability, and reliability for next-generation AI data centers.

Now let me show you a few other things.

At the center of Helios is the compute tray. So let's take a closer look at what one of those trays looks like.

Now, I can tell you I probably cannot lift this compute tray, so it had to come out. But let me just describe it a little bit. Each Helios compute tray includes four MI455 GPUs, and they're paired with the next-gen EPYC Venice CPU and Pensando networking chips. And all of this is liquid cooled so that we can maximize performance.

At the heart of Helios are our next-generation Instinct GPUs. And you guys have seen me hold up a lot of chips in my career. But today I can tell you I am genuinely excited to hold up this chip.

So let me show you MI455X for the very first time.

MI455 is the most advanced chip we've ever built. It's pretty darn big. It has 320 billion transistors, 70% more than MI355. It includes 12 2-nanometer and 3-nanometer compute and I/O chiplets and 432 GB of ultra-fast HBM4,

all connected with our next-gen 3D chip stacking technology. So we put four of these into the compute trays up here.

And then driving those GPUs is our next-generation EPYC CPU, code-named Venice.

Venice extends our leadership across every dimension that matters in the data center: more performance, better efficiency, and lower total cost of ownership. Now let me show you Venice for the first time.

I have to say this is another beautiful chip.

I do love our chips, so I can say that for sure. Venice is built with 2-nanometer process technology and features up to 256 of our newest high-performance Zen 6 cores.

And the key here is we actually designed Venice to be the best AI CPU. We doubled

the memory and GPU bandwidth from our prior generation. So Venice can feed MI455 with data at full speed, even at rack scale. So this is really about co-engineering. And we tie it all together with our 800-gig Ethernet Pensando Vulcano and Salina networking chips, delivering ultra-high bandwidth as

well as ultra low latency. So tens of thousands of Helios racks can scale across a data center. Now just to give you a little bit of the scale of what

this means, each Helios rack has more than 18,000 CDNA 5 GPU compute units and more than 4,600 Zen 6 CPU cores, delivering up to 2.9 exaflops of performance.

Each rack also includes 31 terabytes of HBM4 memory, an industry-leading 260 terabytes per second of scale-up bandwidth, and 43 terabytes per second of aggregate scale-out bandwidth to move data in and out incredibly fast. Suffice it to say, those numbers are big.
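The per-rack figures line up with the per-part numbers quoted earlier. A quick consistency check; note the tray count is inferred from four GPUs per tray and 72 GPUs per rack, not stated directly:

```python
# Cross-checking the Helios per-rack figures from the per-part numbers above.
gpus_per_tray = 4
gpus_per_rack = 72
trays_per_rack = gpus_per_rack // gpus_per_tray  # -> 18 compute trays (inferred)

hbm4_per_gpu_gb = 432                          # per MI455, as stated
print(gpus_per_rack * hbm4_per_gpu_gb / 1e3)   # ~31.1 TB -> "31 terabytes of HBM4"

zen6_cores_per_venice = 256                    # per Venice CPU, as stated
print(trays_per_rack * zen6_cores_per_venice)  # 4608 -> "more than 4,600 Zen 6 cores"
```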

When we launch Helios later this year, and I'm happy to say Helios is exactly on track to launch later this year, we expect it will set the new benchmark for AI performance. And just to put this performance in context, just over six months ago we launched MI355, and we delivered up to 3x more inference

throughput versus the prior generation.

And now with MI455, we're bending that curve further, delivering up to 10 times more performance across a wide range of models and workloads.

That is game-changing.

MI455 allows developers to build larger models, more capable agents, and more powerful applications. And no one is pushing faster and further in each one of these areas than OpenAI. To talk

about where AI is headed and the work that we're doing together, I'm extremely happy to welcome the president and co-founder of OpenAI, Greg Brockman, to the stage.

Greg, it is so great to have you here.

Thank you for being here. You know, OpenAI truly started all of this with the release of ChatGPT a few years ago, and the progress you've made is just incredible. We're absolutely thrilled about our deep partnership. Can you just give us a picture of where things are today? What are you seeing, and how are we working together?

>> Well, first of all, it's great to be here. Thank you for having me. ChatGPT is very much the overnight success that was seven years in the making, right? We started OpenAI back in 2015 with a vision that deep learning could lead to artificial general intelligence, to very powerful systems that could benefit everyone. And

we wanted to help actually realize that technology, bring it to the world, and democratize it. And we spent a long time just making progress, where year over year the benchmarks would look better and better. But the first time we had something that was so useful that many people around the world wanted to use it was ChatGPT. And we were just blown away by the creativity and the ways in which people found how to really leverage the models we had produced in their daily lives. And so, just out of curiosity, how many people in the room are ChatGPT users?

>> That's pretty much the whole room, I would say. I'm glad to hear it. But very

importantly, how many of you have had an experience that was very key to your life or the life of a loved one, whether it's in healthcare, in helping manage a newborn, in any other walk of your life?

And to me, that's the metric that we want to optimize. And seeing that number go up and to the right has been something that has been really different over 2025, right? We really moved from being just a text box that you ask

a question, you get an answer, something very simple and contained, to people really using it for very personal, very important things in their lives. And

it's not just in personal lives, for healthcare and aspects like that.

It's also in the enterprise, right? And

really starting to bring models like Codex to be able to transform software engineering. And I think that this year we're really going to see enterprise agents really take off. We're seeing

scientific discovery starting to be really accelerated, whether it's developing novel math proofs. The first time we saw that was just a couple of months ago, and the progress is continuing. And it's really across every single endeavor of human knowledge work where there's human intelligence that can be leveraged, right, that you can amplify it. We now have an assistant, we now have a tool, we have an adviser that is able to amplify what people want to do.

>> I completely agree with you, Greg. I think we have seen just an enormous acceleration of what we're using this tech for. Now, I would say, I think every single time I see you, you tell me you need more compute.

>> It's true. It's true.

>> It's almost like a broken record. You could just, you know, play the "Greg wants more compute" track. Can you talk about just some of the things that you're seeing in the infrastructure, some of the bottlenecks, and where do you think we should be focusing as an industry?

>> Well, why we need more compute is the most important question, right? When the models are not that capable, right, and where we were in 2015, 2016, 2017, and so forth, you basically just want to train a model and evaluate it, right? And maybe there'd be a very narrow task it'd be useful for.

But as we've made this exponential progress on the models, then there's actually exponential utility to them.

People want to bring it into their lives in a very scalable way. And I think that what we're seeing is as we move from you ask a question, you get an answer to agentic workflows where you ask the

model to write some software for you and it goes off for minutes or hours or soon even days, and you're not just operating one agent, you're operating a fleet of agents, right? You can have 10 different work streams all going at once on your behalf for a single developer, right?

And it should be the case that you wake up in the morning, and this is the kind of thing we are going to build, by the way, and ChatGPT has taken items off your to-do list at home and at work. And all of that, that's going to require, you know, that big graph that you had of how much compute the world's going to need. That's going to

require far more compute than we have right now. Like, I would love to have a GPU running in the background for every single person in the world, because I think it can deliver value for them. But

that's billions of GPUs. No one has a plan to build that kind of scale. And so

what we're really seeing is benefits in people's lives, right? For example, some of my favorite applications, and some of the ones I think are the most important, are in healthcare. We actually see people's lives being saved through ChatGPT. Just over the holidays, one of my co-workers, her husband had leg pain. They went to the hospital, they went to the ER, and they got it x-rayed, and the doctors were like, ah, it's a pulled muscle, just wait it out, you'll be fine. They went home, it got a little bit worse, they typed the symptoms into ChatGPT, and ChatGPT said, go back to the ER, this could be a blood clot. And in fact, it was. It was deep vein thrombosis in the leg, in addition to two blood clots on the lungs. And if they just waited it out, that would have been likely fatal.

And it's not a unique story. Fidji, our CEO of Applications, who I work very closely with every day, ChatGPT literally saved her life, too, right? She was in the hospital for a kidney stone and had an infection. They were about to inject an antibiotic, and she said, wait just a moment. She asked ChatGPT whether that one was safe for her. ChatGPT, which has all of her medical history, said, no, no, because you had this other infection two years ago that could re-trigger it, and that could actually be life-threatening as well. And so she showed it to the doctor. The doctor's

like wait what? You had this condition?

I didn't know. I only had five minutes to review your medical history.

>> I completely agree, Greg. I mean, that's one of the things: all of us can use a helper.

>> Yes.

>> And that's really what we have here.

>> Look, I mean, I think you've painted a vivid picture of why we need more compute and of what we can do with AI. I think we feel exactly the same way. Now, we've also done an incredible amount of work with your engineering teams. MI455 and Helios, actually a lot of it, came through some of the feedback from our engineering teams working closely together. You know, can you talk a little bit about that infrastructure, what your customers are wanting, and how you are going to use MI455?

>> Well, one of the key things with how AI is evolving is thinking about the balance of different resources on the GPU. And so we have a slide to show how we've seen the evolution of this balance of resources across different MI generations. So you see this slide that I very painstakingly put together. Actually, I

did not painstakingly put it together. I asked ChatGPT to go create the slide. And so it literally did all the research, and you can see some sources at the bottom. It actually went and read a bunch of different AMD materials, created these charts, put together the title, put together all these headers, and produced not just an answer for me to then go do a bunch of work with. It produced an artifact, an

artifact that I can show. And this is just one simple example of what you can do today with ChatGPT, right? We are moving to a world where you are going to

be able to have an agent that does all this work for you. And for that we're going to need to have hardware that is really tuned to our applications. What

we have in mind is that we're moving to a world where human attention, human intent, becomes the most precious resource. And so there should be very low latency interaction anytime a human's involved. But there should be an ocean of agentic compute that's constantly running, that's very high throughput. And these two different regimes, of low latency and high throughput, yield a bunch of different pressures on hardware manufacturers such as yourself. So it's a pleasure to be working together.

>> We like building GPUs for you. That works well. Look, lastly, Greg, let's talk a little bit about the future. You

know, paint a picture. You know, one of the things that we've talked about is there are some people out there who are wondering, you know, is the demand really there? Do we really need all of this AI compute? And

I know you and I have talked about it. I

think people don't have a view of the future that you see. I mean, you have like a special seat. So, paint the world for what this looks like in a few years.

>> Well, looking backwards, we have been tripling our compute every single year for the past couple years and we've also tripled our revenue. And the thing that we find within OpenAI is every time we

want to release a new feature, we want to produce a new model, we want to bring this technology to the world, we have a big fight internally over compute because there are so many things we want to launch and produce for all of you

that we simply cannot, because we are compute-constrained. And I think we're moving to a world where GDP growth will itself be driven by the amount of compute that is available in a particular country, in a particular region. And I think that we're starting to see the first inklings of this. And I think over the next couple of years, we'll see it start to hit in a real way. And I think that AI is something where data centers

can actually be very beneficial to local communities. I think that's a really important thing for us to really prove to people. But also, the AI technology you produce, that is also something where, in terms of scientific advances, you think about what has been the most fundamental driver of increases in quality of life, right? It really is

about science. And every time we've gone into specific domains, you just see how much limitation there is from how things are done, because, you know, there's a particular discipline that's built with a bunch of expertise. There are a small number of experts, and it's hard for them to propagate that to future generations. For example, in

biology, we hooked up GPT-5 to a wet lab setup and had, you know, humans describe what the wet lab looked like. The model said, "Here are a couple of ideas to try." The humans would go try it, and it actually produced a 79x, almost 100-fold, improvement in the efficiency of a particular protocol. And that's

just one particular reaction, one that people have spent some time optimizing, but not a ton of time, because there's just so much surface area available in biology that no human can possibly get to all of it.

No human can be an expert across every single subfield. And I think what we're going to see is AIs that really bridge across disciplines that humanity has been unable to bridge, right? You see this within healthcare, where as humans learn more, we specialize more, but AI is going to amplify us. And so I think it'll be for hard problems that AI will be brought to bear. This will be true for enterprise.

For every single application, I think we'll have an agent that is accelerating what people want to do. And I think the hardest problem for humanity will be deciding how we use the limited resources we have to get the most

benefit for everyone.

>> That is an incredible vision, Greg. We

are so excited to be working with you.

I think there's no question in the world that we have the power to really change people's lives. Thank you

for the partnership and really look forward to it.

>> Thank you.

[Applause] So, as you heard from Greg, compute is key, and MI455 is a game-changer. But with the MI400 series, we've designed a full portfolio of solutions for cloud, enterprise, supercomputing, and sovereign AI. At the top is Helios, built for bleeding-edge performance, hyperscale training, and distributed inference at rack scale. For enterprise

AI deployments, we have Instinct MI440X GPUs that deliver leadership training and inference performance in a compact 8-GPU server designed for easy use in today's existing data center infrastructure. And for sovereign AI and supercomputing, where extreme accuracy matters the most, we have the MI430X platform that delivers leadership hybrid

computing capabilities for both high precision scientific and AI data types.

This is something unique that we at AMD do because of our chiplet technology. We

can actually have the right compute for the right application. Now, hardware is only part of the story. We believe an open ecosystem is essential to the

future of AI. Time and time again, we've seen that innovation actually gets faster when the industry comes together and aligns around an open infrastructure

and shared technology standards. And AMD

is the only company delivering openness across the full stack. That's hardware,

software, and the broader solutions ecosystem. Our software strategy starts with ROCm. ROCm is the industry's highest-performance open software stack for AI. We have day-zero support for the most widely used frameworks, tools, and model hubs. And it's also natively supported by the top open-source projects like PyTorch, vLLM, SGLang, Hugging Face, and others that are downloaded more than a hundred million times a month and run out of the box on Instinct, making it easier than ever for developers to build, deploy, and scale on AMD.
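As a minimal sketch of what "runs out of the box" means in practice: PyTorch's ROCm builds expose Instinct GPUs through the same torch.cuda API that existing GPU code already targets, so a snippet like the following runs unchanged on AMD hardware:

```python
# Minimal sketch: on a ROCm build of PyTorch, the familiar torch.cuda API
# maps to AMD GPUs via HIP, so existing GPU code needs no changes.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
y = x @ x  # the matmul runs on the Instinct GPU when one is present
print(device, y.shape)
```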

One of the exciting AI companies using AMD and ROCm to power their models is Luma AI. Please join me in welcoming the Luma AI CEO and co-founder, Amit Jain, to the stage.

[Applause] >> Hey. >> Hello, Amit. How are you? It's great to have you here with us. You're doing some incredible work in video generation and multimodal models. Can you tell us a little bit about Luma and what you're doing?

>> Absolutely, Lisa. Thank you so much for having me here.

>> Of course.

>> Luma's mission is to build multimodal and general intelligence so AI can understand our world and help us simulate and improve it. Most AI video and image models today are in early stages, and they're used to generate pixels. They're used to produce, you know, pretty pictures. What is needed in the world are more intelligent models that combine audio, video, language, and image all together. So at Luma we are training these systems that simulate physics and causality, are able to go out and do research, call tools, and then finally render out the results in audio, video, image, or text, whatever is appropriate for the information that you're trying to work with. In short, we are modeling and generating worlds. So as an example, let me show you some results from our latest model, Ray 3.

By the way, Ray 3 is the world's first reasoning video model. What that means is it actually is able to think first in pixels and latents, and decide whether what it's about to generate is good. And it's also the world's first model that can generate in 4K and HDR. So, please take a look.

>> When you close your eyes, >> what do you see?

When we talk about reality, how do you know what is real or what is simply

your imagination?

Your imagination.

[Music] Now open your eyes.

[Music] >> I would say that looked pretty incredible.

>> So tell us how are customers using Ray 3 today?

>> So we are working with very large enterprises as well as individual creators across the spectrum, and we work with them in advertising, media, entertainment, and industries where, you know, you want to tell your story. 2025 was the year when they started to deploy our models and experiment with them, and towards the end of it we were seeing large-scale deployments where people are using our models for as much as actually making a 90-minute feature-length movie. What customers are also asking us a lot for, as they're using it more and more, is control and precision. How can they get their particular vision out onto the screen? And what we have realized

through our research is that control comes from intelligence, not just better prompts. You can't keep, you know, typing in again and again and actually do those things. So we have built a whole new model on top of Ray 3, called Ray 3 Modify, that allows you to edit the world. So let me show you actually what that looks like. This won't have audio, and I'm going to tell you a little bit about what you're seeing. So what's

playing on the screen is a demo of Ray 3's world editing capabilities. It can

take any real or AI footage, the footage from cameras or footage that you generated, and change it as little or as much as you want to realize the creative goals. It's a powerful system that we have developed for our most ambitious customers, who are the most demanding, and they span the gamut across entertainment and advertising. And this has allowed us to enable a new era of hybrid human-AI productions. The human becomes the prompt, through motion, timing, and direction: you act it out, and then the model can produce it. What that means in practice is that filmmakers and creators can create entire cinematic universes now, without elaborate sets, and then edit and

modify anything to get to the result they want. This has never been possible before. But in 2026 we are focused on actually going much further. 2026 will be the year of agents, where AI will be able to help you accomplish more of the task, or hopefully the full end-to-end of the task, rather than doing some patchwork. So our teams have been

working diligently building the world's most powerful multimodal agent models. Using Luma will suddenly feel like you have a large team of capable creatives who are working with you in your creative pursuit. I want to show you a brief demo of what that would feel like.

So what you're seeing here is a new multimodal agent that can take a whole script of ideas, with, you know, characters and everything, and start imagining that in front of you. Now, this is not script-to-movie. This is human-AI interaction, and our next generation of models provides the ability to analyze multiple frames and long-form video, make selections, and maintain the fidelity of the characters, scenes, and story, only editing when it's needed.

Here you're seeing human and AI collaborate in designing characters, environment, shots, and the whole world.

And with our agents, we believe that creatives will be able to make entire stories that used to take a large production before. Again, this has never been possible before, and we have been using it heavily internally, and we couldn't be more excited. Individual

creatives or small teams will suddenly have the power of doing what entire Hollywood studios do.

>> That's pretty amazing. It's really nice to see how these Luma agents come together and make this happen. Now, I know you have a lot of choices in compute, and when we first started talking, actually, you called me and said you needed compute, and I said I thought I could help. Can you tell us a bit

about why you chose AMD, and what has your experience been?

>> Yeah, we bet on AMD very early on. That

call was in 2024, early 2024. And since

then, our partnership has grown into a large-scale collaboration between our teams. So much so that today, 60% of Luma's rapidly growing inference workloads actually run on AMD cards. You know, initially when we started out, we used to do a bunch of engineering, but today we are at a point where most any operators, most any workloads that we can imagine, run out of the box on AMD, and this is huge props to your software teams and the diligent work that is going into the ROCm ecosystem. We are building multimodal models, and actually these workloads are very complex compared to text models.

One example of that is these consume hundreds of times, thousands of times, more tokens. A video, like the 10-second video you just saw, is about 100,000 tokens easily. Compare that to a response from an LLM: it's about 200 to 300 tokens. So when we are working with this much information, TCO and inference economics are absolutely critical to our business; otherwise there's no way to serve all the demand that is coming our way.
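The ratio behind that point follows directly from the figures Amit quotes (a sketch, assuming a mid-range 250-token text response):

```python
# Rough token-budget comparison from the figures quoted above.
video_tokens = 100_000  # ~tokens in a 10-second generated video
llm_tokens = 250        # ~typical LLM text response (200 to 300 tokens)
print(video_tokens / llm_tokens)  # 400.0 -> hundreds of times more tokens per request
```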

Through our collaboration with the AMD team, we have been able to achieve some of the best TCO, total cost of ownership, that we have ever seen in our stack. And we believe, as we build these more complex models that are able to do autoregressive diffusion and that are able to do text and image and audio and video all at the same time,

this collaboration will allow us to significantly differentiate on cost and efficiency, which, as you know, in AI is a big deal. So through this collaboration we have developed such a degree of confidence that in 2026 we are expanding our partnership to the tune of about 10 times what we have done before, on these MI455 cards. And I cannot be more excited for MI455X, because the rack-scale solution and the memory and the infrastructure that you're building are essential for us to be able to build these world simulation models. >> Well, we

love hearing that, Amit. And look, our goal is to deliver you more powerful hardware. Your goal is to make it do amazing things. So, just give us a brief view: what do you see customers doing over the next few years that just isn't even possible today?

>> Right. So, as Greg was mentioning, early on in LLM land, right? Like, you

know, in 2022, 2023, they were great for writing copy, small emails, things like that. We could have never imagined that we would actually put these models, you know, into real-time systems, into healthcare and these kinds of things. Through accuracy and architecture and scaling, LLMs have now gotten to that point. Video models are currently in that early stage. Today

they're great for generating video and pretty pictures. But soon, by scaling these models up, by improving the accuracy and data, we will end up in a place where these models will help us simulate real physical processes in the world, like CAD, architecture, fluid flows; help us design entire rocket engines; plan cities. And this is not outrageous.

This is what we do today manually, with big giant teams in simulation environments. These models will allow us to do that and automate that to a great degree. And as they become more and more accurate, multimodal models are what we need for the backbone of general-purpose robotics. Your home robot, you know,

will run hundreds of simulations in its head, in image and video, and then try out, like, okay, how do I do this? How do I solve this? So it is able to do a lot more than the current generation of LLM and VLM robots are able to do. This is how the human brain works. Humans are

natively multimodal. Our AI systems will be as well.

>> That sounds wonderful, Amit. Look,

thank you so much for being here today.

Thank you for the partnership and we really look forward to all that you're going to do next.

>> Thank you so much.

>> Thank you.

>> So, you've heard from Greg and Amit.

What they said is they need more compute to build and run their next-gen models.

And it is the same across every single customer that we have, which is why the demand for compute is growing faster than ever. Now, meeting that demand means continuing to push the envelope on performance far beyond where we are today. The MI400 series was a major inflection point in terms of delivering leadership across all workloads: training, inference, scientific computing. But we are not stopping

there. Development of our next-gen MI500 series is already well underway.

With MI500, we take another major leap in performance. It's built on our next-gen CDNA 6 architecture, manufactured on 2-nanometer process technology, and uses higher-speed HBM4E memory. And with the launch of MI500 in 2027, we're on track to deliver a 1,000x increase in AI performance over the last four years, making more powerful AI accessible to all.

So with that, thank you.

So now let's shift from the cloud to the devices that make AI more personal: PCs.

So for decades, the PC has been a powerful device helping us be more productive, whether at work or at school. But with AI, the PC has become not just a tool, but a powerful, essential part of our lives as an active partner. It learns how you work, and it adapts to your habits. And it can help you do things faster than you've ever expected, even when you're offline.

AI PCs are starting to deliver real value across a wide range of everyday tasks, from content creation and productivity to intelligent personal assistance.

Let's just take a look at a few of the AI PC applications today.

Starting with content creation, these videos were created from simple text prompts on a Ryzen AI Max PC. So, not in the cloud, but in a local environment.

Anyone can generate professional quality photos and videos in minutes with no design expertise.

Microsoft has been a key enabler of AI PCs, helping bring next-generation capabilities directly into our productivity tools. For example: managing your meetings, summarizing meetings, summarizing emails, quickly finding files that you need, using real-time translation on video

conferences, and with Microsoft Copilot, advanced AI capabilities are being built directly into the Windows experience to complete tasks faster. You just describe

what you need, and the PC takes it from there. Now, at AMD, we saw the AI PC wave early, and we invested. That's why we've led every inflection point. We were the first to integrate a dedicated on-chip AI engine in 2023 and the first to deliver Copilot+ x86 PCs in 2024.

And with Ryzen AI Max, we created the first single-chip x86 platform that could run a 200-billion-parameter model locally.

And now we're extending that leadership again with our next-gen Ryzen AI notebook and desktop processors. So today, I'm proud to announce the new Ryzen AI 400 series, the industry's broadest and most advanced family of AI PC processors.

Ryzen AI 400 combines up to 12 high-performance Zen 5 CPU cores, 16 RDNA 3.5 GPU cores, and our latest XDNA 2 NPU delivering up to 60 TOPS of AI compute, with support for faster memory speeds.

These flagship Ryzen AI mobile processors deliver significantly faster content creation and multitasking performance compared to the competition.

Now, there's a lot of excitement for the Ryzen AI 400 series, and if you're walking around CES this week, you're going to see many notebooks launching. The first Ryzen AI 400 series PCs begin shipping later this month, with more than 120 ultra-thin, gaming, and commercial PCs launching throughout the

year from every major OEM across every AI PC form factor.

Now, powering the next generation of AI PC experiences takes more than just hardware. It takes smarter software, with models that are lighter, faster, and able to run directly on device. These are different than what you're seeing in the cloud. So, to talk more about this next wave of model innovation, please welcome Ramin Hasani, co-founder and CEO of Liquid AI.

[Music] Ramin, it's great to have you here. I'm very excited about the work that you guys are doing at Liquid. You're really taking a different approach to models. Can you talk a little bit to the audience about what Liquid is doing and, you know, why it's different from others?

>> Absolutely, Lisa. It is great to be here. We are a foundation model company spun out of MIT two and a half years ago. We're building efficient generative AI models that can run fast on any processor, inside and outside of data centers. We design from scratch multimodal models with a hardware-in-the-loop approach that allows us to optimize neural architectures for a given hardware. We are not building

transformer models. We're building liquid foundation models: powerful, fast, and processor-optimized generative models. The goal is to substantially reduce the computational cost of intelligence from first principles, without sacrificing quality.

That means liquid models deliver frontier-model quality right on a device. The device could be a phone, could be a laptop, could be a robot, could be a coffee machine, could be an airplane.

Basically anywhere compute exists, with three value propositions: privacy, speed, and continuity. It can

work seamlessly across online and offline workloads.

>> Ramin, you know our teams have been working really closely on bringing more capable models to AI PCs. Can you share a bit about that work?

>> Absolutely. Today I've got two new product announcements.

One, we are excited to announce Liquid Foundation Models 2.5, the most advanced tiny class of models on the market. At only 1.2 billion parameters, the model performs best on instruction-following capabilities among its class and models that are larger than its class. LFM 2.5 instances are the building blocks of reliable AI agents on any device. To put this in perspective for you, this model delivers instruction-following capabilities better than, you know, the DeepSeek models and Gemini Pro kinds of models, Gemini 2.5 Pro, right on the device.

We're releasing five model instances: a chat model, an instruct model, a Japanese-enhanced language model, a vision-language model, and a lightweight audio language model.

Basically, these are highly optimized for AMD Ryzen AI CPUs, GPUs, and NPUs. And today they are available for download on Hugging Face and on our own platform, Leap. You can enjoy them.
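As a sketch of what "available for download on Hugging Face" typically looks like for a small on-device model; the repo id below is a placeholder for illustration, not a confirmed LFM 2.5 identifier:

```python
# Hedged sketch: loading a small instruct model with Hugging Face transformers.
# "LiquidAI/LFM2.5-1.2B-Instruct" is a hypothetical repo id, used for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LiquidAI/LFM2.5-1.2B-Instruct"  # hypothetical
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tok("Draft a two-line reply accepting the meeting.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(out[0], skip_special_tokens=True))
```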

>> That's pretty cool.

[Applause] >> So, we can stack these LFM 2.5 instances together to build agentic workflows. But then it would be really amazing if we could bring all these modalities into one place. So that brings me to my second announcement: LFM3. It is designed natively multimodal, to process text, vision, and audio as input and deliver audio and text as output in 10 different

languages, with sub-hundred-millisecond latency for audiovisual data. You will get LFM3 later in the year.

>> All right. That's fantastic. So

now, Ramin, help our audience understand why they should be so excited about LFM3. Like, what can we do with these models on an AI PC?

>> Absolutely. So most assistants, AI assistants, copilots today are reactive agents. You open an app, then you ask a question, and it responds.

But when the AI is running fast on the device and is always on, it can be working on tasks proactively for you. The tasks can be done in the background. So let me show you a quick demo, a reference design, to inspire what is possible to build on PCs with LFM instances. Let's jump in.

Imagine you're a sales leader working on your AMD Ryzen laptop with LFM3-backbone proactive agents activated. You're in full focus mode working on a spreadsheet. Notifications start piling up. You get a calendar notification for a sales meeting but want to continue your work in deep focus. A Liquid proactive agent notices the meeting and

offers to join on your behalf. You allow

the agent to join and while you focus on your data analysis task in the background, the meeting is in progress with your agent representing you.

>> Are you sure we can trust this agent? I think I'm a little worried there, Ramin.

>> This system can actually do more than transcribe your meetings, you know, and really understand what is going on. And the system can also be hooked up to your email platform. It can analyze your emails as they are actually coming in. It can perform a deep research functionality, so with the deep research functionality it can analyze every email and draft the response for you. Again, everything is under your own control, you know; this is not going to go rogue. Everything is offline, locally on the device. So this system can deliver, you know, a summary, and can do the jobs better than what you have expected, than what you have seen from reactive agents. I think this year is going to be the year of, you know, proactive agents. And I'm very excited to announce that we are collaborating with Zoom to bring these features to the Zoom platform.

>> That's fantastic, [Applause] Ramin. We're really excited about what

you're doing. I think you've just given people a glimpse of what we can do when we bring, you know, true AI capability to our PCs. So, thank you.

We're excited, and we look forward to all we're going to do together.

>> Thank you so much. Thank you for having me. Thank you.

So, now you've seen a little bit about what's possible with local AI, but the latest PCs aren't just running AI apps.

They're actually building them. That's

why we created Ryzen AI Max, the ultimate PC processor for creators, gamers, and AI developers. It's the most powerful AI PC platform in the world, with 16 high-performance Zen 5 CPU cores, 40 RDNA 3.5 GPU compute units, and an XDNA 2 NPU delivering up to 50 TOPS of AI

performance. All connected by a unified

performance. All connected by a unified memory architecture that supports up to 128 gigabytes of shared memory between the CPU and GPU. In premium laptops,

Ryzen AI Max is significantly faster in both AI and content creation applications compared to the latest MacBook Pro. In small form factor

MacBook Pro. In small form factor workstations, Ryzen AI Max delivers comparable performance to at much lower price than Nvidia's DGX Spark,

generating up to 1.7 times more tokens per second per dollar when running the latest GPT OSS models. And because Ryzen AI Max supports both Windows and Linux

natively, developers maintain full access to their preferred software environment, tools, and workflows.
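The tokens-per-second-per-dollar comparison is simply throughput normalized by system price; a quick sketch shows the arithmetic. The throughput and price figures below are made-up placeholders chosen only to reproduce a ~1.7x ratio, not actual benchmark numbers for either system.

```python
# "Tokens per second per dollar" is throughput normalized by system price.
# The numbers below are invented placeholders to show the arithmetic, not
# AMD's or Nvidia's actual figures.
def tokens_per_second_per_dollar(tokens_per_second: float, price_usd: float) -> float:
    return tokens_per_second / price_usd

system_a = tokens_per_second_per_dollar(tokens_per_second=40.0, price_usd=1700.0)
system_b = tokens_per_second_per_dollar(tokens_per_second=50.0, price_usd=3600.0)
print(f"System A: {system_a:.4f} tok/s/$")
print(f"System B: {system_b:.4f} tok/s/$")
print(f"Ratio A/B: {system_a / system_b:.2f}x")  # ~1.7x with these placeholders
```

Note that a system can win this metric while losing on raw throughput; the price in the denominator is doing much of the work.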

Now, there are more than 30 Ryzen AI Max systems in market today, with new laptops, all-in-ones, and compact workstations launching at CES and rolling out throughout the year. But our mission is to advance AI everywhere, for everyone. The truth is there are AI developers, many of you in this room, who want access to platforms that let you develop on the fly. So, we took this one step further. Today, I'm excited to announce the AMD Ryzen AI Halo, a new reference platform for local AI deployment.

[Applause] Now, I would say this is pretty beautiful. Do you guys agree? So let me tell you what it is. This is the smallest AI development system in the world, capable of running models with up to 200 billion parameters locally, not connected to anything. It's powered by our highest-end Ryzen AI Max processor with 128 GB of high-speed unified memory that is shared by the CPU, GPU, and NPU. This architecture accelerates system performance and makes it possible to efficiently run large AI models on a compact desktop PC that fits in your hand.

Thank you.
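A quick back-of-the-envelope sanity check on that claim: a 200-billion-parameter model fits in 128 GB of unified memory only once the weights are quantized, an assumption on our part, since the keynote doesn't specify the precision.

```python
# Back-of-the-envelope check on running a ~200B-parameter model in 128 GB of
# unified memory. The quantization choice is an assumption for illustration:
# at 16-bit weights the model wouldn't fit, but at ~4 bits per weight it can.
def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB, ignoring KV cache and activations

for bits in (16, 8, 4):
    print(f"200B params @ {bits}-bit: {weight_footprint_gb(200, bits):.0f} GB")
# 16-bit: 400 GB (doesn't fit), 8-bit: 200 GB, 4-bit: 100 GB, which fits in
# 128 GB with headroom left for the KV cache and the rest of the system.
```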

Halo supports multiple operating systems natively, ships with our latest ROCm software stack, comes preloaded with the leading open-source developer tools, and runs hundreds of models out of the box. This really gives developers everything you need to build, test, and deploy local agents and AI applications directly on the PC. Now, for all of you who are wondering, Halo is launching in the second quarter of this year, and we can't wait for folks to get their hands on it.

So, now let's turn to the world of gaming and content creation. A few gamers out there? I think there are a lot of gamers out there. Look, every day, gamers and creators rely on AMD, across Ryzen and Radeon PCs, Threadripper workstations, and consoles from Sony and Microsoft, to deliver tens of billions of frames. And while the visual quality of those frames has advanced dramatically over the years, the way we build those worlds really hasn't. It still takes teams months or even years to bring a 3D experience to life. Now, AI is really starting to change that. To show what's next in 3D world creation, I'm honored to introduce one of the most influential figures in AI. Known as the godmother of AI, her work has transformed how machines see and understand the world. Please welcome the co-founder and CEO of World Labs, Dr. Fei-Fei Li.

[Applause] Fei-Fei, we are so excited to have you here. You know, you've been one of the leaders shaping AI for decades. Can you give us a little bit of your perspective? Where are we today, and why did you start World Labs?

>> Yeah, first of all, thank you, Lisa, for inviting me to be here. Congratulations on all the announcements; I can't wait to use some of them. So it's true that there have been truly great breakthroughs in AI progress in the past few years, and as you said, I've been around the block for a while, for more than two decades, and I really cannot be more excited than I am now by where things are going. In the past few years, language-based intelligence in AI technology has really taken the world by storm. We're seeing the proliferation of all kinds of capabilities and applications.

But the truth is, there's a lot more than just language intelligence. Even for us humans, there's more than passively looking at the world. We are incredibly spatially intelligent animals, and we have profound capabilities, built on our own spatial intelligence, that connect perception with action. Think about all of you being here: how you braved through airports this morning, I'm one of them, or woke up in your hotel room and got to the nice coffee shop, or found your way through this maze in Vegas to be here. All of this requires spatial intelligence. So what excites me is that there's now a new wave of generative AI technology, for both embodied AI and generative applications, with which we can finally give machines something closer to human-level spatial intelligence. It's the ability to not only perceive but create 3D or even 4D worlds, reason about objects and people, and imagine entirely new environments that still obey the laws of physics and dynamics, in worlds virtual or real. That's why I started World Labs. I really want to bring spatial intelligence to life and deliver value to people.

>> I remember the first time I talked to you about your concept for World Labs and your passion about what this could bring. Tell us a little bit about what your models do, so the audience gets a feel for what this really means.

>> Yeah. Well, I heard that there are gamers out there, so this is very exciting. Traditionally, building 3D scenes requires laser scanners or calibrated cameras or hand-built models using pretty sophisticated and complicated software. But at World Labs, we're creating a new generation of models that use recent generative AI technology to learn structure, not just flat pixel structure, I'm talking about the 3D and 4D structure of the world, directly from data, a lot of data. So give the model a few images, even one image, and the model itself can fill in the missing details, predict what's behind objects, and generate rich, consistent, permanent, navigable 3D worlds. What you're seeing here on screen is a hobbit world created by our World Labs model, called Marble. We just gave it a handful of images, and it created these 3D scenes that are persistent and that you can navigate. You can even see a top view. Our system transformed a few visual inputs into a fully navigable, expansive 3D world. And it shows how these models don't just reconstruct environments; they really imagine cohesive worlds, wondrous worlds. And once these worlds exist, they flow together, allowing effortless transition from one environment to the next and scaling into something much larger. This is much closer to how humans piece together a place from a few glances.
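World Labs hasn't published a programming interface here, so the sketch below is purely hypothetical; it only illustrates the shape of the workflow Fei-Fei describes, a few images in, a persistent navigable world out, with every name invented for illustration.

```python
# Hypothetical sketch of the images-in, world-out workflow described above.
# World and generate_world are invented names for illustration; this is not
# World Labs' actual Marble API.
from dataclasses import dataclass

@dataclass
class World:
    """A persistent scene: once generated, it can be revisited and navigated."""
    source_images: list
    description: str
    camera: tuple = (0.0, 0.0, 0.0)

def generate_world(images: list, style: str = "as captured") -> World:
    """Stand-in for a generative 3D model: fills in unseen geometry (what's
    behind objects) so the scene stays consistent from any viewpoint."""
    return World(source_images=images, description=f"navigable world, {style}")

# A handful of phone photos in, one persistent world out.
world = generate_world(["office_1.jpg", "office_2.jpg", "office_3.jpg"])
world.camera = (2.0, 0.0, 1.5)   # move the camera; the scene persists
print(world.description, "viewed from", world.camera)
```

The detail worth noticing is that the world is an object with state, not a rendering re-generated per request: that persistence is what "permanent, navigable" means in the description above.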

>> Well, it looks pretty amazing that you can do that with such little input. Now, can you show us a little bit about how the technology works?

>> Yes, definitely. Let's ground it in the real world a bit more, moving on from the hobbit world, and do something that you're very familiar with. So over the break, our team went to AMD's Silicon Valley office. I hope they got your permission.

>> They did not, but that's okay.

>> Okay. Well, here we are. We just used some regular phone cameras, no special equipment, just phones, to capture a few images. Then we put them into World Labs' generative 3D model, Marble, and our model, using AMD's MI325X chips and the ROCm software stack, created a 3D version of that environment, including windows, doors, furniture sizes, and a sense of depth and scale. And keep in mind, you're not looking at photos, and you're not looking at videos; you're looking at truly 3D-consistent worlds.

>> And then our team started to have a little more fun and decided to...

>> You decided to remodel.

>> Exactly. For free, for you, in different design styles. I don't know which one you guys like the most. I personally really like the Egyptian one, but maybe that's because I'm going there in a few months. And this transformation keeps the geometric consistency of the 3D inputs. So you can imagine what powerful tools these can be for many use cases, whether you're doing robotic simulation or game development or design. What would traditionally take months in a typical workflow, we really can do in minutes now. And we can even navigate into an entirely different world, like, actually, the Venetian hotel. We just did that yesterday by taking a picture, putting it into the model, and having a little fun, and it turned this whole place into an imaginative 3D space. Now I'm sure you guys can take pictures, send them to Marble, and experience this yourself. But what you don't see here behind the scenes is how much computation is happening, and why inference speed really matters. The faster we can run these models, the more responsive the world becomes: instant camera moves, instant edits, and a scene that stays coherent as you actually navigate and explore. That's what's really important.

>> You know, Fei-Fei, I think you're going to have a few people going out to your website to try Marble this year.

>> We'll keep the servers up.
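A rough way to see why inference speed matters here: at interactive frame rates the model has only tens of milliseconds per generated frame, so a fourfold speedup (like the one mentioned later in this conversation) can be the difference between a slideshow and a walkable world.

```python
# Frame-time budget at common interactive frame rates. Simple arithmetic;
# the only inputs are the target frame rates themselves.
for fps in (24, 30, 60):
    budget_ms = 1000.0 / fps
    print(f"{fps} fps -> {budget_ms:.1f} ms per generated frame")
# A hypothetical model needing 130 ms per frame misses even 24 fps;
# a 4x speedup brings it to ~32 ms, inside the 30 fps budget.
```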

>> But, look, that looked really amazing. Can you share a little bit about your experience working with AMD, and our work on Instinct and ROCm?

>> Yeah, of course. Even though we are old friends, our partnership is relatively new, and I've got to be honest, I'm very impressed by how quickly this came together. Part of our model is a real-time generative frame model. It was running on MI325X in under a week, and with AMD Instinct and ROCm, our teams were able to iterate really rapidly over the course of a few weeks to improve performance by more than fourfold. That was really impressive. And it matters, because spatial intelligence is fundamentally different from what came before. Teaching AI to understand and navigate 3D structure, handle motion, and understand physics requires enormous memory, massive parallelism, and very fast inference.

And I was seeing your announcements; I can't wait to see platforms like the MI450 continue to scale. They will give us the ability to train larger world models and, just as importantly, to run them fast enough that these environments can feel alive, reacting instantly as users or agents move, explore, interact, and create.

>> That's wonderful. Thank you for those comments. Your team has been fantastic to work with.

>> So, Fei-Fei, with all the compute performance that we're going to give you, and all of the innovation in your models, give the audience a view of what to expect over the next few years.

>> Yeah. As you know me, I don't like to hype.

>> This is what they call underhype.

>> No, I think we should just share what it is. It is going to be a changing world. A lot of workflows, a lot of things that were difficult to do, will actually go through a revolution because of this incredible technology. For example, creators can now experience and create real-world scenes in real time, shaping what's in their mind's eye, experimenting with the space, the light, the movement, as if they were sketching inside a living world.

And then intelligent agents, whether robots or vehicles or even tools, can learn inside very rich, physics-aware digital worlds before they ever need to be deployed into the real one, making them much safer, making development much faster, and making them more capable and more helpful to people. And designers, for example architects, can walk through ideas before anything is built, exploring form, flow, and materials, and navigating spaces rather than staring at static plans. So, what excites me most is that this represents a shift in how AI shows up in our lives.

We're moving from systems that passively understand words and images to systems that not only understand but can help us interact with the world. So, Lisa, what we are sharing today, turning a handful of images or photos into coherent, explorable worlds in real time, is not a glimpse of the distant future anymore. It is really the beginning of the next chapter. And you and I talk about this even offline: we know that as powerful as AI technology is, it's also our responsibility to deploy and develop it in ways that reflect true human values, that augment human creativity, productivity, and our care for each other, while keeping people firmly at the center of this story, however powerful the technologies are. I'm very excited to partner with AMD and with you on this journey.

>> I think I speak for everyone: you are really an inspiration to the AI world. Congratulations on all the great progress, and thank you for joining us tonight.

>> Thank you, Lisa.

Okay, next up, let's turn to the world of healthcare.

[Music] Of all the ways AI is advancing the world, healthcare impacts us all. And AMD technology is enabling the incredible to become possible. Cancer detection is happening earlier, with supercomputers analyzing data at massive scale. Patients are receiving therapies sooner, with compute modeling complex biological systems. Promising treatments are moving forward faster through molecular simulation. Medicine is becoming more personalized through genome research. And patient outcomes are improving through robot-assisted surgeries. Our partners are using AI to accelerate science and better human health.

Advanced by AMD.

So look, as you saw in that video, AMD technology is already at work across healthcare. This is one of the most meaningful applications. You've already heard some of the stories tonight about high performance computing and AI; one of the areas I am most personally passionate about is how we can bring them to healthcare. There's nothing more important in our lives than our health and the health of our loved ones, and using technology to improve healthcare outcomes means we measure progress in terms of lives saved. I'm very happy to be joined tonight by three experts who are leading the way in applying AI to real-world healthcare challenges. Please join me in welcoming Sean McClain, CEO of Absci; Jacob Thaysen, CEO of Illumina; and Ola Engkvist, head of molecular AI at AstraZeneca.

How are you?

>> All right, guys. Thank you so much for being here. You can see there's a lot of excitement about healthcare. Thank you for the tremendous partnership. You know, Sean, at Absci, you're using generative models and synthetic biology to design new drugs from scratch. Can you walk through a little bit of how that works?

>> Yeah, thank you so much for having us here today. Biology is hard. It's complex. It's messy. Drug discovery and development is an archaic way of going about discovering drugs. Ultimately, it's a trial-and-error process where you are searching for a needle in a haystack. But with generative AI and what we're doing at Absci, you're actually able to start creating that needle: engineering in the biology that you want, going after the diseases that have large unmet medical need, and building in the manufacturability and developability that you want in the drug.

We're actually able to start doing precision engineering now, because of AI with biology. Just like Apple is engineering an iPhone, or you all are engineering the MI455X, we're able to start engineering biology. And what is that actually doing? It's allowing us to start tackling some of the hardest, most challenging diseases that still exist, that have high unmet medical need, where the standard of care is poor. At Absci, this is exactly what we want to tackle: these hard, challenging diseases. Two that we're focused on at Absci: the first is androgenetic alopecia, so think common baldness. We actually have the opportunity, in the not-too-distant future, to have AI cure baldness. Wouldn't that be incredible? And not only that, we can focus on areas that have been neglected: women's health. For far too long, women's health has been pushed aside. We have a drug that we are developing for endometriosis, which affects one in ten women, with the opportunity to potentially deliver a disease-modifying therapy for these women. This is what AI and drug discovery is all about.

And this wouldn't be possible without the compute partnership that we've had with AMD. Lisa, you and Mark Papermaster invested in Absci roughly a year ago, and within that year, we've been able to scale inference to screen over a million drug candidates in a single day. That's incredible. Additionally, we're moving onto the MI355X, and the memory there is going to allow us to contextualize the biology in a way that we haven't been able to before, and ultimately create better models for drug discovery. The future is really bright in AI and drug discovery.

>> That's fantastic, Sean. Well, look, thank you for the partnership. We're really excited about all the work we're doing together. Now, Jacob, Illumina is really the leader in reading and understanding the human genome to improve health. How is AI helping in your work, and what is the impact for the future of precision medicine?

>> Yeah, absolutely, Lisa. I'm super excited to be here, and we definitely share a deep passion for impacting health, so I'm looking forward to everything we can do, the two companies together, both what we have done and what we're going to do. And of course, Sean, I'm rooting for that drug against baldness. So let me talk a little bit about Illumina. We are the world leader in DNA sequencing, and DNA, as you know, is the blueprint of life that makes all of us unique. Therefore, it's essential to be able to measure it to prevent, diagnose, and treat diseases. In a simplified way, you can think about the human genome as three billion letters. That is like a book with 200,000 pages, and it is in each of our cells. Now, if there's just one spelling mistake in that book, it can mean the difference between a long and healthy life and a short and terrible life. So accurate DNA sequencing is extremely important, but it is super data- and compute-intensive.
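A quick sanity check on Jacob's book analogy, using only the numbers he gives:

```python
# 3 billion letters spread over a 200,000-page "book": how dense is a page?
GENOME_LETTERS = 3_000_000_000
BOOK_PAGES = 200_000
letters_per_page = GENOME_LETTERS // BOOK_PAGES
print(letters_per_page)  # 15000 letters per page, several ordinary printed
                         # pages' worth of text squeezed onto each one
```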

In fact, we are generating on our sequencers more data than is generated on YouTube every day, and therefore the relationship with AMD is super important. We are using your FPGAs and EPYC processors in our sequencers every day, and that's the only way we can compute all of that and translate it into insights. Over the past decade, our technology has already been used, as we talked about, in drug discovery, but it is also impacting healthcare directly. Today, it's used for profiling terrible diseases like cancer and inherited diseases; it is impacting a lot of people's health out there and has saved millions of lives. But we're just getting started. Biology is super complex, and our brains can't really comprehend all of it. The combination of generative AI, genomics, and proteomics together is poised to completely change our understanding of biology over the next period of time. It will impact drug discovery, but it will also impact how we prevent and treat diseases early. So really, it will change the way we think about longevity and healthier lives. And we can only do that through collaboration between us, all of us on this stage, and the whole ecosystem. So I'm really excited about that.

>> That's fantastic, Jacob. And Ola, at AstraZeneca, you're scaling AI across one of the largest drug discovery pipelines there is. Talk about how AI is changing the way you develop new medicines.

>> Thanks, Lisa, and thanks also for the invitation. At AstraZeneca, we really apply AI end to end, from early drug discovery to manufacturing to healthcare delivery. And for us, AI is not only about productivity; it's a lot about innovation: how can we work in a different way, how can we do new things with AI? One area that I'm personally very passionate about is how we can deliver candidate drugs quicker with the help of generative AI. The way we work is that we train our generative AI models on all the experimental data we have generated over several decades, and then we use those models to virtually assess, in the computer, which hypotheses, which candidate-drug ideas, might work or not. We can assess millions of potential candidate drugs that way, and then we take only the ones we think are really good into the experimental lab to validate the hypothesis there. So we use generative AI models to generate candidate drugs, to modify them, and to optimize them, to really reduce the number of experiments we need to do in the laboratory. We are applying this new way of working across the whole AstraZeneca small-molecule pipeline, and we see that we can deliver candidate drugs 50% faster, while also improving clinical success later. And we can't do that alone; we need to do it in collaboration. So we collaborate with academia, with AI startups, and with companies like AMD. One very important area for us is hyperscaling, because we have a lot of great data, and we really want to create the best models we can. There, we work in collaboration with AMD to scale our drug discovery engine, Semlow, so it can handle these large new data sets. So basically, we optimize the whole workflow with the help of AMD.
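As a schematic of the workflow Ola describes, train on past experimental data, score a large candidate pool in silico, and send only the best to the lab, here is a toy sketch. The random feature vectors and the scikit-learn regressor are stand-ins for AstraZeneca's actual chemistry features and models.

```python
# Toy sketch of generative-AI-assisted virtual screening: learn from past
# experimental results, score a large pool of candidates in silico, and send
# only the top few to the wet lab. Random vectors stand in for real molecular
# features; a gradient-boosted model stands in for the real models.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# "Decades of experimental data": molecular features -> measured activity.
X_train = rng.normal(size=(2_000, 32))
y_train = X_train[:, :4].sum(axis=1) + rng.normal(scale=0.3, size=2_000)

surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# Generate a large pool of candidate molecules and score them virtually.
candidates = rng.normal(size=(100_000, 32))
predicted_activity = surrogate.predict(candidates)

# Only the best predictions go to the experimental lab for validation.
top_k = np.argsort(predicted_activity)[-10:]
print("Sending to the lab:", top_k)
print("Predicted activities:", np.round(predicted_activity[top_k], 2))
```

The economics follow directly: 100,000 virtual scores cost seconds of compute, so the expensive lab experiments are spent only on the ten most promising candidates.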

>> That's fantastic, Ola. Look, all of your stories are really amazing, and we're thrilled to be working with you to bring these things to life. Now let's wrap up and think about this: what's the one thing each of you is most excited about when it comes to how AI will improve healthcare? Jacob, maybe we'll start with you.

>> Yeah, I'm just excited about the time we're in. This is the first time that we have technology that can create massive amounts of data, and the first time we have the compute power and the generative AI models that will truly change our understanding of, as I mentioned before, biology, and that will be translated into huge impact on healthcare.

>> Ola.

>> So I think with AI we can really transform our understanding of biology. We can go beyond treating diseases; we should have the ambition, as a community, that in the future we can prevent chronic diseases.

>> That's fantastic. Sean, bring us home.

>> Absolutely. So, to riff a little bit on what Ola said, I want to live in a world where we can interact with people before they get sick, where we can provide drugs and treatments that allow them to continue to live their healthy life, where they're metabolically healthy, they have a full head of hair, and they have that vitality that we all look for. Being able to go from sick care to preventative care to, ultimately, regenerative biology and medicine, where aging is no longer linear. That's the world that I want to live in, and that AI is going to help us create.

It's an exciting time.

I think we can all say, Sean, that we are super inspired. Look, this is what I heard: we should expect AI to help us predict sickness, prevent sickness, and personalize treatments, so that we can really extend lives, and you guys are at the forefront of it. So it is our honor to be your partner. Thank you each for joining us today, and we look forward to moving this frontier forward over the next few years together.

>> Amazing.

>> Thank you. Thank you.

>> All right. Now we're entering the world of physical AI. This is where AI enters the real world, powered by high performance CPUs and leadership adaptive computing that enable machines to understand their surroundings and take action to achieve complex goals. At AMD, we've spent more than two decades building the foundation of physical AI. Today, AMD processors power factory robots with micron-level precision, guide systems that inspect infrastructure as it's being built, and enable less invasive surgical procedures that speed recovery times. And we're doing it together with a broad ecosystem of partners. Physical AI is one of the toughest challenges in technology. It requires building machines that seamlessly integrate multiple types of processing to understand their environment, make real-time decisions, and take precise action without any human input. And all of this is happening with no margin for error. Delivering that kind of intelligence takes a full-stack approach: high performance CPUs for motion control and coordination, dedicated accelerators to process real-time vision and environmental data, and an open software ecosystem so developers can move fast and seamlessly across platforms and applications.
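To ground that full-stack split, here is a skeletal perceive-decide-act control loop. All of the helpers are placeholders, not an actual AMD robotics API; the point is the division of labor and the fixed loop rate.

```python
# Skeleton of a physical-AI control loop, split the way the talk describes:
# an accelerator handles real-time vision, a CPU handles planning and motion
# control, and the loop runs at a fixed rate with no human in it.
import time

def perceive() -> dict:
    """Would run on a vision accelerator/NPU: sensors -> scene estimate."""
    return {"obstacle_distance_m": 1.2}

def decide(scene: dict) -> str:
    """Planning/coordination logic, typically on the CPU."""
    return "slow_down" if scene["obstacle_distance_m"] < 1.5 else "proceed"

def act(command: str) -> None:
    """Motion control: send the command to the actuators."""
    print("actuators <-", command)

CONTROL_HZ = 50            # real systems run loops like this hundreds of times/sec
PERIOD = 1.0 / CONTROL_HZ

for _ in range(3):          # three ticks instead of an endless loop
    t0 = time.perf_counter()
    act(decide(perceive()))
    # Sleep off the remainder of the period to keep the loop rate steady.
    time.sleep(max(0.0, PERIOD - (time.perf_counter() - t0)))
```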

Now, seeing is believing. So, to show how some of this work is unlocking the next generation of robotics, please welcome the CEO and co-founder of Generative Bionics, Daniele Pucci, to the stage. Hello, Danny. It's great to have you. Your team is doing some amazing work. Can you give us some background about what you're doing?

>> Lisa, Generative Bionics is the industrial spin-out of more than 20 years of research in physical AI and biomechanics at the Italian Institute of Technology. When we look back, everything actually started from a simple but profound question: if an artificial agent needs to understand the human world, doesn't it need a humanlike body to experience it? To answer this, we built some of the most advanced humanoid platforms in the world: iCub for cognitive research, then ergoCub for safe industrial collaboration, and then iRonCub, the only jet-powered flying humanoid robot in the world. Throughout the process of building these robots, however, Lisa, there has been one belief that has never changed: the real working technology is the one that amplifies human potential and that is built around people, not the other way around. Now, this belief has become the mission of Generative Bionics, but to make it real, we need compute that is fast, deterministic, and local. For a humanoid, touch, balance, and safety loops cannot wait for the cloud. That's why our collaboration with AMD is so fundamental. AMD in fact gives us a unified continuum, from embedded edge platforms such as Ryzen Embedded and Versal AI Edge running physical AI on the robot, to AMD CPUs and GPUs powering simulation, training, and large-scale development. So, Lisa, one compute architecture from one partner, end to end.

>> I like that. I like that a lot. Now look, let's talk a little bit about your philosophy and approach. How are you building these things, and what are your use cases?

>> We think, Lisa, that humanoid robots now have to be elevated to another level. So our approach to physical AI is to build a platform around the humanoid robot, and the platform is designed to achieve human-level intelligence and safe physical human-robot interaction, engineered into real products. Now, let's start with the robot itself. We are really inspired by biomechanics. In fact, if you look at human movements, they rely on fast reflexes: we walk by falling forward, and our nervous system basically exploits our biomechanics. So we are exploiting the same principles in our humanoid robots. Then, humans also learn through touch, which is a primary source of intelligence, so we believe that humanoid robots really need the sense of touch. And finally, let's talk about the platform. We are developing an open platform around the humanoid robot to enable the next generation of humanoid robots. Just to give you an example, the same tactile sensors that we use for the humanoid robot are also being used in a sensorized shoe that is helping patients in healthcare recover better and faster. But more importantly, the shoe acts as another robot sensor, so that the robot can feel whether and how to help the patient. So, Lisa, we are not only building a robot, and we are not only building a product; we are building a platform to close the loop between humans and humanoid robots, enabling what we call human-centric physical AI.

>> That's super cool, Danny. Now, we are at CES, and you know people like to see things. So what exciting news do you have for us?

>> So, Lisa, we focused on a new product identity, and our first humanoid design basically defines our DNA in terms of products: Gene One. And I'm really happy to say that Gene One is ready to be revealed right now. Is this gorgeous or what?

>> Danny, tell us about Gene One.

>> Our vision is a future where humans remain at the center, supported by technology. That's why we focus on building humanoid robots that people can trust and accept. For us, acceptability means beauty, grace, and safety. Gene One is Italian by design.

>> Is it really Italian?

>> Yeah.

But what really sets Gene One apart is touch. A distributed tactile skin across the robot's body allows Gene One to feel pressure, contact, and intention, making touch a primary source of intelligence. Just to give you examples of why this is so important: in factories, touch makes human-robot collaboration possible, and in healthcare, it is going to be pivotal; a patient can hold the robot, and it can feel how to help the patient in the best way. So this enables safer decisions and more natural interaction in the real world, powered by AMD compute platforms. Our first commercial humanoid will be manufactured in the second half of 2026, and we are already working with industrial partners, including a leading steel manufacturer, to deploy these robots in safety-critical environments. Lisa, this is not science fiction, and we're making it happen thanks to you.

>> Thank you so much, Danny. This is truly exciting. We are super excited about what Gene One can do. Thank you for being here.

>> Thank you.

Okay, now let's turn to one more demanding environment for robotics and automation, and that is space. AMD technology is powering critical space missions today, from delivering satellite internet connectivity to remote communities to enabling autonomous exploration of Mars, the moons of Jupiter, and beyond. On Mars, AMD adaptive computing enables the Perseverance rover to operate autonomously. That same technology is also powering robotic systems at NASA, JPL, and both the European and Indian space agencies, delivering reliable compute in some of the harshest and most unforgiving environments. One of the leaders in space exploration, and a company using AMD technology to help build the next generation of spacecraft and lunar infrastructure, is Blue Origin. Please welcome John Couluris, senior vice president of Lunar Permanence at Blue Origin, to the stage.

>> Hi Lisa.

>> John, thank you so much for being here. You know, Blue Origin is doing some amazing things. Talk to us a little bit about your mission and what you're working on.

>> Yes, thank you, everyone, for having me here. I'm very excited to tell you about what we're doing. Jeff Bezos, our founder, likes to say that Earth is the best planet in the solar system. It has sustained life for millions of years, and as we explore the solar system, Earth will be the origin of that life, that pale blue dot that is Earth. That's why our company is named Blue Origin. To protect this planet, we want to eventually move heavy industry off the Earth. As we look to build things such as solar power satellites in low Earth orbit, settle the moon, settle Mars, and explore the asteroid belt, we'll move on and build the infrastructure so that eventually millions of people will be living and working in space for the benefit of Earth. And that starts with one person. Originally, we had Yuri Gagarin and Alan Shepard explore. Then came the Apollo astronauts. Then, just recently, the International Space Station celebrated 25 years of continuous human presence in space. The next step for us is lunar permanence. And the business unit that I'm lucky enough to be a part of is named Lunar Permanence specifically so that everyone knows immediately what we're trying to do: establish a permanent presence of humanity on the moon. That requires reliable, repeatable, low-cost operations and reliable, repeatable, low-cost equipment and vehicles. And AMD is a critical partner of ours to make that happen.

>> Well, thank you so much for that, John. And look, talk a little bit about why high performance computing is so important in your work, especially as your missions are getting more complicated.

>> Certainly. So, space is the ultimate edge environment. The flight computers that we build are the heart and soul of our vehicles. That compute stack needs to be reliable, deterministic, and resilient. It needs to survive the environment of space, and what that means is we have mass constraints, power constraints, and radiation considerations. The AMD embedded architecture allows us to reduce mass, save power on these vehicles, and tolerate the demanding radiation environment of deep space.

>> And when we think about all of this looking ahead, talk a little bit about how AI is playing a bigger role in your future missions.

>> Certainly. So, AI's impact on Earth-based systems is well known. In fact, I've got to say, Lisa, and I kind of surprised you earlier today, AMD has been a phenomenal partner to Blue Origin. Only a few months ago, we started to talk to AMD about using the Versal Gen 2 in our flight computer stack. Within a few months, the AMD team and the Blue Origin team worked tirelessly and were able to ship us units that we then incorporated into development flight computers. In a couple of months, we built the development flight computers that are flying in our vehicle test bed, and those will eventually power our Mark 2 lander. That Mark 2 lander will land astronauts on the moon as early as 2028. In fact, it was so impressive that we had a team of Blue Origin engineers working over the holidays, and we took the entire flight computer stack and were able to successfully simulate a landing on the moon. This has saved months and months of schedule. Now, take that to AI and how important it is for us. Right now at Blue Origin, AI use on Earth is critical. Every employee at Blue Origin has access to AI tools, whether for design, for analysis, or for just basic back-and-forth. AI has sped up our development process so quickly that we're now looking at how we bring this to spaceflight. And for spaceflight, that's the next great step: AI becomes a complement to the astronaut, a co-pilot if you will, identifying landing sites and looking out for hazards. Being able to do that level of compute in a real-time environment is critically important to us. For me personally, though, think of edge AI.

What is really interesting is, as we go to explore the solar system... radio astronomy has been a passion of mine. And what radio astronomy is, is looking for weak radio-frequency signals that are being emitted throughout the universe. The problem we have is that Earth is a great emitter of radio-frequency noise and interference, so it's hard to identify them. The far side of the moon provides a natural shelter, a barrier to that noise. So if we could land a Mark 1 vehicle on the far side of the moon, we could start to explore this untapped radio-frequency environment. And if we have edge AI, we can utilize it to do the deep exploration, to actually identify where we should be looking next, because relaying everything back over comms adds latency that really hurts our ability to explore. By having it on the far side of the moon, the Mark 1 vehicle with edge AI will tell us, land the next vehicle here to optimize your exploration. That's really what excites me.
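A toy illustration of the far-side use case John sketches: detect a faint narrowband tone in noise on the spot, then downlink only the finding instead of the raw samples. The signal parameters are invented; only the spectrum-averaging trick is the real technique.

```python
# Toy version of edge processing for radio astronomy: find a faint narrowband
# tone buried in noise locally, then send home just the detection rather than
# megabytes of raw samples. Pure numpy; all parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
fs = 10_000.0                                # sample rate, Hz
n = 10_000                                   # samples per snapshot (1 s)
tone_hz, tone_amp = 1_234.0, 0.05            # faint signal, far below the noise
t = np.arange(n) / fs

# Average power spectra over many snapshots: noise flattens out, tone stays.
avg_power = np.zeros(n // 2 + 1)
snapshots = 50
for _ in range(snapshots):
    x = tone_amp * np.sin(2 * np.pi * tone_hz * t) + rng.normal(size=n)
    avg_power += np.abs(np.fft.rfft(x)) ** 2 / snapshots

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
peak_hz = freqs[np.argmax(avg_power[1:]) + 1]  # skip the DC bin
raw_bytes = snapshots * n * 8
print(f"Processed {raw_bytes} bytes locally; downlink one finding: {peak_hz:.0f} Hz")
```

That asymmetry, megabytes crunched on the vehicle versus one number relayed home, is exactly why compute at the edge matters when the comms link is slow.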

>> I mean, that's super, super cool. Look, John, we're honored to work with you. I think it is an incredible mission that you have. Thank you so much for the partnership, and we look forward to what you do next.

>> Absolutely. Thank you very much.

>> Thank you.

>> Okay, guys. Now, let's turn to our last chapter of the night: science, and the supercomputers used for the most advanced scientific research. We are incredibly proud of our leadership in high performance computing, and we have continued to push the bleeding edge of performance here. We're actually seeing a convergence between traditional high performance computing systems and AI as we bring together the best of both worlds. Today, AMD powers the two fastest supercomputers in the world and more than half of the 50 most energy-efficient systems. These systems are using massive amounts of compute to solve previously impossible problems. In Finland, the LUMI supercomputer has cut climate model update times by more than 85%, enabling earlier warnings and better preparation for extreme weather events. Energy giant Eni is using an AMD-powered supercomputer to develop longer-lasting batteries and cleaner fuels. At Oak Ridge National Laboratory, the world's first exascale supercomputer is running ORBIT 2, a hyper-resolution global model that delivers unprecedented forecasting detail with nearly 99% accuracy. And at Lawrence Livermore National Laboratory, the world's fastest supercomputer, El Capitan, is modeling how viruses might mutate and evolve, enabling scientists to design more resilient antibody treatments and respond faster to future pandemics.

Going forward, there's a lot more that we can do to power the future of scientific discovery. We are actually working very closely with the US Department of Energy and America's national labs as part of the Genesis Mission. Genesis is a national program launched late last year to accelerate the convergence of AI, supercomputing, and quantum computing. Together with Oak Ridge National Laboratory, we recently announced two new supercomputers that are part of the Genesis Mission: Lux, the first dedicated US AI factory for science, which will come online early this year, and Discovery, the next flagship supercomputer, planned for 2028. Genesis is the most ambitious public-private technology initiative in decades. Leading this historic effort is Michael Kratsios, who has shaped national policy at the highest levels as a former US CTO and under secretary of defense for research and engineering. Please join me in welcoming the president's chief science and technology policy adviser, Michael Kratsios, to the stage.

Michael, thank you so much for being here. I know just how busy you are. You've described this Genesis Mission as a real moonshot, with the largest mobilization of federal scientific resources in decades. Can you talk a bit about why Genesis is so important?

>> The Genesis Mission is a great example of how President Trump has moved fast, in less than a year, for the US to lean in and win the AI race. Genesis is the largest marshalling of federal scientific resources in recent history, at a scale and an urgency like the Apollo mission or even the Manhattan Project. We are bringing together the unmatched power of our national laboratories, supercomputers, and the nation's top scientific and innovative minds, with the goal of doubling the productivity and impact of American science within a decade. This whole-of-government approach represents a historic mobilization of resources, tasking the Department of Energy to integrate its world-class supercomputers and data sets into a unified, closed-loop AI platform. Integrating this data, the Genesis Mission leverages the power of AI to automate experiment design, accelerate simulations, and generate predictive models that accelerate federal R&D productivity.

Priority areas of focus include the greatest scientific challenges of our time, ones that can dramatically improve our nation's economic and national security. These span biotechnology, critical minerals, nuclear energy, space exploration, quantum, semiconductors, and microelectronics. A few weeks ago, we announced the first wave of industry partnerships for the Genesis Mission, and that included AMD; so, thank you for that. As a next step, we're working towards bringing even more federal resources into the Genesis Mission, and this is going to include a variety of agencies, including the National Science Foundation, the National Institutes of Health, and the National Institute of Standards and Technology.

>> Well, look, thank you, Michael. We are very proud to be part of Genesis, and super excited. If you look back, so many of the technologies that we have today really started with long-term public and private partnerships. So where do you see Genesis making the biggest impact beyond science?

>> Well, through Genesis, we will create the world's largest and highest-quality scientific data sets to train the next generation of AI systems, pushing them beyond their current mastery of language and code into the realm of science. Now, as you can imagine, this will lead to tremendous spillover effects across healthcare, drug discovery, energy, and manufacturing. Fundamentally, we are seeing a massive shift in America's science and technology enterprise. We are now at a place where the US government, private sector, and universities together are investing over a trillion dollars in R&D every year, with the private sector leading the way by carrying out two-thirds of that R&D alone. The Genesis Mission understands this and leverages the full strength of that entire ecosystem.

>> That's wonderful, Michael. Now, we have talked a lot about how important it is for the US to lead in AI. Can you talk about the biggest things we must get right so that we lead in AI?

>> Absolutely. There are three strategic priorities the US needs to get right, as laid out in President Trump's AI Action Plan. The first is that we need to remove barriers to innovation and accelerate research and development. We are already at work looking for regulatory roadblocks to innovation and seeing where we can update or remove them entirely. This effort will ensure the US is the home for the next great technologies to be created and commercialized. Next, we need to get AI infrastructure and energy production right. We've taken significant actions to streamline permitting for data center construction and to support all forms of energy, including advanced nuclear reactors. Looking beyond our borders, it's all about AI diplomacy and exporting American technologies to the world. The US government is underway in establishing the American AI export program to bring American innovators and innovations to our partners and allies around the world. The Department of Commerce will be issuing an RFP this month seeking proposals to create a turnkey AI stack, including everything from infrastructure and chips to models and applications.

Last but not least, another strategic priority for President Trump and First Lady Melania Trump is AI and education. I talked earlier about winning the AI race; focusing on AI and education is about truly winning our AI future today. It starts by helping parents, teachers, and students navigate AI's opportunities and challenges in the classroom. I am thrilled that, in a matter of months, we are seeing tremendous participation, with over 5,000 students and a thousand educators across all 50 states signing up for the Presidential AI Challenge. Now, submissions close on January 20th, so please visit ai.gov to participate, and also look for the regional competitions that will ultimately culminate in a championship at the White House this summer. We've also secured over 200 pledges from leaders like AMD for free AI educational resources, including apprenticeships, access to top AI models, and curricula for so many teachers around the country.

>> Michael, look, we're incredibly proud to support the AI education pledge and to help expand access to AI education with more hands-on opportunities for students to learn and build. That's why we've committed $150 million to programs that bring AI into more classrooms and communities across the country. We're investing in the next generation of AI research and talent. We're building research collaborations with more than 800 educational institutions around the world, including many of the top engineering and computer science programs. And we're also committing to developing coursework that promotes our open ecosystem; we're offering free online AI courses to reach over 150,000 students this year. So, Michael, I want to say thank you for your leadership on this topic, and for the First Lady's leadership. I can tell you that it certainly is making a difference in galvanizing the industry. Now, before you go, I have a very fun thing for us to do. It is really a moment for us to highlight some amazing work that is a direct outcome of the AI education pledge. So, a little bit of background.

We recently partnered with Hack Club on a nationwide AI and robotics campaign. More than 15,000 high school students signed up, with the top teams coming together in Silicon Valley last month for an in-person hackathon to bring their designs to life. It's actually incredible to see what these students were able to build in just one weekend. You can imagine it was a little bit competitive at this hackathon. And as part of the recognition, we invited the top three teams to be here at CES, right here in the front row, so they could experience the biggest tech event of the year firsthand and we could congratulate them in person. So, let's give them a big round of applause. And to tell us a little more about their project, I'd like to invite the hackathon gold medal winners, Emmy McDonald, Rosanna Gaboan, and Afia Ava of Team Arm Tender, to the stage.

>> Thank you.

All right, you guys are amazing. Congratulations on the incredible work. Now, before you talk about the project, can you just share a little bit about yourselves? Where are you from, and when did you start coding?

>> Sure. I'm Emmy. I'm 17, and I'm from Chapel Hill, North Carolina. I started coding when I was around 12, and I joined Hack Club when I was 16.

>> Good evening, everyone. It's great to see you all. My name is Rosanna Gaboan, and I am a 17-year-old student from Cleveland, Ohio. I started coding when I was 12, and I joined Hack Club as soon as I turned 16.

>> Hi, everyone. My name is Afia. I'm 18, from Beaver Dam, Wisconsin. I started coding about two years ago through Hack Club.

>> That is fantastic. Now, tell us a little bit about your project.

>> Of course. Together with Rosanna and Afia, my Hack Club teammates, we built an AI robot barista. It's a robotic arm that autonomously serves beverages, using a motorized wheel that spins to select a soft drink. We trained a single unified vision-language model to multitask using the AMD Developer Cloud with MI300X GPUs. The robotic arm runs entirely on an AMD Ryzen AI laptop using three cameras. And we came to the hackathon with no previous AI training experience.

>> Now, can you believe that? No previous AI training experience, and this is what they did.
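Here is a rough sketch of the kind of loop the students describe: three camera views go into one vision-language model, and its text output selects the arm's next action. The model call and action set are illustrative stand-ins, not their actual code.

```python
# Rough sketch of the students' setup: three camera frames go into a single
# vision-language model, and its text output picks the robot arm's next move.
# vlm() and the action list are illustrative stand-ins, not their real code.
ACTIONS = ["spin_wheel", "grab_can", "pour", "retry_grab", "wait"]

def capture_frames() -> list:
    """Stand-in for reading three cameras (e.g., via OpenCV on the laptop)."""
    return ["front_view", "side_view", "gripper_view"]

def vlm(frames: list, instruction: str) -> str:
    """Stand-in for one unified multitask vision-language model that runs
    locally on the laptop. Returns the name of the next action."""
    return "retry_grab" if "gripper_view" in frames else "wait"

def step(instruction: str) -> str:
    action = vlm(capture_frames(), instruction)
    assert action in ACTIONS, f"model proposed an unknown action: {action}"
    return action

print(step("serve a cola"))  # e.g. 'retry_grab' after a missed can
```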

You guys are clearly doing great things. Just give me an idea: what are you most excited about working on next?

>> Thanks so much, Lisa. So, my mom actually works at our local fire department. She's in the audience. Hi, Mom. And she is a former firefighter. One thing that we noticed when training Arm Tender is that it was able to capture complex human behaviors. For example, it was able to try again to grab a can after it missed, without us programming that specifically. I want to build a robot that can be used in firefighting, to go into building fires and case the building before firefighters go in. And the complexity of motion that we saw the AI exhibiting could make that possible in a way that wasn't before.

>> I love that. What do you guys think?

Well, look, we are all about encouraging and inspiring young people to pursue their dreams. And tonight, we actually have a special surprise for you guys: AMD is awarding each of you a $20,000 educational grant to invest in your future as innovators, to help you keep building.

>> Thank you so much. Thank you so much.

>> Look, you guys are just a great example of what the AI education pledge is all about, because from what you've created, it's clear there's so much we can do. So congratulations to all of you. And Michael, thank you for being here and helping bridge all of this together. We appreciate everything you're doing for the country and for the industry. Thank you so much.

>> Thank you, guys. Congratulations.

>> Look, it's been fantastic being with you tonight, but it's time to wrap this up. I hope you all saw tonight what I see every single day. This moment in tech doesn't just feel different; AI is different. AI is the most powerful technology that has ever been created, and it can be everywhere, for everyone. We're entering an era of yottascale computing, where the deployment of more powerful models everywhere will require a massive increase in the amount of compute in the world. Meeting that demand will take a broad portfolio of solutions, from the largest systems in the cloud to AI PCs to embedded computing. And just as important, it takes an open ecosystem built on industry standards. That's what you saw on stage tonight: we wanted to bring you the entire spectrum, from amazing technology to deep co-innovation with industry leaders across the ecosystem to very strong public-private partnerships. All of us are working together to bring AI everywhere, for everyone. On behalf of the 30,000 AMDers around the world, we're proud to be building the future together with all of you, because the world's most important challenges can only be solved by bringing the industry ecosystem together. Thank you for joining us tonight, and enjoy the rest of CES 2026.
