
$AMD Advanced Micro Devices Q1 2026 Earnings Conference Call

By EARNMOAR

Summary

Topics Covered

  • Server CPU TAM Doubles to $120B by 2030 Driven by Agentic AI
  • AMD Expects Tens of Billions in Data Center AI Revenue by 2027
  • CPUs Are Critical, Not Marginal—They're Growing Alongside Accelerators
  • AI Infrastructure Is Reshaping the CPU-to-GPU Ratio

Full Transcript

Greetings, and welcome to the AMD first quarter 2026 conference call. At this time, all participants are in a listen-only mode. A question and answer session will follow the formal presentation. If anyone should require operator assistance during the conference, please press star zero on your telephone keypad. And please note that this conference is being recorded.

I will now turn the conference over to Matt Ramsey, vice president of financial strategy and IR. Thank you, Matt. You may begin.

Thank you, and welcome to AMD's first quarter 2026 financial results conference call. By now, you should have had the opportunity to review a copy of our earnings press release and the accompanying slides. If you have not had a chance to review these materials, they can be found on the investor relations page of amd.com.

We will refer primarily to non-GAAP financial measures during today's call.

The full non-GAAP to GAAP reconciliations are available in today's press release and slides posted on our website.

Participants on today's conference call are Dr. Lisa Su, our chair and CEO, and Jean Hu, executive vice president, CFO, and treasurer. This is a live call and will be replayed via webcast on our website.

Before we begin the call, I would like to note that Jean Hu will present at the Bank of America Global TMT conference on Tuesday, June 2nd, in San Francisco.

Today's discussion contains forward-looking statements based on current beliefs, assumptions, and expectations. These statements speak only as of today and, as such, involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause actual results to differ materially. With that, I will hand the call over to Lisa.

Thank you, Matt, and good afternoon to all those listening in. Today, we

delivered an outstanding start to the year driven by accelerating demand for AI infrastructure across our portfolio.

Growth was broad-based with every segment increasing year-over-year, led by 57% data center revenue growth.

First quarter revenue increased 38% year-over-year to $10.3 billion. Earnings grew more than 40%, and free cash flow more than tripled to a record $2.6 billion, driven by significantly higher sales of EPYC CPUs, Instinct GPUs, and Ryzen processors.

These results mark a clear inflection in our growth trajectory and a structural shift in our business. Data center is now the primary driver of our revenue

and earnings growth. And as AI adoption scales, demand is increasing not only for accelerators, but also for the high performance CPUs that power and

orchestrate those workloads.

Turning to our segments, data center revenue increased 57% year-over-year to a record $5.8 billion, led by strong demand for our EPYC CPUs and Instinct GPUs.

In server, we delivered our fourth consecutive quarter of record server CPU revenue. Revenue increased more than 50% year-over-year, with sales to both cloud and enterprise customers each growing more than 50%.

Share gains accelerated year-over-year, reflecting the ramp of fifth-gen EPYC Turin CPUs and continued strength of fourth-gen EPYC processors across a wide range of workloads.

In cloud, AI was the primary driver of growth in the quarter as every major cloud provider expanded their EPYC footprint to support a broad range of AI workloads, from general purpose compute and data processing to head nodes for accelerators and emerging agentic applications.

EPYC-powered cloud instances increased nearly 50% year-over-year to more than 1,600, with instances optimized for virtually every enterprise workload and expanded availability across the largest global cloud providers.

In enterprise, demand accelerated with record revenue and record sell-through in the quarter.

We expanded our customer base with new wins across financial services, healthcare, industrial, and digital infrastructure companies while also building momentum with mid-market and

SMB customers.

We are well positioned to continue gaining share as more enterprises standardize on EPYC across on-prem and hybrid environments based on our leadership performance and TCO.

Looking ahead, our sixth-gen EPYC Venice processor, built on our Zen 6 architecture and 2nm process technology, is designed to extend our leadership across cloud, enterprise, and AI workloads.

The Venice family spans a broad set of CPUs optimized for throughput, performance per watt, and performance per dollar, including Verano, our first EPYC CPU purpose-built for AI infrastructure.

Across the portfolio, Venice widens our competitive advantage, delivering substantially higher performance per socket and per watt versus competitive x86 offerings and more than 2x throughput per socket versus leading Arm-based AI solutions.

Customer demand is very strong, with more customers validating and ramping platforms at this stage than with any prior EPYC generation, and we remain on track to launch Venice later this year.

Looking more broadly, we are seeing a meaningful acceleration in customer demand driven by the rapid scaling of AI workloads across both cloud and enterprise.

Inferencing and agentic AI are increasing the need for server CPU compute, as these workloads require additional CPU processing for orchestration, data movement, and parallel execution, in addition to serving as the head nodes for GPUs and accelerators.

As a result, we are seeing both stronger near-term demand and deeper engagement with customers on long-term capacity planning.

At our financial analyst day in November, we outlined a server CPU market growing at approximately 18% annually over the next 3 to 5 years.

Based on the demand signals we are seeing today and the structural increase in CPU compute requirements driven by agentic AI, we now expect the server CPU TAM to grow at greater than 35% annually, reaching over $120 billion by 2030.

In response to this demand, we are working closely with our supply chain partners to meaningfully increase our wafer and backend capacities to support this growth.

As a result, we now expect server CPU revenue to grow by more than 70% year-over-year in the second quarter with robust growth continuing through

the second half of 2026 and into 2027 as we ramp our next generation Epic processors.

Now turning to our data center AI business, revenue grew by a significant double-digit percentage year-over-year as adoption of Instinct accelerates

across cloud, enterprise, sovereign, and supercomputing customers.

We're seeing strong momentum as customers move from pilots to large-scale production deployments, particularly in inference, where our leadership memory capacity and bandwidth

are key advantages.

This momentum is driving deeper long-term customer engagements, including large-scale multi-generation deployments.

A key example is our expanded strategic partnership with Meta to deploy up to 6 gigawatts of AMD Instinct GPUs spanning several product generations.

Our agreement includes a custom GPU accelerator based on our MI450 architecture co-designed to support Meta's next generation AI workloads.

Shipments are on track to begin in the second half of the year, leveraging our Helios rack-scale architecture, which integrates Instinct GPUs with EPYC Venice CPUs to deliver fully optimized, high performance AI infrastructure.

Together with our previously announced OpenAI partnership, these engagements position AMD as a core partner to the world's largest AI infrastructure builders, with deep co-engineering relationships and multi-year visibility into large-scale deployments.

More broadly, Instinct adoption continues to expand across AI native and enterprise customers for both training and inference workloads.

Existing partners are expanding Instinct across a broader set of workloads, while a growing number of new partners are deploying production AI workloads on Instinct, highlighting the maturity of our hardware and software stack.

On the software front, we continue to make strong progress with ROCm, improving performance and scalability and enabling customers to reach production faster.

In our latest MLPerf results, MI355X delivered strong competitive performance across the full suite, with leadership results in multiple categories.

We also expanded day zero support for the leading open models, including the latest Google Gemma 4 family, Qwen, Kimi, and others, enabling customers to deploy new models quickly with optimized performance.

To build on this momentum, we have significantly accelerated our ROCm development cadence through increased software investments and agent-based coding workflows, enabling faster performance improvements and more rapid deployment of new capabilities.

Looking ahead, customer pull for Helios is very strong, driven by our leadership performance, memory bandwidth, and scale-out capacity.

Helios development is progressing well, with strong execution across silicon, software, and systems. As we advance through key milestones, we have begun sampling MI450 series GPUs to lead customers, and we remain on track to ramp Helios production shipments in the second half of the year.

As we approach production, demand for MI450 series GPUs continues to strengthen, with lead customer forecasts now exceeding our initial plans and a growing number of new customers engaging on large-scale deployments, including additional multi-gigawatt opportunities.

With this expanded visibility, we have strong and increasing confidence in our ability to deliver tens of billions of dollars in annual data center AI revenue

in 2027 and to exceed our long-term growth target of greater than 80% in the coming years.

I look forward to sharing more on our next generation Instinct GPUs, EPYC processors, Helios rack-scale platform, and our growing customer engagements at our Advancing AI event in July.

Turning to our client and gaming segment, revenue increased 23% year-over-year to $3.6 billion.

In client, revenue grew 26% year-over-year to $2.9 billion, led by strong sales of our latest Ryzen processors and continued share gains across consumer and commercial markets.

In desktop, we strengthened our Ryzen lineup, including our latest X3D processors that deliver leadership performance across gaming, content

creation, and professional workloads.

We also introduced the Ryzen AI 400 series and Ryzen AI Pro 400 series desktop CPUs, extending our AI PC offerings across both consumer and commercial systems.

In mobile, we delivered strong growth driven by a richer product mix as Ryzen 400 mobile PC shipments ramped and commercial adoption increased.

Commercial was a key highlight in the quarter, with sell-through of Ryzen Pro PCs increasing more than 50% year-over-year as Dell, HP, and Lenovo broadened their AMD offerings.

We also closed new enterprise wins across large technology, financial services, healthcare, and aerospace customers.

Looking ahead, we expect demand for our Ryzen CPUs to remain solid in the second quarter. However, we are planning for second half PC shipments to be lower due to higher memory and component costs.

Against this backdrop, we still expect our client revenue to grow year-over-year and outperform the market, driven by the strength of our Ryzen portfolio and expanding commercial adoption.

In gaming, revenue increased 11% year-over-year to $720 million.

Semi-custom revenue declined year-over-year as expected at this stage of the console cycle, while engagements with customers on next generation platforms remain strong.

In graphics, revenue increased year-over-year, led by demand for our latest generation Radeon 9000 series GPUs.

We also strengthened our Radeon portfolio with updates to our FSR software that improve performance and visual quality across a broad set of gaming workloads.

Similar to the PC market, we believe that second half demand in gaming will be impacted by higher memory and component costs and we are planning the business accordingly.

Turning to our embedded segment, revenue increased 6% year-over-year to $873 million, driven by strength in test, measurement, and emulation; aerospace and defense; and communications, as well as increased adoption of our embedded x86 products.

Design win momentum grew by a double-digit percentage year-over-year with billions of dollars in new wins across markets reflecting the continued expansion of our embedded business from

a primarily FPGA focused portfolio to a broader set of adaptive embedded x86 and semi-custom solutions significantly expanding our TAM.

Our semi-custom engagements also expanded in the quarter as data center, communications, and other embedded customers leverage our broad IP portfolio and high performance expertise to build differentiated solutions.

In summary, our first quarter results mark a clear step up in our growth trajectory, with accelerating momentum across the business. Our client business continues to outperform the market, driven by Ryzen adoption and share gains, while embedded design momentum and demand are strengthening across our expanded adaptive and x86 portfolio.

At the same time, our data center business is inflecting, with strong demand for both EPYC and Instinct products driving significant growth.

While we are still in the early stages of the AI infrastructure cycle, the pace and scale of deployments we are seeing today reinforce both the magnitude and

durability of the opportunity ahead.

As inferencing and agentic AI deployments scale, they are fundamentally increasing compute requirements, driving both larger-scale accelerator deployments and significantly more CPU compute.

AMD is uniquely positioned to lead in this next phase of AI with leadership products across high performance server CPUs and AI accelerators and the ability

to optimize them together as fully integrated rack scale solutions.

We have a world-class supply chain and are making significant investments to expand capacity and execute at scale.

With the momentum we are seeing across the business and the expanding market opportunity, we see a clear path to exceed our long-term financial targets, including delivering more than $20 in EPS over the strategic time frame. Now, I will turn the call over to Jean to provide additional color on our first quarter results.

Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our first quarter financial results and then provide our current outlook for the second quarter of fiscal 2026.

We are pleased with our outstanding first quarter results delivering accelerated revenue growth and earnings expansion driven by strong execution and

operating leverage.

First quarter revenue was $10.3 billion, exceeding the high end of our guidance and growing 38% year-over-year, driven by strong growth in the data center and client and gaming segments and the return to growth in the embedded segment.

Revenue was flat sequentially with continued growth in the data center segment offset by seasonality in the client and gaming segment and embedded

segment.

Gross margin was 55%, up 170 basis points versus a year ago, driven by a favorable product mix, including a higher data center revenue contribution.

Operating expenses were $3.1 billion, an increase of 42% year-over-year, as we continue to invest in R&D to support our AI roadmap and long-term growth opportunities, as well as go-to-market activities.

As the business scales, operating income grew faster than top-line revenue. Operating income was $2.5 billion, representing a 25% operating margin.

Taxes, interest, and other items resulted in a net expense of approximately $275 million for the quarter. Diluted earnings per share was $1.37, up 43% year-over-year, underscoring the significant operating leverage in our model as we scale.

Now turning to our reportable segments, starting with the data center segment: revenue was a record $5.8 billion, up 57% year-over-year and 7% sequentially, driven by strong demand for EPYC processors and the continued ramp of Instinct GPUs.

Data center segment operating income was $1.6 billion, or 28% of revenue, compared to $932 million, or 25%, a year ago.

Client and gaming segment revenue was $3.6 billion, up 23% year-over-year. On a sequential basis, revenue was down 9%, consistent with seasonality.

The client business revenue was $2.9 billion, up 26% year-over-year, driven by strong demand for our latest Ryzen processors, favorable product mix, and continued share gains across consumer and commercial markets.

Sequentially, client revenue was down 7% due to seasonality.

The gaming business revenue was $720 million, up 11% year-over-year, primarily driven by higher demand for Radeon GPUs, partially offset by lower semi-custom revenue.

Sequentially, gaming revenue was down 15% consistent with our expectations.

In addition, as Lisa mentioned earlier, we expect second half demand in gaming to be impacted by higher memory and component costs. We now expect second half gaming revenue to decline more than 20% compared to the first half.

Client and gaming segment operating income was $575 million, or 16% of revenue, compared to $496 million, or 17%, a year ago.

Embedded segment revenue was $873 million, up 6% year-over-year, as demand strengthened across several end markets.

Sequentially, embedded revenue was seasonally down 8%.

Embedded segment operating income was $338 million, or 39% of revenue, compared to $328 million, or 40%, a year ago.

Turning to the balance sheet and cash flow: during the quarter, we generated $3 billion in cash from continuing operations and a record $2.6 billion in free cash flow, or 25% of revenue, demonstrating the cash-generating power of our business model.

Inventory was roughly flat at $8 billion.

At the end of the quarter, cash, cash equivalents, and short-term investments were $12.3 billion.

In the quarter, we repurchased 1.1 million shares and returned $221 million to shareholders. We ended the quarter with $9.2 billion in authorization remaining under our share repurchase program.

Now turning to our second quarter 2026 outlook.

We expect revenue to be approximately $11.2 billion, plus or minus $300 million.

At the midpoint of our guidance, revenue is expected to be up 46% year-over-year, driven by very strong growth in our data center segment, growth in our client and gaming segment, and double-digit growth in our embedded segment.

Sequentially, we expect revenue to be up approximately 9%, driven by double-digit growth in both our data center and embedded segments and modest growth in our client and gaming segment.

In addition, we expect second quarter non-GAAP gross margin to be approximately 56%, non-GAAP operating expenses to be approximately $3.3 billion, non-GAAP other income and expense to be a gain of approximately $60 million, and the non-GAAP effective tax rate to be 13%. The diluted share count is expected to be approximately 1.66 billion shares.

In closing, the first quarter of 2026 was an outstanding quarter for AMD, reflecting strong momentum across the business with accelerated revenue and earnings expansion. We are very well positioned to build on this momentum as we scale our data center business, expand margins, and drive continued earnings growth and long-term shareholder value creation. With that, I'll turn it back to Matt for the Q&A session.

Thank you, Jean. Operator, we're ready to start the Q&A session now. I would ask callers to limit themselves to one question and one brief follow-up, but please go ahead and poll for questions.

Thank you.

Thank you, Matt. We will now be conducting a question and answer session. If you would like to ask a question, please press star one on your telephone keypad. A confirmation tone will indicate that your line is in the queue. You may press star two if you'd like to remove a question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. We ask that you please limit yourself to one question and one follow-up. Thank you. One moment, please, while we poll for questions.

And the first question comes from the line of Joshua Buchalter with TD Cowen.

Please proceed with your question.

Hey guys, congrats on the results, and thanks for taking my question. I'm actually going to start with CPUs, which hasn't happened in a bit. It hasn't been that long since you announced the $60 billion server CPU TAM for 2030 at the analyst day, and it's very quickly doubled. Agentic AI has obviously gotten a lot of attention in recent months, but it would be helpful to hear your thoughts on how this TAM is inflecting and changing so meaningfully in such a short amount of time. And maybe you could also speak to your confidence in hitting that greater than 50% share target from the analyst day, as your x86 competitor seems to be improving its supply, and there also seems to be more momentum on the merchant and custom Arm CPU side. Thank you.

Yeah, sure, Josh, thanks for the question. First of all, when we think about the CPU TAM, we've always said that CPUs are a very critical part of data center infrastructure, and that's been where we've invested. We saw the first signs of, let's call it AI demand, really pulling CPU demand last year, and that was the reason we updated the TAM to roughly an 18% CAGR, or approximately $60 billion. And what we've seen is that all of the things we believed in terms of agentic AI and inferencing, and all the CPU compute that is required, are just happening, and happening at a much faster pace.

So over the last few months, as we've talked to our customers and seen how AI adoption is really unfolding, we're seeing significantly more CPU demand from really every major cloud provider as well as enterprise customers. The way that comes across is that as AI adoption scales, you need more inferencing; and as inferencing scales and you have more agents and agentic AI, they all require CPUs for all of the orchestration, the data processing, and these other tasks.

With that, we've looked at it both bottoms-up, in terms of talking to customers and having them give us longer-term forecasts, and by doing clear workload analysis. It's a very exciting time. It's exciting to see CPUs growing greater than 35%, to over $120 billion.

And when you think about AMD in that context, CPUs are critical for so many tasks, and you are seeing a lot more discussion about CPUs in the market. We actually view it in three categories: there is general purpose compute, there are the head nodes that support the AI accelerators, and then there are CPUs just for all of the agentic AI work. To do all of this, our belief is that you need a broad portfolio of CPUs, and that is really what we have been focused on building: not just one type, but a broader set that is throughput optimized, power optimized, cost optimized, and AI infrastructure optimized, as we've done in the Venice family.

So when you put all that together, we're very excited about the larger TAM, and we're also very happy with the traction we're getting. We're clearly seeing significant share gains as we go into our Turin portfolio, which has ramped very nicely. Venice is extremely well positioned, and we're working with customers right now on what comes beyond Venice and what we're doing in those architectures. So we feel really good about the market as well as our opportunity to grow to a greater than 50% share of that market.

Okay, thank you for all the color there. Then I want to ask about the Instinct side. In the press release, you mentioned that MI450 and Helios engagements are strengthening, with customer forecasts exceeding expectations and the pipeline growing. You certainly have the big public OpenAI and Meta deals. Was this comment referring to those engagements upsizing versus the announced initial deployments, or was it other customers? And maybe, is the increase on the MI450 timeline, or is it MI500 and beyond? Thank you.

Sure, Josh. We are very excited about MI450 and Helios, and we're seeing significant customer interest in those products as well. We have certainly talked about our large partnerships with OpenAI and Meta, and those are going really well; we appreciate the deep co-engineering that has gone on there. When we look at the totality, based on our current visibility into how those forecasts are coming in across all of our customers, we're actually seeing it come in above the initial plans we had for 2027. And I think the encouraging thing is we're seeing a breadth of customers who are now very interested in deploying the MI450 series at significant scale, and those are for both training and inference workloads, although the largest deployments are for inference.

Based on all of that, and the scale of new customer interest, we see a path to exceed our original target of a greater than 80% CAGR, and these are really in the 2027 time frame. Obviously, when we talk to customers, we're talking to them about MI355, where we're seeing a lot of good traction; about MI450 and Helios for significant large-scale deployments; and then many customers are also very engaged with us on the MI500 series and all of the opportunities there. So we feel like we're making very, very good progress, and the key is that we're continuing to broaden and widen the scope of both customers and workloads.

And the next question comes from the line of Tom O'Malley with Barclays. Please proceed with your question.

Hey guys, thanks for taking my question. Lisa, if I have your numbers correct here, in the March quarter it sounds like the server processor side, the CPU side, grew over 50%. If you take that at its word, it looks like maybe the data center GPU side actually grew in Q1, so I was curious about the cadence of this year. Previously, you had talked about a really back-half-weighted, and more so Q4-weighted, year. Could you talk about whether that's changed at all?

And then the second part of the question: as you go into 2027, clearly you're pointing out a lot of upside from the larger customers, and then the ecosystem around them with new customers as well. But when you look at supply, that's a major issue in the ecosystem today. Could you talk about where you're concerned on supply, if you are, and any gating factors as you look into next year, whether that be power, data center buildouts, et cetera? Or do you feel really good about the ability to grow? Thank you very much.

Yeah, okay. A lot of pieces to that question, Tom, so let me try to get through it. First of all, on the data center segment in Q1, the server business was up greater than 50% year-over-year, as we said in the prepared remarks. Data center AI was actually down modestly because of the China transition: we had more China revenue sequentially in Q4, and it was less in Q1. But as we go forward, I think we see strong growth in both segments. We guided data center up double digits sequentially in Q2, and that's double digits in both server and data center AI.

As for the progression going forward: first, on the server CPU side, we talked about growing over 70% year-over-year in Q2, with that continuing into the second half of the year. And on the data center AI side, we will be ramping Helios in the second half of the year, so let's call it starting with initial volume in Q3, a significant ramp in Q4, and then continuing to ramp in Q1. So that's a little bit of the progression.

Then, to your questions about customers and supply: I think I answered the customer question for Josh. We have very good visibility now into the deployments that are on track for 2027, and when I say good visibility, it's visibility down to which data centers the GPUs are going to be installed in. That's necessary given all of the constraints out there. We feel that there is tightness in the supply chain, and there is certainly tightness in data center buildouts, but we are confident in our ability to supply to the levels of growth we're talking about, and to exceed those levels. We're also working very closely with our customers and our partners to ensure we have good visibility into data center power, and there is much more power coming online in 2027. So with all those things in mind, again, there are lots of things to manage; it's a complex ramp, but we're very pleased with the progress on the ramp.

All right, Tom, I think you took a shotgun approach to the multiple questions there. So, operator, maybe we can go on to the next caller, please. Thank you.

Thank you. The next question comes from the line of Ross Seymour with Deutsche Bank. Please proceed with your question.

Hi, thanks for letting me ask a couple questions. The first one is just on the EPYC competition. Lisa, you went through some of the statistics of you versus x86 and you versus Arm, but I wanted to dive a little bit deeper into that. How do you see AMD truly differentiating, especially when you see some of your competition signing up the same customers from the Arm side, and the x86 competition having more supply? So I just wanted to see if you could dig a little bit deeper into how you think the market share is going to trend over time.

Sure, Ross. So look, we're very engaged with every major hyperscaler in terms of understanding their needs on the CPU side. We have very much wanted to, let's call it, optimize our CPU roadmap for the various workloads. I think we were early to call this AI component of CPUs, and so we've been optimizing very closely with those customers.

The way to think about this, Ross, is that you're going to need a broad portfolio of CPUs. Not all CPUs are the same. Frankly, you're going to need different CPUs depending on whether you're talking about general-purpose operations, head nodes, or agentic AI tasks. They're going to be optimized differently, and we thought through that, and we are absolutely optimizing across the various workloads. So from a competitive standpoint, we feel very good about where things are, and from a deep-relationship standpoint with the customer set, we feel very good about that. From our current standpoint, the depth of our roadmap just expands as we go forward. And you shouldn't think about it as people doing one or the other; I think you're going to see people actually use both x86 and Arm for many of the large hyperscalers. Even for those who are developing their own, they're still buying lots of CPUs in the merchant market, for the reason that I just stated, which is that you need different CPUs for the different types of workloads, and there's very high demand at the moment.

Thanks for that. I guess for my follow-up, maybe more for Jean, on the gross margin side of things. It's nice to see the gross margin popping up in the second-quarter guide, but I just wanted to get some trends longer term, maybe not specific numbers, but how should we think about when Helios and the Instinct side really ramp in the fourth quarter and more so next year? I could see some offsets, with that carrying a below-corporate-average gross margin, but then everything that Lisa talked about with the EPYC side of things being significantly stronger might be more of an offset than it was in the past. So just walk us through the puts and takes of that, and maybe directionally where you think gross margin goes over the next year or two.

Yeah, Ross, thanks for the question. We are very pleased with how our gross margin is trending. It came in really strong in Q1, and also, as you mentioned, we guided Q2 higher, at 56%. As we think about the second half quarter over quarter, there are some puts and takes. From a tailwind perspective, we actually have multiple tailwinds that are really going to help our gross margin. First is server CPU: Lisa talked about server CPU expected to grow more than 70% in Q2 and continuing to be really strong in the second half, and that really helps our gross margin. Secondly, in the second half, gaming is actually going to come down, and our client business continues to go up the stack, so from the client and gaming segments the gross margin mix is actually going to be very helpful. Embedded is also very accretive to our gross margin, and its momentum is continuing in the second half. So we are really pleased with all the tailwinds we have.

On the other side, MI450 will start to ramp in Q3 and then ramp significantly in Q4, and that is below corporate average. So that will have different puts and takes in Q4 on the gross margin side. But when we sit here and look at all the positive trends we have to really offset some of the gross margin dilution from the MI450 side, we actually feel really good about the setup of the gross margin for 2026 and into next year. Some of the tailwinds I talked about will actually continue; that's why we feel confident about continuing to drive the gross margin. At our financial analyst day, we outlined a long-term gross margin in the range of 55% to 58%, and we think for the first year we're making good progress there.
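The mix dynamics described here come down to a weighted-average calculation: a fast-ramping segment with below-corporate-average margin dilutes the blend, while richer mix elsewhere offsets it. The sketch below illustrates that arithmetic; the segment revenues and margins are hypothetical placeholders, not AMD figures.

```python
# Illustrative blended gross margin: hypothetical segment mix, not AMD data.

def blended_gross_margin(segments):
    """segments: list of (revenue, gross_margin) tuples; returns blended margin."""
    total_rev = sum(rev for rev, _ in segments)
    total_gp = sum(rev * gm for rev, gm in segments)  # gross profit dollars
    return total_gp / total_rev

# Hypothetical quarters: a richer-margin CPU segment plus an
# accelerator segment that ramps faster but carries a lower margin.
before = [(4.0, 0.60), (1.0, 0.50)]   # ($B revenue, gross margin)
after  = [(5.0, 0.60), (3.0, 0.50)]   # accelerator revenue ramps faster

print(f"before ramp: {blended_gross_margin(before):.1%}")
print(f"after ramp:  {blended_gross_margin(after):.1%}")
```

Even with both segments holding their own margins flat, the blend drifts down as the lower-margin segment grows its share, which is the dilution-versus-offset tension the question is probing.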

And the next question comes from the line of Timothy Arcuri with UBS. Please proceed with your question.

Thanks a lot. I wanted to ask about units versus ASP for server CPU. If I look at the June guidance, it sort of implies up 25% to 30% for server CPU. And, Lisa, you had mentioned the second half of the year; it sort of implies that server CPU could grow like 70%, maybe a little more, this year. So I guess my question is, how much of that growth, either in June or for the year, is units versus pricing? Are these price increases mostly captured in June, or is that also helping in the back half of the year?

Yeah. Tim, the way I would say it is, let me bring you back to Q1 for a moment. If you look at our significant growth in the server business, although we were up on a year-over-year basis for both ASPs and units, it was actually much more unit driven. So we are shipping more CPUs, across not just the high-end Turin family; we're actually shipping a lot of Genoa, sort of the Zen 4 family, as well. As we go forward, for Q2 and into the second half, we are guiding for a significant amount of growth. There's a little bit of ASP in there, but the way we're thinking about pricing, to be fair, is that we are in a range where the supply chain is tight, and so there are some inflationary pressures. Costs have gone up a bit, and we are sharing some of that with our customers, but we are also being very thoughtful. Look, we're playing for the long term, and that means our goal is to ship more units, and a lot more units. So from that standpoint, you should imagine that the majority of the growth is unit driven, and the ASPs are really just to help cover some of the inflationary pressures.

And just to add to what Lisa said, our ASP is increasing because of the mix: with each new generation, the core counts are increasing, and that actually drives the ASP up.
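The unit-versus-ASP split being discussed is a simple multiplicative decomposition: revenue growth is (1 + unit growth) times (1 + ASP growth), minus one. The indexed figures below are hypothetical, chosen only to show how mostly-unit-driven growth combines with a modest mix-driven ASP lift.

```python
# Hypothetical decomposition: revenue growth = unit growth x ASP growth.

def growth(new, old):
    """Fractional growth rate from old to new."""
    return new / old - 1

units_old, units_new = 1.00, 1.55   # indexed unit volume, +55% (hypothetical)
asp_old, asp_new = 1.00, 1.10       # indexed ASP, +10% from mix/core counts

rev_growth = growth(units_new * asp_new, units_old * asp_old)
print(f"unit growth:    {growth(units_new, units_old):.0%}")
print(f"ASP growth:     {growth(asp_new, asp_old):.0%}")
print(f"revenue growth: {rev_growth:.1%}")  # (1+u)*(1+a) - 1
```

Note the cross term: 55% unit growth and 10% ASP growth compound to about 70% revenue growth, not 65%, which is how a "majority unit driven" ramp can still land at a 70%-class revenue number.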

Thanks a lot for that. And then, I guess, Lisa, there are a lot of new architectures being used, from multi-tenancy all the way to low latency, and your competitor has talked about the low-latency part of the market being 20%-plus, and they of course added to their portfolio there. Can you talk about how you see that part of the market? Obviously, you have enough business right now that you probably don't need to worry about that for now, but can you talk about that? Thanks.

Yeah, sure. So look, I think what we're seeing is what we expected, in the sense that as AI adoption continues, and the volumes continue to go up and the overall market goes up, you are going to see, let's call it, different compute architectures being used, because you want to get more cost optimization from that. So we expect that even in that situation, obviously, the vast majority of the TAM is still going to be, let's call it, data center GPUs as the primary accelerator, but you may choose to do optimization around inference, around low latency, around certain parts of the stack, whether it's decode versus prefill. I think that's very natural. The way we look at it is, we're developing a full compute portfolio: that's CPUs, that's GPUs, that's the ability to connect to all accelerators, as well as the ability to do customization for certain customers, and we've also talked about our semi-custom capabilities. With all of those compute capabilities in our tool chest, I think we will be able to address very effectively a large portion of this market, including the low-latency portion of the market. So from our standpoint, this is kind of a natural evolution. Now, how fast it goes depends a bit on the technology, in terms of what share of the TAM these things become, but we should expect that there will be different variants, and we're well prepared to address those different variants.

Thank you. And the next question comes from the line of Vivek Arya with Bank of America. Please proceed with your question.

Thanks for taking my question. Lisa, do you think agentic CPU growth is incremental, or is it coming at the expense of GPUs, conceptually? So if you're raising the server CPU TAM, are you also implicitly raising the AI TAM? I'm interested in your perspective on what you thought server CPU was as a percentage of the AI TAM before, and what it is now with this $120 billion number.

Sure. So the way we're thinking about it is that it's largely additive to the TAM. You should think about it this way: we need all of the accelerators to run these foundational models, and then, as these agents do work, they spawn more CPU tasks. So I would say largely incremental. The key, and what we're seeing in these deployments, is to make sure the ratio of CPUs to GPUs is the right ratio. So if you're installing a gigawatt of compute, the percentage of CPU as part of that gigawatt will increase. Some of the conversation in the industry has been about CPU-to-GPU ratios, and it's very hard to call exactly, but we certainly see movement from where, in the past, the CPU was primarily just a host node, in like a 1:4 or 1:8 configuration, now changing and getting closer to a 1:1 configuration. You can even imagine, if you get lots and lots of agents, that you could have more CPUs than GPUs. So, all in all, to answer your question, I think it's largely additive to the TAM, and the key is that everyone is now planning and thinking about CPUs at the same time that they're thinking about their accelerator deployments, which is a good thing.
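As a rough sketch of why that ratio shift matters for the CPU TAM, the toy calculation below compares the CPU counts implied by a fixed GPU deployment at the classic 1:8 host-node attach ratio versus the 1:1 ratio described above. The deployment size is a hypothetical number, chosen only to make the arithmetic concrete.

```python
# Toy arithmetic: CPUs implied by a CPU:GPU attach-ratio shift.
# All quantities are hypothetical illustrations, not disclosed figures.

def cpus_needed(gpus, cpus_per_gpu):
    """CPU count implied by a given CPU:GPU attach ratio."""
    return int(gpus * cpus_per_gpu)

gpus = 100_000  # GPUs in a hypothetical large deployment

host_only = cpus_needed(gpus, 1 / 8)  # legacy 1:8 head-node configuration
agentic   = cpus_needed(gpus, 1.0)    # 1:1 configuration for agentic workloads

print(f"1:8 ratio -> {host_only:,} CPUs")
print(f"1:1 ratio -> {agentic:,} CPUs ({agentic // host_only}x)")
```

Under these assumptions, the same accelerator footprint pulls in eight times as many CPUs, which is the mechanism by which agentic workloads can expand the server CPU TAM without displacing GPU spend.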

Right. And for my follow-up, Lisa, we continue to see memory prices go up. I imagine that is both a cost inflation for you, but perhaps an opportunity to take price as well. I'm curious how that dynamic is playing out for AMD, and especially for your customers, because a greater part of their capex increase is really this memory inflation tax that they have to pay. So how's this dynamic playing out for you and for your customers? And the part that I'm really interested in is, have you secured enough supply, versus your other, larger competitor, who has disclosed a lot of prepayments and other things? So, how's this memory inflation dynamic playing out, and are you adequately supplied from that perspective?

Sure. So, Vivek, let me answer the second one first. From a supply standpoint, we are very happy with our partnerships with the memory vendors, and we have secured enough supply to certainly meet and exceed our targets. It is a tight memory environment, let me be clear, but I think we have very deep partnerships with the memory providers.

And then back to your comments on the inflationary pressures. Look, this is something that everyone in the industry is working with in a time of tight supply. We are seeing some cost increases on the memory side, and I think we are all working through that. The way we're seeing it unfold in the market is that on the data center side, because of the demand for AI compute, people are largely focused on supply and ensuring that supply assurance is there. The corollary of that, the larger impact that we're watching, is the impact on the consumer markets. As we said in the prepared remarks, we are expecting that there could be some demand impact, as a result of the memory price increases, on things like the PC business in the second half of the year, as well as the gaming business. So we're taking that into account in our overall model, and we continue to work closely with the memory providers as well as our customers to ensure that every time we ship a CPU or GPU, it's paired with the memory on the other side, so that we don't have compute that is not being deployed.

And the next question comes from the line of Aaron Rakers with Wells Fargo. Please proceed with your question.

Yeah, thanks for taking the question, and congrats on the results. I want to stick on the topic of CPU to GPU. As we think about the chart that you had outlined at the analyst day, it was obviously broken out between traditional CPUs and then the AI bucket on top of that. Obviously, I think the new forecast has a lot to do with the AI CPU expansion. I'm just curious: when you're doing a CPU in an AI workload, is there structurally a different level of ASP tied to that kind of CPU, optimized for AI, relative to a general-purpose server CPU? Any color or help on that would be useful.

Sure, Aaron. So let me start with the broader question regarding the way we think about the CPU TAM. Again, think about it as three categories. There is a traditional, let's call it general-purpose, CPU TAM that is increasing, but let's call it increasing at a low rate, maybe low double digits. Then you have your AI head node, which is connecting to accelerators, which is also growing, but it's smaller. And then the largest piece of the growth is this agentic AI piece, which we think is really stemming from all of the agentic processes. I don't have a number that I can tell you in terms of relative ASPs, because it really depends on the workload that is being run. What we see going forward is that as core counts increase, obviously we will see ASPs increase, and that's the direction we're going in as we go forward. But the main point is that the largest portion of this TAM increase is the CPUs that are serving these agentic AI workloads.

Yep.

And as a quick follow-up, I'm curious how you characterize the competitive landscape as we see some of the Arm introductions in the market. Just curious about your views on the competitive landscape in server CPUs. Thank you.

Yeah, Aaron, the best way to think about the server CPU landscape is, again, number one, everyone is talking about CPUs. So that tells you how critical they are for the AI infrastructure, and I think that's a good thing. We feel like we're very well positioned. No question, Arm is a good architecture; it has a place in the data center market. We view it as more, you know, point products relative to a portfolio, where, from an AMD standpoint, we built this broad portfolio of CPUs going forward, which you're going to need for all of these different workloads. In the Venice timeframe, we have added an AI-optimized CPU, with Verano, in addition to our throughput-optimized and cost-optimized points. So from that standpoint, I think we're very competitive. We're continuing to innovate on architecture; we're continuing to innovate on both advanced packaging as well as all of the architectural pieces. So we feel very well positioned going forward. And the key is that the TAM is much, much larger than anybody thought, and so there's a lot of opportunity for different products to be successful in this area.

And the next question comes from the line of CJ Muse with Cantor Fitzgerald. Please proceed with your question.

Yeah, good afternoon. Thank you for taking the question. I guess my first question was hoping to dig a bit more into client for all of calendar '26. You talked about expected growth, but I would love to hear your thoughts around seasonality in the second half. And I'm assuming that you are repurposing certain logic tiles from client over to the data center, and I would love to better understand what the implications are for ASPs on the client side looking into the second half.

Sure. So, CJ, I think the client business has performed really well for us. If we look at Q1, it actually was a little bit stronger than what we expected. We are seeing some mix shift in the client business. The mix that we're seeing is that the mobile, or notebook, business is actually growing, especially the premium portion. We're making very good progress in the commercial PC arena with our AI PCs. We did see desktop a little bit softer, just given desktop is a more consumer-focused market, and so that market is more impacted by some of the memory pricing and the component price increases. When we look at the full year, our commentary is that we are planning for some demand impact in the second half due to the memory pricing, but even in that environment, what we're focused on is ensuring that we continue to make good progress on the commercial business and continue to focus on the premium segments of the market. So we believe that we will continue to grow on a year-over-year basis for the client business compared to last year. And as it relates to ASPs, again, it's a little bit of puts and takes between notebook and desktop, but overall I think we're feeling good about our opportunity to outperform the market in client going forward.

Helpful, and that was perfect. Thank you. And then, I guess, a question on

Instinct gross margins. With compute essentially sold out, and obviously you're building a business, so one has to be, I guess, conservative on that front, but I would think that, outside of passing through HBM, given the very tight wafer environment, this would be a place where you could look to drive your Instinct margins closer to your corporate average. How are you thinking about that, either today or in the coming one, two, three years?

Hi, CJ. At this stage, we really focus on driving top-line revenue growth for our Instinct family of products. On the gross margin side, you're absolutely right: the demand for compute is tremendous. We actually are very strategic in how we think about working with the customers, and of course different customers also carry different gross margins. I think over time, once we start to ramp our revenue, we will have a lot of opportunities to improve gross margin, both on the ASP side but, more importantly, on the cost side as we scale our business.

Thank you. And the next question comes from the line of Stacy Rasgon with Bernstein Research. Please proceed with your question.

Hi guys, thanks for taking my questions. For the first one, I just wanted to make sure I have the near-term AI GPU trajectory correct. I know you said it was down sequentially in Q1 because of China; you had like $390 million of China revenue in there in Q4. Did the AI business in Q1 actually grow sequentially ex-China? Because it doesn't feel like it, given the server outlook. And then I look at what's maybe suggested for Q2: are you thinking GPUs and servers grow at a similar rate sequentially? Because that would probably put GPUs in Q2 below the overall level you were at in Q4, which seems low to me. I'm just trying to tie all that out. Could you help me with that, please?

Yeah. So, Stacy, I appreciate the question. I think if you look at Q1, we did mention data center AI was down modestly sequentially, primarily due to lower China revenue in the quarter. On your second question, regarding Q2, you're right: both data center AI and server will grow double digits in Q2.

Yeah, but you didn't answer my question. In Q1, did it grow sequentially ex the China step-down, I guess, is what I'm asking.

The China revenue for our business in Q1 is not material.

So I will repeat what I just said: the China revenue in Q1 is not material.

Okay. So you don't want to... okay. Second question, OPEX. I'm not bothered by the spending, but it sort of continues to blow past the targets. You kind of give an OPEX guide, and then it blows through it, and then you guide higher. So again, I'm not bothered by the spending; I'm just wondering why OPEX has been so hard to forecast, and how we should think about OPEX through the rest of the year given revenue growth.

Yeah, thanks, Stacy, for that question. I think the most important thing is, given the tremendous market opportunities we have, we actually are investing aggressively. If you look at the past several quarters, we have really been leaning in on investing, but all the AI investment is driving the revenue momentum. So if you look at Q1, revenue was up 38%, and then for Q2 we guided up 46%; the investments are driving the revenue momentum. Some of the OPEX increase, of course, is tied to revenue: when you look at our beat on the revenue side versus our guidance, we did beat on the revenue side, so that has a little bit of impact. But also, at the same time, we have a lot of customer engagement with our data center AI business, and we do continue to make sure we have the resources to support our different customers.

Thank you very much. Operator, I think we have time for one more caller on the call. Thank you.

Thank you. Our final question comes from the line of Blaine Curtis with Jefferies. Please proceed with your question.

Hey, well, thanks for squeezing me in. Lisa, I just want to go back to the supply side. There were a lot of stories about your competitor restarting 7-nanometer. I'm just kind of curious, as you look at that landscape, which is quite robust through the decade: do you think that the older products will stay around longer? And is there a way to think about the implications for gross margin? If it's such a strong market, is that actually a negative?

Actually, Blaine, I don't think we see the older products hanging around longer. It might be company-specific, but in our case, we actually see, first of all, that Turin is very strong; we actually crossed over 50% of our revenue being Turin this quarter. Genoa is very strong. We're still shipping some Milan, but I would say that's come down over time. So, in general, people want to use the newer products, because they're just more efficient in every aspect: from a performance, cost-structure, and power standpoint. So that's what we're seeing. By the way, I should also mention that, in addition to what we're seeing in the cloud segment of server, we're seeing a really nice, strong pickup in enterprise, and there as well, we're seeing our newer products do very well. So from our standpoint, it is all about ensuring that we ship what the customer needs, and in this case, it typically is our newer products. We expect that to continue as we transition to Venice later this year; we expect Turin and Genoa to continue shipping, but there's a lot of goodness in going to the new products.

And on the supply chain side, I know there's been a lot of discussion about how tight the supply chain is. The supply chain is tight; I would definitely say that. But I also think this is an area where we excel. We have very deep relationships across the supply chain, on the wafer side and on the back-end capacity side, and we are seeing meaningful improvements there. As our customers come to us with more demand, we are getting more supply. And the good thing about this is, we're now talking about '27 CPU demand, we're talking about '28 CPU demand, and that allows us to plan much better as we go forward.

Thanks. Just a quick one for Jean. I was curious to follow up on Stacy's question on OPEX. I guess I was a little surprised that SG&A is kind of outpacing R&D. I was just kind of curious: is that startup costs? Because in a strong market, you wouldn't think you would have to discount or have a big sales effort. So I'm just kind of curious, for the year, how you think about R&D growth versus SG&A.

I think for the year, you should expect us to grow R&D much faster than SG&A. In the past few quarters, we have been really building our go-to-market machine, and we have been investing more on the sales and marketing side, but going forward, you should expect that, year over year, R&D will grow faster than SG&A.

Yeah, and if I can just add to that, Blaine: the places that we invest, Jean's absolutely right, we're investing in R&D ahead of sales and marketing, but the places that we're investing in sales and marketing are paying off. The investments are going into enterprise servers, they're going into commercial PCs, they're going into the mid-market and small and medium business. These are places where AMD traditionally didn't invest. But now that we have a much broader portfolio, both on the server CPU side and on the commercial PC side, it makes sense for us to invest, because that's sort of the very best part of those markets.

All right, thank you very much, everybody, for joining and for your interest in AMD. John, you can go ahead and close the call now. Thanks.

Thank you. And ladies and gentlemen, that does conclude the question-and-answer session, and that also concludes today's teleconference. We thank you for your participation. Please disconnect your lines and have a wonderful day.
