
AWS re:Invent 2025 - Accelerate Telco Transformation: AT&T's AI-Powered Migration at Scale (IND201)

By AWS Events

Summary

Topics Covered

  • 70% Telco IT Still Legacy in 2025
  • Agentic AI Enables Autonomous Workflows
  • Mainframe Knowledge Vanishing Fast
  • AI Generates Missing Blueprints Instantly
  • Reimagine Discards Obsolete Legacy Code

Full Transcript

OK. Hello,

everybody. Good afternoon. Welcome to our session.

Very happy to have you here.

Today, we're gonna tackle one of the biggest challenges that we have in telecommunications.

How do you modernize an infrastructure that has been running for decades? We're talking

about systems with millions of lines of code. They're running, processing maybe, you know, billions of transactions per day, and serving millions of customers.

All of this while maintaining business continuity.

So, this has become a big puzzle throughout the years. And until now, uh, traditional migration couldn't really handle it, because it didn't have the ability to run at the speed and the scale that is required to run these transformations.

But now, we're reaching a different era.

We have agentic AI that allows us to unlock these capabilities, and our session is going to be talking about this.

My name is Efrat Ni Berger. I'm a principal solution architect at AWS handling AWS for telecom.

I'm gonna be very, very happy to have on stage later on Rama Raghavan from AT&T. He's leading

enterprise architecture and hybrid cloud platform engineering. And also my peer, uh, Sanjay Agarwal, a senior solutions architect from AWS, who is gonna be talking to you as well.

They're all gonna walk you through the latest and greatest services that we have for you.

What's the agenda?

So, we're starting by understanding why telecom transformation is so challenging. What's unique about these transformations?

Then we're gonna walk with you about the AWS uh generative AI capabilities, our latest and greatest services that allow us to run transformation differently than the way we've done before.

Um, we're gonna have Rama on stage talking to us about their mainframe modernization project, a very interesting project that is running right now. I want to close with a little bit of takeaways: how do you start your own journey?

So let's start with the baseline. What

do we have today?

70% of telco IT is still running on-prem. We're talking about 2025.

Interestingly enough, 70% of it, according to McKinsey, is actually systems that are more than 20 years old.

We're not talking about the back office long tail systems. We're talking about the billing systems and the customer management systems that drive your key organizations that are in the critical path.

So, how do we actually modernize these?

And then, one of the statistics that we know is that it takes around 1.5 years on average to do a migration, but we know from practice, especially for some of the telco workloads, it actually can take even longer. So we're talking about multiple years to run them.

So, what makes telecommunication uniquely challenging?

We're going to talk about multiple dimensions and each and every one of them amplifies the other.

Starting with mission critical.

Many of these systems, think about, like, online charging systems, have very rigid availability requirements.

Talking about five nines, meaning that I can have about five minutes of downtime per year.

We cannot afford to have any kind of impact while we're doing any modernization.
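As a quick arithmetic check on that figure (a sketch of my own, not from the talk): "five nines" means 99.999% uptime, which works out to roughly five minutes of allowed downtime per year.

```python
# Allowed downtime per year for a given availability target.
# "Five nines" = 99.999% uptime.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year for an availability fraction."""
    return MINUTES_PER_YEAR * (1 - availability)

print(round(downtime_minutes(0.99999), 2))  # five nines -> ~5.26 minutes/year
print(round(downtime_minutes(0.999), 0))    # three nines -> ~526 minutes/year
```

Compared to three nines, that is a hundred times less room for error during a cutover.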

Same about data volume and complexities.

It's not just about the traffic that goes through the system.

It's also about the business logic. Now, we're

talking about business logic that has been added over 20-30 years.

Just think about it: um, international roaming, uh, enterprise hierarchies, uh, family plans. All of this logic has been added throughout the system and needs to be maintained through transformations.

Going into regulatory compliance, whoever is in telco knows that so many compliance requirements need to be met for customer data.

We can talk about payment policies. We can talk about, in Europe, data privacy policies.

So all of these, if we're doing a transformation, we have to ensure that we are complying with them throughout the modernization.

And also about integration complexity.

Everybody knows the spaghetti of the ecosystem. You have an order management system.

It's like dozens of applications surrounding it. Lastly, this is what we're gonna talk about now: the legacy, uh, technology debt.

So according to Forrester, uh, research, we are talking about 20% of the IT spend actually going to maintaining these legacy systems. And one of the things that we want to do is invest in new capabilities in our system.

So, each and every one of them is challenging, but together, they're actually creating a real challenge for us.

This is also about how much time it takes to run a traditional migration as we know it right now.

Like, think about the assessment. One of the biggest problems that we have is that we don't have enough knowledge.

You actually need to look at the code itself to get the business logic behind it.

The same thing is happening here, so it's kind of like reverse engineering the code in order to understand it.

Same thing happens for documentation.

You don't have a document, so the documentation of the business logic is within the code itself, which takes a lot of time to actually understand the flows.

Second is about the actual work for code transformation.

One of the things that we're seeing is that if you need to do any kind of, like, COBOL to Java, this is something that can take a lot of hard manual work. And

again, I'm talking about maintaining all the complexities of the business logic that has been added into the code in the 20-plus years that we've been running it.

Third one is about migration testing. Just imagine how many edge use cases have been added throughout the years.

Think about it, leap years, um, time zones, all of these, we need to test it. If

we're not testing it in the right way and the testing is not done in a comprehensive way, then this can impact customer production when we switch the systems. What is our goal?

Our goal and our vision is that we would like to accelerate, accelerate and automate 70% of this transformation.

Actually allowing us to move away from this legacy, but again, without any additional risk um to the system.

OK. So, it comes to the question of, what is agentic AI? How is it even related?

Why is everybody talking about it? And how

is it even connected into um telecom IT transformations?

That's what we're gonna cover in the next few slides.

When we look at agentic AI, or even the evolution toward it, it actually goes through three stages.

It starts with the generative AI assistance.

It's kind of like what you used to have. It's

a prompt, you ask a question, you get an answer. Very simplified, but you need to drive all the interaction yourself through the prompts.

Now, what happens next is that we evolved into generative AI agents.

They are actually now much more autonomous.

They are able to reason and act on your behalf. So you can actually give them specifications, and they are able to perform, but they still need well-defined goals that they need to accomplish. They still

need some human interaction in order to perform what they do.

Now we're reaching an era that allows us to really hit these bigger transformation because we're hitting agentic AI. These are actual

AI. These are actual bigger workflows. We're talking about a

bigger workflows. We're talking about a multi-agent system that that can be fully autonomous and allow you to really execute complex processes. And then the whole

processes. And then the whole idea is that when you have these capabilities, then you're able to run from kind of like delivering something that you need to run all the interaction into something that the process itself can be executed in a full transformative way.

To understand how it goes, I'm gonna go one level deeper into what an AI agent is.

So, this is kind of like a cycle that allows you to understand how it behaves. You have 4 steps: observe, reason, act, and in some cases reflect on how it's being done. Assume that I give it as an input a goal, a problem.

I have a problem, I need to do something.

The first thing it's doing is observe.

What's the current situation?

What kind of information do I have about it?

Then the other, the next step is about reason.

What kind of steps do I need to take?

What kind of an approach do I need to do in order to resolve the problem that has been given.

Um, then it acts. Now, act allows it to actually take control, to call APIs, to use tools.

We have an ability to use, um, to use external source of information, really orchestrate change.

And then reflect, how did it go? What did

I, like, can I do it any better?

Now, this is a cycle, meaning that the agent is always learning.

So when I have an AI agent and it's specializing in something, there's an ever-growing learning capability that goes through it.
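The observe-reason-act-reflect cycle described here can be sketched as a minimal loop. This is an illustrative sketch only; the class and method names are my own, not an AWS API.

```python
class Agent:
    """Toy agent running the observe -> reason -> act -> reflect cycle."""

    def __init__(self):
        self.lessons = []  # learning accumulated across runs

    def observe(self, goal, context):
        # Gather the current situation and any information we have about it.
        return {"goal": goal, "facts": context, "lessons": list(self.lessons)}

    def reason(self, observation):
        # Decide which steps to take to resolve the problem that was given.
        return [f"step: address {observation['goal']}"]

    def act(self, plan):
        # Call APIs / use tools; here we just pretend each step succeeded.
        return [(step, "ok") for step in plan]

    def reflect(self, results):
        # Record how it went, so the agent improves over time.
        self.lessons.append(f"{len(results)} step(s) completed")

    def run(self, goal, context):
        observation = self.observe(goal, context)
        plan = self.reason(observation)
        results = self.act(plan)
        self.reflect(results)
        return results
```

Each call to `run` leaves behind a lesson, which is what makes the cycle improve over time rather than being a one-shot prompt.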

So it really is improving over time.

Let me give you two examples of the way it works.

First of all, let's do kind of like a differentiation between a reactive and proactive agent.

With a reactive agent, I've got for you a telco question. I might ask,

hey, how much am I paying, like, monthly?

And then the answer is gonna be, OK, you're paying $85 per month and, you know, it shows what you're paying. It's like you have unlimited uh texts, calls and uh 10GB of data.

Now, you know, there's a question, you got an answer, that's fine.

What's a proactive agent gonna do? A proactive agent can go beyond that.

So, first of all, it observes: I have a customer who's asking me about the plan, but let me see what more I know about him.

So, I know, um, I know what plan he is on, but I can also look at his past six months' usage to see how much he actually consumed. What other plans are available?

So then it allows me to reason.

He asked me about the plan; maybe he would like to make some changes, maybe he would like to do some optimization.

Then the actual magic happens, because it's going to be acting. Acting, analyzing

the data, understanding that there's gonna be a better plan, and then coming back to my customer and saying, OK, you're now paying $85, but I'm seeing from your past usage that you're actually not using more than 4 gigabytes, and then it's offering you a better plan for your needs.

This is kind of like the cycle that we're talking about.

It's a completely different experience. And if I as a customer say, oh yeah, great, I would like to change that, then that's the other act that it can do. It can switch your plan, and you're gonna get a mail about the changes that were made.

So again, it acts on your behalf and it's autonomous.
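The proactive flow in this example (look at past usage, compare plans, recommend a cheaper fit) can be sketched roughly like this. The plan data and function name are invented for illustration.

```python
def recommend_plan(monthly_usage_gb, current_plan, plans):
    """Pick the cheapest plan whose data allowance covers peak past usage."""
    peak = max(monthly_usage_gb)                        # observe: past usage
    candidates = [p for p in plans if p["gb"] >= peak]  # reason: viable plans
    best = min(candidates, key=lambda p: p["price"])    # act: choose cheapest
    if best["price"] < current_plan["price"]:
        return best
    return current_plan  # no cheaper fit: keep the current plan

plans = [
    {"name": "basic", "gb": 5, "price": 45},
    {"name": "plus", "gb": 10, "price": 85},
    {"name": "max", "gb": 50, "price": 120},
]
current = plans[1]  # paying $85 for 10 GB
usage = [3.1, 2.8, 4.0, 3.5, 2.2, 3.9]  # six months, never above 4 GB
print(recommend_plan(usage, current, plans)["name"])  # -> basic
```

The real agent would gather the usage and plan catalog through tool calls rather than literals, but the observe/reason/act shape is the same.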

But we need to understand that a single agent is unable to perform very complex, comprehensive processes. Just imagine if I ask a single agent, can you do a full code transformation for me? There are so many steps along the way that one specialized agent will not be able to perform it.

So, we're moving from a single agent into a multi-agent.

What a multi-agent system does is the same way we behave in a telecom company. You're

having different teams specializing in different activities, and they can run it.

So inside a telco transformation, or any kind of big legacy transformation, I can have my code analysis agent.

This is the type of agent who will learn your code and understand all the relationships, all the integrations, and, um, give you more information about the way you can run it.

Then we can have um code transformation that's actually the one who will do the code conversion.

We can have the one who's going to build the unit tests.

You understand it. It's kind of like a multi-agent working in tandem.

You have in the middle, the internal uh agent communications that orchestrate it all.

That's kind of like what allows agentic AI to perform these complex transformations.

In an effective way.
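A multi-agent setup like the one described, with specialized agents coordinated by an orchestrator, might be sketched like this. The agent roles follow the talk, but the code structure is my own illustration.

```python
# Each specialized agent is a function from a shared state to an updated state.
def code_analysis_agent(state):
    # Learns the code: relationships, integrations, dependencies.
    state["analysis"] = f"dependencies mapped for {state['app']}"
    return state

def code_transformation_agent(state):
    # Does the actual code conversion.
    state["converted"] = f"{state['app']} converted COBOL -> Java"
    return state

def unit_test_agent(state):
    # Builds the unit tests for the new code.
    state["tests"] = f"unit tests generated for {state['app']}"
    return state

def orchestrator(app, pipeline):
    """Run specialized agents in order, passing shared state between them."""
    state = {"app": app}
    for agent in pipeline:
        state = agent(state)
    return state

result = orchestrator(
    "billing-system",
    [code_analysis_agent, code_transformation_agent, unit_test_agent],
)
print(sorted(result))  # state now holds analysis, converted code, and tests
```

A real orchestrator would also route messages between agents and retry failed steps; the point here is only the division of labor.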

So, we talked about agents.

Now I'm going to take you to the broad level. Probably

if you've seen, until now, you've seen this slide once or twice at least.

This is talking about our agentic AI stack. And the whole idea is the same thing. We

have a suite of services that allows you to work through your business, per your needs.

So, we're going to be very brief because we're going to reach the point which is interesting for transformation very quick.

On the bottom, on the foundation, we have the Infrastructure.

This is, um, purpose-built infrastructure for optimizing AI workloads. It can be AWS Trainium, um, Inferentia, it can be GPUs. And then on top of it comes the second layer, which is about models and tools. This is where you're going to have Amazon Bedrock, which allows you access to any foundation model for your services.

It also allows you to have, uh, agent runtime, orchestration capabilities, and so on. On this, we could stay a long time, but I'm gonna move forward, because what we need to talk about today is the application tier.

This is where the magic happens for transformation.

You're gonna have Amazon Q Developer, uh, AWS Transform, and Kiro. We're gonna talk about all of these throughout the next slides of the session.

Um, just letting you know, Amazon Q Developer is not like your regular, um, developer chatbot. It actually enables you to understand, to write, to embed more information into your workflows, to, um, operate all your applications on AWS.

So, one of the things that I would like to, um, show is afterwards, we're going to do a short demo showing kind of like the agents behind the scene for running it.

Next one is gonna be AWS Transform.

Now, AWS Transform is one of the more interesting ones. Um, it's the industry's first purpose-built service that allows you to run transformations.

And we tackle one of the biggest ones, VMware, .NET, and mainframe.

Now, think about it: by the way, at this re:Invent, we just launched another capability in Transform, which is the full stack of Windows.

As an example, it is now able to do SQL Server conversion to Aurora PostgreSQL, our managed database.

So this is an ever evolving and we're listening and seeing what's needed um on your side.

Another thing that has been launched in AWS Transform, whoever has heard of it, is custom.

What is custom?

The idea is that we gave you the bigger ones, but at the same time, there's a lot of patterns needed by customers.

So, we are giving you repeatable, um, transformations that either come out of the box from us, for any kind of transformation.

It could be a version upgrade, it can be a runtime, uh, migration, it can be the kind of code transformations that we're just about to discuss.

But you're either gonna be able to use our own AWS-provided, um, solutions, or you're gonna bring your own. So if you have a pattern that you would like to use for some transformation, that actually allows you to run things at scale.

Um, and lastly, we're gonna have Kiro.

Kiro, and I'm still saving some for later, I'm just going to give you a snapshot: it allows you to reimagine, to reimagine your legacy system as a modern microservices-based application. So, we're going to save that for the end of the session.

I'm gonna go one level deeper into, um, Q Developer, because what you need to know about it is that it's a list of specialized agents.

The thought is that you have agents that specialize in performing tasks. In this case, we're talking about your entire software development life cycle.

So think about it, your developers are spending a lot of time on this.

It's kind of like a repeatable but very time-consuming, important activities.

Q Developer helps you with these.

So I'm going to give you a few examples. Look at the software development agent. It allows you to implement application changes in minutes. And we're not just talking about, uh, code completions. We're actually

talking about production-grade features that you need to develop.

Another thing: look at the unit test agent.

That's one of the things that takes a lot of time, writing down the tests. So it allows you to build these

comprehensive tests to support the new code that you've edited.

Documentation agent, we talked about it. It's one of the things that we constantly see missing, especially for legacy applications.

So one of the favorite capabilities that we see around it is explainability of the code.

So it allows you to have a lot more understanding of what's happening in the code itself.

Um, code review agents, think about it is like, you know, your senior engineer.

It looks for whether you have any, uh, bugs, whether you have any, um, regulatory constraints that you put in the code, misconfigurations. So it gives you advice around this.

And then lastly, this is what I'm gonna demo soon.

We're gonna show transform agents.

So, the transform agents are capabilities that you're gonna have inside Q Developer, but at the same time, it's what I mentioned about AWS Transform custom.

It's kind of like the same experience: how do I take on one of the biggest challenges? In this

case, I'm gonna take Java version upgrade.

Java version upgrades, everybody who's in telco usually has that. These

applications are running, I don't know, Java 6, Java 8, and I need to update them to 17 or 21, because the software is end of life and there are security vulnerabilities around running this code, um, in this version.

But if I'm trying to do this manually, everybody who has tried to do this kind of project knows: it takes time, it's error-prone, you need to debug it a lot. It's just one of the most time-consuming

activities, but a necessary one.

So, I'm gonna show you a short demo of how we can do it much more effectively.

OK. So here we go. We have the um we have the uh dashboard.

I'm hoping it's starting, it should be starting. Now, yeah. I have the dashboard and I just picked: I wanna do an upgrade from 8 to 17.

And I'm also able to say, hey, I wanna also do unit tests.

On my code.

So this is where the thing starts. I'm saying,

OK, give me my JDK. I'm pressing the JDK.

What is now happening usually takes around 10 to 30 minutes to be executed, depending on the project size.

What you're seeing is what's happening behind the scenes.

It's showing what kind of activities need to be done. The first one is going to be doing the upgrade of the version.

Now what's interesting is that it's doing a full analysis of the dependencies, and it's now showing you what it identified: what are the things that need to be added, removed, or updated.

So now as a person, the human in the loop, you are able to see exactly what it's offering to do in the transformation of the code.

So you have the kind of the full information that you can review later on and understand and maybe change it so you can always prompt it.

The second step that it's doing, and again I'm rushing the demo quickly, is that it allows you to look at the deprecated code.

The same thing happens if you do a Java update. In many cases, you have code that needs to be removed. It's able to identify all the deprecated code, and then, um, will change the code accordingly.
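As an illustration of the kind of check being described (flagging deprecated usage during an 8-to-17 upgrade), here is a toy scanner. The pattern list is a made-up example of mine, not the tool's actual rule set.

```python
# Toy scan: flag lines using patterns removed or deprecated after Java 8.
# This pattern list is illustrative only.
DEPRECATED = {
    "new Integer(": "use Integer.valueOf(...) instead (constructor deprecated)",
    "javax.xml.bind": "JAXB was removed from the JDK in Java 11",
    "sun.misc.": "JDK-internal APIs are restricted under the module system",
}

def scan(source: str):
    """Return (line number, pattern, advice) for each deprecated usage found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in DEPRECATED.items():
            if pattern in line:
                findings.append((lineno, pattern, advice))
    return findings

sample = """import javax.xml.bind.JAXBContext;
Integer x = new Integer(42);
"""
for finding in scan(sample):
    print(finding)
```

The real agent does this analysis across the whole dependency tree and then rewrites the flagged code; this sketch only shows the identification step.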

Now, when we reach step 3, that's where the magic happens. It's actually now doing the agentic work of taking all the analyzed information and making the actual code changes based on the analysis that it's done before.

Um, and then it's going to issue a formal transformation report showing you all the changes that it's done. But better than this, you can actually open the old and the new code, see them side by side, and compare what has changed.

If something doesn't fit, something doesn't add up, or you prefer to do something differently, then you're always able to kind of like prompt it and ask it to change it.

Let me show you what it looks like.

Yeah, it looks like this, so you can scroll down and see all the changes.

So, what you've seen is an update being done in minutes, for an activity that usually takes between months to even years, sorry about that, months or even weeks or days, depending on the complexity of the, uh, application.

Um, so this is closing the demo side and the first service.

And I took you through some of the, kind of like bigger challenges of telecom and a little bit about Agentic AI in our stack, but all of this is just technical.

The really interesting thing is actually here.

What's happening in, you know, the real world: a real use case from a real customer, about the challenges that they're experiencing, how they're handling them, what the vision toward it is. And for the second stage,

I'm very happy to have on stage Rama and Sanjay to walk us through the AT&T use case.

Thank you. Thanks.

Thank you, Efrat.

So Rama, thanks for joining us today. I really

appreciate your time.

Um, so Rama, you're leading, like, a mainframe, uh, migration for the telecom industry, big time.

So can you talk about what the challenges are when you're dealing with mainframe applications, and also your vision when you try to migrate or modernize those applications? How do you deal with it?

So if you can talk about it, that will be super useful for the audience. Thank you. Yeah, thanks.

Um, hello everybody. I'm Rama Raghavan.

I drive cloud engineering, platform engineering, and enterprise architecture for AT&T.

We've been on a, on a fairly good journey to evolve to a very hybrid environment within AT&T.

And over the years, a lot of our migrations have been on the, on the mid-range, um Linux, yes, kind of environment.

And we've been looking at, OK, what does it, what does it look like from a very legacy, uh, mainframe oriented assets.

And if I were to ask this audience, does anybody know anything about what JCL stands for?

Or, you know, CICS and I, I'm sure there'll probably be two hands that may go up.

But then as you go out to the industry, you're going to find fewer and fewer people that still have any knowledge about some of these mainframe systems. Now, the systems, though, are going to keep running there. They're going to be running,

and, you know, not a moment goes by, and it'll be another 30 years that will go by, and they could still be running.

But it does present a certain business risk across our entire, uh, you know, environment: the impact to, uh, the business-critical systems, the impact to finance, the impact to many of the things that tend to occur.

Now, um, the key challenge with this landscape again is about the highly integrated systems, and over the years, we've been,

uh, you know, aggregating some of the affiliates, disaggregating, you know, companies; we do billing aggregations across, uh, across telcos.

Um, so where do we wanna go with this, right? In terms of

our velocity, how do we move from, uh, where we are now to accelerate and integrate with, you know, faster-evolving platforms that are, you know, more cloud-based, uh, capability sets.

The scale, obviously, we got to match and exceed.

Um, oftentimes you'll find that these, uh, you know, mainframe systems are high-performance systems. There are newer capability sets that enable us in the cloud platform. Um, same thing with security.

A lot of our security vulnerabilities tend to arise on the mid-range, uh, Linux, uh, platform compared to the mainframe.

However, when there are going to be events and incidents, we wanna evolve our platform in a manner where we are highly secure, bringing some of that along.

All those are looking more on the technical side of things. And

then the, the other key uh is what do we, how do we enable our business in terms of um de-risking the business, and then providing them more capabilities as we evolve, uh, into additional, you know, business needs.

Um, so as we kind of look at our approach towards that: um, obviously, you know, the current, um, set of assets, uh, all the way from, you know, uh, code to documentation, and many of those people have moved on to do other things. So you're gonna find challenges in terms of, uh, you know,

discovery of what exists right now.

To then, where do we want to go from there? And we've done

quite a bit; we tried to look at it very hard and say, do we need to retire this platform?

And so that's obviously been given a very strong, uh, look, and you'll find that, yes, you could do 10%.

You could do 20%, uh, retirement, but then after that, you're finding that these, uh, applications are serving business-critical functions across multiple product lines.

And so from there, we did not actively pursue just trying to re-host it. We

didn't want to grab that entire, you know, environment and run it like another mainframe in uh an alternate uh hosting environment.

So instead, we kind of started pursuing our path down a, a code conversion.

And there are several products in the market. Uh, but

again, when you start looking at the code smell, uh, you'll start finding that the source code got converted, and it starts to look like COBOL just written in Java, or, you know, many other such things.

And so now, if you go out to the street and hire somebody, and they are top Java developers, and they look at the code, they're gonna say, OK, what is this thing, right?

So what have we done in the process?

Um, now, there have been, uh, additional improvements constantly in the last 18 to 24 months. There are products that are constantly improving that. So there's, uh, object-oriented

Java code, and other things. So when you get to that, um, almost, I would say, you know, a little bit of, ah, you know, transforming that.

We've only been able to get it to a certain level.

And as part of the evolution within uh AI and, and the possibilities and, and where we want to go with all of this, is, is where we start looking at exploring kind of my partners in crime here with, with the uh critical capabilities.

And so what might that look like? And

now we're no longer limited, I would say, by some of the constraints that we have in our existing environment. We were able to move, let's say, 0.10% using some of the existing toolsets, but we feel like we could do a lot more with, uh, some of the great capabilities that we've heard through the last few days here as well. So,

um, you know, Sanjay is gonna help kind of start talking a little bit about some of the capabilities that gets us to more than just, you know, re-platform the code, but to take us further out to more refactoring and reimagining the whole thing using the critical AI capabilities.

Uh, Sanjay: Thank you, Rama. Thank you for the insight. These are the real challenges, really.

So over the next 20 minutes, I will walk you through how generative AI is completely redefining the way we used to do migration and modernization.

When we talk about migration, uh, as Rama also mentioned, we are familiar with the 7 Rs, all the way from retire to refactor.

But thanks to generative AI, it allows us to add one more R, reimagine, and that is what I'll be talking about today.

Once we add this one more R, that changes everything, and I will be talking about how reimagine works. And not only that: by the way, the way we used to refactor our applications has also dramatically changed with generative AI.

So I'll be talking about how the new technology in hand is helping in reimagining as well as refactoring our applications.

Uh, before I go further, um, I would like to get on the same page with all of you. What does refactor mean? Because that may

have a different meaning for different people.

So how I define refactor is, let's take a simple example. You have a COBOL application running on mainframe hardware, a mainframe application, by the way.

You've been asked to refactor, move it to AWS Cloud.

So the way I define refactor is: the way the application works on your on-prem, that will not change when it migrates. The business

logic, the functional requirements, nothing will change.

It will be as it is working as it is, but what will change is the tech stack.

What is tech stack?

You may be converting your COBOL code to more advanced Java code, Python code.

You will be converting maybe your database to a more

modern engine running on RDS, or maybe you are also redesigning your application from a monolithic architecture to more of a microservices kind of architecture.

So, what we're seeing is that generative AI allows us to do all of it, automated from start to end. And that is what I want to show you: how AWS technology allows you to do all of it from start to end.
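The key property in this definition of refactor (business logic unchanged, tech stack changed) is exactly what migration testing has to establish. Here is a toy illustration of that check; the billing logic is entirely made up for the example.

```python
# Legacy-style implementation (imagine this mirrors the COBOL logic).
def legacy_bill(minutes, rate_cents, tax_pct):
    total = 0
    for _ in range(minutes):  # loop-heavy, COBOL-ish style
        total += rate_cents
    return total + total * tax_pct // 100

# Refactored implementation: different structure, same business logic.
def modern_bill(minutes: int, rate_cents: int, tax_pct: int) -> int:
    subtotal = minutes * rate_cents
    return subtotal + subtotal * tax_pct // 100

# The migration test: both must agree on every case before cutover.
for m, r, t in [(0, 10, 8), (120, 10, 8), (999, 7, 21)]:
    assert legacy_bill(m, r, t) == modern_bill(m, r, t)
print("business logic preserved")
```

The functional requirements are held fixed while the implementation is modernized; only when the two agree across the test matrix is the refactor considered equivalent.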

So, first tool which I want to discuss today is AWS Transform.

What is AWS Transform? It's really a collection of pre-trained agents sitting side by side, doing this work, from looking at your code all the way to deployment.

They're stitched together, you have full control of it, but they are working for you, on your side, to do all these tasks.

Let's dive a little, a little deeper.

So when you feed your COBOL applications to AWS Transform, or maybe a set of applications, not only one application, what happens first is the code analysis agents take over, and they go through every single line of your code and learn from it.

It learns from it and creates kind of a blueprint of your applications. It creates

a blueprint of your application, finds out all the dependencies, does a dependency mapping, finds out all the interfaces. So it does create a blueprint, and that is something very unique which we never had before.
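To make the blueprint idea concrete, here is a toy version of dependency mapping over COBOL sources, building a call graph from CALL statements. The parsing is deliberately naive and purely illustrative of the concept.

```python
import re

# Naive dependency mapping: find CALL 'TARGET' statements in each program.
CALL_RE = re.compile(r"CALL\s+'([A-Z0-9-]+)'")

def blueprint(sources: dict) -> dict:
    """Map each program name to the sorted list of programs it calls."""
    return {
        name: sorted(set(CALL_RE.findall(code)))
        for name, code in sources.items()
    }

sources = {
    "BILLING": "PROCEDURE DIVISION. CALL 'RATING'. CALL 'TAXCALC'.",
    "RATING": "PROCEDURE DIVISION. CALL 'TAXCALC'.",
    "TAXCALC": "PROCEDURE DIVISION.",
}
print(blueprint(sources))  # BILLING depends on RATING and TAXCALC, and so on
```

The actual agent builds a far richer picture (interfaces, data flows, job control), but this is the shape of the artifact: a machine-readable map of how everything hangs together.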

Once we have a blueprint of your application, the next agent takes over, which is called the documentation generation agent, and that is where the magic starts happening.

So just by looking into code, generative AI now is able to produce two types of documents.

One is a business document that tells exactly what that application is doing, and the second is a more technical document.

Technical document means it will go function by function and tell exactly how the system is designed, for each function, each file, each application.
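As a much-simplified sketch of the function-by-function shape of that output (the real agent uses an LLM over COBOL; this toy just walks Python source and inventories each function, and the `SOURCE` functions are made up):

```python
# Structural pass behind documentation generation, in miniature:
# inventory every function and emit one summary line per function.
import ast

SOURCE = '''
def compute_bill(usage, rate, base_fee):
    "Usage charge plus base fee."
    return usage * rate + base_fee

def apply_late_fee(total, days_late):
    "Adds 1.5% per day late."
    return total * (1 + 0.015 * days_late)
'''

def technical_doc(source: str) -> list:
    """Return one summary line per function: name, arguments, docstring."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node) or "(no description)"
            lines.append(f"{node.name}({args}): {doc}")
    return lines

for line in technical_doc(SOURCE):
    print(line)
```

The real documentation agent goes far beyond a structural inventory, but the output has this per-function, per-file granularity.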

So this is a game changer, and whoever has been involved in a migration will relate to it.

The hardest part is migrating an application which has no documentation, or very little documentation.

Those are the hardest.

So this step alone accelerates the migration, cutting the migration time in half, and I will show you exactly how this agent works under the hood in a minute.

I'll show you all of it.

Now we have a blueprint of the application.

We have an up-to-date document. Now I need help creating a target state architecture.

And that is where the code decomposition agent comes into the picture.

What it does, basically: it again goes through your code knowledge and gives a very prescriptive guideline to the developer on how to break the monolith into a small, microservice-based architecture.

And with that you can design your target architecture.

Once we have that, now we get to the work Rama was mentioning as well. We

start converting code, COBOL to Java code, and generative AI doesn't stop there.

It also keeps working with your new code, making it better and better, to the point where it's ready to test and deploy.

So this is the end-to-end migration process, which AWS Transform takes care of from start to end, but I would still like to give you a little bit more:

what happens under the hood when we talk about these agents.

So let's start the first one, right, code analysis.

So think of it: once you upload your code to AWS Transform, as I mentioned, the code analysis tool will go line by line, look at your code, and basically act like your inspector.

It will go and inspect your application.

It will see, first of all, whether any artifacts are missing.

Maybe for your COBOL application your runbooks are missing, or your copybooks are missing, or maybe there is duplicate code.

It will create a full visual representation of what your application is doing.

And one interesting feature I would like to highlight for you is it also gives you a complexity score.

The complexity score, by the way, will tell you upfront how difficult or easy it is going to be to migrate your application.

So, I personally like calling it X-ray vision of your application, which was never available to developers like us before.

Having this information upfront is super useful.

It is useful not only for migration but also for planning purposes.
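AWS does not publish the scoring model behind the complexity score, so as a rough intuition only, here is a hedged toy proxy: more decision points and more external dependencies mean a harder migration. The keyword weights and the sample COBOL are invented.

```python
# Toy complexity proxy (NOT AWS Transform's actual model): count branch
# keywords, external CALLs, and copybook includes, with made-up weights.
import re

def complexity_score(source: str) -> int:
    """Crude migration-difficulty proxy for a COBOL source string."""
    # (?<!-) avoids counting the IF inside END-IF as a branch
    branches = len(re.findall(r"(?<!-)\b(IF|EVALUATE|PERFORM UNTIL)\b", source))
    calls = len(re.findall(r"\bCALL\b", source))        # external programs
    copybooks = len(re.findall(r"\bCOPY\b", source))    # shared layouts
    return branches * 1 + calls * 2 + copybooks * 3

cobol = """
       IF WS-TOTAL > 100
           PERFORM UNTIL DONE
               CALL 'RATEPGM' USING WS-RATE
           END-PERFORM
       END-IF.
       COPY BILLREC.
"""
print(complexity_score(cobol))
```

A real score would also weigh missing artifacts, data access patterns, and interface counts; the value of having any such number upfront is that you can rank a whole portfolio before committing to a migration plan.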

So once we have this X-ray vision of your application, next, I talked about the documentation agent.

Yes, it creates two types of documents, a business document as well as a technical document, but why is it such a big deal? Why is it such a game changer?

Well, we can all relate. The longest time in your migration journey is the testing part.

You can migrate the application, but if the thing doesn't work, what do you do?

You have to fix the problem, and that is where this agent helps. If I have to fix a problem and I don't have documentation, technical documentation in particular, that is the toughest part to fix, and this agent solves that problem beautifully.

Once we have all these artifacts created, I asked: how can I get help with my target state architecture?

And this is where the code decomposition agent comes into the picture.

So the way it works is a typical generative AI problem.

It does a segmentation of your code based on the business logic.

And it gives the developer a visual representation of how to break apart your monolithic applications into small components.

Microservices. What is a microservice?

A microservice is a subset of functionality which you can independently design, code, test, deploy, and even maintain.

And why is it so important?

Day 2 operations TCO. Your TCO comes down: if you have to fix a small problem in one component, you don't need to touch the whole application.

Just fix the small problem in the particular microservice, deploy it, and you're good to go. This saving alone makes this step super critical for day 2 operations.
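The segmentation step can be sketched in miniature. The real agent uses an LLM over the code's business logic; this hedged toy uses keyword matching only to illustrate the output shape: candidate microservice boundaries for a monolith's functions. The domain keywords and function names are invented.

```python
# Toy business-logic segmentation (NOT the decomposition agent itself):
# group a monolith's functions into candidate microservice boundaries.
from collections import defaultdict

DOMAINS = {
    "billing": ["invoice", "bill", "rate"],
    "customer": ["customer", "account", "profile"],
}

def segment(functions: list) -> dict:
    """Map each function name to a business domain; unmatched -> shared."""
    groups = defaultdict(list)
    for fn in functions:
        domain = next(
            (d for d, kws in DOMAINS.items() if any(k in fn for k in kws)),
            "shared",
        )
        groups[domain].append(fn)
    return dict(groups)

monolith = ["create_invoice", "apply_rate_plan",
            "update_customer_profile", "log_event"]
print(segment(monolith))
```

Each resulting group is a candidate microservice that can be designed, tested, and deployed independently, which is exactly where the day 2 TCO saving comes from.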

Once we have all these artifacts, which were never there before: it used to take us months to create that mental model, and these artifacts get created within hours, by the way.

So we will show you some demo as well.

So you already have a target architecture. Now let's go to the real work, converting the code, and that is where the code refactoring happens.

You convert the code to language of your choice, Java, Python, any other language of your choice.

And then generative AI keeps working to make it better.

But one important point I would like to emphasize very clearly here: at every step, you are in control.

Any input to the agent or output from the agent, you can change.

So think of these agents like your assistant, working with you and doing the heavy lifting for you, but you are always in control, so you can pivot whatever way you want.

In fact, we will go into more detail in a few minutes, but these are the agents changing the way we used to do our refactoring.

Let's take the problem to one step forward.

What if you have an application where 30-40% of the requirements are not even valid anymore, because the application was developed and deployed 15 or 20 years ago?

You have to migrate that application. What do you do?

What will you do?

There are two options. One, you refactor it. Yes, this

agent will help you with refactoring.

You can migrate it, refactor it. That's all good.

But the problem is you are bringing along some of the functionality which is obsolete. You will be developing it, converting it, testing it, but that is wasted effort.

Second option, you rewrite it.

You can always do that as well, but it takes a lot of time, a lot of time, to rewrite the application.

So what if I tell you there is now a way you can bring over only a selective part of the application?

And this is a new way of working, and that is what we call reimagine.

So I will walk you through a little bit on reimagine part.

How does that reimagine come into the picture?

So let me first define what reimagine is for me.

Reimagine means even the business function is modified.

You choose what you want to bring over from the application.

Yes, the tech stack will be changing, but we are talking about modifying even the functional requirements.

So you bring over only a portion of the application, and that is something we never saw before. It was never possible without generative AI.

So this is what we call reimagine, but you can call it smart refactoring.

You can call it any of these names, and the tool for it is Kiro. You might have seen the Kiro house at our expo floor entry.

That's where Kiro lived, but I can walk you through what it does, basically.

So Kiro is AWS's new AI-powered development tool, with a simple but different approach.

Here we do not start with the code.

We start by telling it what you want to develop, and Kiro will do the job for you.

A little bit of a different approach. We will still do all the things we were doing, but the last step changes. Let me show you a little bit of the architecture, how it works.

So, whatever we talked about with AWS Transform, we will still be doing: looking into the code, doing dependency mapping, interfaces, creating documentation.

All good stuff, right?

But that we call reverse engineering.

Once we have those artifacts, we will not convert that COBOL code to Java code.

Instead, they become an input to Kiro, and we tell Kiro what functionality we want and how we want it, creating a specification. With that specification, Kiro will develop a new application for you.

Very different approach, that's why we call it reimagine, and you will have a brand new application.

Again, with a few clicks of a button.

Then you go to the next step, the third step, where you validate everything. Things may not work as you like, so you can repeat steps 2 and 3 again and again until you get the result you are looking for.

Once you are happy with the result, it gives you basically a brand new application.

You are not leaving all the knowledge behind.

You're still using the knowledge of the application, but Kiro gives you a new way of migrating that application.

Uh, I even have a short demo.

So what I did was take a three-tier web application, a very simple web application, and I want to add one piece of functionality: single sign-on.

I want to add single sign-on with a third party, Google sign-in or Entra ID, whatever you want, and see how Kiro takes it forward.

And pay attention, by the way: once you develop the new functionality, Kiro can help you with testing as well, automated testing.

Hi, this is Kiro, the new agentic IDE that works alongside you from concept to production.

Let me give you a tour of some of the features of Kiro.

First, we'll click on the ghost icon on the left-hand side.

Now I'll click the generate steering docs button. This will help Kiro better understand the application when we prompt it later.

Now that the docs are created, you can find them in the Kiro steering folder.

These files describe a serverless full-stack web app that we're building with Next.js and an AWS backend.

Let me show you one of our most powerful features, Spec.

I'll ask Kiro to add a new way to sign into our app using Google.

Kiro is going to help us code this up and give us a step-by-step process so it gets it right.

The spec feature generated this requirements document as its first step in helping us create this feature.

It has user stories and acceptance criteria.

I'll review it and add another criteria to improve the way sign out is handled.

For this next step, Kiro generated this in-depth design document. It

includes a mermaid diagram, infrastructure changes, and configurations.

This looks good to me, we'll continue on.

Finally, we're presented with a task list. Kiro

has broken this feature up into smaller parts that it can complete one by one.

I really like this feature because I can interactively have Kiro complete one task at a time, and I can verify or make changes if needed afterwards.

I'm gonna have it start on these tasks.

I'll queue a few, and I'll check it along the way.

This will take a short while.

While it works, I'll set everything up inside the Google Cloud console outside this video.

It's been a few moments. We're now on task 5, and it's updating the Prisma schema to include the Google account.

Great. It's now completed all the tasks.

I did a few updates during each task to make sure everything is working. And now, let's take a look at the output.

Here's the application. It has the new sign-in with Google button. All the backend and infrastructure work has been completed by Kiro.

If we test the login, it looks good.

This next feature is really helpful. It's called agent hooks.

To get started, I'll click the little ghost icon on the left-hand side, and then I'll click the little plus arrow next to agent hooks.

I want this hook to create tests anytime I save a component if they're not already there.

So I'm gonna describe this in the prompt: on file save, add a few basic tests, if not already created, for each component. Then I'm gonna click the enter icon. Kiro

is going to create this hook for me, and it'll be listening in the background for that file save.

So I can see it now inside the .kiro folder, this new hook, and then I can give it a try. I'm gonna go into one of my components and just add a quick comment to it to make sure it works. I'm

gonna type in a test comment here, then open up the task list, and you can see it running already in the background here, and it'll start creating the tests.

This will just take a few moments.

Here's the new test, the Google sign-in button test. I'll look through it; it created several of them, and they look really good.

Kiro does more than just development with specs and agent hooks; we also support native MCP and vibe coding.

With Kiro, you can go from concept to production and beyond.

Go ahead and give it a try. It's free during preview.

So I hope we learned together how this migration is evolving with agentic AI.

So, Rama, we started this journey together; if you can guide us through where it's going, that would be super useful.

Yeah, thank you, Sanjay. That was awesome.

And think of it, folks: if we are crawling right now, our plan is to run fast, right? Some of these capabilities that Sanjay just walked through are going to get us to, let's call it, a swim-faster model, right?

Now, when we talk about that model and our migration, what you see here on the screen is really our factory approach, a blueprint for how many of our migrations are organized.

What you see as the cloud COE: its purpose is really to take the AT&T well-architected framework. You derive many of the base foundations from the AWS Well-Architected Framework, and then define our landing zones: how we define what our VPC model is going to be, and what the guardrails are around it from security and other requirements and policies, including the OU policies.

So in many ways, the Core Cloud Foundation team helps define some of those prescriptions and patterns.

And oftentimes you'll find that when you start off, you may need, say, 10 patterns, and that'll unblock about 5 applications. Then you're going to find you need 1 or 2 more patterns, and that'll unblock another 10 to 15 applications.

And so part of that is a robust set of prescriptions: some of them are codified through policies, and some of them become inputs into the automation that goes into the migration.

Now, overall, when you look at the discover-and-plan phase, this is about how we go find what doesn't exist, right? I don't have the people who hold that knowledge anymore.

In some ways, I would even ask: where's the source code for some of these things that are running on the mainframe?

And so what's fascinating is to be able to create documentation and test cases, in order for us to then validate that the outcome is in fact comparable to, or better than, where it is right now, from a performance standpoint, obviously.

So in that transform state, as we are reimagining this, it wouldn't be possible without many of our partnerships, both from an AT&T and AWS standpoint, but also with some of the SIs we have relationships with as well.

And as a result, if you think about this: if we're doing, let's say, about 10 to 25 applications in a year, we're looking to get to a much faster rate through many of the innovations we're seeing here.

In terms of where we go with this: what's our goal, and what does it look like, right?

Obviously, there's development velocity; we talked quite a bit about how we progress from where we are right now to accelerating our migration.

The other aspect is about making sure that it is, in fact, highly performant from an operational standpoint.

And that it's actually meeting the business needs and delivering the functional value.

A lot of these are billing transactions, for ACRs and summaries, and those are pretty performance intensive, so we're going to capitalize quite a bit on Step Functions and other capability sets to get us to completely parallel execution of many of those things.

We've also been looking at some of the capabilities where you may be moving some SaaS products to other SaaS products, right? So not all of them necessarily have to be code-converted into this outcome; there may be some opportunities in getting us to the outcomes that way.
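The parallel-execution idea can be sketched locally. This is not AT&T's pipeline; the accounts and rating rule are invented, and a thread pool stands in for what would be a Step Functions Map state fanning work out across Lambda or container workers.

```python
# Hedged sketch of fan-out billing execution: each account rates
# independently, so the batch parallelizes cleanly.
from concurrent.futures import ThreadPoolExecutor

ACCOUNTS = [
    {"id": "A1", "usage": 120, "rate": 0.5},
    {"id": "A2", "usage": 80, "rate": 0.25},
    {"id": "A3", "usage": 200, "rate": 0.5},
]

def rate_account(acct: dict) -> dict:
    """Compute one account's bill; no shared state between accounts."""
    return {"id": acct["id"], "amount": acct["usage"] * acct["rate"]}

# In Step Functions this fan-out would be a Map state over the account
# list; here a thread pool plays that role and preserves input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    bills = list(pool.map(rate_account, ACCOUNTS))

print(bills)
```

Because each account's rating is independent, the same shape scales from three accounts on three threads to millions of accounts across a distributed Map run.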

So where do we see all of this going? It's how we enable our business partners to do more of their business functions, and how we de-risk our IT infrastructure.

As you look at it right now, nothing's falling apart.

But then, how far do you keep moving that goalpost? So we took deliberate action to start investigating this.

We didn't need to get it all done last year; that wasn't the goal.

Our goal is: how do we get there in a way that capitalizes on the great innovation that's occurring around us?

I think, from a performance standpoint, many of these batch jobs and other billing workloads are demanding: the volume of data coming through, the aggregations that occur, the billing true-ups that occur, and our compliance needs around it.

And there is quite extensive data analytics occurring right now in the existing set of platforms that we want to bring forward through newer technologies, improving our availability and performance.

So I think what it gets us to is a future-forward platform that capitalizes on many of the AI innovations, and I think AWS capability sets are what's going to get us there. So thank you very much, and thanks for the great opportunity here.

Thanks so much.
