Azure Front Door Resiliency Deep Dive and Architecting for Mission Critical
By John Savill's Technical Training
Summary
## Key takeaways
- **210+ POPs with Anycast & Split TCP**: Azure Front Door serves traffic through over 210 points of presence using anycast IP addressing so clients connect to the closest POP, and split TCP terminates the session at the POP for lower latency before forwarding to origins. [01:22], [02:34]
- **Three-Layer Resiliency Stack**: The front end layer with 210+ POPs handles primary traffic, a fallback layer in tens of Microsoft DCs absorbs shed overflow, and the traffic shield dynamically adjusts which anycast IPs Traffic Manager returns to balance load across POPs. [06:13], [12:42]
- **Async Processes Caused Outage**: The October 2025 outage occurred because config changes passed safe deployment gates but triggered crashes in later async optimization processes that weren't running during testing; all async processes have now been removed. [19:07], [20:06]
- **Config Rollback Now 1 Hour**: Rollback to the last known good configuration took 4 hours during the outage, has been reduced to 1 hour, and is targeting 10 minutes by March 2026. [21:01], [21:17]
- **Traffic Manager over AFD Failover**: For mission critical without CDN needs, use Traffic Manager with 100% weight to AFD normally, then fail over manually to a secondary Traffic Manager pointing to public App Gateways with WAF for L7 functionality. [25:51], [28:13]
- **CDN Failover with 10% Warm Cache**: For mission critical with CDN needs, route 90% to AFD and 10% to an alternate CDN via Traffic Manager to keep its cache populated, avoiding an origin flood on failover. [30:58], [31:05]
Topics Covered
- Anycast Split TCP Minimizes Latency
- Three-Layer Resiliency Shields Outages
- Async Config Exposed Safe Deployments
- Traffic Manager Fronts Front Door Failover
- 10% Shadow CDN Prepopulates Failover Cache
Full Transcript
Hi everyone. In this session, I want to dive into the native resiliency capabilities that are part of Azure Front Door, and then what are some of the other things I as a customer and architect can do when I have mission-critical services and don't want to rely on any particular service for my architecture.
Now I previously did a deep dive video all about Azure Front Door. So I'm not going to go into a lot of detail about what Azure Front Door is,
but just to get everyone on the same page, it is a layer 7 (it understands HTTP, HTTPS, TLS) global load balancing and content delivery network capability. So hey, anywhere in the world I'm a client, I want to go and find and talk to some resource. It's that front end that I initially talk to.
Now, to facilitate that, what it has is a huge number of points of presence that I as a client can actually go and talk to. So I can think about the huge internet network we have and interact with; on that front-end network there are a massive number of points of presence, these POPs. The idea is that Azure Front Door serves its capabilities through over 210 of these points of presence, and they are distributed over 130 (actually more than that) metro locations all around the world. So it's a highly distributed set of capabilities.
Now, when I as a client am actually communicating with this (let's say I'm over here), it does something called split TCP and anycast IP addressing. With these points of presence, when you think about how I go and talk to them, we talk using an IP address, and what these are actually using is an anycast IP. A particular IP address can be served by any of these points of presence when it's a global anycast IP, and I will talk to whichever one is closest to me. So hey, if I'm here in London and there are points of presence in London, when I talk to that single IP address that can be served by any of them, I'll go and talk to the one that is closest to me. And then what it's doing is called split TCP. So, as a client, once I've worked out which POP I'm talking to, the communication is split: it will terminate the TCP session at that POP, and it will also terminate any TLS communications that are going on. So we get much faster, lower latency performance talking to that POP. And then obviously there's going to be a whole set of origins that are actually providing the various services. So I can think about, okay, there's a whole set of origins over here, and it's the responsibility of the service to actually go and communicate to whatever those origins may be.
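To make the split TCP benefit concrete, here's a minimal sketch (not part of Front Door itself) that times the TCP and TLS handshakes to a host using only the Python standard library. The hostname is just a placeholder; the point is that each handshake costs round trips, so terminating them at a nearby POP instead of a distant origin removes most of the connection-setup latency.

```python
# Minimal illustration: measure how long TCP and TLS setup take to a given host.
# The farther away the endpoint, the bigger these numbers get, which is exactly
# what split TCP avoids by terminating both handshakes at the closest POP.
import socket
import ssl
import time

def handshake_latency(host: str, port: int = 443) -> tuple[float, float]:
    """Return (tcp_seconds, tls_seconds) for connecting to host:port."""
    start = time.perf_counter()
    raw = socket.create_connection((host, port), timeout=5)  # TCP three-way handshake
    tcp_done = time.perf_counter()
    ctx = ssl.create_default_context()
    tls_sock = ctx.wrap_socket(raw, server_hostname=host)     # TLS handshake
    tls_done = time.perf_counter()
    tls_sock.close()
    return tcp_done - start, tls_done - tcp_done

tcp_s, tls_s = handshake_latency("www.microsoft.com")  # placeholder host
print(f"TCP handshake: {tcp_s * 1000:.1f} ms, TLS handshake: {tls_s * 1000:.1f} ms")
```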
Now, it can also operate as a content delivery network. In that content delivery network mode, it can optionally cache information. So instead of always having to go back to the origin to grab that image, that static content, whatever it is, HTML file, RSS, video, it will cache it and serve it locally.
Also, I can optionally turn on the web application firewall, so that gives me protection for common types of attack and bots. It only supports HTTP, HTTPS, and HTTP/2, so it's going to reject other types of traffic.
It has DDoS protection, and it has very rich policy rules to control how traffic is routed to those various origins on the back end. Those origins, remember, can be many different things. This could obviously be Azure services back here. And if I'm using Azure Front Door Premium, it can also talk to services that are using Private Link, that are surfaced via private endpoints. But it could also be public IP addresses, hosts, a whole set of different things that it can actually go and service. And it can use many distribution algorithms: round robin, weighted round robin, priority, latency, and it has health checks.
And very often, if we think of this as a global load balancing solution, well, when I'm offering for example Azure services within a region, I'll operate in many regions. Very often we're going to put App Gateway in here. App Gateway is a regional layer 7 load balancing solution. It could be the regular App Gateway, or it could be App Gateway for Containers, because really I don't want Azure Front Door trying to front 10,000 individual pods; I want a regional solution to do that. And then hey, Azure Front Door lives outside my VNet and does much higher level, global things.
Now, fantastic. Sounds great. Lots of interesting things this can do. But how resilient is Azure Front Door then? Because if it is fronting all of the different services I have in my regions or other locations, where I've got nice high availability and DR, Azure Front Door is that point of entry. So how is it achieving that resilience? And depending on when you're watching this, the outage of October 2025 may be fresh in your mind, and you want to understand this and maybe you're looking for some guidance.
So, let's break down what Azure Front Door actually is. Now, we already talked about the idea that there are 210-plus points of presence and what these are making up. If we start to break down what the service actually is, initially it is the front end layer. So I can think about, okay, we've got these points of presence, and let's expand what one of these is. Those 210 POPs make up the front end layer. This is the primary place we're going to take that traffic. And if we expand out one POP, just so I don't have to draw too much, we're expanding this particular POP.
It's going to be made up of multiple racks. So I've got lots and lots of racks with lots and lots of servers inside that are running the various pieces of software that make this up. On top of the racks is a layer of edge controllers. Now, you can think of this as a layer 4 load balancing solution made up of many instances within the POP, and it's checking, well, are the servers healthy? If they're not, it will stop sending traffic there and quickly fail that traffic over to another healthy server in the POP. This is not an Azure standard load balancer; it's built for purpose. It also handles things like the DDoS capabilities and is in charge of shedding traffic where required, which we're going to get to. And as I talked about already, there's over 210 of these, and that's growing all of the time. Now, where these points of presence are, they're distributed over Microsoft and various colo partners that meet the Microsoft regulatory and compliance requirements. So these are distributed over many different data centers, again all throughout the world, and remember this is anycast. So if, for example, any particular POP was down, users would simply be served by another POP. So just natively on its own: 210 sets of these POPs with all of these capabilities. If any one has a problem, anycast takes care of it; as a client, I just go and talk to another one, the next closest to me.
But then there's the next layer. So this is the typical front end layer; what we then move to is the idea of a fallback layer. Now, the fallback layer is exactly the same software and architecture, but it's only used when a POP needs to shed some of its traffic. So once again we've got these whole sets of racks with all the different servers inside them and the edge controllers going across. But what will happen now is, hey, the front end layer for some reason needs to shed some traffic, and so these edge controllers will actually say, you know what, I'm going to move some of my traffic to now be serviced by that fallback layer. Consider a scenario where maybe I'm in a certain metro, so I've got a couple of those POPs there. Maybe some of them are down, and so the remaining ones can't keep up. It can just start shedding traffic to the fallback layer, so I'm not impacting the performance of the various clients. The fallback layer has its own caching capabilities; it can still go and route to all the various origins. It's exactly the same set of capabilities. Now, one difference though: if you think there are 210-plus of these in the front end layer, when I think about the fallback layer, it's in the tens of instances, and these are only in Microsoft data centers. So if you think Azure regions, that's primarily where these are located.
And then there's a third layer of resiliency. So if we think, okay, great, this is the overall set of capabilities in the service (let me give myself a bit more room and move that upwards), there's a third set that sits on top and is really viewing everything that's going on. And this is called the traffic shield. But maybe before we dive into that, take a step back just for a second.
When I use Azure Front Door, my endpoint is a DNS name; I'm not given an IP address. It's something.azurefd.net. So what's actually happening behind the scenes when I use the service is, okay, I want to use Azure Front Door, but the first thing I'm actually going to go and talk to is DNS, and it's actually served by Azure Traffic Manager. So it's using another service, Azure Traffic Manager, which obviously is DNS. And what's happening here is my client queries DNS and says, hey, I'm something.azurefd.net, and that resolves and gives me that anycast IP. So savilltech.com would be an alias to maybe sav.azurefd.net, which, when I go and ask DNS (served by Azure Traffic Manager), gives me the anycast IP that I actually go and talk to.
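If you want to see that chain for yourself, here's a minimal sketch using only the Python standard library. The hostname is a placeholder; substitute your own custom domain or *.azurefd.net endpoint.

```python
# Minimal illustration of the resolution chain described above: a custom domain
# is an alias to a *.azurefd.net name, and the answer is an anycast IP.
import socket

def resolve_chain(hostname: str) -> None:
    # gethostbyname_ex returns (canonical_name, alias_list, ip_list)
    canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
    print(f"query    : {hostname}")
    print(f"aliases  : {aliases}")    # e.g. the intermediate *.azurefd.net name
    print(f"canonical: {canonical}")
    print(f"anycast A: {addresses}")  # the same IP is announced from many POPs

resolve_chain("www.example.com")  # placeholder; use your own Front Door endpoint
```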
Now, you may be thinking, okay, so I'm using Azure Traffic Manager. Azure Traffic Manager itself has a 100% SLA. An SLA, fundamentally though, is a money-back guarantee of achieving a certain level of service. We'll talk a little bit later about what I think about in terms of, okay, well, isn't there a reliance on Azure Traffic Manager as well? But ordinarily this is going to return a global anycast IP. There are also regional anycast IPs. It is possible that, hey, there are maybe five POPs within East US, for example, and there would be a regional anycast IP that only those five POPs in East US would respond to. So there's a certain amount of flexibility there. So we know that with Azure Front Door, I resolve a DNS name and it returns me the anycast IP the client actually goes and talks to.
So what the traffic shield is doing, think of it as an overwatch of the front end and the fallback layers. We have an eye, and it's looking at the front end and these fallback layers. If it sees they are getting overwhelmed, what it will actually go and do is update how that Azure Traffic Manager is serving up the IPs. So, for example, if it saw a particular set was getting overwhelmed, instead of maybe returning a global anycast IP, it may start returning a particular regional anycast IP to shift traffic away. Maybe it's already using a certain regional anycast IP and it will shift it around. So it can move traffic around as a trade-off: maybe it's no longer the lowest latency, but it will improve the availability. It won't typically cross continents, but it can if it needs to shift how it's serving those anycast IPs, to move clients to potentially different sets of POPs.
So, fantastic. Looks pretty rock solid. Three layers of resiliency: we have the front end with 210-plus of these highly resilient POPs, a fallback layer where traffic can be shed to tens of different sets of data centers, and this nice traffic shield, so I can move clients around where needed to maybe pin them on to particular groupings of the services.
But often when we think about problems, the cause maybe isn't so much that something fails, but that a change is made. And so we often think about safe deployment practices. So how are they handled? Let's actually look at how safe deployment and updates are handled when we think about Azure Front Door.
And so if I think about configuration resilience, there are really three different types of configuration that get applied. I can think about the system config. Now, the system config could be Azure Front Door's own data plane, its control plane, its various bits, and absolutely it uses safe deployment practices. It does this as a very slow rollout across all of the various POPs, over a two-week period. So it has a very large bake time between sets of POPs before it moves on to the next one; it proves they're healthy, etc. So a two-week time to roll those changes out.
Then I can think about data config. Data config could be things like geo data for determining IP-to-geo mappings. It could be an IP reputation feed for bot detection. It could be malicious signatures for layer 7 DDoS protection, and much, much more. Once again, this follows a safe deployment practice, but it has to be a bit quicker, so this basically follows a daily cadence. Over the course of a day, it will get these rolled out to all of the POPs.
And then we get to the idea of the customer configuration. This is you making a change to the origins or creating a new configuration, and in that case I want that to happen pretty quickly. Maybe I'm doing it as part of a DevOps pipeline that's rolling this stuff out; I don't want to wait two weeks, I don't want to wait a day. Now, it's still going to follow a safe deployment practice, and it does it over essentially three rings. So I can think about it rolling out to a pre-prod ring. This would be made up of maybe four to five POPs that are really not being used right now for active traffic. Then there's a staging ring, and staging could be, for example, let's say 15 POPs. (I think I've been saying pod and POP and switching those; it should be POPs.) And then obviously you have production, which would be the rest, so maybe that's the 200 or so that are left in the environment. So it's rolling it out over these three different stages, over a 10-minute window. It's pretty rapid, because I'm making a change and I want it to take effect. Now, at this particular moment in time when I'm recording this, it's the middle of November 2025, and this is actually 45 minutes. It was increased because of the October 2025 issue, but it will be back to 10 minutes in January. So just realize, hey, if you're watching this before January, this is currently 45 minutes, but it will be going back to 10 very shortly.
Now, between these three stages there is a capability called config shield. It's a gate: it's looking at each stage and looking for any crashes that are occurring. If there are any crashes, it will stop the change rolling out and it will actually revert back to a last known good configuration. So I can think about it creating, at a certain periodic interval, this last known good configuration that it can go and use.
So you look at this and you say, "Okay, great. How was there an outage then, with all of these great things, and is it fixed?" Fundamentally, what's happening here is, remember, this is great: I'm making changes and I'm making sure they're not causing any harm, provided every code path is being run as it's going through these stages. What happened was there was a combination of config metadata that made its way through pre-prod and staging into production, at which point there was an asynchronous process that ran to do certain optimizations. Well, it's asynchronous; it is not running all the time, and so it wasn't running at the time this particular set of metadata (the set that would cause the crash) was rolling through. So the config change made its way through, and then the asynchronous process kicked off and it caused the crash. And so that safe deployment practice, the bake times, the gates, don't work if the code path isn't executed. That has now been very well understood and it's fixed. What they have done is there is no longer any async processing; it's just been removed. So now all of the code paths will get tested as it moves from pre-prod to staging into production. That vulnerability in the testing nullified some of the config shield, because it's looking for crashes and a negative impact, and that didn't show because the async process didn't run during the rollout. Now everything is synchronous, so it's always going to run, and it will get detected as part of that. But another aspect is that once they actually found the problem, they had to roll back to a last known good configuration.
And what happened here is, when you think about the application of that rollback: I want to actually take that last known good configuration and apply it. What we saw is that took four hours. At this moment in time, mid November, that has already been reduced to one hour. So if a catastrophic disaster happened right now, it would take one hour instead of four. What they're working towards is 10 minutes, and that will be in place March 2026. So yes, remove all the async processes, so config shield and the safe deployment practices should always work; but fundamentally we're human, things can happen. And so the goal is to be able to resolve and reapply that last known good configuration much, much faster.
Additionally, one of the other big changes: if we think about all of the POPs, consider that there are lots and lots of different tenants. One of the big focus areas here is reducing any cross-tenant impact. So think of a micro-cell segmentation. Even if there was some issue, based on this new micro-cell segmentation any impact would actually only affect less than (and it's way less, but I'll say) 1% of the overall Azure Front Door population. The goal for this, and I think they're working towards a more aggressive date than this, is June 2026; that's what they're committing to. Hopefully it'll be there sooner than that, but that's another big investment. So even if there was a huge issue, and even though they'll soon be able to recover in 10 minutes, it should only impact a much, much smaller segment of the population using Azure Front Door.
So hopefully that gives a degree of confidence if we consider, well, okay, they've got these three layers of resilience: the front end layer, the fallback layer, the traffic shield. We've got this new, very soon, ability for a 10-minute worst-case rollback to a last known good configuration. There are no async processes anymore. Later on we'll have this reduction of cross-tenant impact, if there was an issue, with micro-cells. So, fantastic things there today, and it's only going to get better. But what if I'm a mission-critical service? And that's the key thing I want to stress here about what we're talking about now, because I don't want everyone thinking, oh, okay, this is something we need to do. The focus is: if you are truly mission critical, there is life at risk, planes don't fly. If you can tolerate five or 10 minutes down, this does not apply to you. But if you cannot be down, if I cannot rely on any single thing, what could I do?
Because I do want to stress that yes, there was the AFD outage, but AFD is one of the core services of Microsoft.
Microsoft relies on it. It's not just for our customers. It powers the portal, M365, Entra, Xbox, you name it. There
was a 100% investment in ensuring this thing is always there. Historically, if
you look at availability numbers, it has always been there. So, it is rock solid.
But I'm mission critical. And if I'm mission critical, there should never be a reliance on any single component. So,
what can I do?
So, let's take the fact that we have Azure Front Door. For our first scenario, I'm going to consider that I don't require the CDN functionality. It's not caching information; that is not a critical part of what I need to do. Now, what I have is the origins. I still have a whole bunch of different origins. They might be spread (they probably are) over region one and region two. So they're distributed, and what we normally have here, remember, is Azure Front Door pointing to those origins. They may or may not have App Gateway in front of them; it depends.
But what I now require is that I don't want to rely only on Azure Front Door, that one service, to be functioning. So I need another global distribution mechanism if AFD isn't available, and that other mechanism is going to be Azure Traffic Manager. What we're going to do is put Azure Traffic Manager in front of Azure Front Door. Now, there are a few configurations we have to do. One of them is something called always serve. We are not going to rely on health probes. Remember, it's a global anycast IP address; any probing Traffic Manager would do would be unpredictable and unreliable at best. So we're not going to focus on the health probes; we're just going to say always serve. And I'm going to use weighting of the traffic to actually say 100% of the traffic goes to Azure Front Door. That's the normal healthy situation. I'd probably have a time to live of like 300 seconds, so 5 minutes; within that 5 minutes, clients would get the new path if there was a problem.
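As a rough sketch of that front profile, here's what it might look like with the azure-mgmt-trafficmanager Python SDK. The subscription, resource group, profile name, and targets are all placeholders, and the exact model fields (always_serve in particular) depend on the SDK and API version, so treat this as illustrative rather than authoritative. Note that Traffic Manager weights must be at least 1, so the path that "gets 0%" is modeled by disabling that endpoint.

```python
# Illustrative only: a weighted Traffic Manager profile that normally sends all
# traffic to Azure Front Door; the break-glass endpoint exists but is disabled.
# All names and targets are placeholders; always_serve assumes a recent API version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Profile, DnsConfig, MonitorConfig, Endpoint

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

profile = Profile(
    location="global",
    traffic_routing_method="Weighted",
    dns_config=DnsConfig(relative_name="contoso-front", ttl=300),  # 5-minute TTL
    monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/"),
    endpoints=[
        Endpoint(  # normal path: 100% of traffic to Azure Front Door
            name="afd-primary",
            type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
            target="contoso.azurefd.net",        # placeholder AFD endpoint
            weight=100,
            endpoint_status="Enabled",
            always_serve="Enabled",              # don't depend on health probes
        ),
        Endpoint(  # break-glass path: nested performance profile over the App Gateways
            name="appgw-breakglass",
            type="Microsoft.Network/trafficManagerProfiles/nestedEndpoints",
            target_resource_id="<resource-id-of-performance-profile>",  # placeholder
            min_child_endpoints=1,
            weight=100,
            endpoint_status="Disabled",          # weights must be >= 1, so "0%" = disabled
        ),
    ],
)
client.profiles.create_or_update("rg-mission-critical", "tm-contoso-front", profile)
```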
Now, if there was an issue with Azure Front Door and we had to fail over to an alternate path, I need to replicate the functionality of Azure Front Door. Remember, it's layer 7: it can terminate TLS, it can make various routing decisions, it has a web application firewall. So I need to build in that kind of layer 7 functionality. The way we can build in that layer 7 functionality is I would have to put App Gateway in front of my various origins, and I would want to have the WAF running on them. Now, again, I may be using this anyway, in which case Azure Front Door is talking to the App Gateways, but absolutely, if I'm talking about a DR path, I have to put App Gateway in, because I have to have that web application firewall and maybe those additional rules that I'm going to use. Even if you did already have App Gateway, do remember though, because what we're going to do now is have the communication come to it basically from DNS resolutions, these now have to be public facing. If maybe they were on private endpoints, private IPs today, being talked to from Azure Front Door, well then they'll have to be public. There are probably going to be changes to the config, changes to the rules. So there's work and things you would have to do to prep for this. Because what we're actually going to do now is put another Azure Traffic Manager instance in. There's going to be another Azure Traffic Manager here, and this one's going to run in performance mode, and its targets will be the two App Gateways.
And what we have here is kind of the disaster path, the break glass, and it's going to be a manual break glass. We would have some script. That script could go and reconfigure the App Gateways to now have public front ends, and it's going to change this weighting so that Azure Front Door now gets 0% and this performance Traffic Manager now gets 100%. Under normal working scenarios, this path gets 0% of the traffic. But in that break-glass scenario, I've got probes in place, I'm looking at the health of my applications and how they're being served, synthetic transactions, everything else, and I have detected, okay, this is not meeting my needs. I would modify the Azure Traffic Manager instance at the front to change the weighting to now send to this performance ATM, which will resolve to the App Gateway closest to the various clients. It's now bypassing Azure Front Door.
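A break-glass script along those lines might look something like this, again using the azure-mgmt-trafficmanager SDK. Profile, endpoint, and resource names match the earlier sketch and are placeholders; parameter names can differ slightly between SDK versions, and the App Gateway reconfiguration step is only indicated as a comment.

```python
# Hypothetical break-glass sketch: shift the front Traffic Manager away from
# Azure Front Door and onto the performance-mode Traffic Manager that fronts
# the public App Gateways. Treat as a starting point, not a finished runbook.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, PROFILE = "rg-mission-critical", "tm-contoso-front"   # placeholders

# 1. (Not shown) reconfigure the App Gateways: public front ends, listeners, WAF rules.

# 2. Stop handing out the Front Door answer (effectively 0% of traffic).
afd = client.endpoints.get(RG, PROFILE, "ExternalEndpoints", "afd-primary")
afd.endpoint_status = "Disabled"
client.endpoints.create_or_update(RG, PROFILE, "ExternalEndpoints", "afd-primary", afd)

# 3. Send everything to the nested performance-mode profile instead (100%).
dr = client.endpoints.get(RG, PROFILE, "NestedEndpoints", "appgw-breakglass")
dr.endpoint_status = "Enabled"
client.endpoints.create_or_update(RG, PROFILE, "NestedEndpoints", "appgw-breakglass", dr)

# Within roughly the 300-second DNS TTL, clients resolve to the new path.
```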
So that's the architecture I can use if I don't require CDN functionality. If I do require CDN functionality, it looks much the same. We're still going to have our Azure Front Door instance exactly the same as we had before. We're still going to have all of the various origins (I'm not going to be consistent in colors because I don't really remember), with or without App Gateway in this case; it could have it, it doesn't have to have it. So Azure Front Door points to these as before, but now we require that caching capability, which means I can't just use a DNS-based solution. What I'm going to have to have is an alternate content delivery network. So you'll have a second CDN solution from someone else who has the ability to communicate to these origins. At this point, you're still going to have that Azure Traffic Manager. You're still going to have it in always serve mode, because, again, I'm still not going to rely on health probes. I'm still going to use weighted. However, I'm going to do something a little bit differently.
This time, Azure Front Door, as my normal path, is going to get 90% of the traffic, but I'm going to give this alternate CDN 10% of the traffic. Why? What I don't want is everything going to Azure Front Door, populating its cache with the content, and then, if there's a disaster, I run my script as normal and set the other path to a weighting of 100. Think about what would happen: this alternate CDN's cache is empty. It would cause a rush and maybe flood the origins as it tries to populate the cache. So what we're going to do is give Azure Front Door most of the traffic, but the other CDN gets 10%, so its cache is populated. Then, if there was actually a problem, again we're going to break glass, and if we break glass, once again what do we do? We would change this so that Azure Front Door becomes 0% and the alternate CDN becomes 100%. That's the change we would make. But its cache is populated; we're not going to get that horrible run-on-the-bank kind of scenario as it tries to fill up its cache. So that's really the important point about it.
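For the CDN variant, the only real difference in that front Traffic Manager profile is the pair of weights, sketched below with placeholder targets; the break-glass flip is the same disable/enable move shown earlier.

```python
# Illustrative endpoint pair for the CDN scenario: Front Door carries most of
# the traffic while the alternate CDN gets enough to keep its cache warm.
from azure.mgmt.trafficmanager.models import Endpoint

endpoints = [
    Endpoint(
        name="afd-primary",
        type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
        target="contoso.azurefd.net",            # placeholder AFD endpoint
        weight=90,                               # normal path
        always_serve="Enabled",
    ),
    Endpoint(
        name="alt-cdn",
        type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
        target="contoso.alt-cdn.example.net",    # placeholder second-CDN hostname
        weight=10,                               # keeps the alternate CDN's cache populated
        always_serve="Enabled",
    ),
]
# Break-glass is the same move as before: disable afd-primary so the alternate
# CDN receives all traffic (weights must be >= 1, so "0%" means disabling the endpoint).
```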
One other thing then. So those are the models we can use if we're mission critical: we'll have an alternate path. If I don't need caching, I can just use DNS, but I need App Gateway to replicate the functionality of AFD. If I do need caching, I need a second CDN. But in all these pictures, there's Azure Traffic Manager, Azure Traffic Manager, Azure Traffic Manager, and Azure Front Door itself uses Azure Traffic Manager. And so Azure Traffic Manager seems to be a bit of a single point of failure in this picture. Yes, it has a 100% SLA, but that's a certain financially backed credit.
What I would say, though, is when we think about DNS, remember DNS has its own built-in native resiliency capability, because I have a cache. When I go and look up a DNS record, it has a time to live, a TTL; let's say it's 5 minutes. So if there was some blip in Azure Traffic Manager or any DNS service, a client that is already using it can handle the blip, because it's already got the record cached and doesn't need to go and talk to DNS again.
If I have concerns beyond that, well, I can do a number of different things. In my client application, I could build in logic to say, look, I'm actually going to cache in my app the IP that it resolved to, and if I can't resolve DNS anymore, fall back to the value I cached.
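As a simple illustration of that client-side idea, here's a minimal sketch using only the Python standard library: remember the last IP a name resolved to, and reuse it if DNS stops answering. A real client would also respect TTLs and handle multiple addresses.

```python
# Minimal "remember the last good answer" sketch; purely illustrative.
import socket

_last_good_ip: dict[str, str] = {}

def resolve_with_fallback(hostname: str) -> str:
    """Resolve hostname, remembering the last successful answer."""
    try:
        ip = socket.gethostbyname(hostname)
        _last_good_ip[hostname] = ip          # remember the last good answer
        return ip
    except socket.gaierror:
        if hostname in _last_good_ip:
            return _last_good_ip[hostname]    # DNS blip: reuse the cached IP
        raise                                 # never resolved before: surface the error

print(resolve_with_fallback("www.example.com"))  # placeholder hostname
```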
The other option I could absolutely take is having a second DNS provider and running them active-active. So now, if there was an Azure Traffic Manager issue, well, there's another DNS service it would go and switch to to resolve the record. So, depending on how mission critical you are, there are things you could do. Now again, I would say I'm not aware of any Azure Traffic Manager outages. It is a crazy cell-based architecture that's globally resilient and works under regional disruptions. But hey, the whole point of this: if it's mission critical, I design for anything to be able to fail. And so there are options you can take, either in the client and/or by having these active-active DNS configurations.
So overall, I hope that helped. Yes, Azure Front Door had a pretty big hiccup in October, but that whole exposure has been removed; there's no async processing. They've already accelerated resolution times 4x, and that's going to be down to 10 minutes in March. There's work going on around reducing any cross-tenant impact on top of the resiliency that already exists there. It has super strong resiliency, and none of that failed; that wasn't any of the issues. It was all about the way the config changes move through. But if you are mission critical, hey, you have options, depending on whether you need the caching or not. So, as always, I hope that was useful. Stay resilient, stay available, and I appreciate you watching.