Spring 2026 Temporal Product Update
By Temporal
Summary
Topics Covered
- Multi-Cloud Replication Enables True Cross-Cloud Deployments
- Azure Cloud Availability Targeted for July with Capacity Planning Challenges
- Six-Month Migration Completed in Less Than a Week
- Workflow Pause Addresses the Root Problem, Not Symptoms
- Projects Let You Consolidate Accounts and Keep Strong Isolation
Full Transcript
So, I just want to welcome everyone to the call today.
Um, I want to say thank you very much for attending. I know it's the Easter break, and we're standing between yourselves and your Easter bunnies. But we have a very packed seminar for you today with lots and lots of content. So we're going to do things a little bit differently. I'm sure you will have lots of questions, but what I'd like to do is ask you to put those questions into the Q&A box. You can find that at the bottom of your Zoom screen, right next to hosting tools. If you don't see it there, there are three little dots at the bottom; click those, and they will give you the option to see the Q&A box. Any challenges, please just chat to us, and we will try to help you.
Um so, two bits of logistics.
Uh, we're going to have a poll right at the beginning, and this poll will capture the things that you're interested in, which will enable us to tailor the content accordingly. We will then have a Q&A where we'll go through all the questions in the Q&A box. And prior to that, there will be another poll just to catch your perspective on what you've seen and how you thought the seminar went. That's really important because it enables us to capture the metrics and make our seminars as good as they can be. So, without further ado, what I'd like to do is hand over to Ben, and we can begin the Temporal product update. Ben, over to you.
Great.
Uh thank you very much.
Um, so, yeah, I think Milan covered these logistical details. But yeah, please do ask questions in the Q&A. I'm going to run through a lot of material; we've got a lot to cover. We're working on a lot of exciting areas, and we're excited to get into it.
Um, so I'm the director of product for Temporal Cloud.
Uh, so I really look across our whole cloud portfolio. I'm also thinking a lot about how operators and platform teams are able to deploy Temporal at scale. You know, a little bit further away from writing an individual workflow, a little bit more about how you run a million or a billion workflows safely, cost-effectively, reliably.
Uh before I get into exactly what we've shipped, I want to talk about how we ship features.
Uh, so most features will go through these three release stages. We launch things first in pre-release, usually. This is a sort of private, closed-beta mode. This is really where we're coordinating with design partners. You really do this to make sure you've got the APIs and the fundamental constructs right. When we put something into public preview, that's the point where we believe it's ready for production use cases. That's actually the point where a lot of customers will start to deploy those new features into production. We're obviously still iterating, but this is usually more of a soak period versus a period where we're dramatically changing functionality or breaking APIs. And then when we move into general availability, that's really when APIs are locked and pricing is locked. Usually we've invested a lot in self-serve, regional availability, scalability, etc. So we're trying to make sure we move through these phases as quickly as possible, while also making sure that we're incorporating feedback and building the right thing in each phase.

So, with that, I'll jump into a bunch of the stuff we've made generally available recently, and then a lot of the roadmap I'm going to be covering. Many of these features are in pre-release or in public preview, and I'll talk about how we're bringing those to fruition.

Uh, so it's been busy. This is a subset of the features we've shipped in the last, call it, nine months or so.
Um so, we've made a ton of investments.
The left side is a lot about how you actually run your Temporal code. So, literally just yesterday, we made worker versioning generally available. That's a better approach to iterating on your code without breaking your workflows. Along with that, we've invested a lot in worker resource tuning to make it easier for you to scale and right-size your worker fleets.
Uh, Nexus has been a big investment for us. It gives you cross-namespace connectivity; it basically lets you use functionality from one namespace in another namespace.
Uh, synchronous workflows are a huge improvement for customers with highly latency-sensitive use cases. So, you know, end-user consumer applications; maybe you're an e-commerce app putting items in a cart. Synchronous workflows can be a good solution to really keep that latency down and optimize that customer experience.
We're always investing in new SDKs. So we've launched a Ruby SDK that's fully GA now. I'll talk about another SDK we've got earlier in the pipeline in a bit. Um, and we've made workflow history export generally available. A lot of our customers are using this for analytics and for understanding patterns and usage of Temporal. It's also pretty powerful as a compliance feature.
Uh, I was actually just updating this deck about 30 minutes ago. Over on the right are a lot more operational considerations. We launched our open metrics endpoint this morning, or rather moved it into GA. It's been deployed; we think we have over 350 cloud customers using it. This is a much higher-cardinality, much finer-grained metrics solution, so we're very excited. This is unlocking task queue metrics and workflow-type-level metrics. It lets us do a lot more cool stuff, and lets you drill in much more accurately.
Uh, we also just officially made multi-cloud replication generally available. This is an extension of our multi-region replication that's been available for some time now, but multi-cloud replication lets you run a namespace with replicas in different cloud providers. So we're seeing a lot of customers use that for true multi-cloud deployments. We also see it being used for cloud-to-cloud migration, moving workloads to the right cloud for the right use case.
Uh, our Terraform provider, to manage all your infrastructure as code, went GA recently. We've also focused a lot on the cloud provider marketplaces, making it easy to buy from them. And then we've added a variety of compliance features: SCIM user syncing, and API key authentication for organizations that don't have mature mTLS cert management. API keys are a lot lighter weight to use while still maintaining security guarantees. And then audit log querying in the UI. So you can see we're working on a lot of different areas. I think one big theme that maybe comes through is enterprise readiness. We're making sure we have a lot of these fundamental checkboxes, and at the same time we're making sure we pave the way and make it easier to actually run the platform and use the framework, in terms of deploying your workers and managing your code.
So, with that, I'll run through what's coming up. We've broken it into roughly five areas. Platform and deployment: where is Temporal running, both where is the back end running and where are your workers running? We'll talk about extensions to the core execution model, the core Temporal framework. Also important there is how we're making Nexus better. Obviously, AI is popular, so we're going to talk about how we are leaning into the adoption we're seeing of Temporal specifically for building agentic apps. You know, we've talked a lot about our partnerships with OpenAI and Replit and Cursor. A lot of the companies that are leading the way on agentic applications are using Temporal as the backbone of those applications, and we're investing in features to make that even easier for them, and for you, to build those types of apps as we all figure out how to build them. And then I'll wrap up with observability and operations, and security and identity and access management. So, more of those operational concerns that are maybe less the fun part, but often end up being the critical things, or the blockers, for actually moving a lot of these use cases from cool demos into actual production.
So, platform and deployment.
Uh, first off, you know, talking to you all, my friends in Europe, we know that Temporal Cloud on Azure has been a long-awaited ask. We are heavily investing in this; we are working on this now. We are targeting a pre-release for Temporal Cloud availability on Microsoft Azure in July, and we are hoping to move that to public preview sometime in the second half.

Uh, one note here: part of the reason there are relatively large error bars on that is that Azure and all the cloud providers are in a big infrastructure capacity crunch. So we're working closely with them to figure out how we can get enough capacity, enough compute allocated, so that we can support the scaling and reliability guarantees that you expect from Temporal Cloud. That exercise is actually potentially more challenging than making the actual bytes run; the bytes already work on AWS and GCP, where Temporal Cloud already runs. Moving that to Azure is certainly a large lift, but it's actually a lot of that capacity planning and reliability testing that ends up being the long pole there. So, definitely excited about this. This is a huge unlock, and we are actively engaged in it.

I think we're still figuring out the exact timelines on when we land full feature parity, but the expectation is we'll actually have close to, if not full, feature parity earlier in the life cycle. If anything, we might need to stay in that pre-release phase for a little bit while we figure out some of those infrastructure capacity questions with Azure.
Um, so another big, long-awaited feature has been Temporal supporting serverless workers. Historically, Temporal workers work great: you can kill them, they come back, everything resumes, your code never fails. But you've had to deploy those workers as long-lived processes, and that leads to friction if your company is all-in on, or defaults to, a serverless code deployment model. It also definitely contributes to some of the challenges around worker life cycle management. Some of the investments we made around worker versioning, which I was just talking about, definitely help mitigate that. But we're also excited to support serverless workers and really eliminate some of those categories of worker life cycle management challenges if you are using serverless.

Uh, so really, what this means to start: we're actually adding this into the Temporal open source framework, and then we'll also run it in cloud. There's going to be a new component that, when there are tasks on a task queue backlog, will push a notification, or emit an event, to trigger a Lambda that you have configured. We're also partnering with other compute providers. But the idea is that instead of workers having to continuously poll the task queue, there will be a component that can notify that worker that there's work to be done. So, you know, spin up the serverless process, do the work, and then it can disappear. This is something we're targeting for pre-release in the May timeframe, around our Replay conference, so we'll have a lot more details and demos there. This is a huge unlock; it's a massive architectural deployment model we haven't been able to help people use until now.
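To make the push model concrete, here is a minimal plain-Python sketch of the idea: a backlog event triggers a short-lived handler instead of a long-lived poller. This is not the actual Temporal component or any real Lambda API; `TaskQueue`, `serverless_worker`, and the whole event-wiring are invented for illustration.

```python
from collections import deque

# Hypothetical sketch: event-driven dispatch instead of continuous polling.
# In the real feature, a Temporal-side component would emit the event and
# the handler would be, e.g., a Lambda you configure.

class TaskQueue:
    def __init__(self):
        self.backlog = deque()
        self.subscribers = []  # handlers to "spin up" when work arrives

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def enqueue(self, task):
        self.backlog.append(task)
        # Push model: notify handlers there is work, rather than
        # waiting for a long-lived worker to poll.
        for handler in self.subscribers:
            handler(self)

def serverless_worker(queue):
    """Simulates a short-lived process: drain the backlog, then disappear."""
    results = []
    while queue.backlog:
        results.append(f"done:{queue.backlog.popleft()}")
    return results

queue = TaskQueue()
processed = []
queue.subscribe(lambda q: processed.extend(serverless_worker(q)))

queue.enqueue("send-email")
queue.enqueue("charge-card")
print(processed)  # both tasks handled with no polling loop anywhere
```

The key design point is that no process exists between events; each handler invocation starts cold, drains what it can, and exits, which is exactly the life cycle serverless platforms are built for.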
Uh, so we've been rolling out capacity modes in Temporal Cloud. What this does is let you go beyond Temporal Cloud's native autoscaling, which has really always sized your namespace to your workload: it automatically looks at your usage, both your average usage and your spikes, and tries to maintain some headroom. Capacity modes basically give you a way to say, "You know what, I know that I've got a huge customer event coming up, or I know that I need to do a load test. I'm going to need more headroom even though my historical behavior doesn't indicate it." It lets you preemptively scale up your namespace so that you've got all the throughput capacity you're going to need to handle that event.

Um, we've also gone to great pains to design this in a way that, if you actually use the capacity that you're adding on, we don't charge you anything for the capacity itself. Basically, every capacity unit comes with a minimum percentage of required actions, and if you use those actions, the capacity is free. So we really are trying to make sure that our incentives are aligned. When you need more capacity, you add TRUs, you use that capacity, and you don't pay us anything extra beyond what your actions cost. We're really not trying to monetize compute; that's not the business we're in. We're here to help you use Temporal and get a lot of value out of that. So, capacity modes are in public preview, and we're expecting to move them into general availability later this month. And these are fully manageable through all of our programmatic interfaces.
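The incentive structure described above can be sketched as a small billing calculation. To be clear, the real TRU sizes, prices, and the exact "minimum percentage of required actions" are not stated in this talk; every number below (`INCLUDED_ACTIONS_PER_TRU`, `TRU_PRICE`, `ACTION_PRICE`) is invented purely to show the shape of the incentive.

```python
# Hypothetical sketch of the capacity-mode incentive: if you use the
# actions your reserved capacity implies, the capacity itself is free.
# All constants below are made-up illustration values, not real pricing.

INCLUDED_ACTIONS_PER_TRU = 100_000  # assumed included actions per capacity unit
TRU_PRICE = 50.0                    # assumed price per reserved capacity unit
ACTION_PRICE = 0.000025             # assumed per-action price

def monthly_charge(trus_reserved: int, actions_used: int) -> float:
    """Actions are always billed; reserved capacity is billed only if unused."""
    action_cost = actions_used * ACTION_PRICE
    if actions_used >= trus_reserved * INCLUDED_ACTIONS_PER_TRU:
        capacity_cost = 0.0  # incentive-aligned: used headroom costs nothing
    else:
        capacity_cost = trus_reserved * TRU_PRICE
    return action_cost + capacity_cost

# A load test that actually uses the headroom pays only for its actions:
print(monthly_charge(trus_reserved=2, actions_used=250_000))
# Reserved-but-idle headroom is what incurs the capacity charge:
print(monthly_charge(trus_reserved=2, actions_used=10_000))
```

The point of the sketch is just the branch: exercising the headroom you reserved zeroes out the capacity line item, so preemptive scaling for a real event costs you nothing extra.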
Uh, so I just mentioned this, but we just moved multi-cloud replication into general availability, and that pairs with multi-region replication. At this point, you're able to add a replica for any namespace either in another region in the same cloud provider on the same continent, or you're able to add that same replica, in exactly the same way, in a different cloud provider. And like I said, this serves both long-lived high-availability or disaster-proof multi-cloud deployment models, as well as migrations, or right-sizing between different cloud providers.
Uh, so I think we initially started doing this about a year ago: we've helped customers migrate from self-hosted open source Temporal into Temporal Cloud with zero downtime and zero data loss, trying to make that a seamless, in-place upgrade. So we've been doing this; it's been in pre-release for coming up on a year now, I think. We've done this with dozens of customers with actually quite high-scale workloads. This is something that we're continuing to invest in and scale up.

And then we're also excited that we have actually built the capability for you to migrate, in exactly the same way, zero downtime, zero data loss, from Temporal Cloud back to open source, self-hosted Temporal. Obviously, as the Temporal Cloud product person, this isn't something I want everyone to go do, but I think it's reflective of our commitment. You know, we know that you're taking a big bet on Temporal, and we really want to make sure that you truly are not locked into our cloud service. You truly have deployment flexibility to put the right workloads in the right places. We're really going to continue making these kinds of investments to make sure that you and your team have confidence that you've got that flexibility. This is something else we're going to start talking more about in the May timeframe.
So, those were the deployment model updates we're working on. Let's get into the actual execution model. How are we making Temporal, the framework itself, better?
Um, well, first off, obviously everyone's moved over to agentic development. We've actually recently shipped a Temporal agent skill that helps your Claude Code or Codex, or your coding harness of choice, actually write workflows materially better, using Temporal best practices and knowledge. It's got specialized knowledge for each of our supported languages. So we're going into pretty great detail in the skill to make sure you're doing the idiomatic thing for your language, as well as the performance-optimized, correctness-optimized, and cost-optimized best practices for Temporal.

Um, this is something where we've seen some, frankly, almost ridiculous feedback from customers. One large financial institution said they had a six-month migration planned, and they were able to complete it in less than a week, because they plugged the Temporal skill into the rest of the AI agent framework they had built to do that migration. They saw such a high hit rate on the production quality and readiness of those apps on a first pass that they were able to pull in that timeline to a pretty crazy degree. We definitely don't think that's necessarily the scale of impact everyone is going to see, but this is a really exciting thing. So this is in public preview, and we expect to move it into general availability very soon.

We're also certainly expanding the universe of skills. The current Temporal agent skill is focused on how you write a workflow, how you translate business logic into Temporal code. There's obviously a ton more we can do there. We've also actually launched a skill around debugging cloud connectivity and authentication issues. There's a lot more we can do on understanding and debugging production problems, and on understanding just what you're seeing, what the patterns are operationally. Overall, the focus is really: how do we make agents excellent at everything that has to do with Temporal? So, we've got a good start here, and we're excited to add more.
Uh, so another huge unlock. This is another big architectural shift, probably on par with serverless workers. Serverless workers is at the infrastructure layer; standalone activities is basically allowing you to run an activity outside of a workflow context. That means you get the exact same expressiveness, the same traceability and reliability guarantees that Temporal provides for any activity, but you're able to unwrap that from the actual workflow. We're building this because we've gotten tons of customer feedback saying, "A workflow is a great construct, and I want all these guarantees, but from a boilerplate perspective, I feel like I'm doing a lot of work to set up a whole workflow to just do one thing. It feels a little silly." And then there's obviously a cost element, too. We want to make sure we're always making it as cheap as possible to use our service, and we really don't want to charge you for extra workflow scaffolding, and the actions that come with it, if we don't have to. If you don't need those, we don't want to make you incur them. So we think that, if you're doing one thing at a time, this basically lets you cut the cost in half. It also super-simplifies getting started and a lot of the onboarding experience. So, I think we're excited for this to unlock some new use cases, and also to potentially lower some of the barriers to entry, because we also hear from customers that Temporal is a lot to understand and a lot to ramp up on: "I'm trying to get other teams to adopt it; it's a little bit overwhelming." Potentially, standalone activities are going to provide a nice entry point for folks to actually get started and see the value.
Uh, so, looking at the poll, it looks like fairness and priority appears to have come in second. Good thing we're talking about it. So, what is task queue fairness? It's the ability, within a task queue backlog, to assign fairness keys to different workflow executions or different activities. The Temporal server on the back end will then make sure that we fairly balance the rate of task queue dispatch. What that means is that you're able to configure it and say, "Hey, I want to make sure that all of my tenants have a similar quality of service, even if one tenant is submitting lots and lots of work. We want to make sure we don't starve all of the other tenants. At the same time, if nobody else is really putting load on the system, we don't want to artificially rate-limit that one tenant; we want them to be able to get through their work and have the best experience they can." We've often seen customers having to resort to naive rate limiting, where they just artificially cap the throughput of the system. We've also seen customers investing in queuing systems, database solutions, all kinds of stuff in front of Temporal, trying to achieve some of this fairness behavior and deliver some of these quality-of-service guarantees. You can get rid of that stuff now.
Um, in fact, a customer who's been adopting fairness actually expects their Temporal bill to go down on the order of 18 to 50%, because they were spending money on infrastructure in front of Temporal, and they were also very aggressively querying and signaling their workflows, trying to hack in an equivalent fairness behavior to deliver some of these quality-of-service guarantees and provide different tiers across their levels of service. And they're going to be able to get rid of all of that now. So, again, we're really excited to improve the quality of what you're able to do. You're going to be able to very expressively say, "One tenant ought to get the same weight as another tenant, but a third tenant maybe should get 10x the weight," so almost anything they're doing, they're going to make progress ten times faster. We're giving you that flexibility, but in a way that potentially unlocks really material cost savings and/or scaling advantages for you.
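The weighted-balancing behavior described above can be sketched with the classic stride-scheduling technique: each fairness key advances a virtual clock by an amount inversely proportional to its weight, and the key with the smallest clock is dispatched next. To be clear, this is not Temporal's actual dispatch algorithm, just a minimal self-contained illustration of the same goal; the tenant names and weights are invented.

```python
import heapq

# Sketch of weighted-fair task dispatch (stride scheduling). A tenant with
# weight 10 takes steps 10x smaller on the virtual clock, so it is picked
# roughly 10x as often -- without ever fully starving a weight-1 tenant.

def fair_dispatch(backlog_by_tenant, weights, n):
    """Dispatch up to n tasks, shares proportional to each tenant's weight."""
    heap = [(0.0, tenant) for tenant in sorted(backlog_by_tenant)]
    heapq.heapify(heap)
    order = []
    while heap and len(order) < n:
        vtime, tenant = heapq.heappop(heap)
        if backlog_by_tenant[tenant]:
            order.append((tenant, backlog_by_tenant[tenant].pop(0)))
            # Larger weight => smaller stride => dispatched more often.
            heapq.heappush(heap, (vtime + 1.0 / weights[tenant], tenant))
    return order

backlog = {"free": list(range(100)), "premium": list(range(100))}
order = fair_dispatch(backlog, weights={"free": 1, "premium": 10}, n=22)
premium_share = sum(1 for tenant, _ in order if tenant == "premium")
print(premium_share)  # premium gets about 10x the dispatch slots of free
```

Note that if the premium backlog drains, the free tenant immediately absorbs the full dispatch rate, which matches the "don't artificially rate-limit when nobody else needs the capacity" behavior described in the talk.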
So, the complement to fairness is priority. We usually talk about these things together.

Um, I should also mention that fairness is a paid feature, or will be. We're not yet charging for it, but we will be. When you enable fairness on a namespace, there's a 0.1-action upcharge on all the actions in that namespace while it is enabled. To be clear, when the feature is enabled, we will apply that fairness to your prioritization. If you disable the feature, your application code does not break; everything still works. We'll just allocate the tasks in the default way that we always have, so there's no harmful impact. Basically, at this fine-grained, namespace level, you're able to say, "You know what? This namespace, we really want those quality-of-service guarantees, and we want to make sure we're handling these different tenants the right way. Other namespaces, we don't care; we don't want to pay for it." That's totally in your control.
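To put a number on the per-namespace toggle: interpreting the "0.1-action upcharge on all the actions" as 1.1 billable action-equivalents per action is this sketch's reading of the talk, not confirmed pricing mechanics, so check the current Temporal Cloud pricing docs before relying on it. The namespace names and volumes below are invented.

```python
# Sketch of the per-namespace fairness upcharge as described in the talk:
# each action in a fairness-enabled namespace bills an extra 0.1 action.
# The 1.1x-multiplier interpretation is an assumption, not official pricing.

FAIRNESS_UPCHARGE = 0.1

def billable_actions(actions: int, fairness_enabled: bool) -> float:
    multiplier = 1.0 + (FAIRNESS_UPCHARGE if fairness_enabled else 0.0)
    return actions * multiplier

# Opt in per namespace: pay the upcharge only where you want QoS guarantees.
usage = {"payments": (1_000_000, True), "batch-cleanup": (1_000_000, False)}
for namespace, (actions, enabled) in usage.items():
    print(namespace, billable_actions(actions, enabled))
```

The shape matters more than the numbers: the upcharge is scoped to the namespaces where you enable fairness, and disabling it simply drops the multiplier back to 1.0 without breaking anything.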
Uh, priority is not a paid feature; you can use it with no upcharge. What priority is doing is much more blunt. Basically, there are five priority levels. Whenever a task comes in, if it's priority one, we will try to dispatch all priority-one tasks before we try to dispatch any priority-two tasks. This is really valuable for distinguishing between, say, background jobs versus user-facing, critical-path actions.

Um, sometimes customers come to us and say, "Hey, I think I need priority; I want this really hard ranking." And when we dig into it, they often actually find that fairness is a better fit, because they want a little more fluid, a little more dynamic balancing. Fairness actually applies within every priority level. So you're able to say, "Hey, for all of my high-priority, customer-facing, critical-path stuff, I still want that fairness balancing, to make sure that one premium customer doesn't come in and starve all my other customers." And you also have that balancing for, say, my priority-five background cleanup jobs: I still want to balance across my different tenants and make sure everyone's getting a reasonable quality of service. So, priority and fairness: these are both currently in public preview, both expected to GA in the May timeframe. Pretty significant leaps forward.
There's a lot more we can do on this whole area of flow control, and my team is very aggressively doing discovery. So we would love to talk to you if you've got applications for these features in their current state, and also about where you would want to take this in the future. There's so much we can do within Temporal itself.
So, I talked a little bit about Nexus already, but I wanted to hit on it again, because it's just such an important extension to Temporal. What it is: it lets you connect apps across namespaces, so it lets you do per-service encapsulation. Obviously, if you're running a service-oriented architecture or microservices, Nexus is a really great way to expose endpoints and functionality without having to have fully shared implementation and access across all these teams.

What I think is really cool now is that we're seeing customers adopt Nexus for AI use cases, where they're basically wrapping tool calls or, you know, MCPs. You need that sort of durable RPC, and that's basically what Nexus is giving you. That's a really nice fit for AI use cases, where you've got an unreliable inference provider: it's slow in compute time, it's expensive, your token costs rack up. So using durable RPC to abstract away some of these tool calls or sub-agent invocations is a really cool and, I think, very powerful pattern we're seeing a few customers adopt. Nexus is the way folks are implementing that; very excited about that. It's been generally available, and we are actually just about to promote a couple of the other SDKs to the next release stage, which is a big deal.
Um, a huge unlock for making Nexus easier to use: right now, you have to call Nexus from within a workflow. We are unblocking that, so you'll be able to invoke a Nexus operation even if your caller, the service that's calling into Nexus, is not itself operating within a Temporal workflow. This should be a huge unlock for teams that are using Temporal and want to expose functionality to other teams, to let them start to see the goodness of Temporal, without making them go use Temporal all up or write their own workflows; that doesn't necessarily make sense for them. Again, we're trying to make it easier to gradually onboard and adopt some of the functionality. This is currently getting built, and we're trying to get it pre-released in the Q2 timeframe.
Um, so I talked a little bit about synchronous workflows earlier. We GA'd some of the basic capabilities, but we still have the capability we're calling eager workflow start. This has been in public preview, I think, since the fall. Honestly, we're still soliciting a lot of input here, and we don't have a hard GA date on it. What eager workflow start allows you to do is basically kick off a workflow and bundle the first task execution together with it. The effect is to lower the latency for the workflow to get started and make progress. Again, this is key in, for instance, user-facing consumer applications, where latency is hyper-sensitive. It saves you a couple of round trips. So, we'd love your feedback on this.
Uh and then the Rust SDK. So, we've added the ability to officially write, you know, simple old workflows using Rust. The Rustaceans are strong, a small but mighty community. We shipped this recently, and we are hoping to bring it into public preview in that sort of May / Q2 time frame. But really, a little inside baseball: we've actually had a Rust core internally that we've used to build many of our other SDKs. We had that in place for a while, but we hadn't actually made the investment to make it fully end-user facing and polish it up into a full SDK. So this was really kind of a labor of love from some of our engineers, who were just like: we know folks want it, and we're already getting all this great knowledge out of it internally. Let's do the work and expose it externally.
Cool. Take a breath. Let's see if I'm not missing any crazy things in the Q&A or the chat. Cool.
Uh so now let's talk about how Temporal is helping sort of AI natives, or non-AI natives, build AI applications. Um really, our approach is: there's so much experimentation at the agent SDK level, at, you know, evals, and agent observability really seems like the wild west. I think right now our policy at Temporal is: we see all these people doing all this cool stuff, and we want to play nice with everyone. We would like to integrate with and basically help make all of these experiences reliable and observable, and give you all the Temporal stuff you want. We're not necessarily trying to say, hey, Temporal knows how to build an agent developer experience better than anyone else. Um so we've got partnerships and integrations with a variety of different providers, a variety of different verticals. I'll zoom in a little bit. We've been integrated with the OpenAI Agents SDK for a few months now.
Um this is something else we're hoping to bring to full general availability in the Q2 time frame. Um but really, the idea here is that you're able to write your code just using the expressiveness of the agent SDK, and with minimal, near-zero overhead you also get all the benefits of Temporal: visibility into what those agents are doing, and the reliability, retries, and traceability of that agent behavior, because it's being backed with Temporal. And so what we're really trying to do is distinguish the layers: express your logic for your agent in whichever SDK suits you best. We've seen really strong adoption of the OpenAI Agents SDK, but everyone's playing with many of them, it seems like. Um and then we want to underpin that and help with the state management so that it's durable. Frankly, what we've heard from a lot of customers is that it's hard enough to actually iterate on these agentic loops and get them to high degrees of quality. There's so much context management, there are so many eval challenges. It's a hard problem in its own right. And then if you layer in processes failing, network errors, etc., the standard Temporal stuff, it gets really hard to make progress and get those things to prod. So our hope is that we can at least carve off that bottom part, those underpinning issues, with Temporal, and then everyone still has to solve all the hard work of making these agents actually do good stuff, do the right thing.
Uh something else that comes up a ton: large payload storage. So, as you have very long conversation histories, as you've got agents calling lots of tools, doing lots of turns, invoking sub-agents, your workflow history blows out. Um we are building native support for referencing payloads, instead of just putting all that information into your workflow history and then hitting a size limit, which is the current behavior. We are replacing that with this sort of claim check pattern, where you will be able to configure an object store, you know, an S3 bucket or equivalent, and transparently the SDK, instead of adding in the full raw content, will actually just put a pointer to object storage into your workflow history. And so you'll be able to operate or build applications without having to reason so much about hitting, or working around, Temporal workflow history size limits. So, this is definitely a big quality of life improvement. This is something that's landing in pre-release in Q2.
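The claim check pattern described above can be sketched generically. This is a minimal illustration, not the Temporal SDK feature: the object store is a plain dict standing in for an S3 bucket, and the size limit and key format are invented for the example.

```python
import json
import uuid

# Hypothetical in-memory stand-in for an object store such as S3.
object_store = {}

SIZE_LIMIT = 1024  # pretend history payload limit, in bytes


def encode(payload):
    """Claim check: large payloads go to the object store; only a pointer
    (the claim check) is written into workflow history."""
    if len(payload) <= SIZE_LIMIT:
        return payload
    key = f"payloads/{uuid.uuid4()}"
    object_store[key] = payload
    return json.dumps({"claim_check": key}).encode()


def decode(stored):
    """Resolve a pointer back to the original content, transparently."""
    try:
        meta = json.loads(stored)
        if isinstance(meta, dict) and "claim_check" in meta:
            return object_store[meta["claim_check"]]
    except (json.JSONDecodeError, UnicodeDecodeError):
        pass
    return stored


big = b"x" * 10_000
pointer = encode(big)
assert len(pointer) < SIZE_LIMIT              # history only holds the pointer
assert decode(pointer) == big                 # content round-trips
assert decode(encode(b"small")) == b"small"   # small payloads pass through
```

The application code only ever sees `encode`/`decode` round-tripping its data; where the bytes actually live is the codec's concern, which is what makes the history size limit stop being the application's problem.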
Um but yeah, this has been sort of a nagging challenge for a lot of users, as has streaming.
So right now, an activity completes and it's sort of all or nothing. Obviously, a pretty foundational aspect of interacting with a lot of LLMs is those responses streaming back. This is something else where we've seen a lot of customers build different workarounds. We are building this natively into the framework, so that you are able to stream back incremental responses in a way that's compatible with Temporal's core reliability and persistence guarantees. So, we are going to again be pre-releasing this in the Q2 time frame. Um and, you know, we think this is a big unlock for UX for the folks building AI apps.
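One way to picture "streaming that's compatible with persistence guarantees" is a chunk stream with a durable progress marker, so a crashed consumer resumes instead of restarting. This is a hypothetical sketch of the idea, not how the Temporal feature is implemented; the checkpoint dict stands in for durably recorded state.

```python
# Hypothetical sketch: stream incremental chunks while recording progress,
# so a crashed consumer resumes from the last checkpoint.

def token_stream(text, chunk=4):
    for i in range(0, len(text), chunk):
        yield i, text[i:i + chunk]


def consume(text, checkpoint, crash_after=None):
    """Consume the stream from the saved offset; optionally 'crash' mid-way."""
    out = []
    seen = 0
    for offset, piece in token_stream(text):
        if offset < checkpoint.get("offset", 0):
            continue  # already durably recorded, skipped on replay
        out.append(piece)
        checkpoint["offset"] = offset + len(piece)  # durable progress marker
        seen += 1
        if crash_after is not None and seen == crash_after:
            raise RuntimeError("worker died mid-stream")
    return "".join(out)


text = "streaming responses back incrementally"
ckpt = {}
try:
    consume(text, ckpt, crash_after=3)  # first attempt dies after 3 chunks
except RuntimeError:
    pass
resumed = consume(text, ckpt)           # second attempt resumes, no rework
assert ckpt["offset"] == len(text)
assert text.endswith(resumed)           # only the remaining suffix was re-emitted
```

The design question the talk alludes to (how much incremental state to record for replayability) is exactly the question of how often `checkpoint["offset"]` gets persisted.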
Um a little bit further out, we're really excited about using Temporal's workflow history and visibility to actually help you design agents: to look at possible routes your agent could take, and prune and optimize the agent. So, basically being able to say: I see my agent got to step five of ten, and it did something wrong. Well, I can make a code change and then rerun it from step five, so I don't have to waste my time rerunning everything before that. But then also, potentially, you can kick off and run 10 or 50 or 1,000 variants of the code from that point, and really explore the possibility space and see what your agent would do. This is something that you basically get out of the box with Temporal. We're building some UI and APIs around this to make it more accessible. Um but this is actually something that potentially makes Temporal a lot more valuable and a lot more useful in that initial design phase, when you're feeling out: how am I going to get this software to do what I want it to do? Um versus, right now, I think a lot of folks think of Temporal when it's like: all right, I know I have my code working, and now I want to make sure it never breaks and that it scales really well. We're trying to be useful at all points in the life cycle.
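The "rerun from step five" idea rests on replay: results already in history are not recomputed, so branching variants from a recorded prefix only re-executes the tail. A minimal sketch, with made-up names (the real mechanism is Temporal's event-history replay, not this toy):

```python
# Hypothetical sketch of replaying a recorded prefix, then branching variants.

def run_agent(steps, history=None):
    """Run steps; results already in history are replayed, not recomputed."""
    history = list(history or [])
    results = history[:]
    executed = 0
    for i, step in enumerate(steps):
        if i < len(history):
            continue  # replayed from history, zero cost
        results.append(step())
        executed += 1
    return results, executed


expensive_calls = {"count": 0}

def expensive(i):
    def step():
        expensive_calls["count"] += 1
        return f"step-{i}"
    return step

steps = [expensive(i) for i in range(5)]
full, ran = run_agent(steps)
assert ran == 5 and expensive_calls["count"] == 5

# Branch three variants from step 3: only steps 3-4 re-execute per variant.
prefix = full[:3]
for _ in range(3):
    _, ran = run_agent(steps, history=prefix)
    assert ran == 2
assert expensive_calls["count"] == 5 + 3 * 2
```

Scaling that loop to 1,000 variants is what makes exploring the possibility space cheap: the expensive early steps are paid for once.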
Uh related to that: agentic observability. This is really recasting a lot of what you're already familiar with from the Temporal timeline view and workflow history, trying to basically up-level that from tasks and activities and retries into the more agentic concepts that are getting wrapped up by the OpenAI SDK, or whichever other SDKs you're using. We want you to be able to write your agents at that abstract level, not have to think about Temporal internals, and then, to enable that, we need to show you that data and performance in the UI and in the API at that rolled-up agentic level. So, we're making investments here in being able to basically label Temporal activities in a way that maps to those higher-level SDK constructs.
Cool. Observability and operations. I know I'm running fast; I've got to keep going. So, the Temporal worker controller. This is a Kubernetes controller that actually lets you deploy and scale workers. It's going to go into GA later this month. This is what we see customers using to deploy workers if they use Kubernetes, basically full stop. We very recently added autoscaling, or HPA, support. I think a gap for this controller before was its ability to actually scale your workers based on the task queue backlog. That now exists and is available, and it's actually aware of worker versioning. That's been a gap with KEDA, which is another Temporal-supported Kubernetes scaler option. Um, we are really recommending everyone use the Temporal worker controller. It's going to be GA soon. It supports autoscaling. It supports versioning. It also supports other capabilities; for instance, upgrade on continue-as-new is a new capability that basically helps you work around some of these versioning issues with very long running workflows. So anyway, this is the way to do things going forward if you're using Kubernetes at all. Excited that we're about to get this out the door.
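To make "scale workers based on the task queue backlog" concrete, here is a back-of-envelope sketch of the kind of calculation an HPA-style autoscaler makes from queue metrics. The function name, parameters, and numbers are illustrative, not the controller's actual logic.

```python
import math

def desired_workers(backlog, tasks_per_worker_per_sec, target_drain_secs,
                    min_workers=1, max_workers=50):
    """Scale so the current backlog drains within the target window."""
    per_worker = tasks_per_worker_per_sec * target_drain_secs
    need = math.ceil(backlog / per_worker) if backlog else min_workers
    return max(min_workers, min(max_workers, need))


assert desired_workers(0, 10, 30) == 1            # idle queue: floor at minimum
assert desired_workers(6000, 10, 30) == 20        # 6000 / (10 * 30) = 20 workers
assert desired_workers(1_000_000, 10, 30) == 50   # capped at max_workers
```

Versioning-awareness, as mentioned above, would mean running this per worker deployment version so that an old version's backlog does not inflate the new version's replica count.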
Uh activity commands. This is another category where we put some stuff into public preview in the fall, and we're really looking for feedback. We basically want to give you all the options you need to make an activity healthy. Activities can fail in a variety of ways for a variety of reasons. So, we have activity pause and unpause. We have activity reset. We have the ability to actually update the options on an activity and change some of its config. And then you're able to do some of those things in a batch way, to fix a whole set of activities at one time. Um, [clears throat] this ability to remediate issues at runtime is obviously super important, especially as you're running at scale, especially as you're getting more teams running more workloads, especially as you've got more third-party dependencies. So, I think this is a big area for us to get better at. One of the big challenges here is, obviously, that reasoning about the whole Temporal state machine sometimes feels overwhelming, and when you're dealing with overriding some of the configurations or behaviors, I think it can get tricky to reason about. And so, we're trying to talk to users and understand what types of guarantees you need to use these operations, feel safe, and understand what they're doing.
Um so, related to that, zooming in: workflow pause is a huge ask. (Looks like the slide got messed up.) Um but one of the big things is, in addition to being able to pause a workflow, we want to make sure that you're able to even just know that there's a problem with the workflow. So, we actually launched this sort of issues detection a while ago, where we'll automatically put search attributes on workflows that are having an issue. But now we are working towards a pre-release of actually being able to pause an entire workflow. That's going to land later this month. Um, that's distinct from activity pause. Workflow pause obviously has potentially much bigger ripple effects, but in many situations that's what you actually need. You don't need to just pause an individual API call. You need to pause the entire business process while either a bad code deploy gets fixed or a downstream dependency gets recovered. You don't want to just be hammering it with retries. You don't want to be accumulating potentially dirty state that you have to reconcile later. So, that's our goal with workflow pause. That's coming out soon.
Uh the Cloud Ops API. This is really what underpins our Terraform provider, which has been generally available. We've been actually locking in the exact API contract for this, for both the gRPC and the HTTP APIs. The Cloud Ops API is really our control plane, or management plane, API for cloud. It lets you manage your namespaces, users, service accounts, API keys, mTLS certs, connectivity rules, Nexus endpoints: all of the things in cloud above the data plane. This has existed for a long time. I think it's taken us longer than we would have liked to get it to general availability, but we have a lot of customers using and scripting against this today.
Uh the billing API. This is a huge thing for cost management. We're exposing an API to let you pull down an extremely detailed breakdown of your costs. It includes namespace tags; it's broken down by namespace and by charge type. We've actually built this in a format that's compatible with the cloud providers' cost management tools. So, there's lots of powerful analysis you can do there. That's coming later this month. Related: the usage API. We haven't shown this yet, but this is basically metrics-level access, to get much more granular visibility into namespace-level usage, getting down even into more granular details like workflow type. This complements the billing API: the usage API is what you're using, what's happening through the system, and the billing API is once all that usage gets translated into your actual Temporal Cloud charges.
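As an illustration of the kind of analysis a detailed cost export enables, here is a tiny grouping over records broken down by namespace and charge type. The record shape and field names are made up for the example, not the billing API's actual schema.

```python
from collections import defaultdict

# Invented record shape for illustration only.
records = [
    {"namespace": "payments", "charge_type": "actions", "usd": 120.0},
    {"namespace": "payments", "charge_type": "storage", "usd": 30.0},
    {"namespace": "search",   "charge_type": "actions", "usd": 45.5},
]


def totals_by(records, key):
    """Sum cost over any dimension present in the export."""
    out = defaultdict(float)
    for r in records:
        out[r[key]] += r["usd"]
    return dict(out)


assert totals_by(records, "namespace") == {"payments": 150.0, "search": 45.5}
assert totals_by(records, "charge_type") == {"actions": 165.5, "storage": 30.0}
```

Because the export is per-namespace, this is also the shape of data you would feed into internal chargeback or showback reports.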
Uh, we're also upgrading our billing center. Yeah, I know I'm running low on time, so I'm going too fast and not giving this the credit it deserves, but we're basically layering those usage and billing API improvements into the UI, to make it even friendlier to do cost analysis and let you do internal cost accounting.
So, security and IAM. Private connectivity: tons of users require, you know, "we need to use AWS PrivateLink or Google Cloud Private Service Connect to connect to Temporal," and often "we want to lock down and only allow access over those networking methods." In the fall, we released these capabilities in public preview. We're bringing them to GA in the Q2 time frame. This gives you network-level control to make sure your namespace can't be accessed except from trusted networks. Uh, workload identity federation. This is actually something we're in the discovery phase on; I think we're hoping to really dig into it in the second half. What this would let you do is, instead of using a static API key or mTLS cert that's created within Temporal Cloud, we want you to be able to integrate your identity provider, so that we can authenticate a request, get a short-lived token, and make sure the source of truth for identity information is fully in your IdP of choice or your secret store of choice, allowing you also to manage the token or credential life cycle at a really granular level. We've gotten lots of feature asks for "make API keys do this" or "give me more flexibility on making certs do that," and they 100% make sense. I think we're excited about this as a way to potentially just deliver almost unlimited flexibility and control to you by federating with your identity provider.
Uh so, projects. Projects are basically a folder for namespaces. Right now in Temporal Cloud you have an account, and then you have a bunch of namespaces. That means it's really difficult for you to even logically reason about, like, are these namespaces a group? It also makes permission control really hard: you either have to give somebody really broad account-level access, or you have to manually enumerate across all the individual namespaces. We've gotten a ton of feedback that that doesn't work for folks. Projects basically introduce a level in between: you have an account, a bunch of projects, and then namespaces belong to one project. This lets you delegate permissions, and lets you scope the blast radius of changes. In many cases customers are telling us, "Oh, I have a few different Temporal Cloud accounts. I could probably consolidate them all and have them in isolated projects. That gives me more unified visibility at, like, the billing level, but it still gives me strong isolation. I know that someone in project A can't mess with project B." These are coming soon, in the Q2 time frame, in pre-release.
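The isolation property being described (a grant scoped to project A never reaches project B's namespaces) can be modeled in a few lines. This is a hypothetical data model; the project, namespace, and user names are invented for illustration.

```python
# Hypothetical account -> projects -> namespaces hierarchy.
projects = {
    "project-a": {"ns-orders", "ns-billing"},
    "project-b": {"ns-ml"},
}

# Grants are scoped to projects, not to the whole account.
grants = {"alice": {"project-a"}}


def can_access(user, namespace):
    """A user reaches a namespace only via a project they were granted."""
    return any(namespace in projects[p] for p in grants.get(user, ()))


assert can_access("alice", "ns-orders")
assert not can_access("alice", "ns-ml")    # project B stays isolated
assert not can_access("bob", "ns-orders")  # no grant, no access
```

Adding a namespace to `project-a` automatically extends alice's access, which is the delegation win: no per-namespace enumeration, and the blast radius of any grant stays bounded by one project.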
Uh and then, I think, my last one: custom roles. We also know that our permission model can never be flexible enough. Whatever number of roles we predefine, they will not fit some legitimate use case that folks have. So, of course we're building custom roles. What this is: you can define a role, you can define the set of resources that role applies to, and then you can specify the actions that role is allowed to perform on those resources. This is incredibly composable. The first stage here is going to be exposing control plane level fine-grained control. So, being able to, for instance, differentiate access to viewing billing information versus access to managing users versus access to creating namespaces. We often see a lot of different permutations of those types of capabilities at different companies. Fine-grained, or custom, roles will allow you to design exactly what you need, so you can give every user exactly the access they need and no more, in this highly composable way. And then we'll be bringing that into the data plane layer after we land it for the control plane. Um, that is also coming in Q2.
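A custom role as described, a set of (resource, action) pairs, can be sketched directly. The role, resource, and action names here are invented for illustration; the actual Temporal Cloud role and permission names may differ.

```python
# Hypothetical composable roles: each role is a set of (resource, action) pairs.
ROLES = {
    "billing-viewer": {("billing", "view")},
    "namespace-admin": {("namespace", "create"), ("namespace", "delete")},
}


def allowed(user_roles, resource, action):
    """Deny by default; permit only explicitly declared (resource, action) pairs."""
    return any((resource, action) in ROLES[r] for r in user_roles)


finance = ["billing-viewer"]
platform = ["billing-viewer", "namespace-admin"]

assert allowed(finance, "billing", "view")
assert not allowed(finance, "namespace", "create")  # exactly the access needed, no more
assert allowed(platform, "namespace", "create")
assert not allowed(platform, "users", "manage")     # undeclared pairs are denied
```

Composability here just means roles union cleanly: the `platform` user's permissions are the union of two independently defined roles, with deny-by-default for everything else.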
Cool.
Wow, I was not that far off, I think.
Um, cool. So, I guess, Milan, I think we maybe had another poll or Q&A we wanted to run. I guess I don't see that. Yes, there it is. Cool. Um so, yeah, we just popped up a poll on the screen; please do answer. From the product side, we would love to come demo any of these features and dig in deeper. Um, also to get your feedback. I think, as you can tell, a lot of this we really co-develop with customers. We're also, you know, if you'd be interested in coming to an in-person event, we'd love to see you, whether that's building AI apps or coming to just a general Temporal training session. Obviously not required. And then, yeah, any other topics that folks are interested in, we're always interested. We're building this for you and with you. So, please, please share everything you've got. Um, I do see there are a couple of questions in the Q&A. I don't know if I can jump in on those, or give a second for folks to do the poll.
Um, oh cool. So, I'll speak to a couple of these questions. Um, so Johnny asked: in the big picture, how would streaming work with the async request-response model of activities? That's a good question; it's been one of the big design challenges, like, how do we integrate that? There have also been challenges about how much we want to store incremental state in the workflow history and, for replayability, whether you need to be able to replay "I got exactly this stream at exactly this time." So, the team is, I think, starting with a more client-side change to the libraries, to basically give you more of the streaming behavior but wrap a little bit of it within the SDK. I think we're then going to layer on more server-side changes. The goal is to make this as transparent as possible. Um, I'm honestly not sure how we are going to exactly handle the async semantics in each of the SDKs, in each of the languages. That's something our team could follow up on, but I think our goal is to do this SDK-side change first and then to layer it into the back end, in the service.
Um, and then there was a question from Olivier about: will the public internet block for a namespace, enforced to go through PrivateLink for example, apply to both the data plane and the control plane? So, the connectivity rules we have are only at the data plane level. We actually do support making a PrivateLink connection to our control plane, so that capability does exist today. We don't offer an ability, [clears throat] and frankly we don't really plan to offer an ability, to limit access to the control plane at the network level, just because that creates a basically unrecoverable error: if you misconfigure your network access, you lose access to the control plane, and then you can't fix it. So, I think right now our posture is to make sure that there are secure connectivity options available for the control plane, and to give you enough policy-level, logical enforcement to say people should only be accessing this from a secure network. But our goal is to avoid, literally at the network layer, the situation where I submit one API request and then my next API request cannot go through, because I just cut off my own network access. That's a pattern we're trying to avoid, if that makes sense.
Uh cool.
That's wonderful. I think we've come to the end of our presentation. Thank you to all those who asked questions. Thank you for taking part in the polls. They help these events run much more smoothly and help us make the next one even better. All that remains to be said is: Ben, thank you so much. We appreciated your time and going through all the content. And I'll wish everyone on this call a happy Easter, and we'll see you soon.
Thank you.
Thank you.