
Next.js Conf 2025

By Vercel

Summary

Topics Covered

  • AI Agents Demand Local Reasoning APIs
  • Turbopack Defaults Unlock Agent-Scale Speed
  • Composition Beats Global Caching Nightmares
  • Cache Components Enable Partial Prerendering

Full Transcript

[Music] [Applause]

Please welcome to the stage Vercel founder and CEO Guillermo Rauch.

Good morning.

Good morning everybody. Thank you, and welcome back.

Hello. Today we're having the sixth annual Next.js Conf. I'm pumped to spend a day with everyone here in this room.

And a special shout out to the thousands of people watching around the world, especially our watch parties in London and Berlin. Thank you for spending your time with us today.

Last year when we met, we were a year and a half into this app router era of Next.js, and at that time our framework was being downloaded 7 million times per week. Over the course of building the app router we made a lot of bets. We'll be talking a lot about bets today.

When it first launched, it had the earliest mainstream implementation of React Server Components. It also supported nested layouts. It used the file system as the API for its routing features. And it kicked off the development of Turbopack to ensure that this new server-first architecture would scale to apps of all sizes.

Today, Next.js is being downloaded 13 million times per week. Nearly doubled.

Thank you.

Nearly doubled adoption in just one year, even with AI in the mix, right? That tells us that many of our bets are paying off, and we're so grateful to the community who has helped us get the app router to where it is today. Thank you so much.

But how we're writing software has changed. When we started the app router, GPT-3 had just come out. ChatGPT didn't exist. GitHub Copilot was in beta. In a very short amount of time, we've gone from writing code by hand, to getting suggestions, to having models write code, to now having agents that author, test, execute, and ship entire features.

And what we've seen with these LLMs is that they really push us on the design of our own APIs. An LLM's context window is even shorter than a human's attention span. So if we make an API that's confusing for anyone in this room, the LLMs stand no chance. The easier we make the developer experience for humans, the better we make it for agents.

So, with all of this in mind, let's take a closer look at our bets and see which ones paid off and which ones didn't. Start with React Server Components: when the app router launched, they were a novel, unproven architecture.

Fast forward to today, and we're seeing RSCs gain adoption in popular ecosystem projects like Vite, React Router, RedwoodSDK, Waku, Parcel, and more. Other projects, like TanStack Start, are actively working on an integration.

What's more, even outside of React, projects like Svelte are bringing more and more of the rendering to the server side. The server is so back.

So this is quite a remarkable development. RSCs provide an idiomatic way for all React apps to fetch data and compose server-side and client-side logic together, regardless of the framework that you choose. So we're excited to see all of these innovations continue to come to the React ecosystem and the adoption continuing to spread.

Server actions. Server Actions are another feature that we're seeing pay off. They let you handle API requests directly inside of React. And between React Server Components and Server Actions, Next.js now has support for first-class, idiomatic APIs for both data reads and writes, which is something that we never had with the pages router.
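A rough sketch of that read/write pairing (the route, form field, and data helpers here are illustrative, not taken from the talk):

```tsx
// app/guestbook/page.tsx: a Server Component handles the read,
// and a Server Action (the 'use server' function) handles the write.
import { revalidatePath } from 'next/cache';
// getEntries/addEntry are hypothetical data helpers for this sketch.
import { getEntries, addEntry } from '@/lib/guestbook';

export default async function GuestbookPage() {
  const entries = await getEntries(); // data read, directly in the component

  async function createEntry(formData: FormData) {
    'use server'; // data write, without a hand-rolled API route
    await addEntry(String(formData.get('message')));
    revalidatePath('/guestbook'); // refresh the read after the write
  }

  return (
    <form action={createEntry}>
      <ul>
        {entries.map((e) => (
          <li key={e.id}>{e.message}</li>
        ))}
      </ul>
      <input name="message" />
      <button type="submit">Sign</button>
    </form>
  );
}
```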

As much as I loved the pages router, we're also seeing success with our new routing architecture. Layouts persist across navigations. This is kind of like the dream of React, right? Avoiding unnecessary renders across navs, maintaining ephemeral state during RSC updates.

The router also supports advanced routing patterns with parallel and intercepting routes, and our file system conventions allow every screen's UI, data, and logic to live together. So these features have enabled a new level of scale for the most ambitious applications.

Finally, one of our biggest bets has been Turbopack.

I remember when I first saw a demo of Turbopack and it was performing Fast Refresh in single-digit milliseconds. I was like, "Holy cow." But we asked ourselves in that meeting, is this worth it? Because to a human, anything under 100 milliseconds is all kind of the same, right? But for an agent loop, every millisecond counts.

Turbopack now drives more than half of all Next.js 15 development sessions, with over half a million developers using it every week. It's astonishing.

And after releasing it as stable for development last year, our ambition for this year was to enable it by default for all new apps.

So our first milestone was to ensure that all of Next.js's build-time features continue to work. This meant getting the existing suite of 8,266 tests to pass. It's an incredible accomplishment, but we then proved it in production. Our own internal sites adopted Turbopack for builds, giving us the confidence that it was ready for everyone. And today I'm thrilled to share that Turbopack is the default for all apps on Next.js 16.

And we're not deprecating webpack anytime soon. We are shipping both. We care a lot about backwards compatibility, and we're setting up Next.js to be incredibly fast for the majority of our users.

So speaking of webpack: when compared to webpack, we've seen up to seven times faster production builds, up to 3.2 times faster initial load times in development, and, believe it or not, astonishingly, up to 70 times faster HMR updates in Next.js 16.

If you take a look at these numbers, you'll see that Turbopack is especially good at updates. Once it's warmed up, it can track tasks at a granular level and avoid redoing unnecessary work. But in reality, if you stop your dev server, all that work gets thrown away, and if you start a new dev session, you have to start over from scratch. So we've been iterating on this problem all year, and today I'm excited to announce that file system caching for development is now in beta.

It extends the in-memory task tracking of Turbopack to full server restarts by leveraging the file system, the good old reliable awesome file system. And this is going to be particularly awesome for the largest projects in the ecosystem.

I have one more teaser. We're already working on bringing this same innovation to your production builds, so you can develop and deploy faster than ever.

So the investments that we've made in our new bundler are really paying off. But we now live in a world of agents that can continuously open pull requests and create preview builds, and for that world, Turbopack is going to pay off a thousand times over.

Speaking of agents, another bet we're making is on the importance of keeping models up to date with the latest best practices in Next.js. That means understanding features like Server Components, Server Actions, even features that we're announcing today.

Who likes it when Claude tells you "you're absolutely right" and then uses an old API, right? So to support that, we're open sourcing Next Evals.

Next Evals is a public benchmark that tracks how well the latest models and coding agents can build with Next.js. This is an investment into the ecosystem. The better AI gets at Next.js, the better everyone's developer experience will become.

So if you're in the AI space, we'd love to collaborate. But now let's talk about the bets that haven't paid off. There's one big one that we've been iterating on for the past two years. Any guesses?

Yeah, that's right. Caching, our favorite word as computer scientists. We try to make things static by default. This is kind of the background of it. And our efforts to marry the static rendering capabilities of Next with the app router's nested layouts resulted in kind of confusing APIs and unpredictable behavior.

So let's take a step back and take a look at the bets we've made with the app router, a little bit of a review of our bet portfolio. So here they are: Server Components, Server Actions, the file system router, Turbopack, and caching.

So, if we squint our eyes and focus on the ones that paid off, what do they have in common? Any guesses? It's a little bit of a tough one.

Composition and colocation. Our best features really embrace these qualities, right? So, you have this ability to reason locally. Server Components, for instance, let you fetch data in every component, whereas getServerSideProps, our old API, was restricted to your page module.
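Roughly, that's the difference between the two models (the endpoint URL and data shape below are invented for illustration):

```tsx
// Pages router: data fetching is pinned to the page module.
// pages/standings.tsx
export async function getServerSideProps() {
  const teams = await fetch('https://api.example.com/standings').then((r) => r.json());
  return { props: { teams } };
}

// App router: any Server Component, at any depth, can fetch its own data.
// app/standings/standings-table.tsx
export async function StandingsTable() {
  const teams: { id: string; name: string }[] = await fetch(
    'https://api.example.com/standings'
  ).then((r) => r.json());
  return (
    <ul>
      {teams.map((t) => (
        <li key={t.id}>{t.name}</li>
      ))}
    </ul>
  );
}
```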

And our caching APIs kind of went global: fetch cache "no-store", export const dynamic, force-static. It gives me a little bit of PTSD. These APIs introduced implicit coupling and unpredictable behavior, and nobody likes spooky action at a distance. That's kind of how those APIs behaved.
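Those are the route-level switches being referred to, roughly like this (the URL and values are illustrative):

```tsx
// app/some-route/page.tsx: the pre-Cache-Components caching knobs.
// Route segment config: module-level exports that change behavior
// for the entire route, far away from the code they affect.
export const dynamic = 'force-static'; // or 'force-dynamic', 'error', 'auto'
export const revalidate = 60;

export default async function Page() {
  // Per-fetch caching options are the other half of the old model, e.g.:
  //   fetch(url, { cache: 'no-store' })
  //   fetch(url, { next: { revalidate: 300 } })
  const data = await fetch('https://api.example.com/data').then((r) => r.json());
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}
```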

So this really all comes back to React's core value prop, right? Composition.

We've seen time and time again how well this principle has served React, both in the longevity of the first-party APIs and in how rich the ecosystem has become. And today, this idea of betting on composition and avoiding spooky action is even more important, because it's not only better for humans, it's essential for our friends, the LLMs.

So let's think about how well LLMs can work with React code that uses Tailwind, right? Works great. That uses shadcn/ui. Raise your hands. He's not here. Or even React pages in a Next app that have both server and client logic in one file. You get to understand everything right there and then. So composition wins over separation of technology every single time.

But caching spans lots of technology in lots of stacks: CDNs, in-memory stores like Redis, browser headers like Cache-Control, CDN-Cache-Control, TTLs, Vary. It's been hard to figure out how to compose all of these things together, which is why caching has always felt like it was separate from React. It's this thing you have to worry about, rather than another Lego brick that stacks up perfectly.

But RSCs, if you think about it, brought those concerns right into the component boundary. They brought server-side rendering, for example, into the component boundary. They brought API routes into the component boundary. And with use cache, we're bringing caching into the component boundary. So, I'm very excited to now invite Sam up here to tell us all about the future of caching. Give it up.

[Music] [Applause] Morning everybody. My name is Sam, and I'm super excited to be telling you about everything the Next.js team has been working on.

Now, it's kind of funny that I'm up here, because even though I've been plenty involved with the Next community over the years, I only just joined the team a few months ago. And one of the first things I asked my new teammates was: why does clicking around the app router feel so slow? I mean, sometimes you click a link and it's fast, usually if it's a static page, but sometimes you click a link to a dynamic page and you see zero feedback, and then eventually a new page comes in. It's almost become a meme at this point.

Don't get me wrong, I love the DX of the app router, and I think server components were a great addition to React, especially for just fetching data and rendering it on a page. The DX is really hard to beat. But I just didn't understand why using RSCs meant we had to give up on the snappy client navigations that we're so used to in our React apps. I mean, isn't that one of the reasons we love React, the ability to run code right in the browser?

Why couldn't we just use RSCs for the initial page render and use good old client-side React for everything else?

Now, eventually I got my answer, and to be honest it was pretty fascinating, and that's something I want to share with you today. But first I want to set the stage. I want us to understand the problems that the app router was actually trying to solve when it was first created in 2023.

So what did other React frameworks look like in 2023?

Well, most of them looked similar to each other, because they followed the broader web ecosystem.

If we go back, the earliest front-end frameworks bundled all of our code on the server and then sent it to the browser, and the browser was then responsible for rendering, data fetching, and routing. Kind of like an iOS app. Except the web is not iOS.

People want to be able to open articles and links to tweets instantly, which is exactly why, in 2012, Twitter's engineering team shared in a post that they had cut their initial load times by 80% by moving a lot of their client-side rendering back to the server. They also split up their code into smaller bundles so that they could lazy load them as needed.

So we started seeing features like server-side rendering and dynamic imports making their way into our frameworks, things like getInitialProps.

But the story doesn't stop there. In 2017, Netflix shared that they had removed all client-side React from their landing page. By pushing even more rendering and data fetching to the server and letting pre-rendered HTML do the heavy lifting, they saw a 50% improvement in the performance of that page.

So again, we saw the trend continue: more APIs doing more work earlier on the server, kind of like getStaticProps.

Now, what about those snappy SPA navigations that we all love so much? What about this routing layer here? It turns out that teams that relied exclusively on client-side code for their routing concerns also ran into performance ceilings. In 2018, the engineers at Square wrote about using a new Ember feature called Engines to break up their dashboard into sections that could be lazy loaded. And it's funny, because at that time I was working on Ember too, and I remember LinkedIn doing the exact same thing. They adopted this feature, Ember Engines, because the sheer number of URLs on their site was causing performance problems just by loading the entire client route manifest into the browser. And as one more example, Remix recently added a "fog of war" feature to help with this exact same problem over at Shopify.

So I make all these points to say that after a decade of experience building with these rich frontends, and the development of the entire framework ecosystem, this was kind of the state of the art: we had these hybrid frameworks that were primarily client-first SPAs, but they kept adding server-side features that you could opt into when you inevitably hit the limitations of the client.

And this brings us to RSCs and the app router.

The goal of the app router is to solve these problems at a foundational level. And we knew from experience that the client-centric approach was a dead end. That's why the app router defaults to rendering, data fetching, and routing on the server with server components. We wanted to bake all these hard-won lessons from the community directly into the framework, so that you and your team never hit these performance ceilings and never feel like you've outgrown Next.

But here's the thing. I like snappy client navs, and I like opening my iPhone and seeing pre-rendered screens sitting there waiting for me. So shouldn't we have the option to pre-render certain screens, or prefetch upcoming client navigations, so we can provide the best user experience? I think we should. And not coincidentally, so does the rest of the team, because that's exactly what they've been working on and what I'm excited to share with you today. It's a new feature that unlocks partial pre-rendering, which we started working on about two years ago, and we're calling it Cache Components.

It lets you pre-render and prefetch your UI, bringing instant navigations to the app router. So, let's see what it looks like to work with Next.js when Cache Components are enabled.

Now, if you're like me, you're happy it's fall because football is on. So let's make a little fan site here for college football. It's going to have real-time scores up top, the current standings of the teams, some news stories, and so on. So we're going to start with a simple layout for now, and we'll just get started by rendering the header, which is just a static list of links. That's looking good so far.

Now, let's create a module for the homepage. So we'll create our page, and this is going to be the standings. We'll add an H2 for the standings label, and then let's go ahead and add a panel.
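A rough sketch of the app at this point (the routes, class names, and components are invented for the demo, not taken verbatim from the talk):

```tsx
// app/layout.tsx: a static header of links shared by every page.
import Link from 'next/link';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <header>
          <nav>
            <Link href="/">Standings</Link>
            <Link href="/games">Games</Link>
            <Link href="/news">News</Link>
          </nav>
        </header>
        {children}
      </body>
    </html>
  );
}

// app/page.tsx: the homepage so far, just a label and an empty panel.
export default function HomePage() {
  return (
    <main>
      <h2>Standings</h2>
      <section className="panel" />
    </main>
  );
}
```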

And this is coming along. Now we actually want to fetch the data for the standings, so we're going to add a fetch call here to our server component. We'll fetch from the API and iterate over the teams and render them inside of our panel.
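Something like this, with the fetch happening directly in the page's Server Component (the API URL and response shape are illustrative):

```tsx
// app/page.tsx: standings fetched directly in the page.
// With Cache Components enabled, this uncached fetch outside of any
// Suspense boundary is what triggers the error described next.
export default async function HomePage() {
  const teams: { id: string; name: string; record: string }[] = await fetch(
    'https://api.example.com/standings'
  ).then((r) => r.json());

  return (
    <main>
      <h2>Standings</h2>
      <section className="panel">
        <ul>
          {teams.map((t) => (
            <li key={t.id}>
              {t.name} ({t.record})
            </li>
          ))}
        </ul>
      </section>
    </main>
  );
}
```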

Now, when we save this and reload the page, we're actually going to see an error. It says uncached data was accessed outside of Suspense. This delays the page from rendering, and it specifically calls out our fetch call here as the culprit.

So, remember when we talked about slow navigations at the beginning of the talk? Well, Next is telling us here that because our new fetch calls are outside of any Suspense boundary, they've made our homepage a blocking route. This means that if we were to load it, we'd have to wait for all the data before we see any content from the server.

But if we go back to what we had before we added the fetch call, we just saw everything on the page that didn't depend on those fetches at all. All of this UI could have been rendered by the browser before the fetch calls even finished. In fact, before they even started, the browser could have already started the work it needed to do to render this page, like fetching the static JavaScript bundles and the stylesheets from the server ahead of time.

So, let's go back and fix this error by unblocking our route. What we're going to do is move the data fetching code into a new component. We'll call it a standings table. We'll go ahead and render it in our tree, and now we'll use Suspense to unblock it. Suspense provides a fallback; usually you provide your own, but for now let's just use a null fallback. So if we save and reload the page, the error goes away and we see our static content again. And now, once our fetch calls complete, the data streams in.
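The unblocked version looks roughly like this, with the fetch moved into its own component and wrapped in Suspense (again, names and URLs are illustrative):

```tsx
// app/standings-table.tsx: the data fetching now lives in its own component.
export async function StandingsTable() {
  const teams: { id: string; name: string }[] = await fetch(
    'https://api.example.com/standings'
  ).then((r) => r.json());
  return (
    <ul>
      {teams.map((t) => (
        <li key={t.id}>{t.name}</li>
      ))}
    </ul>
  );
}

// app/page.tsx: the page itself stays static and pre-renderable;
// the table streams in whenever its fetch completes.
import { Suspense } from 'react';
import { StandingsTable } from './standings-table';

export default function HomePage() {
  return (
    <main>
      <h2>Standings</h2>
      <section className="panel">
        <Suspense fallback={null}>
          <StandingsTable />
        </Suspense>
      </section>
    </main>
  );
}
```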

Pretty cool, right? This is pretty amazing. At first glance, it might look familiar. I mean, haven't we all seen skeleton screens before? And lots of frameworks have APIs for deferring data fetching so that, you know, our whole page isn't blocked by the slowest fetch. So, how is this any different?

Well, there's something special about this skeleton screen.

It's static HTML.

That's what that error was all about. Next is guaranteeing that this route can be pre-rendered into static HTML. And if you've worked with Next before, you know that if you can make a route static, it usually ends up in an extremely fast user experience. Which is great, except that today, when you make a route static in Next, you can't fetch anything dynamic during the initial request at all. So it's an all-or-nothing decision, right? If you pre-render a route today, you have to pre-render the whole thing.

But pretty much every page has something dynamic. So, inevitably, you're going to add some client-side data fetching logic, some new library, that's going to complete way after the initial request has already come back. It's going to start waterfalling back and forth between the client and the server. You have to set up API endpoints. You're probably going to bring in a library to make those API endpoints type-safe.

But here, we didn't have to do any of that. This page isn't dynamic or static. It's both. In other words, it's partially pre-rendered.

And partial pre-rendering is the one mode of rendering for all routes when Cache Components are enabled. No more choosing between static or dynamic. Every route is partially pre-rendered. And that error that we saw earlier ensures that every new route you add produces some static content that Next can extract and serve up for a fast initial load. Even if that static content is just some light fallback UI and most of the page's content is dynamic, it gets the browser to start booting the app instantly, and it doesn't slow down the dynamic data at all, thanks to React's server-side streaming.

Okay, let's get back to our app and go to the games link up here in the header, which is currently a 404. So we'll build out this page. Let's create a new file, a new page module, and we'll fetch the current games and render the schedule and all the games that are currently playing. And if we save this, again we'll see an error. That's an old version of the error, because we just released Next 16 last night, but it says the same thing as before.

So let's do the same thing and unblock our route. We're going to extract the dynamic part into a new component, render it up here in our page, and then wrap it inside Suspense. And this time we'll add some light fallback UI. So if we save this and take a look, we see our new beautiful skeleton screen, and then, once the fetch calls finish, the data streams in. Easy peasy, right?
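The games page follows the same pattern, just with visible fallback UI (the helper components here are hypothetical):

```tsx
// app/games/page.tsx
import { Suspense } from 'react';
import { LiveGames } from './live-games'; // async Server Component that fetches the games
import { GamesSkeleton } from './games-skeleton'; // static placeholder UI

export default function GamesPage() {
  return (
    <main>
      <h2>Games</h2>
      {/* The skeleton is part of the static, pre-renderable shell;
          the live games stream in once their fetch resolves. */}
      <Suspense fallback={<GamesSkeleton />}>
        <LiveGames />
      </Suspense>
    </main>
  );
}
```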

This is what it's like to work with Next under Cache Components. It's truly dynamic by default, so there's no more implicit caching. We didn't need to add force-dynamic to our page or "no-store" to our fetch calls. As long as we're inside a Suspense boundary, we can fetch data without any surprises. And those Suspense boundaries ensure that every page in our app can serve its static content instantly and prepare the browser for the dynamic content as early as possible.

Now, all this seems great to me. Caching in the app router has always seemed confusing. You know, there are all these conventions you have to know about, there are like nine different config options. So I do like this new explicit approach.

But where are my snappy navs? I want my snappy client navs back. Well, let me show you something.

If we reload the games page and then click the link to go home, watch what happens. We see an instant render of the static pre-rendered homepage, followed by the dynamic data. Let's try it from the other direction. Oh, sorry, so that's where the dynamic data comes in. Now, let's try it from this direction. Let's imagine we just refreshed here and we want to go back to the games page. As soon as we click, we get an instant nav to the pre-rendered page, and then the content fills in. So, how does this work? Well, the answer is prefetching.

Thanks to partial pre-rendering, links will prefetch the static content for the upcoming route by default. And this makes total sense, right? We just saw how the new programming model ensures that each route has some static content. So we know that it's both cheap to fetch and that it won't become stale by the time the user actually clicks the link. So the link tag can just prefetch it in advance, and we get a snappy, instant client nav: even though the server was involved originally, by the time we click, it's as if we were in the client doing a client-side navigation.

This is awesome. We're able to make use of the browser to get those snappy navs without doing any extra work. We don't have to move any of our data fetching code to the client. We don't need to fork our page component based on whether it's the initial render or a client-side navigation. And we didn't need to create any special loading.tsx file. All we had to do was use Suspense and RSCs like normal, and Next gave us instant client-side navs for free.

So at this point you might be wondering: why is this called Cache Components? What the heck does any of this have to do with caching? And why haven't I seen this new use cache directive yet?

Well, if you think about our site so far, what is it that actually let us prefetch the static content for each one of these pages ahead of time?

Oops. Sorry, I went ahead.

It's the fact that we know that this static content can't be stale. That's what let us prefetch it. It's not like the games page, which has real-time data that updates every few seconds. This static content can't change, because it's based on the code we wrote in our application. It can't change unless we change the code and redeploy the app. So this static pre-rendered content is effectively cached content, right?

Wouldn't it be cool if we could cache more of our content? Let's try.

If we come back to our homepage, well, the standings here only change once a week. In case you don't watch football, they only play once a week, so the only time they can change is basically once a week. Seems like a good candidate to pre-render ahead of time, so that we don't have to fetch and render the standings every time a new person visits the site, right? We can cache it, save some work, and make the site a lot faster. So instead of keeping them dynamic, let's remove the Suspense boundary, and we'll cache this entire route by adding use cache to the top of the page.
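Sketched out, that change looks something like this (an approximation of the demo, not the exact code shown on stage):

```tsx
// app/page.tsx: the whole route is now cacheable, pre-renderable content.
'use cache';

import { StandingsTable } from './standings-table'; // no Suspense boundary anymore

export default function HomePage() {
  return (
    <main>
      <h2>Standings</h2>
      <section className="panel">
        <StandingsTable />
      </section>
    </main>
  );
}
```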

If we save and reload, let's see what the behavior is. Now look at that. No partial page, just all the final content available immediately.

Let's try navigating from the games page. So we'll load the games page, which still has a loading screen, and then we'll go ahead and click on the homepage.

Boom.

Instant nav to the fully pre-rendered page. So, if you think about it, it turns out that we've been working with cached content this whole time. It just starts out as the code in our codebase. But with use cache, we can choose to make more and more of our UI cached content. And once we tell Next what's cacheable, it can pre-render it, so it can be included in that fast initial load as well as in those snappy prefetched client navs. Pretty awesome, right?

Importantly, we haven't had to make any all-or-nothing decisions here on a route-by-route basis.

If we come up to our root layout and say we want to add a live strip of the scores up top, we'll keep them dynamic, you know, so that you can see the latest scores. So we'll wrap it in Suspense. Now, if we look at the behavior, we'll see that our standings are still pre-rendered, and then the scores stream in.

And same with the homepage. If we come back to the homepage, let's say we want to add some news stories below the standings table. So we'll write a new component for our news stories that has a fetch, and we'll add it to our tree. We'll put it inside Suspense so we have something to pre-render for that part of the UI. But now, let's see what happens when we reload the page. Oh, that was important, let me go back. So we're going to wrap it in Suspense so it's dynamic, but we still want to cache the standings table. So what we do is move use cache down our component tree. And this is the composition point that Guillermo was talking about: we don't have to make these all-or-nothing decisions. It works within the component boundary. And because we don't want everything on the homepage to be cached, we can move it down. Now let's look at what happens when we render the page.

Let's take a look. We'll save it. And this is our shell, our instant static pre-rendered content, which still includes the standings. And once the data finishes, both the scores in the layout and the news stories on the homepage stream in. Pretty cool, right? Pretty cool.

So we've added more dynamic content without sacrificing the fast initial load or the instant client nav to this page.

There's so much more I want to talk to you about, but I wanted to focus on this today. We've seen how use cache lets us partially pre-render our pages into static HTML; that's kind of what we focused on today. And we have APIs for regenerating that HTML on demand, or after a period of time, similar to how ISR works in Next today.
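Those regeneration APIs aren't shown in the demo, but as a rough sketch: a cached scope can be given a lifetime with a cacheLife-style helper and tagged for on-demand invalidation, much like ISR's revalidate plus revalidateTag. The exact import names have shifted across releases (they ship as unstable_cacheLife and unstable_cacheTag on some versions), so treat this as illustrative:

```tsx
// app/standings-table.tsx: cached output that regenerates on a schedule
// or on demand. Import names may differ depending on your Next.js version.
import {
  unstable_cacheLife as cacheLife,
  unstable_cacheTag as cacheTag,
} from 'next/cache';

export async function StandingsTable() {
  'use cache';
  cacheLife('weeks'); // standings only change weekly
  cacheTag('standings'); // lets a Server Action call revalidateTag('standings')
  const teams: { id: string; name: string }[] = await fetch(
    'https://api.example.com/standings'
  ).then((r) => r.json());
  return (
    <ul>
      {teams.map((t) => (
        <li key={t.id}>{t.name}</li>
      ))}
    </ul>
  );
}
```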

But one thing I just want to leave you with is that use cache is not just an update to our static rendering API. It's not just a replacement for getStaticProps, or a better way to avoid all those crazy caching APIs. Because after all, most of us work on dynamic websites, personalized websites, and dynamic, personalized websites need information from the request, right? Dynamic params, search params, cookies, and headers. And that data is never available when we're generating static HTML.

But what if we wanted to prefetch pages that depend on those things once users are clicking around our app? What's the answer there? Well, the answer is still use cache. So we're working on APIs that leverage the same caching semantics that use cache and cacheLife have given us, to make it so that your users can click around to fully prefetched pages even in highly dynamic, personalized apps. And this is what I found so fascinating when joining the team.

Just because the app router is server-first doesn't mean we have to give up on the kinds of interactions that made us all fall in love with React in the first place. Prefetching is a powerful example, and there are many, many more. And now that we have an architecture that avoids the performance cliffs of the past, we know that this new model can take us further than the old one ever could.

So that's Cache Components. They let you pre-render and prefetch your UI for instant navigations in the app router. And we just saw what they feel like to use. Now I'm going to turn it over to Jimmy to show you how they behave once they're deployed.

[Applause] [Music]

Hi, I'm Jimmy. I lead the Next.js and Turbopack teams. So, like Guillermo said earlier, React really nailed composition.

With just a few well-designed primitives, React allows you to build beautiful and functional UIs. With Next.js, we're really trying to extend the same philosophy and apply it to your entire app. But what does that really mean?

Well, one example of that was static generation. That was a game changer. We realized we could precompute some pages ahead of time and serve them instantly. That was composition over time: we made apps faster by just doing a little bit more work at build time.

Something funny happened along the way. We made those instant responses so attractive that even when a page needed dynamic data, people were willing to ship a static page and just fetch the data from the browser. I don't blame them. Like Sam said, everybody loves snappy and instant navigations.

But in doing so, they introduced hidden latencies into their apps. Since we're now fetching from the browser, every request introduces a new round trip to your database. And while that might be acceptable for one request, when multiple of those have to happen sequentially, the latency stacks up, resulting in much longer loading times overall.

So we knew one way to optimize these latencies was to be closer to the database, via the server. But we still had the problem of the server not being able to respond until it finished rendering.

So we went back to the drawing board, and we bet on server components and streaming. Next.js was now able to show pieces of the UI as they became ready, without being blocked on the slowest part of the page anymore. But it still didn't feel as instant as a static page.

However, we realized we had all the pieces we needed to come up with something new. Something you might have noticed is how binary the choices were: pages were either fast and stale, or they were fresh but slow.

Turns out the answer was always more composition. And as Sam showed you, our answer was partial pre-rendering and cache components.

With a model that allows you to compose both static and dynamic content down to the component level, you get the best of both worlds: a fast initial UI and optimized data fetching from the server.

It's a huge unlock, and I believe this is going to be a big evolution in how you write your Next.js apps. And everything Sam demonstrated earlier just works: whether you're running Next.js on a $5 VPS or on your cloud platform, partial pre-rendering works just as intended.

But let's see how the cache component model can also help when deploying Next.js on a globally distributed platform. See, if your user is in a different region from your server and data, there's still going to be some latency involved. Next.js can do a lot of things, but we can't make light move faster yet.

As we know from the history of the web, the quickest way of serving content is doing it from a CDN close to your user, right? We do it every day for images, fonts, CSS, JavaScript. Well, because our programming model allows us to extract the initial UI for a page, a platform like Vercel can supercharge Next.js rendering and serve that same UI right from your CDN, and we can also stream in the content from your origin server.

Cache components not only tell Next.js how to compose rendering over time; they also let you compose it over space. So regardless of where your users are on the planet, we've now brought latency down to a theoretical minimum.

That's the kind of innovation that's at the core of our work at Vercel. We want to build a web framework that works for everyone, but we also want to push the boundary of what's possible on modern infrastructure.

So you might be wondering, does Next.js come with a tutorial on how to do this? Well, not really. We were so focused on building the app router that we didn't spend as much time sharing that philosophy as clearly as we should have, and we heard that loud and clear.

Even though lock-in was never our intention, a lot of people claimed it was by design. But Next.js has always been open source, and that still drives everything the team does. So it's not just about having our embarrassing PRs in public. It's about building a framework that can be used and supported by everyone, to the best of their abilities.

A rising tide lifts all boats, and the ideas we're pushing with Next.js are meant to scale beyond Vercel, even beyond Next.js. We want you to have the best possible experience with Next.js regardless of your platform choices.

So we've been building something new: the deployment adapters API. It's a clear and explicit contract between Next.js and the platform that runs it. We're sharing all the primitives that we use to deploy Next.js at scale, and it's all there in a stable, versioned, and documented API. That information was already present when running next build, right? But there wasn't really any guide on how to enhance Next.js the same way Vercel does. So this API is meant to be a framework for deploying Next to its full capacity.

We want to provide you with a clear spec, fewer surprises, and hopefully a stronger foundation to invest in.

Another piece of the puzzle is, well, once you've built your adapter, how do you verify that it actually works as intended? We're also sharing the same test suite that we use to validate all Next.js features on Vercel. Our goal is to create transparency for users, so that when they adopt a new platform they can verify that Next works well. And this also helps us identify which areas of the ecosystem we can best help with.

We didn't do this alone. Earlier this year we created a working group with contributors from AWS, Cloudflare, Firebase, Netlify, and OpenNext. We gathered the pain points they had when hosting Next.js, and from there we crafted an RFC and refined the design together. So what I've announced today is the result of that collaboration: with Next.js 16, deployment adapters ship in alpha, and we want your feedback as we migrate the ecosystem gradually.

We're also using the same API at Vercel with our own adapter. We want to play fair, with the same rules we're asking our partners to follow.

Our goal is to use this as the foundation for a more collaborative next year, one where the ecosystem benefits from all the work we continue to do at Vercel. And so to everyone in our working group: thank you for your time, for your feedback, and for all the engineering hours you've invested into making this possible.

If you're a platform provider who wants to better support Next.js, we'd love to work with you.

So today we talked about three long-term bets for Next.js.

Turbopack, which was our bet on incrementality and a better developer experience.

Cache Components, our bet on a programming model that unlocks new levels of performance with composition.

And the deployment adapters API, which was our bet on an open Next.js, one that scales beyond Vercel.

These ideas have been in motion for years, and they're finally here.

Next.js 16 is available today, with Turbopack as the default and Cache Components as an opt-in, and we'll be sharing guides, examples, and migration paths to help you make the most of them.

I'm incredibly proud of the work the team has done for this release, and I'm extremely excited to see what you will build next.

Thank you, everybody. Thank you. [Music]

This conference and this release feel like an inflection point for the framework, for the ecosystem, and for the future of coding. We talked a lot about the future of coding with LLMs and agents. So we decided to bring something even better than agents: the people behind creating the agents. We'll have Arian from Vercel, who's working on v0; Fawad from OpenAI, working on Codex; Matan from Factory AI, working on Droid; and swyx from Latent Space, who's going to moderate this panel. They're all going to talk about this future that's coming, and in many ways is already here today. So let's give it up for them, and thank you again.

[Applause] [Music] That is good music for a walk-on. Very nice.

>> All right. Welcome to the future of AI coding panel. Thank you for reading the memo that you have to wear all black. Okay, so I do want to cover a little bit of introductions. I know each of you in different ways, but maybe the audience doesn't, fully. Matan, why don't you go first: what are you proudest of? What is Factory's position in the broader world of AI coding?

>> Yeah, so at Factory our mission is to bring autonomy to software engineering. What that means more concretely is that we have built end-to-end software development agents called Droids. They don't just focus on the coding itself but really the entire end-to-end software development life cycle, so things like documentation, testing, review, kind of all the ugly parts, so that you can do the more fun parts like the coding itself. And for the parts of the coding you don't want to do, you can also have the Droids do that. So we build Droids.

>> Build Droids. And OpenAI obviously needs no introduction, but your role is on the Codex team. I saw you pop up on the Codex video; that's how I knew it was you working on it. But how do you think about Codex these days, since it's expanded a lot?

>> Yeah. So earlier this year we launched our first coding agent, Codex CLI, which I worked on, bringing the power of our reasoning models onto people's computers. Then we released Codex cloud, where you could actually distribute and delegate those tasks to work in the cloud. And over the last several months we've been unifying these experiences so they work as seamlessly as possible. So a lot of our focus is around how we make the fundamentals, the primitives, as useful as possible. We just released the Codex SDK at Dev Day. I think one of the key directions we've been seeing is not just using coding, or code-executing agents, for coding, but also for general-purpose tasks. So whether it was the ChatGPT agent, which I worked on earlier this year, that actually executes code in the background to accomplish tasks, or starting to enable our developers to build on top of not just the reasoning models but also things like sandboxing and all the other primitives that we've built into Codex.

>> Awesome. v0?

>> Yeah. The goal of v0 is to enable developers to do preview-driven agentic programming. Today when you build web apps, you probably have an agent open, your IDE open with some kind of code, and then a preview of what you're actually building, usually with a dev server running. With v0, our goal is to allow you to just have an agent running and directly prompt against your running app. And that's how we think the future of DX is going to pan out.

>> Okay, awesome. And everyone has different surface areas through which to access your coding agents. So I think one of the things we want to kick off with is: how important is local versus cloud? You started local and went cloud; you started cloud and went local; you're cloud-only for now. What's the split? Is everyone just going to merge eventually?

>> Yeah, so maybe I can start there. I think at the end of the day the point of these agents is that they are as helpful as possible, and they have a very similar silhouette to that of a human you might work with. And you don't have local humans and remote humans, where this one only works in this environment and that one only works in that environment. Generally humans can be helpful whether you're in a meeting with them and you come up with an idea, or you're sitting shoulder-to-shoulder at a computer. So I guess asymptotically these need to become the same. But in the short term, what we're seeing is that remote is typically more useful for smaller tasks that you're more confident you can delegate reliably, whereas local is for when you want to be a little bit closer to the agent. It's maybe some larger or more complicated task that you're going to actively be monitoring, and you want it to be local so that if something goes wrong, you don't need to pull that branch back down and then start working on it; instead you're right there to guide it.

>> Yeah. Maybe I'm just greedy, but I want both. And, to Matan's point, I like to think about the primary forms of collaboration that I'm used to and that I enjoy with my co-workers. Often that starts with something like a whiteboarding session, where maybe we're just jamming on something in a room. When we were building, I think a good example was AGENTS.md, which is our custom-instructions format intended to be generic across different coding agents. The way that started was Raine and I were just in a room coming up with this idea, and then we started whiteboarding, took a photo, and kicked it off in Codex CLI locally just to workshop a Next.js app that we could work on. We went to lunch, came back, and it had a good amount of the core structure, and from there we were able to iterate a little more closely. So there's that kind of pairing and brainstorm-style experience. And then, to the second point about what kind of tasks you delegate: historically, smaller, narrowly scoped tasks where you're very clear about what the output is have been the right modality if you're doing fire-and-forget. But what we're starting to see with GPT-5-Codex, which we launched about two months ago now, is that it can actually do these longer-running, more complex, more ambiguous tasks, as long as you are clear about what you want by the end. It can work for hours at a time. I think that shift, as models increase in capability, will start to enable more kinds of use cases.

>> Yeah. I think there are three parts to making an agent work: the actual agent loop, the tool calls it makes, and the resources upon which those tool calls need to act. Whether you go cloud-first or local-first is based on where those resources are, right? If you're trying to work on a local file system, those are the resources you need to access, and it totally makes sense that your agent loop should run locally. If you're accessing resources that typically exist in the cloud, pulling from GitHub, directly from some third-party repo, then it makes sense for your agent to start off in the cloud. Ultimately, though, these resources exist in both places. Every developer expects an agent to be able to work both on the local file system and on an open PR that might be hosted on GitHub. And so it doesn't really matter where you start. I think everyone is converging at the same place, which is that your agent loop needs to be able to run anywhere, your tool calls need to be able to be streamed from the cloud to local or from local back up to the cloud, and then it all depends on where the resources you actually want to act on are located.

>> Yeah, awesome. Okay, so we were chatting offstage and casting around for spicy questions, and I really like this one. I think it's very topical: do you guys generate slop for a living? Like, are we in danger of being in a hype bubble where we believe that this is a sustainable path to AGI?

>> I mean, I think to start, you could say that one man's slop is another man's treasure, which to some extent might be true. For example, suppose you had a repo with no documentation whatsoever. You could use many of the tools we've been talking about to go and generate documentation for that repo. Now, is it going to be the most finely crafted piece of documentation? No. But is it providing alpha? Yes, in my mind, because having to sift through some super old legacy codebase that has no docs is a lot harder than looking through somewhat sloppy documentation. So I think the big thing is figuring out where you can use these tools for leverage. And the degree to which it's slop also depends on how much guidance you provide. If you just say, "Build me an app that does this," you're probably going to get some generic slop app...

>> ...that's purple.

>> Yeah, that's blue-purple, like a fade. Whereas if instead you're very methodical about exactly what it is that you want, and you provide the tools to actually run tests to verify some of the capabilities that you're requesting, I think that makes it much more structured. To a similar extent, if you were to hire some junior engineer onto your team and just say, "Hey, go do this," they're probably going to yield some median outcome, because they have no other specification to go off of and it's pretty ambiguous what you actually want done.

>> I think the key word there is leverage, right? What AI coding agents allow you to do is 10x more than you would be able to do yourself, with a pretty high floor. So if you plot skill level against how useful an agent actually is at generating non-slop, even if you have no skill, you still have a pretty high floor, right? Agents are pretty good just out of the box. If you don't know anything about development, the agent is going to do much more than you could possibly do. But as you get to higher and higher skill levels, senior and principal and distinguished engineers actually use agents differently. They're using them to level up the things they could already do. You know, a principal engineer might be able to write 5,000 lines of code a day by hand; with agents, they can write 50,000 lines of code a day. And it really operates at the level of the quality of the inputs and the knowledge that you put in there. So I think we're slowly raising the floor over time by building better agents. But I do think it's a form of leverage. It's a way for you to accelerate the kinds of things you can already do, and do them faster. And for folks who don't have the skills, that's when you can really raise the floor of what they can do.

>> Absolutely. And just to add on to both these points, I think these are tools that amplify craft. If you have it, you can do more of it. If you don't, it's just harder, but it does raise the floor. I think that's really worth calling out. For folks who are just trying to build their first prototype, trying to iterate on an idea like the example mentioned earlier: it's not that I couldn't make a front end that's kind of a content-driven site. I just didn't have time, and it was more fun to draw on a whiteboard, talk, have a conversation, and then kick that off to an agent. But I think one of the interesting examples of this was when we were building much earlier iterations of Codex, well over a year ago, and we were putting it in front of

two different archetypes: folks who did a lot of product engineering, who were used to local, inner-loop-style tools where you just chat and maybe iterate, and then a completely different modality when we talked to folks on the reasoning teams, where they would sit for maybe five minutes just defining the task. It could be an essay-length word problem for the agent to go off and do, and then it would work for an hour. That was effectively o1, or earlier versions of it. And I think the interesting part there was just that the way people would approach giving the task to the agent was completely different based on their understanding of what they think it needs. And so I think really anchoring on specificity, being really clear about what you want the output to be, matters. And then there's a broader item that is a responsibility on both us as builders of agents and folks training models: to really raise that floor, and to ensure that people with high craftsmanship and high taste are able to exercise the ceiling in the way that they see fit.

>> I think actually something that you mentioned brought this idea to mind that we've started to notice. Our target audience is the enterprise, and something we've seen occur time and again is a very interesting bimodality in terms of adoption of agent-native development. In particular, earlier-in-career developers are usually more open-minded about starting to build in an agent-native way, but they don't have the experience of managing engineering teams, so they're maybe not the most familiar with delegation in a way that works very well. Meanwhile, more experienced engineers have a lot of experience delegating. They know that, hey, if I don't specify these exact things, it won't get done. So they're really good at writing out that paragraph, but they're pretty stubborn and they actually don't want to change the way that they build, and you're going to have to pry Emacs out of their cold dead hands. So it's an interesting balance there.

>> So, funny you say that. A similar thing we've seen in the enterprise is that senior engineers, higher-up folks, will write tickets. They'll actually do the work of writing out the whole spec of what needs to be done, then hand it off to a junior engineer to actually do. The junior engineer takes that super well-written ticket and gives it to the agent to do, right? So you're just arbitraging the fact that the junior engineer will actually do the agent work, because they're more comfortable doing that, while the senior engineer is the person who's actually really good at writing the spec, very good at understanding what architectural decisions we should be making, and putting that into some kind of ticket.

>> Yeah. For those who don't know, Matan and Factory in general have been writing about and advocating for agent-native development, so you can read more on their website. One terminology note, by the way: "raise the floor" for you is a good thing, but other people say "lower the floor" and mean the same thing. It's basically about skill level and what people can do, and just giving people more resources for that. I think the other thing is that a lot of people are thinking about the model layer, right? Obviously you guys own your own models; the two of you don't. And there's a hot topic of conversation in the Valley right now: Brian Chesky of Airbnb has said that most of the value apparently relies on Qwen. How important are open models to you guys? Fouad, you can chime in as well, but how important are open models as a strategy for both of you?

>> I'd be curious to hear from you first.

>> Yeah.

>> Well, I love open models. Before we talk about models specifically, I think openness is really key to a sustainable development life cycle. With Codex CLI, we open-sourced it out of the gate, and part of the priority was understanding that an open model was coming down the line. We wanted to document as well as possible how to use our reasoning models; we saw a lot of confusion about what kinds of tools to give them, what the environment should be, the resources. So we wanted to make sure that was as clear as possible, and also make sure that it worked well with open models. So I think there are definitely a lot of use cases, especially when you get into embedded use cases, or cases where you don't want the data to leave the perimeter.

There are a lot of really good reasons why you would want to do that. And then I think about the benefit of cloud-hosted models, and that's what we see with a lot of open models: they end up not being run on device but actually cloud hosted anyway, maybe for efficiency, maybe for cost. There's still a lot of value in just the pure intelligence you get from using a much bigger model. That's why we see people really gravitate towards models from o3 to GPT-5 to GPT-5-Codex. There's still a lot of value in that.

Now, we see that that overhang still resolves itself, where every couple of months there's a new, very small, very impressive model, and I think that's the magic if we just consider that at the beginning of this year we had o3-mini as kind of the frontier, versus where we are now. So yeah, I think there's a ton of value in open models, but still, personally, from a usage perspective, more value in using the cloud-hosted ones.

>> Yeah, I'll just interject a bit.

Fouad actually cares a lot about privacy, security, and agent robustness, so if you run into him, talk to him more about that. But for both of you, maybe you want to start off with: what's your ballpark of open-model token percentage generated in your respective apps, and is it going to go up or down?

>> So, maybe to start, because I think what you said is really interesting: a couple of weeks ago, when we released our Factory CLI tool, people were really interested because we also released with it our score on this benchmark called Terminal Bench. One of the first asks was, can you guys put open-source models to the test, because our Droid agent is fully model-agnostic. So immediately people were like, throw in the open-source models and show us how it does. And something that was particularly surprising was that the open-source models, and in particular GLM, were really, really good. They were obviously less performant than the frontier models, but not by a huge margin, I think. One thing that was noteworthy, though, was that when we benchmarked the open-source models, of the seven that were at the top, only one of them was made in the United States, by yours truly over here, which I think is kind of a shame. For the frontier models it's the United States across the board, but when it comes to open source we're really dropping the ball there. So I think that's noteworthy, and when I saw that, I really thought there should be a call to arms in terms of changing that.

Because, to answer your question, what we found is that since we released support for open-source models, the percentage of people using them has risen dramatically, partially because of cost. It allows you, say in that documentation example, to generate docs without running, you know, super high reasoning to the max and having it cost you $1,000, when you just want an initial first pass. And also, people like having a little bit more control, and I feel like they get a lot more of that with some of these open-source models, both control over the cost and just observability into what's actually happening there. So I think the demand has grown to a point I did not expect a year ago. A year ago I was less bullish on open-source, or open-weight, models than I am now.

>> Yeah, I think we use both open-source and closed-source models in our overall agent pipeline. And I think the way

we think about them is that there are two different use cases for an LLM call. One is when you want state-of-the-art reasoning. It's a very open-ended question, you don't actually know what the answer is, and the goal function is not super well defined. In those cases, closed-source models are still state-of-the-art when it comes to reasoning and intelligence, and we use closed-source models pretty much exclusively. There's a second use case where we have a more niche task with a much clearer goal function. In those cases we almost always try to fine-tune an open-source model. We're okay taking maybe a 20% hit in terms of reasoning ability so that we can fine-tune for a very specific use case. And I think we've found that open-source models are catching up very, very fast. A year and a half ago it was unthinkable for us to use open-source models as part of v0's pipeline. Today, for every single part of the pipeline we're asking, okay, can we bring open-source models into this? Can we replace what we're doing currently with closed-source, state-of-the-art frontier models with a fine-tune of an open-source model? And we've seen a ton of success with Qwen and other kinds of models like that.

>> Yeah. I'll call this out as one of the biggest deltas I've seen across everyone. At the start of this year I did a podcast with Ankur from Braintrust, and he said that open-source model usage was roughly 5% across what Braintrust was seeing, and going down. And now I think it's reasonably going to land somewhere in the 10 to 20% range for everybody.

>> I do think it's interesting that even the closed-source labs are investing more heavily in their small model class, right? The Haikus, GPT-5 minis, and Gemini Flashes of the world. That model class is actually what competes with open source the most: the small model class competing against a fine-tune of an open-source model.

>> And I also think there are some use cases where it will just be overkill to use a frontier model. And if it is overkill, you're obviously going to be incentivized to use something that's faster and cheaper. I think part of this delta in terms of percent usage is that there's a threshold where open models are actually enough for most tasks, and then for some niche tasks you need the extra firepower. I think we're really getting there with some of these open models, which is why I'd suspect we'll see more usage going forward.

>> Yeah. Awesome. That's very encouraging. So we have a bit of time left. I prepped you with the closing question, which is: what's something that your agents cannot do today that you wish they could, and that they'll probably do next year?

>> Am I going first? Okay. Yeah, maybe starting with o1, or o1-preview, a little over a year ago as a reference point. When I was using very early checkpoints of that model, it was great relative to 4o, but it still left so much to be desired. I was on the security team at the time, and there was a lot of work and tasks that I just couldn't delegate to that model. Compare that to today, where I can take a pretty well-defined task, maybe two sentences and a few bullet points, to your point, like "here are the gotchas I think you'll probably get stuck on," and then come back 30 minutes or an hour later and it's done. We've seen cases where it's running for many hours, maybe even seven to eight hours, effectively a full workday, and I spend a lot of my day in meetings, so I don't necessarily have that kind of solid block of time. But that's only half of what engineering is really about. Part of it is coding, part of it is architecting and troubleshooting and debugging. The other half of the problem is writing docs, understanding the system, convincing people. So I think what we'll start to see is this kind of super collaborator, where what we want to bring, whether it's in Codex or other interfaces through the Codex model, is the ideal collaborator that you want to work with, the person you first go to, that favorite coworker you want to jam on ideas with. That's really what we want to see, at least with Codex.

>> For us, we've seen rapid progression on two different fronts. The first is how many steps you can reasonably expect an agent to do and still get reasonably good output. Last year it was probably one, maybe max three: if you wanted reliable output with over 90% success, you were probably running one to three agent steps. Today most tools run five to 20 with, you know, really great reliability rates, over 90% success. I think next year we'll see 100-plus, 200-plus steps, run tons of steps all at once, have long-running tasks for multiple hours, and be confident that you'll get an output at the end that's useful.

The second is in terms of what resources can be consumed. A year ago it was whatever you were putting into the prompt form; that was pretty much it. Today you can configure external connections, via MCP or by making API calls directly in your application. You can kind of do that if you're knowledgeable; you have the ability to configure things. And I think a year from now, those will just happen, it will just work. The goal is that you should not need to know what sources of context to give the agent; the agent will go and find those sources of context proactively. We're starting to see that already today, but I'm still not really confident it's very reliable and useful today. I think by next year that'll be the default mode.

>> Yeah, I would agree with that. I think agents can do basically everything today, but the degree to which they do so reliably and proactively is the slider that's going to change. And that's a slider that's also dependent on the user. If you're a user who's not really changing your behavior and meeting the agent where it is, then you might get lower reliability and proactivity. Whereas if you set up your harness and your environment correctly, it'll be able to do more of that reliably and more proactively.

>> Yeah. Amazing. Well, we're out of time. My contribution is computer vision. Everyone try Atlas, and try more computer vision use cases. But thank you so much for your time.

>> Thank you. Thank you.

>> Thank you for having us.


Please welcome to the stage Crayon Consulting web developer Aurora Scharff.

[Applause] Okay. Hello everyone. My name is Aurora.

I'm a web developer from Norway. I work as a consultant at Crayon Consulting, and I'm actively building with the Next.js App Router in my current consultancy project. Today I'm going to be teaching you patterns regarding composition, caching, and architecture in modern Next.js that will help you ensure scalability and performance.

Let me first refresh the most fundamental concepts for this talk: static and dynamic rendering. We encounter them both in the Next.js App Router.

Static rendering allows us to build faster websites because pre-rendered content can be cached and globally distributed, ensuring users can access it quicker. For example, the Next.js Conf website. Static rendering reduces server load because content does not have to be generated for each user request.

Pre-rendered content is also easier for search engine crawlers to index, as the content is already available on page load.

Dynamic rendering, on the other hand, allows our application to display real-time or frequently updated data. It also enables us to serve personalized content, such as dashboards and user profiles.

For example, the Vercel dashboard. With dynamic rendering, we can access information that can only be known at request time. In this case, which user is accessing their dashboard, which is me.

There are certain APIs that can cause a page to dynamically render. Usage of the params and searchParams props that are passed to pages, or their equivalent hooks, will cause dynamic rendering.

However, with params, we can predefine a set of pre-rendered pages using generateStaticParams, and we can also cache the pages as they're generated by users.

Furthermore, reading incoming request cookies and headers will opt the page into dynamic rendering. Unlike with params, though, trying to cache or prerender anything using headers or cookies will throw errors during build, because that information cannot be known ahead of time.

Lastly, using fetch with the cache option set to no-store will also force dynamic rendering. There are a few more APIs that can cause dynamic rendering, but these are the ones we most commonly encounter.
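As a rough sketch of the APIs being listed here (the route, URL, and fields are made up for illustration, not from the demo), something like this would opt a page into dynamic, request-time rendering:

```tsx
// app/dashboard/page.tsx — illustrative page; each API below opts it into dynamic rendering
import { cookies } from 'next/headers';

export default async function DashboardPage({
  searchParams,
}: {
  searchParams: Promise<{ tab?: string }>;
}) {
  const { tab } = await searchParams;                   // reading searchParams → dynamic
  const theme = (await cookies()).get('theme')?.value;  // reading cookies → dynamic

  // fetch with cache: 'no-store' also forces request-time rendering
  const stats = await fetch('https://api.example.com/stats', {
    cache: 'no-store',
  }).then((res) => res.json());

  return (
    <main data-theme={theme}>
      <h1>Tab: {tab ?? 'overview'}</h1>
      <pre>{JSON.stringify(stats, null, 2)}</pre>
    </main>
  );
}
```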

In previous versions of Next.js, a page would be rendered either as fully static or fully dynamic. One single dynamic API on a page would opt the whole page into dynamic rendering; for example, doing a simple auth check on the value of a cookie.

By utilizing React Server Components with Suspense, we can stream in dynamic content like a personalized welcome banner or recommendations as they become ready, providing only fallbacks with Suspense while showing static content like a newsletter.

However, once we add multiple async components to a dynamic page, like a featured product, they too would run at request time even though they don't depend on dynamic APIs.

So to avoid blocking the initial page load, we would suspend and stream those components down as well, doing extra work creating skeletons and worrying about things like cumulative layout shift.

However, pages are often a mix of static and dynamic content; for example, an e-commerce app dependent on user information while still containing mostly static data. Being forced to pick between static or dynamic causes lots of redundant processing on the server for content that never or very rarely changes, and isn't optimal for performance.

So to solve this problem, at last year's Next.js Conf the use cache directive was announced, and this year, as we saw in the keynote, it's available in Next.js 16. With use cache, pages will no longer be forced into either static or dynamic rendering. They can be both, and Next.js no longer has to guess what a page is based on whether it accesses things like params. Everything is dynamic by default, and use cache lets us explicitly opt into caching.

Use cache enables composable caching. We can mark a page, a React component, or a function as cacheable.

Here we can actually cache the featured products component, because it does not need the request and does not use dynamic APIs.

And these cached segments can further be rendered and included as part of the static shell with partial prerendering, meaning the featured products are now available on page load and do not need to be streamed.
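As a minimal sketch of what marking a component cacheable looks like (the component and query helper names are illustrative, standing in for the demo's Prisma-backed queries):

```tsx
// featured-products.tsx — a cacheable server component, assuming a getFeaturedProducts helper
import { getFeaturedProducts } from '@/lib/queries';

export async function FeaturedProducts() {
  'use cache'; // the rendered output is cached and can be folded into the static shell

  // no cookies, headers, or searchParams are read here, so this is safe to share across users
  const products = await getFeaturedProducts();

  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}
```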

So now that we have this important background knowledge, let's do a demo: an improvement of a codebase with common issues often encountered in Next.js apps. These include deep prop drilling making it hard to maintain or refactor features, redundant client-side JavaScript and large components with multiple responsibilities, and a lack of static rendering leading to additional server costs and degraded performance.

So yeah, let's begin.

And give me one sec over here.

Um, all right.

Great. So, this is a very simple application, inspired by an e-commerce platform. Let me do an initial demo here. I can load this page; I have some content like this featured product, featured categories, and different product data. There's also this browse-all page over here where I can see all of the products on the platform and page between them. Then we have this about page over here, which is just static. I can also sign in as a user, and that will log me in and give me personalized content on my dashboard here, like recommended products or these personalized discounts.

So notice there's a pretty good mix here. Also, one more page I forgot to show you: the product page, the most important one. Here we can see product information and then save the product if we like it for our user. So notice that there's a pretty good mix of static and dynamic content in this app because of all of our user-dependent features.

Let's also have a look at the code, which would be over here. I'm using the App Router here, of course, in Next.js 16. I have all of my different pages like the about page, the all page, and the product page. I'm also using feature slicing to keep my app folder clean, and I have different components and queries talking to my database with Prisma. And I purposely slowed all of this down; that's why we have these really long loading states, just so we can more easily see what's happening.

So the common issues we want to work on, which we actually have in this application, are deep prop drilling making it hard to maintain and refactor features, excess client-side JavaScript, and a lack of static rendering leading to additional server cost and degraded performance. The goal of the demo is basically to improve this app with some smart patterns regarding composition, caching, and architecture to fix those common issues and make it faster, more scalable, and easier to maintain.

So let's begin with that. The first issue we want to fix is related to prop drilling, and that would be over here in the page.

Notice right here I have this logged-in variable at the top, and you can see I'm passing it down to a couple of components. It's actually been passed multiple levels down into this personal banner. This is going to make it hard to reuse things, because we always have this logged-in dependency for our welcome banner. With server components, the best practice is to push data fetching down into the components that use it and resolve promises deeper in the tree. And for this getIsAuthenticated call, as long as it's using either fetch or something like React cache, multiple calls are deduplicated, and we can just reuse it anywhere we like inside our components. So that will be totally fine to reuse. So now we can actually move this into the personalized section here; we're not going to need this prop anymore, and we can put the call directly, oops, in here. And we're not going to need to pass it anymore.

And since we're now moving this asynchronous call into the personalized section, we're no longer blocking the page. We can go ahead and suspend it with just a simple Suspense here. And we're not going to need this fallback.
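A minimal sketch of this step, assuming a cookie-based check (the demo's actual getIsAuthenticated implementation isn't shown on stage, so the body here is illustrative):

```tsx
// lib/auth.ts — React's cache() deduplicates calls within one request, so any component
// in the tree can call this freely without repeating the work
import { cache } from 'react';
import { cookies } from 'next/headers';

export const getIsAuthenticated = cache(async () => {
  const cookieStore = await cookies();
  return cookieStore.has('session'); // illustrative check
});

// personalized-section.tsx — the data fetch is collocated with the component that needs it
import { getIsAuthenticated } from '@/lib/auth';

export async function PersonalizedSection() {
  const loggedIn = await getIsAuthenticated();
  return loggedIn ? <p>Recommended for you…</p> : <p>Sign in to see recommendations.</p>;
}

// page.tsx — the page no longer awaits auth at the top; the section streams in on its own:
// <Suspense fallback={<SectionSkeleton />}>
//   <PersonalizedSection />
// </Suspense>
```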

As for the welcome banner, you'd suppose we're going to do the same. But trying to use the logged-in value here doesn't work, right? Because it's a client component. So we need to solve this a different way, and we're going to use a pretty smart pattern to do it. We're actually going to go into the layout and wrap everything with an auth provider.

So I'm just going to put this around my whole app here and get this logged-in variable over here. And I definitely don't want to block my whole root layout, so let's remove the await here and just pass this down as a promise into the auth provider. And the provider can just hold that promise; it can just be chilling there until we're ready to read it. So now we have this set up.

That means we can go ahead and get rid of this prop first of all, get rid of the one drilling down to the personal banner, and get rid of the prop drilling in the signature too. Now we can use this auth provider to read the logged-in value locally inside the personal banner, with a useAuth hook built on the provider we just created, reading it with use. This works kind of like an await, where we need to suspend while it's resolving. So now that small data fetch is collocated inside the personal banner, and I don't have to pass those props around. And while this is resolving, let's go ahead and suspend this one too, with a fallback, and let's just show a general banner over here to avoid any weird cumulative layout shift.

And finally, get rid of this one too.

So now this welcome banner is composable and reusable. We don't have any weird props or dependencies in the homepage. And since we're able to reuse it so easily, let's go ahead and add it to this browse page over here as well, and I can just use it without any dependencies.

So through these patterns, we're able to maintain good component architecture, utilizing React cache and React use, and make our components more reusable and composable.
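A sketch of the provider pattern she's describing, assuming the getIsAuthenticated helper from before (component and hook names are illustrative, not the demo's exact code):

```tsx
// auth-provider.tsx — a client context that holds an *unresolved* promise
'use client';
import { createContext, use, type ReactNode } from 'react';

const AuthContext = createContext<Promise<boolean> | null>(null);

export function AuthProvider({
  isLoggedIn,
  children,
}: {
  isLoggedIn: Promise<boolean>; // the unresolved promise handed down from the layout
  children: ReactNode;
}) {
  return <AuthContext.Provider value={isLoggedIn}>{children}</AuthContext.Provider>;
}

export function useAuth() {
  const promise = use(AuthContext); // use() can read context...
  if (!promise) throw new Error('useAuth must be used within <AuthProvider>');
  return use(promise); // ...and unwrap promises, suspending until the value resolves
}

// app/layout.tsx — note: no await, so the root layout itself never blocks:
// <AuthProvider isLoggedIn={getIsAuthenticated()}>{children}</AuthProvider>
```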

All right, let's tackle the next uh common challenge which would be excessive client side JavaScript and large components with multiple responsibilities.

Actually, that's also on the all page here, and again we have to work on this welcome banner. It's currently a client component, and the reason is that I have this very simple dismissed state here. I can just click this; it's a nice UI interaction. That's fine. What's not so fine, though, is that because of that I converted this whole component into a client component, and I even used useSWR for client-side fetching. I now have this API layer here, and I don't have type safety in my data anymore. This is not necessary. And we're also breaking the separation of concerns here, because we're mixing UI logic with data. So let's go ahead and utilize another smart pattern to fix this.

It's called the donut pattern. Basically, what I'm going to do is extract this into a client-side wrapper. Let's create a new component here and call it BannerContainer. This is going to contain our interactive logic, with the use client directive. We can create the signature and paste in everything we just had earlier. And instead of rendering the banners directly, I'm just going to slot in a prop, which is going to be the children. This is why it's called a donut pattern: we're making this wrapper of UI logic around content that is, or could be, server rendered. And

then, since we no longer have this client-side dependency in the welcome banner, we can remove the use client and use our async auth function here instead, making this an async server component. We can even replace client-side fetching with server-side fetching. So let me get the discount data directly here, utilizing our regular mental model like before, with type safety, which means I can also delete this API layer that I didn't want to work with anyway. Finally, for the isLoading, we can just export a new welcome banner with our donut-pattern BannerContainer wrapping server-rendered content, which means we don't need this loading state anymore. So we've basically refactored this whole thing into a server component and extracted the UI logic. But what is that? It looks like I have

another error.

So this is actually because of Motion. Do you use Motion? It's a really great animation library, but it requires the use client directive.

And again, we don't have to make this whole component use client just for an animation. We can create another donut-pattern wrapper and extract wrappers for these animations. And that means we don't have to convert anything here to the client side.

And I'm probably missing something down here. Yep, there we go.

So now everything here has been converted to server components. We have the same interaction, we still have our interactive logic here, but now we have one way to fetch data and a lot less client-side JS.
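A sketch of the donut refactor just described, reusing the earlier illustrative helpers (getIsAuthenticated, and an assumed getDiscounts query):

```tsx
// banner-container.tsx — the "donut": a thin client shell around server-rendered children
'use client';
import { useState, type ReactNode } from 'react';

export function BannerContainer({ children }: { children: ReactNode }) {
  const [dismissed, setDismissed] = useState(false);
  if (dismissed) return null;

  return (
    <aside>
      {children /* slotted through untouched — it can stay server rendered */}
      <button onClick={() => setDismissed(true)}>Dismiss</button>
    </aside>
  );
}

// welcome-banner.tsx — back to an async server component with server-side fetching
import { BannerContainer } from './banner-container';
import { getIsAuthenticated } from '@/lib/auth';
import { getDiscounts } from '@/lib/queries'; // assumed query helper

export async function WelcomeBanner() {
  const loggedIn = await getIsAuthenticated();
  const discounts = loggedIn ? await getDiscounts() : [];

  return (
    <BannerContainer>
      <p>{loggedIn ? `You have ${discounts.length} personal discounts` : 'Welcome!'}</p>
    </BannerContainer>
  );
}
```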

Actually, I'm using this donut pattern myself for this UI boundary helper, which looks like this. Do you see that? This kind of shows again what I mean with the donut pattern: we have this client component around a server component. I also marked a lot of my other components with this UI helper; here I have more server components.

Let's go ahead and improve those too, since we're getting pretty good at this by now. They are in the footer: these categories. I have this nice component fetching its own data, and I just wanted to add this show-more behavior in case the list gets really long. With a donut pattern, I can just wrap a ShowMore component here. This contains my UI logic, and it looks like this, right? Pretty cool. It now holds the client logic, allowing us to use state; we're using Children.count and toArray to slice the children. And what's so cool here is that these two are now entirely composable, reusable components that work together like this. So this is really the beauty of the patterns we're learning here.
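A sketch of that ShowMore wrapper, assuming the prop names (the talk only describes the idea of slicing children in a client donut):

```tsx
// show-more.tsx — a generic client wrapper that slices its server-rendered children
'use client';
import { Children, useState, type ReactNode } from 'react';

export function ShowMore({
  children,
  initialCount = 5,
}: {
  children: ReactNode;
  initialCount?: number;
}) {
  const [expanded, setExpanded] = useState(false);
  const items = Children.toArray(children);
  const visible = expanded ? items : items.slice(0, initialCount);

  return (
    <>
      {visible}
      {items.length > initialCount && (
        <button onClick={() => setExpanded((value) => !value)}>
          {expanded ? 'Show less' : `Show ${items.length - initialCount} more`}
        </button>
      )}
    </>
  );
}
```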

You can use this for anything. I also use it for this modal over here. Just remember this next time you're considering adding any sort of client logic to your server components.

Okay, we now know the donut pattern and how to utilize it to create these composable components and avoid client-side JS. So we can move on to the final issue.

Let me go ahead and close this again.

That would be the lack of static rendering strategies, right?

Looking at my build output, I actually have every single page as a dynamic page here. That means that whenever I load something, this is going to run for every single user. Every single user that opens this is going to get this loading state. It's going to waste server costs and make the performance worse.

And that also means that something inside my pages is forcing dynamic rendering for all of my pages.

Actually, it's inside my root layout. I don't know if you've experienced this. It's over here: in my header I have this user profile, and this is of course using cookies to get the current user. And that means everything else is also dynamically rendered, because again, pages could only be either dynamic or static, right? This is a pretty common problem, and something that has been worked around before in previous versions of Next.js. So let's just see what we might do.

One thing we could do is create a route group and split our app into static and dynamic sections. That would allow me to extract my about page and render it statically. It's okay for some apps, but in my case the important page is the product page, and that's still dynamic, so not really helpful.

How about this strategy? Here I'm creating a request context param, encoding a certain state into my URL, and then I can use generateStaticParams to generate all of the different variants of my pages. Combined with client-side fetching of the user data, that would actually let me get a cache hit on my product page. It's definitely a viable pattern; it's recommended by the Vercel Flags SDK, called the precompute pattern, I think.

But this is really complex, I have multiple ways to fetch data, and I actually don't want to rewrite my whole app for this. So what if we didn't have to do any of those workarounds? What if there was a simpler

way? Well, there is. Let's get back to our application. We can go to the Next.js config and just enable cache components.

Nice. Okay. What this does, as you know from the keynote, is opt all of our asynchronous calls into request time, that is, dynamic rendering. It will give us errors whenever we have some asynchronous call that isn't suspended, and it gives us this use cache directive that we can use to granularly cache a page, a function, or a component.
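A minimal sketch of that config change; note the flag has moved around between canaries (it was previously behind experimental options), so check the docs for your exact Next.js 16 version:

```ts
// next.config.ts — enabling Cache Components
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  cacheComponents: true,
};

export default nextConfig;
```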

So yeah, let's go ahead and utilize this. We can begin with the homepage.

Let's have a look. Again, I have this mix of static and dynamic content: my welcome banner for me, a "for you" section, also for me. Let's look at that with this UI helper again. For example, the banner is dynamically rendered, right, with this over here. Whereas I marked this as hybrid rendering, because the hero is fetching this asynchronous thing, and it's going pretty slowly, but it doesn't depend on any sort of user data or dynamic APIs. So that means that everything that is hybrid rendered here can actually be reused across requests and across users, and we can use the use cache directive on it. So let's add the use cache directive here and mark this as cached.

And now, whenever I reload this page, oh, I didn't save this, there we go, it will not reload this part, right? Because it's cached. It's now static, right?

There are also other related APIs, like cacheTag, to let me tag and revalidate a specific cache entry granularly, or define my revalidation period. But for this demo, let's just focus on the plain directive. Now that I have this use cache directive, I can actually remove my Suspense boundary around this hero.

And what this will do is let partial prerendering include this in the statically pre-rendered shell, so that the hero will, in this case, be part of my build output.
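A sketch of the tagging and lifetime APIs mentioned above (depending on your Next.js version these helpers may still be exported as unstable_cacheTag / unstable_cacheLife; the query helper is assumed):

```tsx
// hero.tsx — a cached segment with a tag and an explicit lifetime
import { cacheTag, cacheLife } from 'next/cache';
import { getHeroContent } from '@/lib/queries'; // assumed query helper

export async function Hero() {
  'use cache';
  cacheTag('hero');   // lets a server action call revalidateTag('hero') to refresh this entry
  cacheLife('hours'); // a named profile controlling how long the entry stays fresh

  const hero = await getHeroContent();
  return <section>{hero.title}</section>;
}
```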

Let's do the same thing for everything else on this page that can be shared. For example, I have these featured categories over here. Let's do the same there: add the use cache directive and mark this as cached, like that. And we can remove the Suspense boundary; we're not going to need it anymore. Same for the featured products: let's add use cache, mark it as cached, oops, and then remove the Suspense boundary.

So notice how much complexity I'm able to remove here. I don't have to worry about the skeletons and the cumulative layout shift I was dealing with before. And we no longer have this page-level static-versus-dynamic limitation.

So now when I load this page, you'll see everything here is cached except for the truly user-specific content, right?

So that's pretty cool. Let's go to the browse page and do the same thing over there.

Yeah, I already marked all of my boundaries here so you can easily understand what's happening. And I want to at least cache these categories. Looks like I'm getting an error, though. Maybe you recognize this. It means I have a blocking route: I'm not using a Suspense boundary where I should be. Refreshing this... huh, it's true. This is really slow, and it's causing performance issues and bad UX. So this is great: use cache, or cache components, is helping me identify my blocking routes. Let's actually see what's happening inside. This is the problem, right? I'm fetching these categories at the top level, and I don't have any Suspense boundary above it.

Basically, we need to make a choice: either we add a Suspense boundary above it, or we opt into caching. Let's do the simple thing first and just add a loading.tsx here. Let's add a loading page over here with some nice skeleton UI.
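For reference, a route-level loading file is as small as this (the path and class names are illustrative); Next.js wraps the page in a Suspense boundary with it as the fallback:

```tsx
// app/browse/loading.tsx
export default function Loading() {
  return (
    <div aria-busy="true">
      <div className="skeleton heading" />
      <div className="skeleton grid" />
    </div>
  );
}
```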

That's pretty good. It resolved the error, but I don't have anything useful happening on this page while I'm waiting. I can't even search. So with cache components, static versus dynamic is like a scale.

And it's up to us to decide how much static we want in our pages. So let's shift this page more towards static. Just delete this loading.tsx again, and then utilize the patterns we learned earlier to push this data fetch into the component and collocate it with the UI. So move this down into my responsive category filters here; I have two because of responsive design. I can just add it here, oops, and import this. I don't need this prop anymore; actually, my component is becoming more composable. And instead of suspending it, let's just add the use cache directive.

And that should be enough. So notice how I'm being forced to think more about where I'm resolving my promises, and I'm actually improving my component architecture through this. I don't need to suspend this; it will just be included in the static shell here.

The product list, let me just keep that fresh so it reloads every time, whereas the categories at the bottom I also want to cache. So let's go to the footer, and since I'm using the donut pattern over here, this can actually be cached even though it's inside a part of the UI that's interactive. So this is totally fine. That pattern was not only good for composition but also for caching.

I think I have one more error there. Let's see what that is. Still have this error. This is actually because of these search params; I can't cache this. But I can resolve it deeper down to reveal more of my UI and make it static. So let's move this down: pass it down as a promise to the product list. We'll type it as a promise over here, like that. Let's resolve it inside the product list and use the resolved search params over here and over here. And since this is suspended here, the error will be gone. So reloading this, the only thing reloading is just the part that I picked specifically to be dynamic. Everything else can be cached. And that means I can interact with my banner or even search, because that part has already been pre-rendered.
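A sketch of passing searchParams down unresolved, as just described (paths, fields, and the searchProducts helper are illustrative):

```tsx
// app/browse/page.tsx — the page hands searchParams down *unresolved* so its shell stays static
import { Suspense } from 'react';
import { ProductList } from '@/features/products/product-list'; // assumed path

export default function BrowsePage({
  searchParams,
}: {
  searchParams: Promise<{ q?: string; page?: string }>;
}) {
  return (
    <Suspense fallback={<p>Loading products…</p>}>
      <ProductList searchParams={searchParams} />
    </Suspense>
  );
}

// product-list.tsx — resolved below the Suspense boundary, so only this subtree is dynamic
import { searchProducts } from '@/lib/queries'; // assumed query helper

export async function ProductList({
  searchParams,
}: {
  searchParams: Promise<{ q?: string; page?: string }>;
}) {
  const { q, page } = await searchParams;
  const products = await searchProducts({ q, page });
  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}
```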

All right, let's do the final page, which is the product page, the most difficult and the most important one. It's really bad right now, and it's super important for an e-commerce platform, apparently. All right, let's go ahead and fix that one too.

So here I have this product page. Let's start by caching just the reusable content. For example, the product itself: just add use cache here and mark it as cached. That should be fine, and it means we can remove the Suspense boundary over here.

All right. And this is no longer reloading on every request, right?

For the product details, let's do the same thing: add use cache, mark it as cached, and see if that will also work. It did not. Actually, this is a different error. It's telling me that I'm trying to use dynamic APIs inside this cached segment. And that is true: I'm using the save-product button, right, the one you click to toggle the saved state. So what do you think we can do about this?

We can use the donut pattern again. We can also slot dynamic segments into cached segments, so we're interleaving them just like before, but with cache. So this is pretty cool. Let's add the children here, like that. This will remove the error, and I can just wrap this around the one dynamic segment of my page. Remove the Suspense boundary, add just a very small bookmark UI for that one dynamic piece of the page, and let's see how that looks now.
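A sketch of slotting a dynamic "hole" through a cached segment (component and helper names are illustrative):

```tsx
// product-details.tsx — a cached segment with a dynamic hole passed through children
import type { ReactNode } from 'react';
import { getProductDetails } from '@/lib/queries'; // assumed query helper

export async function ProductDetails({ id, children }: { id: string; children: ReactNode }) {
  'use cache'; // the product data is cached per id...

  const details = await getProductDetails(id);
  return (
    <section>
      <h2>{details.name}</h2>
      <p>{details.description}</p>
      {children /* ...while the slotted-in, user-specific UI stays dynamic */}
    </section>
  );
}

// usage in the page (sketch): the save button keeps its own small fallback
// <ProductDetails id={id}>
//   <Suspense fallback={<BookmarkSkeleton />}>
//     <SaveProductButton productId={id} />
//   </Suspense>
// </ProductDetails>
```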

So notice how almost the entire UI is available, but I have this one small chunk that is dynamic. And that's fine; everything else is still there. And let's leave the reviews dynamic, because we want to keep those fresh. There's still one more error; let's quickly tackle that. Again, this is the params. I'm getting a hint that I need to make a choice: either add a loading fallback or cache this. Let's just use generateStaticParams in this case. It kind of depends on your use case and your data set, but here I'm just going to add a couple of predefined pre-rendered pages and then cache the rest as they're generated by users. And this will remove my error here.
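A sketch of that last step, assuming a helper that returns the most popular product ids (the helper and component names are illustrative):

```tsx
// app/products/[id]/page.tsx — pre-render a few products, let the rest be cached on demand
import { getTopProductIds } from '@/lib/queries'; // assumed query helper
import { ProductView } from '@/features/products/product-view'; // assumed component

export async function generateStaticParams() {
  const ids = await getTopProductIds(5);
  return ids.map((id) => ({ id }));
}

export default async function ProductPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params;
  return <ProductView id={id} />;
}
```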

So I think I'm actually done with my refactor. Let's have a look at the deployed version and see what that looks like. I just deployed this on Vercel.

And remember, I purposely slowed down a lot of data fetches here. Still, when I load this page initially, everything is just available already, right? The only things loading are those few dynamic segments like the discount and the "for you" section. Same with the browse-all page: all of the UI is already available. And the product page itself just feels instant, right? Remember again that all of these cached segments are included in the static shell with partial prerendering, and it can be prefetched using the improved prefetching in the new Next.js 16 client router. So every navigation just feels so fast, right?

All right, to summarize: with cache components there is no more static versus dynamic. We don't need to avoid dynamic APIs or compromise on dynamic content, and we can skip the complex hacks and workarounds using multiple data fetching strategies just to get that one static cache hit, as I showed you.

So in modern Next.js, dynamic versus static is a scale, and we decide how much static we want in our apps. As long as we follow certain patterns, we can have one mental model that is performant, composable, and scalable by default. So let's get back to the slides. And they're here. Okay, great.

Yeah, so if you weren't already impressed by the speed of that, this is the Lighthouse score. I collected some field data with Vercel Speed Insights, and we have a score of 100 on all of the most important pages, the homepage, the product page, and the product list, even though they are highly dynamic. So let's finally summarize the patterns that will ensure scalability and performance in Next.js apps and allow us to take advantage of the latest innovations and get scores like this.

So firstly, we can refine our architecture by resolving promises deep in the component tree and fetching data locally inside components, using React cache to deduplicate work. We can avoid excessive prop passing to client components by using context providers combined with React use.

Second, we can compose server and client components using the donut pattern to reduce client-side JavaScript, keep a clear separation of concerns, and allow for component reuse. And this pattern will further enable us to cache our composed server components later.

And finally, we can cache and pre-render with use cache, either by page, component, or function, to eliminate redundant processing, boost performance and SEO, and let partial prerendering statically render these segments of the app. And if our content is truly dynamic, we can suspend it with appropriate loading fallbacks.

And remember that all of this is connected. The better your architecture, the easier it is to compose, and the easier it will be to cache and prerender with the best results. For example, resolving dynamic APIs deeper in the tree will allow you to create a bigger partially prerendered static shell.

And with that, this is the repo of the completed version of the application. There are so many things in there I didn't even show that you can check out, and you can scan the QR code to find my socials alongside the repo, if you don't want to take a picture and type it in yourself.

So yeah, that's it for me. Thank you. Thank you, Next.js, for having me here. Thank you.

[Music] [Applause] >> Holy moly.

>> Please welcome to the stage Sanity principal educator Simeon Griggs.

[Music] [Applause] [Music] Hello everyone. Hello Next.js Conf. To those who are watching online, hello internet. And to those in our Sanity Slack, hello water cooler. I'm here to talk about content operations in the age of AI-generated content.

So when it comes to creating content, the scales have been tipped in favor of people who don't care about wasting your time.

When we create content without context, these tools don't know your audience. They don't learn. They don't get any better.

And AI tools, as they're currently sold, do a really good job of convincing you that you, human, are not good enough to do the thing.

You can't write an email on your own. You can't write a blog post on your own. You can't write meeting notes on your own. You can't be human without the help of AI. You're just a sack of meat that needs AI's help.

And I don't think this is right. I think AI is awesome, right? It's a superpower. It's a technology upgrade better than any I've ever seen in my career. I'm not a doomer.

It's just that I think that you're awesome too: you, with the AI SDK, Next.js 16, API routes, and all of this along with Sanity. So all of us get to determine what the future is going to look like, because we've got tools that look like they were given to us from the future, right? But we've got them now. So we get to decide what now looks like, and that's in our hands. What would we like to do with that?

So Hollywood's given us visions of the future for years, and we get to choose what we want now to look like. There are all sorts of different visions of the future, and I know our CTO is very fond of a particular vision that looks something like this. And that's what I think working with AI can be like, particularly when we're creating content, right? We don't have to let the machines take over and completely cede control. We're still in control, still got our hands on the joysticks. We can take on whatever's greater than us and take it on with absolute confidence.

So before we get to the demo, let's set the table. What's Sanity? Actually, first of all, who am I? I'm Simeon. Hello. I'm an educator at Sanity, and we've been at Next.js Conf for a number of years, which is excellent.

Sanity is a content operating system. If you've heard of Sanity, you probably heard of it as a content management system, and that's fine. If you want to call us that, that's fine; I've heard people out at the booth today saying, "Oh, Sanity, they're a CMS." But we'll get to unpack why we're a bit more than that soon. As I said, we've been at Next.js Conf for the last five years, I think, and we've been really close to this community. We've put out a lot of packages, templates, and guidance. Our engineers work directly with Vercel's engineers to make sure that you get the very best experience. Sanity is

the perfect pair with Next.js for content-driven applications, and one of the things we've said over the years is that we talk about structured content. So what's structured content? It means you want to store things based on what they are, not what they look like. And you might be saying, "But Simeon, one of the great things about AI is that it can take unstructured junk content and turn it into structured content." That's true, but you'll be able to do much more when you are creating a source of truth that is structured content. It actually means there's never been a better time to migrate into a system that values structured content.

We've also talked about "content is data" for the longest time. And it's really cool to see, as we've got all these new tools and the language has evolved over the journey, that we could probably change some of this language and it would still mean the same thing. I think these days we'd say content is context. What your company knows and what you store in your data set, that's all context that can help the AI generation process. So that's how we set ourselves a little bit apart. But what's a CMS for? What's a

content management system for? This has typically been the experience if you've worked with CMSes in the past: an author or content operator, somebody on a content team, uses a CMS and creates a page. That individual person makes individual pages, and when they need to modify those individual pages, they go and make those changes. There's this one-to-one relationship between what it looks like on the website and the individual who works on that page. And that doesn't scale. What I want to talk about today is the ability to scale with AI tools without producing junk. So, one author works on one page. That's the experience that we developers in the room have given content creators for the longest time, right? But that's not how you and I work. You and I work more like this. As developers, we work with source code, we put that through a compiler, and then we serve those built scripts to the page. So we only ever

So that's how we work. That's not how the the experience we've given to content teams. And I know because we're all very good developers and we're all getting on boarded with AI tools and we're all trying to impress everybody.

We're moving towards a future of spec driven development, aren't we? Aren't

we? And this is the future that we're told we can go towards that if we just keep modifying the spec and we keep modifying the PRDs, then the LLMs will create the response and we shouldn't need to modify the response. We'll just

modify the original spec. And that's the future we're heading towards. But again,

you can see it's just one way traffic.

And I think that's an experience that we can give to content creators is that if we put them in charge of creating the source content, that is what your business knows, the source of truth, those prompts, and put all of those inside of a content operating system,

then your authors only ever concern themselves with just generating lots and lots of output, and they don't have to go and click and drag individual blocks and pieces into place. If you're working with a CMS that's still based around

pages, well, difficult to imagine that scaling with AI. So uh developers, you write source code, you and I write source code. So why can't we get our

source code. So why can't we get our content authors to write source content?

It's a future we can get towards. So instead of thinking about a content management system, what if we think about a context management system? As somebody who writes these slides for a living, I patted myself on the back when I came up with that one. I thought it was pretty good. And then I was immediately told by one of my colleagues that I'm not the first person to say it.

So anyway, what are content operations? I've mentioned them a few times today, and the term is probably not entirely understood. A content operation looks something like this: when you're working with a CMS, it's mostly concerned with this one action, where you're working in something else, you copy it, you paste it into your CMS, whatever variety that is, and you hit the publish button. That's kind of the journey of content in a CMS. But we know there are so many things that happen before we get there. There's this whole life cycle of content that happens before you get to the publish button: you are researching, writing, getting approvals, going through tasks and comments and ideating. There's this whole life cycle of events happening before you press publish, and those are all content operations that people are involved in. It might not be something that you as a developer get too terribly involved with, but there's a lot of work happening there, and that's what we are concerned about. And not only do things happen before you press publish, there's a lot that happens after you press publish as well. You need to kick off cache invalidation, rebuild the site, redeploy things, syndicate content across all sorts of systems. There are all sorts of things that need to happen in the life cycle after you press publish.

That full breadth of operations on either side is work that people are involved in doing. That's content operations, while your CMS is probably still only ever really concerned with that thing in the middle. So Sanity is for everything before and after pressing publish. I'm going to say that again for effect: Sanity is for everything before and after pressing publish. So yes, we're a CMS, sort of, we've got these pieces of a CMS, but I've got too much ambition to leave it there. So we created the Content Lake, which is a real-time, multiplayer editing data store, and we knew we'd need to do more with that. So we built a real-time CDN, which we showed off last year, one that scales massively for queries and assets. We also knew we'd need to be able to query JSON, so we built a query language called GROQ, a name that's now been co-opted by others, but it's kind of like SQL for JSON, and it's an open standard that we shipped. We also ship Portable Text, an open standard for block content and rich text, along with an editor built in React and serializers for a number of different languages. We knew that once you start building out this content operations infrastructure, you'll need to be able to deploy it as code, so we built Blueprints, an infrastructure-as-code way to deploy that. We knew we'd need to be able to kick off functions to perform actions in that cycle after you press publish. And we knew that the Sanity Studio alone would not be enough; you need to be able to build multiple applications for content authoring, and that's why we built the App SDK. All of these things together are the content operating system, where organizations that know they need to put their content into something they can extract value from put it into a content operating system. We've also shopped this diagram around; I don't think it makes much more sense, but I thought I'd put it in anyway.

Hang on, Simeon, I'm getting impatient. I thought this talk was about AI. I promise I'm getting back to it. Again, if we think back, Sanity's been doing this for a long time: we want humans and robots to work together, and we always have. For as long as the Content Lake concept has been around, we've had that multiplayer editing capability. It's just that originally we thought that would mean APIs sending edits to documents; for the longest time we didn't know that we'd so quickly have the ability to work with LLMs, and now we do. But now there's this term about keeping a "human in the loop," and that seems to relegate the human to an innocent bystander role, where the robots are doing all the work and we're just in the loop, checking in occasionally. This seems like the wrong paradigm to me, this human-in-the-loop idea where we just check in occasionally while the robots create all of this.

This is the easiest thing to make with AI: content. If we just want to generate content at scale, if I say I need 100,000 pages, you can very easily make 100,000 sloppy pages. And we all know AI slop. You've been reading a Twitter thread and it's got that weird smell to it: it seemed interesting at first, then it tails off, it doesn't make any sense anymore, it's got too many em dashes, and it says it's "not just X, it's Y."

You can work out that it's slop, and maybe eventually we'll get to a point where you can't detect it, but we're not there now. Slop is context-free. It doesn't say anything in particular about you, your organization, or your goals. It doesn't convert, because it's easily dismissed. How many AI-generated images and things have you just scrolled past in your timeline, because it's so obviously generated? And it's really easy to scale. That's a problem if you need to create content-driven applications at scale: this is kind of the easiest thing to do, unfortunately. But the thing to remember is that slop's not new. We've had human-powered slop for the longest time. I was walking past my local coffee shop the other day, and there was an AI-generated picture of brownies in the window. They've got a sale on brownies, if anybody's in Whitley Bay soon. Once upon a time, that would have just been clip art or a stock photo or WordArt or something. We've had context-free bad content for the longest time; it's just that now you can generate it at scale. So we've got two problems I want to solve today.

One: authors writing outputs doesn't scale. If you task somebody authoring in a CMS with creating individual pages, they're not going to create very many, because they're time-limited. The other problem: if we just use AI to generate all that content, it doesn't convert. It doesn't help to just generate huge volumes of content. (If anyone's been tracking the number of times I've said "content," I'm going to check the transcript afterwards.)

Anyway, we don't have to pitch this as a binary choice of which end of this graph we're going to be at, human-created content or AI-created content, because both ends of this chart have their problems. One's too slow and won't scale; the other's too sloppy and won't convert. But somewhere in the middle there's this idea that if we keep a human involved, not in the loop but orchestrating the content creation experience, and we enhance their skills with AI, that's where we can actually compete at scale and make a great amount of content.

And we're back at this analogy again. The goal is a world where one author can use a content management system, or a content operating system, write many prompts, and create a near-infinite number of pages. If this sounds philosophical or farcical, not from the real world, we've got customers doing this right now. One particular customer using Sanity is a travel tech startup in the UK. They use a system, and I'll demonstrate a version of it in a moment, where they can generate content for around 50,000 hotels on demand, but it's very specific content, specific to them and their audience, and they can personalize it. So this is actually happening now. Where once individual authors would have had to write individual documents, they can instead generate content at massive scale, and they approve the quality of it.

Their content management system looks something like this. I included this slide, but I'll show it in a moment in my demo. So if we can go over to the screen now, I'll show you what this looks like.

All right, I'll give it a refresh for good luck. Oh, I've got a notification here: we introduced a new feature today that's not part of this slide. If you'd like to see the Sanity Content Agent, go to our booth and someone will show you a demo of it. It's very useful. So, what I've got here is a Next.js 16 application on the left-hand side, shown in our presentation mode, and on the right-hand side is the Sanity Studio, the main content editing interface. What we've got here is an individual page for a particular location, and the sloppy version of creating content here is that our description is currently blank. So what if we need to generate a description on demand? Well, I've created an API route using the AI SDK, and it's just going to create a simple description. If we look at the prompt for it, it's just this: "Only return plain text. Write a description for this location." This is the most basic version of this prompt, and the whole prompt is kept on my side, in the code. And what do we get? We get that very generic, unhelpful, boring, sloppy version of the content. This isn't helpful. We've generated text, yes, but this isn't going to help anybody.
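For context, here is a minimal sketch of what a route like that might look like with the AI SDK. The file path, model choice, and request shape are my assumptions for illustration, not the exact code from the demo:

```ts
// app/api/describe/route.ts -- hypothetical path
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai"; // model provider is an assumption

export async function POST(req: Request) {
  const { locationName } = await req.json();

  // The entire prompt lives in application code, so content authors
  // can't refine it without a developer and a redeploy.
  const { text } = await generateText({
    model: openai("gpt-4o"),
    prompt: `Only return plain text. Write a description for this location: ${locationName}`,
  });

  return Response.json({ description: text });
}
```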

The whole intention and value of this talk is the idea that we can put the authors themselves, the content operators, in control of what we create. By the way, we've got AI tooling built into Sanity that we could have used here, but it's the Vercel Next.js conference, so I wanted to use the platform, so to speak. What I've built here is a custom tool, if I stop right-clicking there, where I can generate a description using a few prompts, and all of those prompts are documents stored inside my Sanity dataset. Now, I know it's not news to anybody in this room that you write better prompts to get better content, right? The difference is that if I build all those prompts into the system where my content operators work, then they are in control of the experience of what comes back, and I don't have to get called in every time the generated content isn't very good. They're able to modify the prompt. So what if we select all of these prompt documents and then add other contextual information, like whether we have offers at the moment for this particular location? Then we can create really specific content.

Where are all these prompts coming from? I'll jump over here: these prompts are stored in the Studio, and this is literally what the Love Holidays team is doing. While I'm going to generate this content one page at a time here, I'm told that the team there have refined their prompts to the degree where they can now initiate actions like this en masse and just about trust that what they get back is exactly what they need. They're no longer really needing to review the content; because the prompts have been so well refined, they can pretty much trust what comes back. So let's include all these other things and generate that description again. If we have a look at that API route, you can see I've pulled in these prompt documents, which are being worked on by those content authors, and I can pull in any current offers. If there are offers running in this location at the moment, we'll include that in the text of the article. This is an idea of how we can make sure the content is relevant to our customers' tone of voice, and, if we have promotions, that we can personalize the offer. We could take this even further: localization, personalization based on the customer profile, all of this. What we get now is a very different version of the content. The last one was a single paragraph of very generalized content, whereas here the very first sentence mentions one of those offers, the content's much shorter, it's to the point, and it follows those prompts much more closely than before. Again, this is just an example of creating one page on demand, but you can imagine how we'd then put bulk tools into the hands of content creators to generate content en masse.
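As a rough sketch of how such a route might assemble that context, assuming hypothetical `prompt` and `offer` document types in the dataset (the GROQ queries, field names, and model choice are illustrative, not the demo's actual schema):

```ts
// app/api/describe/route.ts -- hypothetical path, context-aware version
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai"; // model provider is an assumption
import { createClient } from "@sanity/client";

const sanity = createClient({
  projectId: process.env.SANITY_PROJECT_ID!,
  dataset: "production",
  apiVersion: "2024-01-01",
  useCdn: true,
});

export async function POST(req: Request) {
  const { locationId, locationName, promptIds } = await req.json();

  // Prompt documents are authored and refined by the content team in the Studio.
  const prompts: { text: string }[] = await sanity.fetch(
    `*[_type == "prompt" && _id in $promptIds]{text}`,
    { promptIds }
  );

  // Current offers for this location become extra context for the generation.
  const offers: { headline: string }[] = await sanity.fetch(
    `*[_type == "offer" && location._ref == $locationId]{headline}`,
    { locationId }
  );

  const { text } = await generateText({
    model: openai("gpt-4o"),
    system: prompts.map((p) => p.text).join("\n"),
    prompt: `Write a description for ${locationName}. Current offers: ${offers
      .map((o) => o.headline)
      .join("; ")}`,
  });

  return Response.json({ description: text });
}
```

The point of the shape is that everything in `system` comes from documents the content team owns, so refining the output never requires a code change.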

So that's a really basic demo of using the AI SDK, but the value of what we're trying to demonstrate is this: if we put our authors in charge of building out a content model that describes what our business knows and how our business talks, they no longer have to worry about creating individual pages, and you can go from creating a page a day to creating tens of thousands of pages of high-quality content on demand. If you'd like to know more about anything I've shown today, have a look at sanity.io/next. And if you're here at Next.js Conf, make sure you swing by the booth as well, and we can show you more demos of this and more of the AI tooling we've got available for content operations. And that's it.

Please welcome to the stage Vercel software engineer Rhys Sullivan.

[Music] [Applause] Great. Thank you. Hi, I'm Rhys, an engineer on the domains team at Vercel, and I want to go over use cache: what it unlocks, how we got here, and why I'm excited about it.

Going over the agenda, we're going to look at the evolution of use cache in Next.js, the functionality it unlocks, and then how you can use use cache in a way that hides it away in the background as more of an implementation detail.

So, from the talk title, what is reactive state? It's the idea that when you make a change, everyone should see that change reflected without encountering stale data. It's a common problem in many apps you use. For example, how many times have you flipped a toggle in one browser tab, but the other tab you have open doesn't reflect the change you just made? That scenario highlights it, but it might not be super common to have two tabs open at the same time; instead, imagine your teammates updating state at the same time and the page not reflecting it. Same thing with leaving a comment and then not seeing it reflected. On the front end we have a lot of tools to solve this problem: we can listen for data changes over WebSockets or SSE, and it's pretty fixable in that you set up a subscription on the client to the data that's changing and then refresh the state based on that. We've seen more apps adopt this over time. For example, in one of our products, v0, when you send a message it updates in real time across browser clients. It's a fair bit easier to ship updates to SPA-like apps, because you can start a listener on the client. But when you're server-side rendering your pages in a distributed or serverless setup, you don't have a persisted client to start a connection from. Along with that, you might be managing multiple cache locations, whether that's your data cache or your CDN cache. It's a lot of state to track.

So that's reactive state; now a bit about my background. A lot of my experience is building really fast, high-scale sites in Next.js. In the past few years I've helped build Next Faster, a demo e-commerce site with over a million products and instant page navigations; Answer Overflow, a Next.js app with over 2 million pages; and Vercel's domains experience, a real-time search for domain names. What do all of these have in common? They all make really heavy use of caching, and the reason they use caching so extensively is to get data close to the user. So if caching is so great, why aren't all sites fast? Why don't we just cache everything so it loads instantly? Well, you have to account for all the places the cached data is used in order to revalidate them, and make sure the data isn't stale. And with Next Faster and Answer Overflow, while both are great at serving lots of pages, they're not super reactive to the changes that are made.

Let's look at why that is and how caching has evolved in Next.js. Starting with the Pages Router, you had two options for caching. The first was manually writing data to Redis or a similar data store; that's what you'd use if you wanted to cache the result of a call across deployments or keep it in memory. The other was writing pages to the ISR cache, which you'd do by adding getStaticProps (plus getStaticPaths for dynamic routes) along with a revalidate time. You'd use that when you wanted to serve pages from the edge cache or CDN cache.
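For reference, a minimal sketch of that ISR-style Pages Router setup; the data function and revalidate window here are placeholders:

```tsx
// pages/posts/[id].tsx -- Pages Router ISR sketch (data source is a stand-in)
import type { GetStaticPaths, GetStaticProps } from "next";

type Post = { title: string; body: string };

// Stand-in for a real data fetch (database, CMS, etc.).
async function getContent(id: string): Promise<Post> {
  return { title: `Post ${id}`, body: "…" };
}

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [],            // render pages on demand
  fallback: "blocking",
});

export const getStaticProps: GetStaticProps<{ post: Post }> = async ({ params }) => {
  const post = await getContent(String(params?.id));
  return {
    props: { post },
    revalidate: 60, // ISR: the page can be served from cache for up to 60s before regeneration
  };
};

export default function PostPage({ post }: { post: Post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
    </article>
  );
}
```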

That's how you cache data in the Pages Router, but invalidation was a much more painful story. You'd have to call a route handler, figure out all the places your data was used, and call revalidate for every single path that data appeared on. That meant a bunch of manual work to track where your data was used, and then calling revalidate for each path.

So why did you have to call revalidate so many times? From the framework's perspective, in the Pages Router a route mapped to a rendered page, but the framework didn't know what data went into rendering that page. In this case, the post page calls getContent, getViews, and getUsers, but the framework doesn't know about that mapping or the data the page depends on. All it knows is that the route maps to the rendered page.

Then along came the App Router, and with it the concept of your framework understanding caching and providing primitives for you to cache data. It started with the awkward teenagers of unstable_cache and an option for fetch caching, but that API is pretty clunky to use and fairly limited. For example, you weren't able to use the result of a call inside unstable_cache to set cache tags based on it. It then evolved pretty quickly into use cache, which is a really nice interface that lets you cache both React components and data; it's the shared interface for both.
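A minimal sketch of the shape being described, assuming Cache Components is enabled and that the cacheTag/cacheLife helpers are exported from next/cache (the exact names have shifted between releases, so treat this as illustrative):

```ts
// lib/posts.ts -- 'use cache' sketch
import { cacheTag, cacheLife } from "next/cache";

type Post = { id: string; title: string };

// Stand-in for a real database or CMS call.
async function fetchPostFromDb(id: string): Promise<Post> {
  return { id, title: `Post ${id}` };
}

export async function getPost(id: string): Promise<Post> {
  "use cache";
  const post = await fetchPostFromDb(id);
  // Unlike unstable_cache, the tag can be derived from the returned data.
  cacheTag(`post-${post.id}`);
  cacheLife("hours");
  return post;
}
```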

When use cache and the other caching primitives were introduced, a question you might ask yourself is: why is Next.js introducing these primitives? Why does my framework care about how I cache data? And that comes back to fine-grained reactivity and caching of data. For a given route, the framework knows exactly what data makes up that route, and it's able to respond with a cache header that downstream caches can use to do granular revalidation of changes. From this diagram we can see that for the path /posts/[id], the page again calls getContent, getViews, and getUsers to render, but in this case the page actually knows those functions went into it. And it's not only that those functions were called to render the page: through cache tags you can provide metadata about the returned results, so the framework knows the page depends on specific data, not just specific functions.

Contrast that with the Pages Router: invalidation is now just one call to revalidate all of the places that data is used. And not only is it one call to revalidate the pages; if you're also storing data in a data cache, the way you would with Redis, it's the same revalidateTag call for that location too. Because the framework handles the caching and tracking for you, it can handle all of this, rather than you doing the manual work to specify revalidation locations.

If you look at the APIs side by side, from a traditional Redis cache to unstable_cache to finally use cache, the DX is really clear with use cache. You don't have to add conditional caching logic to your function flow. And not only is it a nicer API for handling caching, it unlocks functionality that wasn't possible before in the Pages Router.

So that's the background on why use cache was introduced and what it unlocks, but what does it look like in practice, and how am I thinking about building with it? Here's where I think things get really cool: you can set up your codebase in a way where you very rarely think about use cache. Instead, you just write your code as normal, and you keep your content fresh automatically by subscribing to changes. This is accomplished by localizing use cache to the places data is fetched, calling revalidateTag on data change, and using narrow tags so you revalidate only the data that changed. We're going to go over two examples: the first uses Convex, which is a database, and the second uses a CMS.

So, first, Convex. For those of you unfamiliar with it, Convex is a database that's excellent at tracking changes to your data and broadcasting to subscribers when data changes. The setup I have for it is a wrapper function that starts a subscription on query changes, and when a change comes in for a specific query, it hits a webhook to revalidate that tag. It's a small amount of one-time setup, and the benefit is that I can reuse it throughout my application to have up-to-date server-side rendered content. Let's dive into the code to see how this looks.

Everywhere I need to fetch data in my application, this is what it looks like: I pass the name of the function and the args to the wrapper function, get the result, and use it as normal. What I love so much about this is that it's just the standard await-the-data-and-use-the-result pattern, with use cache pushed into the background as an implementation detail of this function. It's never stale, it's granularly cached, you're getting the benefits of server-side rendering, and all you have to do is call your query function.

So, a look at what that function looks like. It's got a few parts; breaking them down: the first part is the use cache directive, specifying that the results of the function are going to be cached. Then it creates a key, which is a hash of the function name and the args; that's what we'll use to revalidate later on. Then it calls Convex to get the results of the database query; this is your standard database lookup. After that, it sets up a subscriber to listen for changes. For the purposes of this demo, the subscriber is one I had to set up manually; there's a world where providers offer more APIs for exposing cache keys and things to cache on, with callbacks. Finally, it sets a cache tag, which is how we're able to revalidate it, and a cache life of weeks, since we'll be handling revalidation of this data ourselves.
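A sketch of roughly that shape, assuming Convex's fetchQuery helper from convex/nextjs and the Next.js cacheTag/cacheLife helpers; the hashing and the subscription registration are stand-ins for the demo's actual implementation:

```ts
// lib/cachedQuery.ts -- cached Convex query wrapper (sketch)
import { createHash } from "node:crypto";
import { fetchQuery } from "convex/nextjs";
import { makeFunctionReference } from "convex/server";
import { cacheTag, cacheLife } from "next/cache";

// Stand-in: tells some change-listener "when this query's result changes,
// call my revalidation webhook with this key".
async function registerSubscription(key: string, name: string, args: Record<string, unknown>) {
  // e.g. persist { key, name, args } where a Convex subscription worker can read it
}

export async function cachedQuery<T>(
  name: string,                       // e.g. "messages:list"
  args: Record<string, unknown> = {}
): Promise<T> {
  "use cache";

  // Key = hash of the function name + args; reused later by revalidateTag.
  const key = createHash("sha256").update(JSON.stringify({ name, args })).digest("hex");

  const result = (await fetchQuery(makeFunctionReference<"query">(name), args)) as T;

  await registerSubscription(key, name, args);

  cacheTag(key);      // the webhook revalidates this tag when Convex reports a change
  cacheLife("weeks"); // long TTL; freshness comes from revalidation, not expiry
  return result;
}
```

Call sites then look like `const messages = await cachedQuery("messages:list", { channel })`, which matches the description above of passing the function name and args and using the result as normal.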

Finally, this is the callback function that gets called on data change. It's pretty straightforward: Convex gives us back that key, the hash of the function name and arguments, and we pass it directly to revalidateTag when we're notified of a data change.
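Something like this, as a hedged sketch; the webhook path and payload shape are assumptions:

```ts
// app/api/revalidate/route.ts -- called when a subscribed query's data changes
import { revalidateTag } from "next/cache";

export async function POST(req: Request) {
  // Assumed payload shape: { key: "<hash of function name + args>" }
  const { key } = await req.json();

  // One call: every cached entry tagged with this key is revalidated,
  // wherever it's used.
  revalidateTag(key);

  return Response.json({ revalidated: true, key });
}
```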

Looking at a demo of what this looks like in practice: this is without use cache and subscribing to data changes. You can see that when you add a message and then refresh, there's a flash of incorrect content. What's happening is that the server-side page renders and shows the stale content, then the client does a fetch and renders the up-to-date content from the server, and there's a mismatch between them, because the server-side rendered content is out of sync.

Then, with the other setup, which calls revalidate on data change, when I add a message and refresh there's no layout shift and no flash of stale content; it's just revalidating the changes.

The next example I wanted to walk through is hooking this up to a CMS. It's another case where you might have, say, author information in a CMS and a function to get that author information. That function might be called on an overview list of posts, an individual blog page, or an author page. It's the same pattern: you provide a data function to get the data, you specify use cache, and you return the results.
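Roughly like this; the CMS call and tag name are placeholders:

```ts
// lib/authors.ts -- cached CMS lookup (sketch)
import { cacheTag, cacheLife } from "next/cache";

type Author = { id: string; name: string; bio: string };

// Stand-in for a real CMS SDK call.
async function fetchAuthorFromCms(id: string): Promise<Author> {
  const res = await fetch(`https://cms.example.com/authors/${id}`);
  return res.json();
}

// Used by the posts overview, individual blog pages, and the author page.
export async function getAuthor(id: string): Promise<Author> {
  "use cache";
  const author = await fetchAuthorFromCms(id);
  cacheTag(`author-${author.id}`);
  cacheLife("weeks"); // freshness comes from webhook-driven revalidation
  return author;
}
```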

The next part is making the data live. Most CMSs provide webhooks on data change, so we subscribe to a webhook from our CMS, and if it's a change to the author details, we revalidate that tag. This could also be called from your database write path; in this case, it shows how to integrate with an external service.
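A sketch of that webhook handler, with an assumed payload shape:

```ts
// app/api/cms-webhook/route.ts -- revalidate on CMS change (sketch)
import { revalidateTag } from "next/cache";

export async function POST(req: Request) {
  // Assumed payload: { type: "author", id: "123" }
  const event = await req.json();

  if (event.type === "author") {
    // Every page that called getAuthor(event.id) is revalidated by this one call.
    revalidateTag(`author-${event.id}`);
  }

  return Response.json({ ok: true });
}
```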

Coming back to the earlier example of revalidating many pages, I think this really shows the value of use cache and the Next.js caching system: with one revalidate call, you invalidate all of the locations that data is used, plus the remote cache. I'm slightly glossing over, for these demos, the context of how revalidateTag gets called, because it varies a fair amount based on your data access and mutation patterns, but the CMS provider example and integrating with your database write path should both be pretty common patterns you can build from. The focus here is on what use cache and revalidateTag unlock: keeping your routes rendering fresh data without having to set arbitrary revalidation times.

The last thing I wanted to do was run through some additional use cache scenarios you might run into, to give you more items for your tool belt. Ideally you have this beautiful caching setup with granular revalidation, but sometimes it's helpful to have fallback options. Here's an example of setting a cache tag of "enterprise": this would help if you needed to invalidate all enterprise customers' data at once, for example. Another is making sure that all data fetches related to a specific customer share the same ID tag, so that if you need to invalidate just that customer's data, one call invalidates it everywhere. Another basic example: use cache makes it really easy to have different cache lives depending on how often your data changes. If my data isn't changing often, I might want to use weeks as the cache time instead of hours, and I just specify that in cacheLife.
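A small sketch of those fallback tags and per-function cache lives; the tag names and the data call are illustrative:

```ts
// lib/billing.ts -- layered cache tags as fallback invalidation levers (sketch)
import { cacheTag, cacheLife } from "next/cache";

async function fetchInvoices(customerId: string) {
  // Stand-in for the real data source.
  return [{ id: "inv_1", customerId, amount: 4200 }];
}

export async function getInvoices(customerId: string, plan: "enterprise" | "pro") {
  "use cache";
  const invoices = await fetchInvoices(customerId);

  cacheTag(
    `invoices-${customerId}`,  // narrow: revalidate just this customer's invoices
    `customer-${customerId}`,  // broader: everything for this customer
    plan                       // broadest: e.g. revalidateTag("enterprise")
  );

  // Invoices change rarely, so a long life is fine; webhooks handle freshness.
  cacheLife("weeks");
  return invoices;
}
```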

Next.js also supports bringing your own cache handler, so if you want more control over how caching happens in your app, you can implement your own.

Lastly, I wanted to call out the difference between updateTag and revalidateTag. updateTag is called from Server Actions and lets you ensure the next request sees up-to-date content. The difference is that revalidateTag does background revalidation: it can still serve stale content to the first request while it revalidates, and return up-to-date content after that, whereas updateTag forces the next request to serve fresh, up-to-date data.
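For example, in a Server Action, something like this (assuming the updateTag export from next/cache described here; the action body is a placeholder):

```ts
// app/actions.ts -- Server Action using updateTag (sketch)
"use server";

import { updateTag } from "next/cache";

// Stand-in for the real database write.
async function saveMessage(channel: string, text: string) {}

export async function postMessage(channel: string, text: string) {
  await saveMessage(channel, text);

  // Unlike revalidateTag (stale-while-revalidate in the background),
  // updateTag guarantees the next request renders the fresh data.
  updateTag(`messages-${channel}`);
}
```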

So that's the overview of why use cache exists, what it unlocks, and how revalidateTag lets you make pages live. If you're interested in the source code for the Convex demo, which shows how to implement this in a more production-ready app, here's a QR code you can scan to get those resources.

Thank you so much.

[Applause]


Please welcome to the stage Databricks developer advocate Ryan Vogel.

[Music] [Applause] [Music] I will not be talking about cache components today. So, this is AI.

All right. Please build me a million-dollar B2B SaaS using Next.js and Neon DB. No mistakes. Please. Send it. And that's the talk. Thank you.

No, no, no, no, no, no. We've all seen these vibe coders who are just shipping unsafe production code, breaking data, and spending thousands and thousands of dollars on exposed API keys. But at the same time, it begs the question: can you reliably use AI in your daily workflow? Yes. Yes, you can. My name is Ryan Vogel. As he said, I'm a developer advocate for Databricks, and I've shipped a ton of personal and professional projects, many with thousands and thousands of users, and I've used billions of tokens on Cursor, which is probably the only accolade I need.

Anyway, we're going to be focusing on a side project of mine called Inbound. Inbound is my email infrastructure project, and I've personally shipped over 500 commits in the past six months. The average bug is fixed and deployed within 10 to 20 minutes, because I'm using my agents. So, let's look at the Inbound progress. A little more background context: Inbound is an email service that lets you send, receive, and reply in threads, making it easy for AI agents to manage and work with email. This is the landing page four months ago, and this is it now. That just looks like so much more of a legit product; if you told me that was the product, I would believe it, whereas the old one looks like a scam. Anyway, I've done a ton of great stuff with AI and I've learned how to use it incredibly well. I've been using it for almost two years.

So, let's talk about the important part: treating AI as a teammate. AI isn't here to replace you; it's your teammate. But people prompt an agent badly. We'll jump into it: they say "please fix this" or something like that, and it doesn't fix it. You've got to break down what you want in your head, diagnose the problem the way you would yourself, and give that to the agent.

This is the email flow page in Inbound, and it worked. It aggregates a ton of data from a bunch of different sources: the outbound emails table, the inbound emails table, all the scheduled stuff. But it was slow. It took like three seconds to load a week ago. It worked, but why should it be that slow? We can't have it be like that. Now, I could have manually investigated this: all right, let's dive into the code, let's look at the Drizzle stuff and the SQL queries, where can I optimize this? But why would I do that if I have an AI assistant to help me? So I opened up Cursor, opened up my prompt, and said, "Can you fix this, please?" I don't need grammar; AI can understand me, right? And, yeah, it didn't really fix anything. There's no scope, there's no context, there's no end goal. If I gave this prompt to any of you, would any of you know what to do? No. There's nothing to go on. This is a C-tier prompt.

All right, let's upgrade. We'll go from C-tier to B-tier. I'll give you a second to read this. It's a little bit better. It's an upgrade because we have a more refined scope; it gives the agent a better mental picture of what we're actually working with, and we have some end goals set. But this is the prompt that gets you the dreaded "let me generate a README file for this so we can put everything together." And no, we don't want README files and markdown stuff generated. So, we'll upgrade again.

markdown stuff. So, we'll upgrade again.

A tier prompt. This is probably the prompt that I use every single day. If

you read it, you'll notice that it's basically the exact same prompt that I had on the other one. But there's one important feature context. This

references the page.tsx file. It

references the drizzle schema. It

references my hooks folder. All our

important data attributes that would go into actually compiling the data for this page. Now, what if we could take

this page. Now, what if we could take this a step further? I know that's crazy, right? It's It's crazy to think

crazy, right? It's It's crazy to think how we could take this a step further.

What if we could let the AI agent work with some of our data without having to worry about it affecting our production instance? Enter Neon branches. I love Neon branches; they're probably my favorite part of Neon. So, imagine this: we've got our main branch right there, and I want to give the agent access to a branch of it so it can branch off, try some indexes, optimize queries, maybe drop some tables, and check optimizations, to see whether I'm just doing something really badly. And let's say it does the typical "you know what, I think we can make this faster by dropping all the rows," in typical AI fashion. In a traditional production database I'd be thinking, oh no, that's not good, there goes all my production data. Yikes. But not with Neon. It's a branch, which means that if the agent deleted all the data on that branch, it doesn't matter; I can just reset that branch from main and have a fully functional branch again. Now, take that, pair it with our A-tier prompt, and we get this. This is the actual output I got when I ran it a couple of weeks ago. It said "absolutely, let's get started on this," and then it pointed at the text body, HTML body, attachments, and a few other fields. Those are massive fields in the database that I was pulling, and that was what was slowing the page down; it was so much slower than it should have been. I was able to quickly implement the fix and have it working again. Now, the important part is that I worked with the agent on this; I didn't just one-shot it.
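As a loose sketch of the kind of fix that came out of that, in Drizzle terms: select only the columns the list view renders instead of pulling the huge body and attachment fields. The table and column names here are made up for illustration, not Inbound's actual schema:

```ts
// lib/emailFlow.ts -- select only the columns the list view renders (sketch)
import { neon } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-http";
import { pgTable, serial, text, timestamp } from "drizzle-orm/pg-core";

// Illustrative schema: the real table also carries textBody, htmlBody,
// attachments, and other multi-kilobyte columns.
const outboundEmails = pgTable("outbound_emails", {
  id: serial("id").primaryKey(),
  subject: text("subject").notNull(),
  toAddress: text("to_address").notNull(),
  sentAt: timestamp("sent_at").notNull(),
  textBody: text("text_body"),
  htmlBody: text("html_body"),
});

const db = drizzle(neon(process.env.DATABASE_URL!));

// Before: db.select().from(outboundEmails) pulled every column, bodies included.
// After: the flow page only fetches what it actually displays.
export async function listOutboundEmails() {
  return db
    .select({
      id: outboundEmails.id,
      subject: outboundEmails.subject,
      toAddress: outboundEmails.toAddress,
      sentAt: outboundEmails.sentAt,
    })
    .from(outboundEmails)
    .orderBy(outboundEmails.sentAt)
    .limit(50);
}
```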

If you don't know what one-shotting means, it's opening up an AI agent prompt window with the intent to complete the task in one step, because everyone wants to complete it in one step. But seriously, how many of you have told a coworker, "Hey, can you fix this?" and had them do it in one step? I don't think that's ever happened. So you've got to treat AI the same way. So, we know how to prompt the AI agent a little better now, but next we've got to think about threading.

same way. So, let's jump into we we know how to like prompt the AI agent a little bit better now, but now we got to worry about threading. So AI provides this

about threading. So AI provides this unique opportunity to stop a conversation mid-generation, like it's mid-generation, and then go back to the previous prompt and edit. Imagine if you

could do that in real life with like talking to someone. That'd be amazing. I

would love that. So this is really really a uh a unique opportunity because it allows you to compact data a lot easier than what we see over here on the right. So, if I were to have an initial

right. So, if I were to have an initial prompt like this and then it didn't generate exactly what I wanted, I would then revise it and say, "Actually, don't modify that file. Go and modify this file." And then I would do it again.

file." And then I would do it again.

That's like three times more tokens that I'm using when I could just pause the conversation, go to the original prompt and say, "Don't touch the hooks folder or do touch this folder or anything like that." You just got to try to compact as

that." You just got to try to compact as much data as possible because as you'll learn today, context management is key.

But this means you've got to be active in the conversation, which brings me to my next point. You can't just start your agent, sit back, and go sip a cup of coffee while doing your laundry. When I send a chat, I read the output intently, almost as if I'm quizzing it. Thankfully, a lot of agents nowadays have their reasoning and thinking exposed, so you can actually see what the model is working through, and if I see something where I'm like, "hey, you shouldn't be thinking that," I stop the model, revise, and iterate. Now, the two main models that I use are GPT-5 and Claude Sonnet 4.5.

They're like two different brothers, always trying to impress you with their new features, but you've got to understand that they're not the same. They have different personalities that you can pick up on and understand. So, GPT-5: we'll jump into this. GPT-5 is the embodiment of the measure-twice, cut-once ideology. You give it a prompt and it goes: let me look at this file and this file; okay, this file I could write; nope, I'm not going to write yet; let me go look at this file; let me do some grepping; let me do some of this. It just does a ton of research. But when it does edit something, it is extremely precise. My favorite part about GPT-5, though, is that it's incredible at design. Look at this. This is actually real: this is a Figma of the Inbound dashboard, the domains page where you can see all the domains you've added. That's the Figma. I used GPT-5 for about an hour and a half with the Figma MCP server and I created this. This is live, in an hour and a half, with the Figma MCP server. I'm not a designer; the designer is my girlfriend. That's pretty impressive, right? Let's show the Figma again and then the live dashboard. That is pretty nearly pixel-accurate. It's crazy that you can do this nowadays, because four or five years ago you wouldn't even have fathomed starting something like this. It's just wow.

All right, let's jump to the other brother, Claude. Oh, Claude. Claude is always better at backend, but it just loves to start editing files right away. It's like: all right, let's edit this and this, and test this, and run lints on this. No, no, no need. But that's where you've got to learn the model personalities; that's where you've got to learn how they work. You've got to know that GPT-5 does a lot of research first, while Claude does a lot of editing first. It's very smart, but it can also be very confidently wrong. So when you're using it for the things it's great at, like summarizing or needle-in-a-haystack recovery, you've got to verify the result with a third party to make sure you're not just taking in hallucinated data.

hallucinated data. So another important part is you got to treat AI as a trusted teammate. And I've got a great story for

teammate. And I've got a great story for this that I just I love to say. So I

have a Tesla Model 3. It's got the full self-driving in it. I drive it every day and I try to use full self-driving as much as I can. And obviously since I've been using it every single day, I know

how the full self-driving model works. I

know that it breaks late. I know that it's very aggressive when it's turning.

I'm from Florida, so everyone's got to be aggressive. But my girlfriend, on the

be aggressive. But my girlfriend, on the other hand, doesn't. So when she gets in the car and she rides with me, rightfully so, she sees that we're not

breaking and it's like, "Oh, no, no, no, no, no." But it makes sense because I

no, no." But it makes sense because I interact with that model every single day where she maybe interacts with it like once a week. Same thing applies to AI. You got to learn how to work with

AI. You got to learn how to work with the models so that way you can learn their strengths and weaknesses. So now

Now that we have trust with our AI teammate, we can go into context. Context is the memory that you and the AI curate over a conversation, whether that's messages, tool calls, or files. If you think of it like a haystack, where every single message, tool call, or file is a piece of hay, it's a lot easier to visualize. But if you don't regulate it correctly, it can lead to a bunch of irrelevant files, data, and more in your chat history, and it's a lot easier to manage and pick things out of a small haystack than a huge one, wouldn't you say? Cursor actually launched a really cool feature for this: a context usage indicator. I keep my context below 20 to 30% on all models, on average. At most, if I'm working on a really hard bug fix, I let it go up to 50%. Above 50%, I've seen that on average it just performs worse: more hallucinations, more "absolutely... oh, I was completely wrong." It just doesn't work.

Here's another great example from Inbound: threading. If you've ever worked with email threading outside of something like Gmail or Outlook, you know it's a little bit hard. I was working on a feature for Inbound that makes it easier for agents to read and summarize threads, and I was building it on top of AWS SES. It was late and I got stuck. So I opened up Google like the good old days and found a great Stack Overflow answer that basically solved my problem.

Now, most coding agents and platforms let you just copy the link to a website, paste it in, and you've got your answer, right? You'd think you should be good? No, don't do that. Here's why. I built another project called Preview Down, and it's actually live, not just for this talk; you can go to it right now. I made it show how many tokens a website would use if you pasted it into an AI agent chat. So I took that Stack Overflow article and threw it in: 7,000 tokens. The answer itself was about two paragraphs. That may not seem like a lot, but it builds up quickly if you have six or seven other links. It's just not needed, and there's a better way.

What I do instead is write a prompt like this. I give it a little bit of context: "Hey, according to some research, we just need the References and In-Reply-To headers, based on this Stack Overflow answer," and then I paste those two paragraphs. That's only 131 tokens, which means we saved 98%: a 98% smaller prompt with the same meaning and answer.
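A minimal sketch of that comparison, assuming a crude characters-per-token heuristic rather than whatever tokenizer Preview Down actually uses, and a hypothetical URL:

```ts
// Rough sketch of the idea behind Preview Down: estimate what a pasted page costs
// versus pasting only the relevant excerpt.
// Assumption: ~4 characters per token is just a rule of thumb, not a real tokenizer.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

async function compare(url: string, excerpt: string): Promise<void> {
  const html = await (await fetch(url)).text();   // the whole page, markup and all
  const pageTokens = estimateTokens(html);
  const excerptTokens = estimateTokens(excerpt);  // just the two useful paragraphs
  const saved = 1 - excerptTokens / pageTokens;
  console.log(`page ≈ ${pageTokens} tokens, excerpt ≈ ${excerptTokens} tokens`);
  console.log(`≈ ${(saved * 100).toFixed(0)}% smaller prompt`);
}

// compare("https://example.com/some-answer", "…the two paragraphs you actually need…");
```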

All right, so we know how to use AI; we know how to manage our context and our threads. But how do Next.js and open source tie into this?

Starting with Next.js: Next.js is great for AI because it has massive public code exposure. There are a ton of open source repositories on GitHub and other sites that overlap with the LLM training data, so AI knows what Next.js is and how to use it. It's also a very standardized framework, especially with the App Router and Pages Router, and there's additional massive overlap there in the training data.

Here's an example of another great open source Next.js repository: Dub. I was working on a feature in Inbound for Dub. If you're not familiar with it, Dub is essentially link and click tracking. I wanted Inbound users to have link tracking and click tracking directly in email, but connected to their Dub account, so they could see everything from one centralized dashboard.

I was trying to get the OAuth config to work, and it was late. I just didn't have the bandwidth, it wasn't working, and I hit my ninth error of the night and thought, I can't do this anymore. So I grabbed the GitHub repository for Dub, brought it into Cursor, tagged it, and basically said: hey, can you figure out what's wrong? I don't know what's going on. Now, am I saying anything about Next.js in this prompt? No, I'm not. Which is fine, because AI knows Next.js; it's basically what it's best at. It understood the problem and fixed it within a matter of minutes.

Got another great example. Remember Preview Down? That idea came to me while I was writing this talk, because I was curious how many tokens I was actually using. I knew that Open Code, an open source CLI tool that's basically like Cursor inside your terminal, had a web search tool, and I wondered whether I could grab that web search tool, put it into a website, and make it work. So that's what I did, and I'm not even joking: I cloned the open source repository onto my computer, installed Open Code, and then prompted Open Code itself to find out what library it uses to convert HTML to markdown. It feels like some inception thing, using the tool to read its own source.

This is another place where I'm using my context as a developer. I know how to get HTML from a website; I don't need the AI for that. What I do need is a reliable way to convert HTML into markdown, which is what the LLMs ingest. The prompt was very simple if you read it: I need you to look inside this project to see what library is used to convert HTML to markdown; it should be inside the web search function. Another developer instinct: I know it's inside a web search tool call or something like that, which helps steer it a little. Open Code found it in two minutes, and here's where it gets really interesting: it identified that the library was Turndown, and it found references showing how it was actually used.
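That HTML-to-markdown step, as a minimal sketch assuming the turndown npm package with default options (the URL here is just illustrative):

```ts
import TurndownService from "turndown";

// Fetch a page and convert its HTML to markdown before handing it to an LLM.
// Markdown strips most markup noise, which is a big part of the token savings.
async function pageToMarkdown(url: string): Promise<string> {
  const html = await (await fetch(url)).text();
  const turndown = new TurndownService();
  return turndown.turndown(html);
}

// const md = await pageToMarkdown("https://example.com/some-answer");
```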

Then, in the same session, I said, hey, I want to create... and I essentially described the Preview Down website: paste in a URL and have it output the content, the token usage, and so on. Generate a prompt so I can create it. It would basically build it out for me, but I don't want to build anything. Come on, it's the age of AI; I'm not going to build anything. So I took that generated prompt into v0, and it one-shotted it. The live site you see on previewdown.com is that one-shot. Obviously it's very simple, but it is the live site v0 generated. That whole thing took 10 minutes from idea to deployment on Vercel. I mean, that's crazy. I genuinely can't even conceive of it. It reminds me of the days when I was doing FileZilla with PHP, double-clicking and waiting for files to upload.

I love this world so much better. So anyway, let's wrap it up.

You've got to treat AI as your teammate, and you've got to trust it. Why would you ever want to work with a coworker you don't trust? You've got to trust your coworkers. Now, do you let them do whatever they want? No. Don't give them free rein over your terminal; always run that in a sandbox. Trust me, I know.

Then you've got to know the personalities of the different models. I only highlighted the OpenAI and Anthropic ones, but there are lots of different models, so try them out. Maybe you'll find one you like working with better. The smartest model is not always the best model for the job. Play around with them, especially in Cursor, which gives you easy access to those models; it's a perfect opportunity to try out something like Grok Code.

And then, obviously, leverage open source for AI projects, or any other project. Even if your project isn't licensed as open source, it can still be extremely helpful to developers, like in that Dub example, who are trying to figure something out and just wish they could see inside the codebase. Especially leverage projects that are open source and built on Next.js. But wait, don't open up Google or ChatGPT or any other AI search and type "open source Next.js repositories." Look no further than Inbound. You can check out the full Next.js Inbound repository at git.new/inbound. And if you want to try the best way to interact with email and agents in the future: Inbound.

If you thought I did a good job for my first ever conference talk, give it a star. But don't if I didn't do a good job. Thank you.

[Music] Please welcome to the stage Vercel software engineer Luke Sandberg.

[Music] Ow. Should have practiced more with this. Okay, thank you everyone. Hello, my name is Luke Sandberg. I'm a software engineer at Vercel working on Turbopack. I've been at Vercel for about six months, which has given me just enough time to come up here on stage and tell you about all the great work I did not do. Prior to my time at Vercel, I was at Google, where I worked on our internal web toolchains and did weird things like build a TSX-to-Java-bytecode compiler and work on the Closure Compiler.

So when I arrived at Vercel, it was kind of like stepping onto another planet. Everything was different, and I was pretty surprised by the things we did on the team and the goals we had. Today I'm going to share a few of the design choices we made in Turbopack and how I think they will let us continue to build on the fantastic performance we already have.

To help motivate that, this is our overall design goal. From it you can immediately infer that we probably made some hard choices. What about cold builds? Those are important. Well, one of our ideas is that you shouldn't be experiencing them at all, and that's what this talk is going to focus on. In the keynote you heard a little bit about how we leverage incrementality (I'm not entirely sure that's a word, but it's what we're calling it) to improve bundling performance.

The key idea behind incrementality is caching. We want to make every single thing the bundler does cacheable, so that whenever you make a change, we only have to redo work related to that change. Or, to put it another way, the cost of your build should scale with the size and complexity of your change rather than the size and complexity of your application. This is how we can make sure Turbopack will continue to give developers good performance no matter how many icon libraries you import.

To help understand and motivate that idea, let's imagine the world's simplest bundler, which maybe looks like this. Here's our baby bundler. This is maybe a little too much code to put on a slide, and it's going to get worse. We parse every entry point, follow their imports, and resolve their references recursively throughout the application to find everything you depend on. Then, at the end, we simply collect everything each entry point depends on and plop it into an output file. Hooray, we have a baby bundler. Obviously this is naive, but if we think about it from an incremental perspective, no part of it is incremental; a rough sketch of that shape follows.
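The slide code isn't in the transcript, so this is only an illustrative TypeScript sketch of the same shape, not Turbopack's actual code:

```ts
import { readFileSync, writeFileSync } from "node:fs";

// Crude import extraction — fine for a sketch, not a real parser.
function parse(source: string): { imports: string[]; code: string } {
  const imports = [...source.matchAll(/from\s+["'](.+?)["']/g)].map(m => m[1]);
  return { imports, code: source };
}

// Naive resolver: handles only relative specifiers, ignores node_modules.
function resolve(specifier: string, importer: string): string {
  return new URL(specifier, `file://${importer}`).pathname;
}

// The "baby bundler": for every entry point, recursively pull in everything it
// imports, then concatenate all of it into one output file.
function bundle(entryPoints: string[]): void {
  for (const entry of entryPoints) {
    const seen = new Map<string, string>();

    const visit = (file: string): void => {
      if (seen.has(file)) return;
      const { imports, code } = parse(readFileSync(file, "utf8")); // re-parsed for every entry
      seen.set(file, code);
      for (const spec of imports) visit(resolve(spec, file));      // "react" resolved over and over
    };

    visit(entry);
    writeFileSync(`${entry}.bundle.js`, [...seen.values()].join("\n"));
  }
}
```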

We will definitely parse certain files multiple times, depending on how many times you import them. That's terrible. We'll definitely resolve the React import hundreds or thousands of times. Ouch. So if we want this to be at least a little bit more incremental, we need to find a way to avoid redundant work. Let's add a cache.

You might imagine this is our parse function. It's pretty simple, and it's probably the workhorse of our bundler: we read the file contents and hand them off to SWC to give us an AST. So let's add a cache.
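Again, the slide code isn't in the transcript; a minimal sketch of that naive cached parse, with a made-up parseWithSWC stand-in, might be:

```ts
import { readFileSync } from "node:fs";

type Ast = unknown;

// Stand-in for handing the contents to SWC; the real call isn't shown in the talk.
function parseWithSWC(source: string): Ast {
  return { source }; // pretend AST
}

const parseCache = new Map<string, Ast>();

// Naive cache: keyed only by file name, never invalidated, returns a shared AST.
// Every one of those properties is a problem the talk walks through next.
function parse(fileName: string): Ast {
  const cached = parseCache.get(fileName);
  if (cached !== undefined) return cached;

  const ast = parseWithSWC(readFileSync(fileName, "utf8"));
  parseCache.set(fileName, ast);
  return ast;
}
```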

Okay, this is clearly a nice simple win, but I'm sure some of you have written caching code before, and there are some problems here. What if the file changes? That's clearly something we care about. What if the file isn't really a file, but three symlinks in a trench coat? A lot of package managers organize dependencies like that. We're also using the file name as a cache key; is that enough? We're bundling for the client and the server, and the same files end up in both. Does that work? We're also storing the AST and returning it, so now we have to worry about mutations. And finally, isn't this a really naive way to parse? Everyone has massive configurations for the compiler, and some of that has to get in here.

in here. So uh yeah, these are all great feedback. Uh and uh this is a very na

feedback. Uh and uh this is a very na naive approach and to that of course I would say yeah this will not work. So

what do we do about fixing these problems? Please fix and make no

problems? Please fix and make no mistakes.

Okay, so maybe this is a little bit better. You can see here that we have some transforms; we need to do customized things to each file, like down-leveling or implementing use cache. We also have some configuration, so of course we need to include that in the key for our cache. But maybe right away you're suspicious: is this correct? Is it actually enough to identify a transform by its name? I don't know, maybe it has complicated configuration all of its own. Is this JSON-serialized config value actually going to capture everything we care about? Will developers maintain it? How big will these cache keys be? How many copies of the config will we have? I've personally seen code exactly like this, and I find it next to impossible to reason about.

We also tried to fix the other problem, around invalidations. We added a callback API to readFile. This is great: if the file changes, we can just nuke it from the cache, so we won't keep serving stale contents. But this is actually pretty naive, because sure, we need to nuke our cache, but our caller also needs to know that they need to get a new copy. Okay, so let's start threading callbacks. Okay, we did it: we threaded callbacks up through the stack. You can see here that we allow our caller to subscribe to changes, we can just rerun the entire bundle if anything changes, and if a file changes, we call it. Great, we have a reactive bundler.
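A sketch of that callback style, with hypothetical names, just to illustrate the "thread callbacks everywhere" problem the talk describes:

```ts
import { readFileSync, watch } from "node:fs";

type Invalidate = () => void;

const fileCache = new Map<string, string>();

// Naive invalidation: read a file, cache it, and let the caller subscribe to
// changes so it knows to re-read.
function readFileCached(fileName: string, onInvalidate: Invalidate): string {
  const cached = fileCache.get(fileName);
  if (cached !== undefined) return cached;

  const contents = readFileSync(fileName, "utf8");
  fileCache.set(fileName, contents);

  // When the file changes, nuke the cache entry and tell the caller.
  watch(fileName, () => {
    fileCache.delete(fileName);
    onInvalidate();
  });

  return contents;
}

// The caller in turn has to expose its own callback to *its* caller, and so on
// up the stack — which is exactly why this approach doesn't scale.
```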

But this is still hardly incremental. If a file changes, we need to walk all the modules again and produce all the output files. We saved a bunch of work with our parse cache, but it really isn't enough. And there's all this other redundant work. We definitely want to cache the imports: we might find a file a bunch of times and keep needing its imports, so we want a cache there. Resolve results are actually pretty complicated, so we should definitely cache those too, so we can reuse the work we did resolving React. But now we have another problem: resolve results change when you update dependencies or add new files, so we need another callback there. We definitely also want to cache the logic that produces outputs, because in an HMR session you're editing one part of the application, so why are we rewriting all the outputs every time? And oh, you might also delete an output file, so we should probably listen to changes there too.

Okay, so maybe we solve all of those things, but we still have this problem: every time anything changes, we start from scratch. The whole control flow of this function doesn't work, because if a single file changes, we'd really want to jump into the middle of that for loop. And finally, our API to our caller is also hopelessly naive. They probably want to know which files changed so they can push updates to the client.

So this approach doesn't really work. And even if we somehow did thread all the callbacks in all these places, do you think you could actually maintain this code? Could you add a new feature to it? I don't. I think this would just crash and burn. And to that I would say: yeah.

So once again, what should we do?

Just like when you're chatting with an LLM, you first need to know what you want, and then you have to be extremely clear about it. So what do we even want? We considered a lot of different approaches, and many people on the team had a lot of experience working on bundlers, so we came up with these rough requirements. We definitely want to be able to cache every expensive operation in the bundler, and it should be really easy to do; you shouldn't get 15 comments on your code review every time you add a new cache. I also don't really trust developers to write correct cache keys or track dependencies by hand, so we should make this foolproof. Next, we need to handle changing inputs. This is the big idea in HMR, but it matters even across sessions. Mostly this means files, but it can also be things like config settings, and with the file system cache it ends up being things like environment variables too. We want to be reactive: we want to recompute things as soon as anything changes, and we don't want to thread callbacks everywhere. Finally, we need to take advantage of modern architectures and be multi-threaded and just generally fast.

Maybe you're looking at this set of requirements and some of you are thinking: what does this have to do with a bundler? To which I would say, of course, my management team is in the room, so we don't really need to talk about that. But really, I'm guessing a lot of you jumped to the much more obvious conclusion: this sounds a lot like signals.

And yeah, I am describing a system like signals: a way to compose computations and track dependencies, with some amount of automatic memoization. I should note that we drew inspiration from all sorts of systems, especially the Rust compiler and a system called Salsa, and there's even academic literature on these concepts, called adaptons, if you're interested. Okay, let's see what this looks like in practice, and now we're going to take a very jarring jump from code samples in JavaScript to Rust.

Here's an example of the infrastructure we built. A turbo_tasks function is a cached unit of work in our compiler. Once you annotate a function like this, we can track it and construct a cache key out of its parameters, and that allows us to both cache it and re-execute it when we need to. These Vc types you can think of like signals: reactive values. Vc stands for "value cell," though "signal" might be a slightly better name. When you declare a parameter like this, you're saying: this might change, and I want to re-execute when it changes. How do we know that? We read these values via await. Once you await a reactive value like this, we automatically track the dependency. And then, finally, we do the actual computation we wanted to do and store it in a cell.

Because we've automatically tracked dependencies, we know this function depends on both the contents of the file and the value of the config. And every time we store a new result into the cell, we compare it with the previous one; if it's changed, we propagate notifications to everyone who has read that value. This concept of "changing" is key to our approach to incrementality. Again, the simplest case is right here: if the file changes, Turbopack will observe that, invalidate this function execution, and re-execute it immediately. And if we happen to produce the same AST, we just stop right there, because we computed the same cell. Now, for parsing a file, there's hardly any edit you can make that doesn't actually change the AST, but we can leverage the fundamental composability of turbo_tasks functions to take this further.
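Turbopack's real implementation is Rust (turbo_tasks functions returning Vc cells); the following is only a conceptual TypeScript sketch of the idea: cached units of work read reactive cells, dependencies are tracked, and a cell only notifies its readers when its value actually changes.

```ts
type Listener = () => void;

// A reactive value that remembers who read it and only notifies readers when
// the stored value actually changes — the "Vc / value cell" idea.
class Cell<T> {
  private listeners = new Set<Listener>();
  private hasValue = false;
  private value!: T;
  constructor(private equals: (a: T, b: T) => boolean = Object.is) {}

  read(listener?: Listener): T {
    if (listener) this.listeners.add(listener); // dependency tracking
    return this.value;
  }

  write(next: T): void {
    if (this.hasValue && this.equals(this.value, next)) return; // unchanged → stop here
    this.hasValue = true;
    this.value = next;
    for (const l of [...this.listeners]) l();                    // changed → re-run dependents
  }
}

// A cached unit of work: re-runs when any cell it read changes, and writes its
// result into its own output cell, so downstream work is skipped if the result is equal.
function task<T>(compute: (track: Listener) => T, equals?: (a: T, b: T) => boolean): Cell<T> {
  const out = new Cell<T>(equals);
  const run = () => out.write(compute(run));
  run();
  return out;
}

// Example: "parse" depends on file contents; "imports" depend on the parse result.
const fileContents = new Cell<string>();
fileContents.write('import React from "react";\nexport const x = 1;');

const parsed = task(track => fileContents.read(track));
const imports = task(
  track => [...parsed.read(track).matchAll(/from\s+"(.+?)"/g)].map(m => m[1]),
  (a, b) => a.join(",") === b.join(","), // compare import lists by content
);

imports.read(() => console.log("imports changed → redo chunking"));

// Edit the function body only: parse re-runs, the import list is equal,
// so the "chunking" listener above never fires.
fileContents.write('import React from "react";\nexport const x = 2;');
```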

Here we see another cached turbo_tasks function, this one extracting imports from a module. You can imagine this is a very common task in the bundler: we need to extract imports just to find all the modules in your application, we leverage them to pick the best way to group modules together into chunks, and of course the import graph matters for basic tasks like tree shaking. Because there are so many different consumers of the imports data, a cache makes a lot of sense. The implementation isn't really special; it's what you'd find in any bundler: we walk the AST, collect imports into some data structure we like, and return them. But the key idea is that we store them into another cell.

So if the module changes, we do need to rerun this function, because we read the module. But if you think about the kinds of changes you make to modules, very few of them actually affect the imports. You change the function body, a string literal, any kind of implementation detail: it'll invalidate this function, we'll compute the same set of imports, and then we don't invalidate anything that has read them. In an HMR session, that means we do need to reparse your file, but we no longer need to think about chunking decisions or any tree-shaking results, because we know those didn't change. We can jump from parsing the file and doing this simple analysis straight to producing outputs. That's one of the ways we get really fast refresh times.

So this is pretty imperative. Another way to think about the same basic idea is as a graph of nodes. On the left you might imagine a cold build: initially we really do have to read every file, parse them all, and analyze all the imports, and as a side effect we've collected all the dependency information from your application. Then, when something changes, we can leverage the dependency graph we built up to propagate invalidations back up the stack and re-execute turbo_tasks functions. If they produce a new value, we stop there; otherwise we keep propagating the invalidation.

Great. But this is actually a massive oversimplification of what we do in practice, as you might imagine. In Turbopack today there are around 2,500 different turbo_tasks functions, and in a typical build we might have literally millions of different tasks. So it really looks a little more like this. I don't expect you to be able to read this; we couldn't really fit it on the slide, so maybe we should zoom out. Okay, that is not obviously helpful either. In reality we do have better ways to track and visualize what's happening inside Turbopack, but fundamentally those work by throwing out the vast majority of the dependency information.

Now, I'm guessing some of you actually have experience working with signals, maybe bad experiences. I, for one, like stack traces and being able to step into and out of functions in a debugger. So maybe you're suspicious that this is a complete panacea; it obviously comes with trade-offs. And to that I would say... well, what I'd actually say is that all of software engineering is about managing trade-offs. We're not always solving problems exactly; we're really picking new sets of trade-offs to deliver value.

To achieve our design goals around incremental builds in Turbopack, we put all our chips on this incremental reactive programming model, and of course that had some very natural consequences. Maybe we really did solve the problem of hand-rolled caching systems and cumbersome invalidation logic, but in exchange we have to manage some complicated caching infrastructure. That sounds like a really good trade-off to me; I like complicated caching infrastructure. But we all have to live with the consequences.

The first consequence is the core overhead of the system. If you think about it, in a given build or HMR session you're not really changing very much. We track all the dependency information between every import and every resolve result in your application, but you're only going to change a few of them, so most of the dependency information we collect is never actually needed. To manage this, we've had to focus a lot on improving the performance of the caching layer, to drive the overheads down and let the system scale to larger and larger applications.

The next and most obvious consequence is memory. Caches are always fundamentally a time-versus-memory trade-off, and ours doesn't really do anything different there. Our simple goal is that the cache size should scale linearly with the size of your application, but again we have to be careful about overheads.

This next one is a little subtle. We have lots of algorithms in the bundler, as you might expect, and some of them require understanding something global about your application. That's a problem, because any time you depend on global information, any change might invalidate that operation. So we have to be careful about how we design these algorithms and compose things carefully so we can preserve incrementality.

And finally, this one's maybe a bit of a personal gripe: everything is async in Turbopack. That's great for horizontal scalability, but once again it harms our debugging and performance-profiling goals. I'm sure a lot of you have experienced debugging async code in the Chrome DevTools, which is generally a pretty nice experience, not always ideal; but I assure you, Rust with LLDB is light years behind. To manage that, we've had to invest in custom visualization, instrumentation, and tracing tools. Look at that: another infrastructure project that isn't a bundler.

Okay, so let's see if we made the right bet. At Vercel we have a very large production application; we think it may be one of the largest in the world, but we don't really know. It has around 80,000 modules. Let's look at how Turbopack does on it for fast refresh: we really dominate what Webpack is able to deliver. But this is kind of old news; Turbopack for dev has been out for a while, and I really hope everyone is at least using it in development. The new thing today, of course, is that builds are stable. So let's look at a build. Here you can see a substantial win over Webpack for this application. This particular build is running with our new experimental file system caching layer, so about 16 of those 94 seconds is just flushing the cache out at the end, which is something we'll keep improving as file system caching becomes stable. But the thing about cold builds is that they're cold: nothing's incremental. So let's look at an actual warm build, using the cache from the cold build. This is just a peek at where we are today: because we have this fine-grained caching system, we can write the cache out to disk, then on the next build read it back in, figure out what changed, and finish the build. Okay, this looks pretty good, but a lot of you are thinking: maybe I personally don't have the largest Next.js application in the world.

So let's look at a smaller example. The react.dev website is quite a bit smaller. It's also interesting because, unsurprisingly, it's an early adopter of the React Compiler, and the React Compiler is implemented in Babel. That's a problem for our approach, because it means that for every file in the application we need to ask Babel to process it. Fundamentally, I would say that we, or at least I, can't make the React Compiler faster; it's not my job. My job is Turbopack. But we can figure out exactly when to call it. Looking at fast refresh times, I was actually a little disappointed with this result, and it turns out that about 130 of those 140 milliseconds is the React Compiler, and both Turbopack and Webpack are paying that cost. But with Turbopack, after the React Compiler has processed the change, we can see that the imports didn't change, chuck it into the output, and keep going.

Once again, on cold builds we see this consistent 3x win; and just to be clear, this is on my machine. But again, there's no incrementality on a cold build, and on a warm build we see a much better time. With the warm build we already have the cache on disk; all we need to do once we start is figure out which files in the application changed, re-execute those jobs, and reuse everything else from the previous build.

So the basic question is: are we turbo yet? Yes. As was discussed in the keynote, Turbopack is stable as of Next.js 16, and we're even the default bundler for Next.js. So, you know, mission accomplished. You're welcome. And if you noticed that "revert, revert, revert" thing in the keynote, that was me trying to make Turbopack the default; it only took three tries. But what I really want to leave you with is this: we're not done. We still have a lot to do on performance and on finishing the swing on the file system caching layer. I suggest you all try it out in dev. And that is it. Thank you so much. Please find me, ask me questions.

[Applause] [Music] Please welcome to the stage Consent founder Christopher Burns.

[Music] Hey everyone, or as I like to say here, y'all, because obviously I'm from across the pond; it's my favorite American saying. I'm here today to talk about the one thing that everybody hates on the internet: consent banners, cookie banners.

Let's start with a common situation that most developers face almost every time we put a website into production. We tell the team, hey, the website is ready to go. We've run the Lighthouse scores and they're green, green, green. Success. Then the marketing team comes along and says, you know, we just need to add a few scripts. And then the lawyers come around and say, remember, you need to put in the cookie policy, the privacy policy, and the consent banners. So you do all of those things, and then the whole website's performance is broken. This is something that is so common for all of us. The marketing team will say, well, we need them to do our job, and the lawyers will say, we don't want to get sued. So why do we have to sacrifice compliance for performance?

When we think about the European Union and around Europe, we've got this really big problem. So let's just get rid of it: just "accept all," as we all do. And one of the biggest questions that always comes up, back to the Lighthouse scores, is that we see how these scripts push our numbers from greens to reds, but it's really hard to tell how much they're actually impacting our websites, because Lighthouse scores alone just don't say it. So what if we could actually benchmark these consent banners and marketing scripts to find out how much they're really affecting our production websites?

So I created a tool called Cookie Bench: a cookie banner benchmark, bang the two together and you get Cookie Bench. It was made to showcase the performance of the most popular consent management providers. If you don't know, CMP is the category name around cookie banners. You can find it, from Consent, at cookiebench.com, and here are some of the listings. What you can see is that we created a score: we took all the best-in-class practices we'll talk about today and ranked these CMPs on things like network impact and time to first byte, but most importantly on banner visibility, how long it takes for that cookie banner to show. If you're based in the European Union, every time you go to a new website you look at your phone, pause, wait, and then it pops up: something we've all felt. Some of the biggest providers, the ones most enterprises and many of the biggest Next.js websites in the world are using, are slowing down the website. A 514-millisecond delay is terrible. So no matter how much we try to perfect performance with caches and all of these exciting Next.js React Server Components, the marketing scripts and consent banners will always ruin it.

As I was benchmarking all of these tools, I started to notice patterns across the consent banners. They've got terrible dashboards that haven't even heard of things like Bootstrap, let alone shadcn. They all use script tags that sit in the header: do I inline it? Do I put a critical flag on it? It's never clear. They never integrate well with the modern tools we use every day. But my favorite is "no developer needed." All the big platforms say, we don't need you to have a developer, you don't need to speak to the developer; we're marketers, we're lawyers.

So when I created Consent, I set out to really redefine what I thought consent management was. I was frustrated that all of the tooling out there was never quite good enough, never had React components or hooks or APIs. So I took the best-in-class learnings from people like Vercel, Clerk, and Resend, and I created C15T. What is C15T? It's a brand new standard for the web, mobile apps, TVs, and even agentic flows that takes consent management, takes all the middle layers out, and gives you c15t, very much like we have i18n for internationalization and a11y for accessibility. I really wanted to create a new open source standard that everybody could use to finally fix consent banners for everybody.

It's on GitHub, and we're currently at 1,300 GitHub stars; I think we hit that this morning. Thank you so much, I really appreciate it. So let's actually explain what it is, and why a developer might think, hey, I could vibe-code this in 10 minutes, this isn't a hard thing to do, when actually there are so many complications to it. I thought about how to explain this: how do I explain that consent banners actually need two sides, the client and the server? So I created this Lego block system. Let's see how well it does at explaining it to you all.

First, we'll start with the consent banner, something everybody knows. Here we go. You may notice it already looks slightly different from the consent banners you're used to on the web: you're used to "accept all" being the main button. But regulations are always changing, and as a developer you don't have time to know which variant you should use. C15T has best-in-class components that just do it right, at as high a level of abstraction as possible. This slide doesn't actually show any code, but it's a simple JSX component.

What's really interesting is that maybe you don't want to use our default blue and you want it to match your website. This was another of the core pillars when I was creating C15T, one I felt every existing solution failed at: I wanted them to handle the logic, but I wanted it to feel like my brand. So in this example I remove the rounding on the corners, make it yellow, and add my own font face. And what if you want to customize it completely and go further? Instead of using our best-in-class components, you can opt for a headless mode and bring your own components, or use something like shadcn. And this is another example.

So that's styling the consent banner. One of the next fundamental pillars is i18n. When we think about consent banners, we think about the English-speaking web, but in the EU alone there are 24 official languages, 24 languages you need to support to handle things like GDPR. So internationalization is built in: for example, in the UK you'll see English, in Italy you'll see Italian, and in France, French.

So that's the consent banner, and you'd think that's it, right? But something a lot of developers forget is that you also need to build in a consent manager, so users can change their consent at a fine-grained, per-category level. For example, when you click Customize, you'll see the consent manager showing all the categories currently active on the website, and it lets the user accept all, sure, but in most cases accept less.

So that's the set of best-in-class components, and those two, the banner and the manager, are the main, critical ones most people know and need. But there are other important pieces, like frames. This is a new component we added just two or three weeks ago, and it wraps things like your iframes. If you're a CMS or an e-commerce website, you need to put those YouTube videos inside frames so that cookies and third-party data can't leak until consent is given. And remember that everybody gets different results depending on their region: in places like the US you won't see the frame, but in places like the European Union you will. And obviously, when we click it, we get a little video.

One of the really big final things is that it's great to provide components, and we've already spoken about the headless mode; I really set out to make C15T as customizable as possible, whether it's server-side rendering or client-side rendering. But one of the most important pieces is the hook system. If I had to pick one favorite hook to show, it's the "has" capability. In our dashboards or marketing websites we need to conditionally render scripts, or maybe components, depending on whether marketing or analytics consent has been given. You might even get spicier: marketing and analytics, or a really complicated statement.

What's really interesting about an API like this is that right now we're just using GDPR categories, but C15T is built to be an agnostic consent engine. You could just as easily be asking "has Claude access" or "has bank account access." We're currently using it for the GDPR categories, and we'll be expanding that area soon.
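The exact hook names aren't given in the talk, so treat this as a hypothetical TypeScript sketch of the pattern rather than C15T's real API: a hook that answers "has this consent been given?" gates whether an analytics script renders at all.

```tsx
// Hypothetical sketch — these names are illustrative, not necessarily C15T's real API.
import React from "react";

// Stand-in for a consent hook; a real one would read from the consent store.
function useHasConsent(category: "marketing" | "analytics"): boolean {
  return category === "analytics"; // pretend analytics consent was given
}

export function AnalyticsScript(): React.ReactElement | null {
  const allowed = useHasConsent("analytics");

  // Until consent is given, nothing renders and no third-party request fires.
  if (!allowed) return null;

  return <script src="https://analytics.example.com/script.js" async />;
}
```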

So that's a quick overview of why building just the client side of a consent banner is more complicated than you'd think. But obviously there's the other side: the server. This one was really hard to visualize when I made this block system, so let's give it a go. One of the most important things for a consent banner is geolocation, and it affects not only Europe but also the Americas. Why? Because in places like Germany you should see a cookie banner, in places like the UK you should see a cookie banner, but in the US, no. C15T is intelligent enough to only show cookie banners and consent banners where they're needed. That has a two-fold effect. One, people in places like California, where we are now, cannot sue you for having a cookie banner, and that has come across our desk already. And two, even though the consent banner isn't shown, C15T is still running in the background, intelligently activating everything needed to handle those consents. So again: in the US, if the website uses C15T, you should never see another consent banner; in places like Australia and the Netherlands, you will.

Next is i18n again. We've already spoken about i18n on the client side, but one of the biggest things about C15T is really pushing the edge on performance. If this is something we all have to do, why can't it be a Ferrari? Why can't it be as good as possible? Think back to what I said earlier: there are 24 official languages inside the EU, and that would mean bundling 24 languages. So part of C15T is server-side rendering them. But there's a catch, and a lot of other CMPs get this wrong, so let me explain. If the device is in Germany and the language is also German, you should see a German cookie banner. But if the device is in Germany and the website's language is English, you should see an English cookie banner. A lot of the current solutions get that wrong.

So that's languages. But what else is really important? Why even have a server? Because to do things like GDPR correctly, or other regulations, you need to store the consent in a database. When we were building C15T we said, hey, why don't we support Postgres, but also why not MySQL, and then SQLite, and then CockroachDB, and hey, why not stick Microsoft SQL Server in there as well. And since we're supporting all these databases, how are you actually going to get access to that consent data, whether it's for GDPR or for an agentic flow? Through an ORM. In terms of ORMs we support Drizzle, Prisma, and Kysely, the whole package. That was really impressive to build out and get working. But maybe you don't have access to the database or the ORM, so we wanted to think about every layer, and about composability.

What if you can't use our best-in-class JavaScript components? What if it's a PHP website or a Ruby website? We support OpenAPI, so you can easily access every piece of API functionality. And the biggest thing we're also building out is Node SDKs and other frameworks as well. This is really important for things like agentic flows: not only can you save the consent to the database (in this example we're using consent banners, but it could be agentic flows, bank accounts, and much more), you can also retrieve it through a type-safe SDK.

So that's a really quick tour of the client side and the server side, and why it's so much more complicated than a five-minute vibe-coding job, and why you should use something like C15T, which we've built to be an open web standard. How do we make it an open web standard? You can self-host C15T, the front end and the back end. One of the most powerful reasons to self-host C15T's back end is data residency: banks, or companies in specific European countries that are very restrictive about where their data is saved, can host it themselves. And to be a web standard, it should all be open source.

So that's self-hosting, but I'm also here as the founder of Consent. What is Consent? It's a hosting platform for C15T. We really think about it this way: if it's something purely for a developer, it goes into C15T; but if it's something for your legal or marketing departments, that's where you need full observability in a dashboard you can spin up, where you can say, here, I've implemented it on the developer side, now you can hand it off. That's why we set up Consent.

So that's really cool, but I always want to do one of those "one more thing" moments, and this is a brand new component inside the Next.js package. It's client-side for a reason. Think back to the start: we've handled consent management, we've handled showing the consent banners, but what about the marketing scripts? So we built a typed system for marketing scripts, things like your Google Tag Manager; as you can see, it should be simple, easy, and type-safe. We call this C15T Scripts, and C15T is built to be very composable, with as small a bundle size as possible. What integrations have we already built? Things like PostHog, Meta Pixel, Google Tag Manager, Google Analytics, X Pixel, LinkedIn, and even TikTok and Microsoft UET. This is a really important step for Consent and C15T, because we're trying to say: let's normalize all of these different marketing scripts into a new type-safe system that a developer can implement once, with the consent and the categories automatically defined, to really stop the performance lag that we see.

When we talk about that performance lag, Guillermo has asked me many times: how fast is C15T? Back to our benchmark: C15T is incredibly fast, with 89-millisecond banner visibility. And on the scoring system we set out, which covers best-in-class accessibility and best practices like the geolocation and i18n work we spoke about, we also crush it. Thank you.

So what does that actually mean against the industry leaders? Quite a lot, as you can see. To say it again: 89 milliseconds, and that's at the price of being dynamic. 89 milliseconds to show a cookie banner in the EU; in the US you won't even see one. Incredibly fast: 7.9 times faster than the slowest consent banner we benchmarked and 1.9 times faster than any other banner we put on there. C15T is bundled by default, and one of the big reasons it can be so fast is that it makes zero third-party requests: when it loads on your client it is bundled, versus all of these other CMPs that start a waterfall effect. So if you're on a slow connection, expect those load times to be even slower; the average across the CMPs we benchmarked is 5.6 third-party requests.

So that's C15T, and we're growing really, really fast. You can find C15T at c15t.com. And one of the really exciting things is that if you've got a Next.js or React project, you can run the C15T CLI with npx, and within five minutes all of those cookie banners and GDPR pieces will automatically be set up in your Next.js application.

There are a last few things I want to cover just to wrap it all up. We've talked about the technology, but one of the most important reasons we created C15T is that we don't teach the regulations, we teach the abstractions. As a developer, you shouldn't need to know the difference between GDPR and CCPA; you should only need to know the abstraction, things like the consent banner. Why? Because regulations are always changing globally, and how are you meant to keep up to date with that? You're just a developer; you have so many more things to do with your time. That's why we shouldn't let regulation slow down our innovation; we should use these tools to help accelerate it.

The hosting platform for C15T, Consent, is growing really, really fast, and week on week we've had some amazing companies joining us; we really can't wait to keep growing. We're excited to have everyone and to show off how good this can be.

My real final slide is "I fight for the users," because there's one thing this presentation hasn't been about. It has been about the developer, or marketing, or legal, but the most critical thing is the users: all of us who have to use the web every single day, and, when we think about future agentic flows, how consent is given there. It's really important that we think about how the tools we use affect users, and that's why I set it as my goal with C15T to create a new standard. (And obviously that line is from Tron, if you didn't know.)

A quick roundup of links: you can find C15T at c15t.com. All of the benchmarks, and there's so much more we could have spoken about, from waterfalls to the tests we run, are at cookiebench.com. And to get started really fast with C15T, there's consender.io. Thank you so much for your time today; we just really want to make the web better for everybody. You can find me on places like Twitter and LinkedIn, but most important, starring us on GitHub is one of the most incredible ways to show people why this matters. Thank you so much for your time.

[Music]

Please welcome to the stage Bun staff developer relations engineer Lydia Hallie.

[Music] Hi everyone, I'm Lydia. My title is actually Head of Propaganda at Bun. If you know a little bit about me, you know that I love talking about JavaScript runtimes and performance. Before I joined Bun, I was at Vercel for a couple of years, teaching Next.js developers how to build apps faster. So I'm very excited to be here today and show you how much better it can get when we combine Next.js framework performance with Bun's runtime performance.

Before I talk about Bun itself, I want to take a small step back and show you what makes frameworks like Next.js so special in the first place, because they truly redefined how we see performance on the web. Next.js didn't just make websites faster; it streamlined every part of the process. We got smarter bundling with Webpack and now with Turbopack, built-in image and font optimization, efficient server-side rendering and static site generation, then ISR, and now RSC to bring data fetching into the component itself. All of these improvements pushed what the framework can optimize, but only up to a certain point. There's always been one fundamental layer that Next.js hasn't been able to optimize, and not because of a lack of engineering effort or capability; it's simply outside of Next's scope. And that is the runtime itself.

Normally, when you run next dev or you deploy to Vercel, your Next.js app runs on Node. That means Node's runtime executes your JavaScript; it manages the event loop, file I/O, everything, and it bridges your JavaScript code to the operating system. This makes sense, because Node has been the default runtime for roughly the past 15 years. It's battle-tested. It's reliable. But in 2025 it has also become a bit of a bottleneck. Don't get me wrong, Node is fantastic: it made it possible to run JavaScript on the server at all. Before Node was introduced in 2009, JavaScript was really just for the browser. Node changed that by giving us a runtime with a JavaScript engine, an event loop, async I/O, and APIs to do all the things browsers cannot do, like reading files from disk.

Now, under the hood, Node uses the V8 JavaScript engine. This is Google's fast Chrome engine, which is great for long-running tasks like a tab in your Chrome browser. But of course V8 is just an engine: it only knows how to execute JavaScript. It cannot open files, make TCP connections, anything like that. That's where Node's built-in APIs come in, like fs, http, net, and so on. These APIs are the bridge between our JavaScript code and the operating system, and they themselves rely on a C library called libuv. libuv isn't built for JavaScript specifically; it's more of a generic abstraction that Node uses to do things like file I/O and networking across all the different operating systems.

So when we call something like fs.readFile in our JavaScript code, we're really just asking the computer, "I want to read this file from disk." But before we get there, the call first goes from our JavaScript code through V8, then to Node's C++ bindings, which then call libuv, and that's not even mentioning the thread pool and all that overhead. Only then does libuv actually make the system call to the kernel to get the file from disk. And while all of that is happening, we have the event loop that libuv drives so the rest of our JavaScript code can still execute. This model works fine, we're all still using Node, but it's not optimal.
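(Not from the talk, just to make that path concrete: a minimal sketch of the call being traced, with a hypothetical file name; the comment marks the layers each read crosses.)

```ts
import { readFile } from "node:fs/promises";

// JS call → V8 executes it → Node's C++ fs binding → libuv thread pool
// → read() syscall in the kernel → result bubbles back up through the
// event loop to resolve the promise.
const post = await readFile("./post.md", "utf8");
console.log(post.length);
```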

Back in 2009, when Node was introduced, our hardware looked very different. Servers maybe had four cores, limited memory, and storage was pretty slow. Threads were also expensive, so creating a new thread for every connection just didn't scale well. Node's model was great for that era, because one thread could handle thousands of connections really efficiently. But fast forward to 2025 and our hardware looks very different. We now have hundreds of CPU cores, terabytes of memory, and storage that is something like 50 times faster, yet we're still using Node's 2009 model, still pushing everything through that same event loop. And again, Node's architecture is fine when servers run for days. But these days we often have serverless functions and dev servers, short-burst scripts that need to start up fast and run for a much shorter amount of time. In those environments every millisecond of startup and every extra layer matters, because they all add latency.

Now again, when you run your Next.js app, you are running it on Node. That means everything your app does, rendering pages, serving assets, streaming responses, goes through all the layers we just saw, from JavaScript to V8 to Node and so on. Next.js has done an incredible job of squeezing out every bit of performance despite the Node runtime still blocking certain things, because at the end of the day all of those improvements still run on top of Node. So when you spin up a dev server, rebuild files, or hot reload, you're still hitting those runtime limits. If we really want to go faster, we have to look beyond the framework. We have to go a level deeper. We have to rethink the runtime itself.

That is where Bun comes in. Bun is not just another layer built on top of Node. It is a brand new runtime built from scratch for the hardware we actually have in 2025.

Instead of being written in C++ on top of libuv like Node, Bun is built in Zig, a modern systems language that runs much closer to the metal. For the JavaScript engine, Bun uses Apple's very fast JavaScriptCore engine, which spins up really quickly, mainly because it can defer some of the initialization optimizations that engines like V8 make. It also just runs really fast, which is perfect for the modern tasks we have nowadays: dev servers, serverless environments, and short build scripts.

The core runtime itself, the Bun APIs and all the parts that handle async I/O, is written in Zig. Where Node uses libuv for async operations like reading files and making network requests, Bun can implement them as direct system calls to the operating system. (For network requests we use sockets, so it's a bit different, but the point is that we remove many layers of abstraction by using Zig instead of libuv.) So if you want to read a file on the Bun runtime, the call still starts in your JavaScript code, but it goes to the JavaScriptCore engine and then to Zig, which can make the system call directly. Again, fewer layers between our JavaScript code and the actual operating system. And the result is that everything feels so much faster, from startup to file access to HTTP servers.
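(For comparison with the Node snippet above, a minimal sketch, again not the speaker's code and with a hypothetical file name, of the same read using Bun's built-in file API.)

```ts
// JS call → JavaScriptCore → Bun's Zig file I/O → read() syscall.
// Bun.file is lazy; the bytes are only read when you ask for them.
const post = Bun.file("./post.md");
const text = await post.text();
console.log(text.length);
```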

But Bun is not just about performance. We also aim to be 100% Node compatible, so we want to make sure all of Node's own APIs work. On top of that, Bun ships with tons of additional built-in APIs: S3, SQL (or "sequel," whatever you want to say), Redis, hashing, a shell, so many things. If you've ever used other languages like Go or Python, this kind of batteries-included approach will feel very familiar. But as JavaScript developers we've gotten so used to installing dependencies for pretty much everything. I use password hashing in almost all of my apps, and I still have to install a dependency every time. Bun changes that: the stuff you use pretty much all the time is built right into the runtime itself, available on the global. And these are not just surface-level wrappers on top of an npm dependency. They're truly implemented in Zig and optimized for performance on modern hardware.

For example, Bun has a built-in SQL client, so you can connect directly to Postgres, MySQL, and SQLite using a single API. You don't have to install any additional dependencies. And again, this is not just calling some npm package; it is Bun talking directly to the database.
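(As a rough illustration, a minimal sketch rather than the talk's demo, assuming a Postgres connection string in DATABASE_URL and a hypothetical posts table: Bun's sql client is a tagged template.)

```ts
import { sql } from "bun";

// Queries are parameterized automatically; the connection is taken
// from the environment (e.g. DATABASE_URL) by default.
const posts = await sql`SELECT id, title FROM posts WHERE published = ${true}`;
console.log(posts.length);
```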

And it's not just the convenience of having these built in; Bun's options are usually also a lot faster than the Node and npm alternatives. For example, Bun.sql is up to 11 times faster than mysql2 on Node, which is really good.

Or you can use Bun's S3 client, which works out of the box with any S3-compatible storage: Amazon S3, Supabase Storage, Cloudflare R2, you name it. And this API is also incredibly fast; it's up to six times faster than the AWS S3 SDK on Node.
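(A minimal sketch of that client, not the slide's code: the object key and expiry are made up, and the bucket and credentials are assumed to come from environment variables such as S3_BUCKET and S3_ACCESS_KEY_ID.)

```ts
import { s3 } from "bun";

// Works against any S3-compatible provider (AWS, R2, Supabase Storage, ...).
const avatar = s3.file("avatars/lydia.png");
const bytes = await avatar.arrayBuffer();

// Presigned URLs are generated locally, no AWS SDK required.
const url = avatar.presign({ expiresIn: 3600 });
console.log(bytes.byteLength, url);
```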

Now, of course, you can still use your normal dependencies with the Bun runtime as well; you don't have to use these built-in APIs. But they do reduce your bundle size a lot, because you're no longer adding those dependencies, and it helps with npm vulnerabilities like the ones we saw last month, because you don't have to install them in the first place. There are tons more APIs, so I highly recommend checking out the docs.

Bun also comes with way more than just a runtime. It ships with a really fast package manager that's up to 17 times faster than Yarn, seven times faster than npm, and four times faster than pnpm. And the good thing is you do not have to use the Bun runtime to use bun install. You can use bun install with Node and it will just work; you don't have to change anything about your project.

It also has a really fast built-in bundler and transpiler, so you can serve and build your files instantly. You don't need Webpack or esbuild, no extra setup. And it supports TypeScript and JSX right out of the box.

It also has a really fast test runner that's up to 14 times faster than Vitest and 23 times faster than Jest when we SSR a thousand React tests. So you get really fast tests, and you don't have to install anything.

So this all sounds great, but how can we use the Bun runtime with Next.js? Honestly, it's really simple. After installing Bun, you just update your start or dev command and add bun run --bun. That's it; you're now running on the Bun runtime. Now you might be wondering why we need that --bun flag when we're already saying bun run. That's again because Bun really cares about Node compatibility. If we just use bun run next dev, Bun will detect that the Next.js CLI uses a Node shebang, decide "okay, I'm supposed to use Node here," and fall back to Node even though we said bun run. The --bun flag forces Bun to skip the shebang and use its own runtime, so that's just a bit of extra explanation.
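(In practice that's a one-line edit to your scripts; a minimal sketch of what the talk describes, assuming the standard create-next-app script names.)

```json
{
  "scripts": {
    "dev": "bun run --bun next dev",
    "build": "bun run --bun next build",
    "start": "bun run --bun next start"
  }
}
```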

So now with this command Bun starts your Next.js dev server. The bundler itself is still Next.js, so that's still Turbopack, which is now the default. But the runtime underneath all of that, the thing executing your JavaScript, reading files, serving responses, is all Bun. And because Bun is designed to be Node compatible, you shouldn't have to change anything about your code, your packages, or your middleware; everything should still work. If it doesn't, that's considered a bug in Bun; it should be 100% Node compatible. So if you try this and run into issues, let us know, but you shouldn't have to rewrite anything.

And now that your app runs on top of Bun, you get access to all of Bun's built-in APIs. For example, we can just use the S3 client right inside a server function or a React Server Component, without installing any dependencies. Compared to what the same thing would normally look like on Node, with Bun we have a lot less code, fewer dependencies, and it's instantly compatible with all the other S3 providers as well. So if you want to switch from Amazon S3 to Cloudflare R2 or Supabase Storage, that's super simple.
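(Roughly what that looks like: a hedged sketch, not the on-stage demo; the route, object key, and expiry are made up.)

```tsx
// app/cover/page.tsx: a React Server Component using Bun's built-in
// S3 client instead of an installed SDK.
import { s3 } from "bun";

export default async function CoverPage() {
  // Presign a URL for an object in whichever S3-compatible bucket
  // the environment points at (AWS, R2, Supabase Storage, ...).
  const url = s3.file("covers/nextjs-conf.png").presign({ expiresIn: 3600 });
  return <img src={url} alt="Conference cover image" />;
}
```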

Or, for a more complete example, we can use S3, the Bun shell, and SQL directly in a React Server Component. First we query the database with SQL to fetch a blog post, generate a presigned S3 URL for the image, and use the Bun shell to count the words. Again, there's no extra API layer or third-party tool that Bun is calling; Bun handles the runtime, the database connections, and the shell all natively in Zig, close to the metal.
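(A rough sketch of that pattern; the table, column, and object names are hypothetical, and this is an illustration of the idea rather than the speaker's demo code.)

```tsx
// app/posts/[slug]/page.tsx
import { sql, s3, $ } from "bun";

export default async function PostPage({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { slug } = await params;

  // 1. Query Postgres with Bun's tagged-template SQL client.
  const [post] = await sql`
    SELECT title, body, image_key FROM posts WHERE slug = ${slug}
  `;

  // 2. Presign the cover image stored in any S3-compatible bucket.
  const imageUrl = s3.file(post.image_key).presign({ expiresIn: 3600 });

  // 3. Count words with the Bun shell, no child_process wiring needed.
  const words = (await $`echo ${post.body} | wc -w`.text()).trim();

  return (
    <article>
      <h1>{post.title}</h1>
      <img src={imageUrl} alt={post.title} />
      <p>{words} words</p>
      <p>{post.body}</p>
    </article>
  );
}
```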

And again, of course, it's not just S3 and SQL; we get access to all of Bun's APIs just by putting bun run --bun in front of next dev.

But now you might be thinking: okay, I'm not using Postgres, I'm not using S3, I'm not using any crazy dependencies, so why should I care? The thing that got me into Bun before I joined was honestly just the incredible DX. You can run any file, JS, TS, TSX, JSX, it doesn't matter, without any configuration, and it just works. You don't have to think about ts-node, Babel, SWC, and so on. So even if you aren't using Next.js, even if you just want a quick build script, just use bun run, honestly, try it. It makes your life so much better because you don't need any configuration.
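(For instance, a trivial, hypothetical script, not from the talk: a TypeScript file with top-level await that runs directly with bun scripts/stars.ts, no tsconfig, ts-node, or build step.)

```ts
// scripts/stars.ts
const res = await fetch("https://api.github.com/repos/oven-sh/bun");
const { stargazers_count } = (await res.json()) as { stargazers_count: number };
console.log(`Bun currently has ${stargazers_count} GitHub stars`);
```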

Bun also comes with bunx, which is Bun's equivalent to npx. Again, you don't have to change anything or use the Bun runtime for this; you can just swap npx for bunx and you see an instant startup improvement. For example, using bunx create-next-app is about five times faster than npx create-next-app, and again, you don't have to use the Bun runtime for it. It's just a lot faster. Then, of course, there's also bun install, which likewise doesn't require you to change the runtime; it just makes your installs much faster, even on basic Next.js projects.

Now, obviously, running Bun locally is one thing, but how do we deploy apps that run on Bun? Because this is, of course, a whole new runtime. You can already use Next.js on Bun on several platforms like Render or Railway, or containerize your app with Docker, but we're all Next.js developers; ideally we also want to deploy to Vercel. So naturally we tweeted at Guillermo, kindly asking for native Bun support on Vercel, and we quickly got a pretty promising response. A couple of weeks later, Bun support had been achieved, at least internally. So I'm very excited that native Bun support will come to Vercel very, very soon. Yes, the applause goes to the great engineers at Vercel who made this possible. This is very exciting because it means we can run Bun apps just as easily as any other Next.js project on Vercel. As a real-world example, I'm not sure if you can see it on the screen, but running a Hono API with Bun already saw about a 30% CPU drop just by running Bun on Vercel. That's a different framework, it's a Hono API, but it's the same runtime benefit you would get with a server function or an RSC, because Bun saves a lot of CPU and memory.

Now, we of course don't have to wait for native Bun support to start using it in our apps. The simplest way is to adopt it incrementally. First, I recommend just switching to bun install; it changes nothing in your codebase and just uses Bun's really fast package manager. If you're interested in why bun install is so much faster, I wrote an article on this not too long ago that I highly recommend; it explains the systems engineering behind it, we're not just skipping steps. After bun install, try the Bun runtime: run your app locally with bun run --bun and test that everything works. It should, and if it doesn't, let us know. Lastly, you can incrementally move to Bun's native APIs where it makes sense. You can of course still mix and match them with npm dependencies. And the best part is that each step here is reversible: if you adopt one of Bun's native APIs and it doesn't work out, you can always switch back to Node. But it's definitely worth checking out.

Now, before I end this talk, I just want to say a really big thank you to the amazing team of engineers at Bun. Most people might know Jarred, but there's an entire team behind Bun working hard every day to make it even faster, more stable, and so much more fun to use. They are truly pushing the limits of what's possible with JavaScript. Next.js reimagined how we build for the web; Bun reimagines what powers it. Thank you so much for coming to my talk, and I'm so excited to see what you'll build with Bun and Next.js.

[Applause] [Music] And now please welcome to the stage Vercel's head of developer community. They call her Caps.

[Music] [Applause] Oh, aloha everyone. Aloha friends. My name is Cap. I am the head of developer community here at Vercel. I see a lot of my friends in the crowd. Hi everyone. We run the open source program at Vercel, and I know the open source community is a very important subject to many of you here today. So bringing the creators and builders of Next.js, Svelte, and Nuxt on stage was something that really mattered to me, because I know it matters to you. And in all honesty, when Svelte and Nuxt Labs joined Vercel, it sparked both excitement and curiosity. People started asking what this means for the future of their favorite framework. So I wanted to create space during this conference to hear directly from the folks behind these tools, to ask them about the future, about open source at Vercel and the web in general. And maybe we'll find out some fun facts about these builders. This panel is about transparency, curiosity, and celebrating open source. I always say it, but the world runs on open source. So let's chat with some of the builders in this community. Let's bring to the stage Tim Neutkens from Next.js, Rich Harris from Svelte, and Sébastien Chopin from Nuxt.

[Applause] All right. So we have Tim, Rich, and Sébastien, all builders of frameworks we use and love. Who's using one of these frameworks? Nuxt? Svelte? Next?

>> Nice. So let's start with intros. Could each of you introduce yourself, say what your role is in the particular framework, and give us a sentence that captures your framework for someone that's new to it?

>> That's good. Hey, I'm Tim. I work on Next.js, mostly as tech lead nowadays, but I started Next.js together with Guillermo.

>> Nice.

>> Rich.

>> Yeah. I'm Rich. I've been working on Svelte for the last nine years, and at Vercel for the last four of those years. There's a team of three of us who work full-time on the framework within Vercel, and I manage that team. The pitch? If I was being cheeky I would say React without the BS, but this is Next.js Conf, so I have to watch my words. It's a way of building resilient and accessible web apps more easily than with alternative tools. It's a framework, but it's also a compiler and a language. You should check it out.

>> And I'm Sébastien Chopin, the author of Nuxt. I joined Vercel with the Nuxt Labs acquisition three months ago. If I have to compare, well, if you use Next.js I guess that's why you're here, Nuxt is the Vue alternative, and we have maybe more features, maybe fewer; it's a matter of taste. I highly recommend you check it out. With my team we're also working on Nitro, which is the server part we extracted two years ago with Nuxt 3. You can now use it as a standalone backend framework, and also as a Vite plugin if some of you are using Vite.

>> So you said nine years. Nuxt is also at nine years. And what's Next.js at?

>> Nine years as well. It was the 26th of October 2016.

>> Awesome. So why did you build it? Why did you build Nuxt?

>> Yeah, I was making an e-commerce website with Vue.js and server-side rendering. Back then Vue 2 wasn't there yet; it was 2015, so I was using pre-rendering with a headless browser. I was following Guillermo, and I saw the announcement of Next.js, and I gave myself the challenge to do the same for Vue. A week later I had the first prototype working.

>> Oh, that's cool. What about you, Rich? Why'd you start Svelte?

>> Guilt, partly. It's actually my second front-end framework; I'm a serial offender. The first one was kind of bulky and a little bit slow, and I just wanted a do-over. Having spent time working in the compiler space, I'd created a module bundler called Rollup, so I was learning what you could do with abstract syntax trees and all of that. And I thought, what if we apply these ideas to frameworks and do the work as part of your build step instead of doing it in the browser, and make everything faster and more compact? So it kind of started out as atonement for my sins in the earlier project, but also as an experiment to see how far you could push that idea of doing ahead-of-time work. The reason I was motivated to do this is that my background is actually in journalism. I was working in newsrooms building interactive visualizations, which tend to be very heavily interactive: you're dealing with large amounts of data, motion, things like that, and you need something capable of moving a lot of pixels at 60 frames per second even on mobile devices. The tools that were around at the time weren't really up to it. So that's where it came from.

>> Nice, Tim.

>> Yeah, well, if you're here you might already know the story, but we basically started, and this is before I started contributing to Next.js, by building what turned into the Vercel dashboard, the one you maybe use every day if you're a Vercel customer. Guillermo had a very clear vision of what he wanted this kind of thing to look like, and also how the actual development on it should work. He wrote this blog post, I forget the name of it, you can still find it on his blog, that was basically about what a framework should look like in 2016, before we had all these different paradigms. One of the big things was server rendering. You'd think we immediately chose React, but actually we didn't; we first built our own React-like thing, a templating language called N4. Eventually they decided to pivot that to a React-based framework, and React wasn't even as popular at the time per se. We basically started building with server rendering and doing all of that, and over time it evolved into doing static generation and other things as well.

>> Cool. It's fun because these are frameworks that are being used by so many of us, but there's this unique story behind how each of them started, so it's really cool to hear that. So, I engage with a lot of the open source community, and people have a lot of mixed feelings about open source projects. Code exposure, I think, is the biggest one: "oh, I've got to keep my code safe, I don't want to open source it." But then others think it's super important to be part of the open source community and have the contributions. So why open source? Why is open source so important to you?

>> Honestly, it never even occurred to me not to do things as open source. It's just kind of how software is written, particularly if you want to get users.

>> Yeah.

>> Which, I mean, I don't know, having users is a blessing and a curse. But if you want any kind of adoption, then you've got to be open source, because otherwise someone else is going to do an open source version and that is going to be more popular. But there are also very tangible, concrete reasons why it's beneficial to the software itself for the source to be open. I'm sure you've had similar experiences where you throw something over the wall and someone comes along and says, hey, there's a security vulnerability you might not be aware of. If the source wasn't available to them, then you'd just be shipping that vulnerability to your users. So yeah, it just never occurred to me not to.

>> I think what I find really interesting in your case is that, like you said, you're a serial open sourcer and you've created many projects, and many of the projects you're talking about still exist today. You're obviously working on Svelte, but you also created Rollup, and you're not working on Rollup day to day anymore; someone else came in who now works on it, I think even full-time.

>> Right. Longevity.

>> Yeah. There's this thing where if you don't open source your software and you lose interest, or not that you lost interest, but you live and you learn new things and start building more software, the project can stall. What I always find really interesting about the projects you build is that they go beyond the lifespan of just you being the maintainer and creator: they get more users, the users start contributing back, and they end up outliving you as "the Rollup guy," basically.

>> Yeah, I mean, that's absolutely essential. I think we'd probably all say that we learned how to be developers by reading other people's source code.

>> Yeah.

>> And it sounds a bit kumbaya to talk about giving back to the community and everything, but that really is how we create the next generation of developers: by giving people the ability to learn the same way that we did. So I think it's essential that we continue to prioritize open source over the alternatives.

>> And I do think that if you want to create a community, it has to be open source; otherwise I think it would be very hard. And if you want to get some help, it's also easier if it's open source. It might be frightening to push your code as open source, but you may be surprised: not everyone looks at your source code. They might look at some files, but it's the same feeling as being on stage and thinking everyone is watching you, when probably they're not.

>> They're all on their phones.

>> Yeah, they're all on their phones, actually. It's quite similar with open source: most people won't read your source code, they'll read the documentation, but they'll have trust from knowing they can look at the source at any point and be able to contribute. So there are only benefits to open sourcing it. At some point, if it becomes popular, the maintenance might become a burden, but the beauty of open source is that you can find help quite easily. So this is something I highly recommend you try.

>> Speaking of the community, what has been the weirdest, quirkiest contribution you have seen, and did you accept it?

>> Quirky. It almost sounds like you're looking for the Guy Fieri in Babel thing.

>> So at least someone in the audience knows what I'm talking about. No.

>> Yeah, the Babel thing. There is one thing that springs to mind. The project I mentioned that was Svelte's predecessor: you'd have a JavaScript file, you would instantiate a component inside it, and you would pass it a template, this HTML-ish thing, as a giant string passed to the constructor. And one day this guy comes along and says, "I want to have my JavaScript and my HTML in the same place." What do you mean, they are in the same place? No, I don't want to put my HTML in my JavaScript; I want to put my JavaScript in my HTML. What the hell does this guy mean? After a bit of back and forth, he was like, "No, if you have your component in an HTML file and then you have a way to import that and it becomes a JavaScript component, then everything is just much more neatly organized. You can put your JavaScript inside a script tag, because HTML can contain everything else, and while we're at it, why don't we put our styles inside a style tag?" I didn't see it at all at first, but he convinced me. And that is how single-file components were born.

>> Wow.

>> So I remember, this was like 11 years ago; the guy's name is Dave Marshall, a very influential figure in front-end development, because we did that, and then Vue adopted it too, and now it's just how some frameworks work. And it was just because some guy had an idea and thought, "I'm going to raise an issue."

>> Yeah.

>> The power of open source.

>> That's cool. So someone had strong feelings, and that affected millions of developers, basically.

>> Open source.

>> He won.

>> Nice.

>> Good on you, Dave.

>> Mhm.

>> Well, I think recently we had an issue on Nuxt, someone asking, can you please rename this prop, because the LLM doesn't assume it's the right one. It was a UTable component; I think we use TanStack Table under the hood, so we expose a data attribute that takes the whole array to generate the table, but the LLM kept expecting to use rows, so the request was to rename the attribute to match. At least it gets you thinking: with the rise of LLMs, should we try to educate the LLM about the new prop, or just go ahead and trust what the LLM is expecting? So maybe the LLM will be the one coming up with the craziest ideas for frameworks.

>> That's a good point. Have you had to change much in the framework because of AI? Are there a lot of issues coming up where you think, I would never have thought of that, but this LLM issue is showing itself?

>> There are some things that I, as a programmer at least, never did, like running the Next.js dev server and then also running next build, verifying every change I make by running next build. I never did that myself. But when you start getting agents that automatically verify their own work, you get them running next build, and we had never thought about it in that way at all, so there wasn't even a good error message for it. What happens is you run next build, it rebuilds the entire app, your development server is broken, and you have to restart and start over. Now, with agents or AI writing code and verifying itself, you basically end up having to take all these new DX features into account that are geared more towards AI. That was one example: for us it was a one-line change to the config, creating a separate directory for development versus build output, so the two no longer break each other. And that one-line change was received so well, because everyone is running into this day to day at this point, if they're using AI. That's one that immediately comes to mind for me.

>> Yeah.

>> What about you, Rich?

>> We haven't changed anything in the design of the framework itself for the benefit of LLMs, but people do run into the reality that if you're using Claude Code to build your Svelte app, it probably has more Svelte 4 than Svelte 5 in its training data, and we changed a few things, so you just can't get the LLM to generate modern, idiomatic code. So we released an MCP server, shout out to Paolo and Stanislav on the team who put it together, that really helps with that. But most of the conversations we have about whether there's something we need to do differently for LLMs are about making the documentation more digestible, and we are loath to do something purely for the sake of LLMs, partly because we think of ourselves as engaged in a very human-first project.

>> Yeah.

>> But partly because it's such a moving target. The things that are beneficial for LLMs today are not the same things that were beneficial six months ago, and are almost certainly not going to be beneficial six months from now. So we always ask: is this change, like making more digestible documentation, also going to be beneficial to humans? If so, great, let's do it. If not, let's hold off for a minute and see how things shake out.

>> Because Svelte 5 is new this year?

>> It was released last October.

>> Okay.

>> Yeah.

>> And it's still not on five.

>> Is it version five? Yeah. The newer models have cut-offs after that date, but I think there's still more of the old stuff than the new stuff in the training data for most of these models. And this is not a problem that's unique to us by any stretch; I'm sure it's going to take a while for new React patterns to get adopted too.

>> Yeah, and this is part of why we're also introducing the Next.js evals, to mention that as well.

>> Well, at first, when I saw that the LLM was trying to put React code in my Vue app, I was considering supporting React. But then I thought, okay, let's see how we can help the LLM read the Nuxt docs. So we started with /llms.txt, then added a markdown version of each docs page so it's easier for the LLM to crawl our documentation. We recently released the Nuxt UI MCP server, and the Nuxt MCP server is coming at the end of this week, and we've noticed better code output from the LLMs. Lastly, we've been teaming up with Vercel and the Next.js team to build an eval to compare how the different AI models produce code. We add some tests, give a prompt, and compare the output, and it helps us see whether the MCP is doing a good job or not. Maybe it can become part of the prompting: if we notice that always pasting certain sentences into the prompt gives good results, we'll advocate putting them in an AGENTS.md. But we go step by step; like Rich said, it moves very fast, so we also need to focus on the human, trying to juggle between the two and listen to the community. That's how we do it.

>> So I always think of open source as that tower of building blocks, with the one tiny block at the bottom that, if it goes away, everything crumbles. That's open source; that's the one open source package.

>> The random guy in Nebraska.

>> Yeah. So it's the foundation of a lot of projects. And it can also be a lot: handling issues, keeping the community energy up. So how do you keep the momentum going in your community? How are you handling those issues, making sure your community is happy, and staying true to open source?

>> I mean, anytime you start an open source project, you have to wear all of the hats. You're pushing all of the code, you're writing all of the docs, you're doing the logo design and building the website. If there's tooling involved, like ESLint plugins and stuff, you've got to do all of that yourself, and that quickly becomes a lot. So the one thing I'm glad I learned quite early on is: if anyone shows any inclination to help whatsoever, you bear-hug them and you put them to work. And the thing I think is really crucial with that is, if you find people who are community-minded, who are good at helping people who are less far along their learning journey, who can answer their questions and respond to GitHub issues or Discord threads, those people are absolutely worth their weight in gold. Because if you're working full-time on the code and the design of the framework, you really don't have the bandwidth to also be customer support. If you can divide that labor, it will save you a lot of trouble. I imagine we've all found that.

>> I think you cannot do it by yourself; at one point you need help, and you have to embrace the people helping you. On my end it really helped to welcome the newcomers. It was Pooya Parsa who became a lead maintainer of Nuxt, then Daniel Roe. And I think crediting people, even for small actions, a pull request, answering issues, can be very beneficial: showcase their work, a simple thank you, a tweet, adding them to the blog post, showing them that we see their work and value it. That's how you end up with people who trust you in return and want to keep helping you. That's how it worked for us. But it's still very hard to maintain the flow of issues.

>> Yeah.

>> Yeah.

>> Well, it's a resume builder for these people: "I contributed to Svelte, I contributed to Next.js, here's the stuff I put in that's in the framework." They may not be building the framework itself, but they can contribute, and we've created this open source community where we're helping each other learn more but also get more opportunities.

>> Yeah, I was the person Rich was talking about, who got bear-hugged into basically being here for eight years now. What I think is really interesting is that, well, I came into the Next.js project pretty early, but I was contributing to a bunch of Vercel, ZEIT at the time, projects, mostly because there was a technical challenge there and I could learn something new. I was 19 at the time, I wanted to learn more about JavaScript, and this was one way to do it; I found that fixing bugs was, for some reason, how I learned really well. And how we approach this for Next.js has evolved over time as well, because there's this thing, and it's not really strange, where the overall usage goes up and you get more GitHub issues. Everyone will find every single bug there is, even if it's a dot or a comma somewhere wrong in the docs, things like that.

So, in the end, what we're doing now: we obviously have a larger team working on it, and we're triaging issues every day; there's someone actively working on that. I know that if you're in this room and you've seen issues that have been open for a long time without a reply, I'm very sorry about that. It doesn't mean we haven't seen them; it means they were either already triaged but kept on a list, or they're still being worked on. We can still do better here, I will say that. It will take some time to get through all of them, because it's something like a thousand open issues, and while the number you see keeps going up, if you look at the stats we're closing about as many as are being opened. But even if it's only 10 or 20 or 40 or 50 more issues per day being opened than we close, it never goes down; it keeps going up over time. And that's something we have because of large adoption, because a lot of people are using it. I'll also say we're very thankful to people for reporting issues on GitHub. If you report something on GitHub and the bot did not close it, you did a super good job, because we have a pretty strict bot that says you need to provide a reproduction, otherwise we'll close it. And really the only reason we have that is so we can correctly triage everything, investigate it, and really fix it.

>> But the overarching message is big bear hugs for the community.

>> Yeah.

>> Yes.

>> All right. So,

let's talk about Svelte and Nuxt Labs joining Vercel. Some people were excited, others were curious. So why did you join Vercel, and what has changed behind the scenes with Nuxt and with Svelte?

>> Oh man, I have so many thoughts. People did have many opinions when I joined Vercel; I'm sure you can relate. I've been at Vercel for four years now, and at first people were like, "Oh no, Vercel's acquired Svelte." I wish; there was nothing to acquire. It's an MIT-licensed codebase. I joined as a salaried employee to work on Svelte. And people worried that being at a company that exists to make money was going to have a corrupting influence on the roadmap. I think I can say, four years in, that our track record disproves that. To the people who thought it was all going to go terribly wrong, that we were going to start prioritizing Vercel support over being an independent, platform-agnostic framework: that hasn't panned out, for a few reasons. Number one, the three people who work on the project at Vercel are a small part of a much larger maintenance team, and if we started to go rogue, the rest of the team would mutiny and it would be a short-lived thing. But also, I think people have come to see that it's just not in Vercel's interest to exert influence over the roadmap. The whole point of the project is that there's no vendor lock-in; it's designed to be a platform-independent project, and the reason you would bring your Svelte project to Vercel is because of Vercel itself. So I think that has panned out pretty well, and I wouldn't say anything has really changed.

>> Okay.

>> In terms of the governance and the roadmap, the only thing that has really changed is our ability to ship stuff, because before me being here, and latterly Simon and Elliott being on the team, it was a weekends-and-evenings project. And any open source project beyond a certain scale, especially beyond a certain number of users, you just can't do on a part-time basis; it needs to be a full-time endeavor. Being part of the company has made that possible. I guess it's slightly different for you, insofar as there was an acquisition.

>> Of Nuxt Labs, yes.

>> So Nuxt Labs was a company, and we had different products around Nuxt to sustain the open source we'd been building. One was Nuxt UI Pro, a premium component library on top of Nuxt UI. The second was Nuxt Studio, a Git-based CMS on top of your Markdown files. And the last was NuxtHub, which added a developer-experience layer on top of Cloudflare to deploy your Nuxt applications, trying to get the Vercel experience on top of Cloudflare, which was not that easy to do anyway. Then I got the opportunity to talk to Guillermo, to brainstorm about what it could look like to join Vercel. I think it really helped to have the history of Rich joining Vercel and seeing that the framework was not denatured; it stayed agnostic. It was very important for us, as Nuxt and Nitro, to stay agnostic, and I think it's in Vercel's interest to showcase that it supports multiple frameworks. So at that stage our strategies aligned. I was also excited by the idea of bringing the team together to work full-time, and by open sourcing all of our products, because it opens the possibility for the community to access our premium products, and by making NuxtHub agnostic too, so you can build full-stack applications on Cloudflare but also on Vercel. That lock-in was the thing that was tricky for me, so opening it up for the web made sense. And I can understand people being suspicious, but what I'm telling them is: you will see with time, and I'm very confident that Vercel was a great choice.

>> One of the really cool things with the Nuxt Labs acquisition is that we're talking a lot more now. We were talking before as well, but between our teams there's a lot of overlap; obviously we're building similar tooling. It's not exactly the same, the APIs are not the same, but there's a bundler and there's server rendering and all of these things. Even in the last few months, seeing Pooya land all these new APIs in Nitro to make Nitro better overall has been really cool to see.

>> Yeah, and even the cache system, where we've been collaborating. I was excited to be able to talk with these guys on the same Slack channel. I think this is honestly one of the greatest things.

>> Yeah. One of the interesting things is that there's a bunch of people you'll see from Vercel who are super online, always on Twitter, that kind of thing. I'm personally not like that, and Rich is not like that, and I think Sébastien as well. But there's also a lot of history: there are people at Vercel who have shipped some of the largest tools in the world, either at big tech or in open source, or other things you don't know about, who are really helping shape this. They're bringing tools that have always existed inside every single large company, like Meta or Google, tools that could never be brought to open source because they're built so specifically for those particular stacks. But we're actually now able to build some of those tools as open source for the frameworks that we're building.

>> Yeah.

>> So you had some closed source, and then you were able to open it all after joining.

>> Yes, which includes all the Nuxt UI Pro components. We are working toward making Nuxt Studio self-hostable, so you will be able to edit your Nuxt content, your Markdown documentation, within your website and commit directly to GitHub. All you need is hosting that supports server-side rendering and a GitHub app. We're going to release the first version of Nuxt Studio capable of running on Vercel at the end of the month.

>> I got a demo of Nuxt Studio yesterday. It's pretty crazy.

>> I wish you could do it right now.

>> Okay. So you all have been building frameworks for almost 10 years. A lot has happened in those 10 years: version 16, version 5, version 4.

>> Four. Yes.

>> So what do the next 10 years look like?

>> More convergence, I think. We were sitting in the office yesterday just nerding out over some of the things we're working on. There are some areas where Svelte and SvelteKit are ahead, and there are a lot of areas where Nuxt is ahead. But the things Nuxt has are the things we want to do next, and the things we have are the things Nuxt wants to do next, and I know the same is true for Next.js. I think making these frameworks truly full-stack is the overarching trajectory. Reactivity and rendering, keeping the DOM up to date, is very much a solved problem at this point, but there are a lot of unsolved problems around making storage and ORMs straightforward and bulletproof and well integrated into these systems, and those are the things that I know you're pursuing and we're pursuing.

>> And it overlaps so much that you can have generic solutions for some things.

experiences are really helpful here as well. Uh I've noticed been talking with

well. Uh I've noticed been talking with uh Daniel and the Nex Labs team uh like quite a lot. um also just about um like

uh open source in general and uh how we uh like built this at for sale. So yeah,

>> I think we'll also see what the community and users, and maybe LLMs, ask us to build next. The full-stack part, for me, is something I've been waiting a long time to add to Nuxt. With LLMs doing the building, we're expecting them to build full-stack apps, so defining a standard, opinionated way of building full stack may also reduce the security risk, by building it in the open and as open source. And we will see; we didn't expect to add an MCP server a few years ago, and here we are, with an MCP server helping developers build MCP apps. It's hard to predict, honestly.

>> Yeah. How often does this passing of notes to each other happen? That's fun.

>> It is. I mean, I wish we could do it more often. Unfortunately, he lives in France.

>> Well, you're all on the same Slack, so...

>> We are.

>> 10 years for Next.js. The next 10 years.

>> The next 10 years for Next, 10 years for Next... what is next? I think today at the keynote you saw what our vision is for what Next.js should look like. It's the thing we started two years ago: getting a really good bundler. That's going to start paying off. I've been talking to a bunch of people, probably in the audience today, who came up to me and said Next.js 16 fixed all my problems, and those problems were slow next dev, just to be clear.

So the thing here is that we're now at the baseline, right? This is the baseline: all of the features we already had, now on Turbopack. But now we can start building new tooling. There's better bundle analysis tooling coming very soon; over the next few months you'll see a bunch more tooling in that space around the bundler, because we now have knowledge about the end-to-end of where everything goes. So now that we have this bundler and we've made everything fast, we can make it slower again. That sounds crazy, but it means we can do more work in less time than we did originally and do more advanced optimizations. And that's just Turbopack. For Next.js itself there are cache components and partial prerendering, being able to build one page that does both static and dynamic. That was always the vision from when we started building App Router; it just took some time to get there, and it's finally here to try out, so I definitely recommend you try it. Then for the next 10 years it's going to be refinement: even better DX than you already get, error messages that always point at the right place when you introduce an error, and I'm sure we'll see things like coding agents bringing in the features that are still missing.

>> I would add a small note: we're seeing the rise of AI browsers now, so I think our web frameworks are also going to ship capabilities for these AI browsers to communicate directly with your app. It could be an MCP server, it could be a new standard that comes with these AI browsers. Adapting to this new browser experience, maybe shipping web components directly for the AI browser to use from your website, I don't know. That's exciting.

>> Yeah. Where I see these three in 10 years: we're all going to have a chip in our head, we're going to think, "build me a Svelte app," and it's just going to build it. That's 10 years from now. Okay, so last question. It's not about frameworks, it's not about tech. If you were not in tech, what would you be doing? What would be your job?

>> Realistically, I'll probably still be in journalism.

>> Okay. Yeah.

>> Um, I I don't know. I most people who work in tech, I think, have some sort of fantasy of doing things with their hands. Uh, I really like cooking. Maybe

hands. Uh, I really like cooking. Maybe

I'd open a cookery school.

>> All right.

>> Like that.

>> I do like building things with my hands, actually. I like 3D printing a lot, but I guess if tech doesn't exist, 3D printing doesn't either. So probably building houses. I wanted to be an architect when I was young. I mean, building things, whatever it is.

>> That's cool. Yeah.

>> Cool. For me, it's completely different. My parents have a gardening center in the Netherlands, and I know nothing about plants, so don't come to me with plant questions. But I would most likely end up working for them in their business. It's an interesting business. They actually do e-commerce as well; they bring plants to you. It's a whole logistical thing, so you can do whatever. You don't even have to touch plants to work in a gardening center. How about that? Yeah.

>> So, we have a gardener, we have a journalist, and we have an architect. I love it. All right, well, that's it from me. Thank you so much to all three of you. Give it up.

[Applause] All right, we are at the end of the day. We've had a great time together. This is a dream for me. I've been in the Next.js community for many years. I've spoken on this stage many times, and now I get to close it out, so this is very surreal for me. I'm very grateful to be here, and I'm so grateful to have met all of you today.

So, let's recap the day. We kicked this morning off with Guillermo, Sam, and Jimmy, and they had a wonderful keynote. There's been non-stop energy and demos and a lot of good learnings and takeaways. We saw what's next with Next.js, we talked about the future of coding and AI, and we got a glimpse of how teams are building faster than ever.

Some of the sessions: Lydia, who was just on stage, showed us Bun. Reese's session talked about dynamic pages. Ryan's talk was about how AI is not just a buzzword, but something we can use every day. We're so grateful to all of the speakers. The talks were world-class and reminded us how much creativity lives in this community. And thank you to all of our sponsors; it's just a great group of companies out there.

If you've got tickets for Ship AI, that is happening tomorrow right here, same time.

It's going to be another amazing day where we're going to talk about AI-powered experiences. And for tonight, we're not done yet. We have happy hour, which runs until 8:00 p.m., immediately following this. Go grab a drink, find someone you've met today, or say hi to folks you've been following online. This is the perfect time to connect. The connections I've made over the years at Next.js Conf are the reason I'm here today. So go make that friend. Tell them, "Cap told me to introduce myself."

Oh, I wasn't clicking there. Those are the things. So, on behalf of everyone at Vercel and the Next.js team, thank you. Thank you for showing up, for experimenting, for pushing the web forward, and for being here. It really means a lot to us. We can't wait to see what you all build next. Y'all are amazing.

Now, I'm from Hawaii. Being from Hawaii, I have to do my Hawaiian word of the day, or today it's a little phrase. Many of you know this, but in Hawaiian, mahalo is thank you. But if you want to express great gratitude and deep thanks, you say mahalo nui loa. So mahalo nui loa, everyone. I'll see you at happy hour. Thank you.

[Music]