Tobi Lütke Made a 20-Year-Old Codebase 53% Faster Overnight. Here's How.
By AI News & Strategy Daily | Nate B Jones
Topics Covered
- Agents Diverge into Four Distinct Species
- Coding Harnesses Replace Developer Tools
- Dark Factories Eliminate Mid-Process Humans
- Auto Research Optimizes Metrics Relentlessly
- Orchestration Manages Specialized Handoffs
Full Transcript
We want agents, but we don't know what we really want. When we say "agents," it is too simplistic to say an agent is just an AI plus tools in a loop. That's true, but it misses the point: sophisticated agents diverge into at least four distinct types, and most of us don't understand what those types are, so we confuse them. This video lays out how agents are really working in production use cases across those four subtypes, explains why they're different, and then gets into how you use them and how you pick a given agent for a given use case.
And no, we're not going to be talking about individual models. If you think I'm going to talk about Claude or ChatGPT, that's not what this video is about. It's about the layer above that. You can plug any LLM into an agentic system and get results. Maybe not the results you want, but you can get results. The point here is that you need to understand how these agent systems work. Because when we say agent, we really mean an LLM and tools and a loop where the agent comes back and gets feedback. The way we construct that is really, really important, and the details of that construction effectively give us what I'm calling agent species. So you're like, "What are species, Nate?" Right?
Like, tell me. So, we've got coding harnesses. These often start out as a single LLM agent working for an individual contributor: it works with your files and runs with the tools you give it to accomplish coding work. When Andrej Karpathy talks about the kinds of agents he works with on his coding projects, it is often this sort of coding harness idea. When individual developers talk about their work, it's a coding harness idea. There is an extension of this for larger projects that involves multiple agents, which we'll also discuss. It's like a separate cousin species, right?

Dark factories, that's another species of agent. These are fully autonomous systems: you put the spec in and the software comes out. The trick is you have to be really, really good at all the steps in between. You have to give the agent all of the support, all of the scaffolding, and all of the evals or tests at the end to make sure that what comes out is actually effective. And the way you develop this often depends on your ability to specify really excellent nonfunctional requirements, which is a fancy way of saying really excellent rules of the road for these agents in ways that are enforceable, and we'll get into that.

Another kind of agent is auto research. These are frameworks that descend from classical machine learning. Really, all you're doing is trying to automate the process of letting an AI agent optimize for something. Maybe it's optimizing for conversion rate on your landing page. Maybe it's optimizing or tuning a particular coding framework. But whatever it's doing, it has to have a metric to optimize against. That's why it's called auto research. The whole goal is what machine learning scientists call hill climbing: you want to climb the hill and get to a more effective, optimized metric. That's the whole point. If you don't have a metric, you're not doing auto research.

And then we have what we would call orchestration frameworks. This is something we often see in big companies, where you have multiple LLMs lined up in a row and an orchestration framework over the top that hands work over: writer to editor, drafter to researcher, or researcher to drafter, as it were.

All of these different types of agents, whether they're coding harnesses, orchestrators, dark factories, or auto researchers, have one thing in common: they're using an LLM with tools. And so you can call all of them agents. That's okay. But if you don't understand why they're different and why that matters, you're going to use the wrong kind for the wrong kind of work, and you're going to get into big trouble. And that's what I want to spare you, because, to be honest with you, I see this happen a lot. I have seen people take what I would describe as a single agent designed to do a single productive task and say, "Well, we're going to make a dark factory out of that. We're going to make that into a full multi-agent coding harness for multiple big projects that we want to run through our system." It's not going to work. That's not how that works. Agents have different needs.
Now, you might ask me, "Nate, why is this so complicated? Why can't we have one agent to rule them all?" Well, the answer is that these are just tools that depend on the context around them to do effective work. And if you want to do bigger pieces of work, you have to get more interesting and sophisticated in the way you put these agents together to get that work done. Notice I didn't say more complicated. The art of building good agents is often the art of finding different simple configurations that enable the agent to do the particular work you have in front of you. So when we talk, for example, about orchestration, you might envision a super complicated framework. It doesn't actually have to be complicated. The key to orchestrating is just recognizing that you have multiple distinct jobs that aren't well suited to one long-running dark factory, and you need to find a way to negotiate those handoffs. Whereas with dark factories, you're usually optimizing relentlessly toward an eval, and you want to make sure you construct the pipeline so that the system gets you to that point. So you have to look at the goal you're trying to accomplish and ask yourself: what kind of agent do I need to get that goal done? I want to get into the details on each of these four species, because the more viscerally you understand them, the less you're going to be surprised when we talk about the differences and why they matter, and the more fingertippy you're going to be with those differences.
So let's talk about coding harnesses first. Coding harnesses are in many ways the simplest kind of agentic harness. They are the kind you get when you pull up a terminal and use Claude Code or Codex. All they are, essentially, is an agent taking the place of a developer in an engineering process. The agent has many of the tools a developer would have: it can write code, call files, put files together, read files, write to files, and use tools like search. When you put all of that into the agent's context, so the agent understands what it can do, the agent is able to do effective work.

Now, there are some slight variants, right? I've talked about the fact that Codex tends to prefer to run in a virtual machine, which is more secure; it's not touching your local laptop. Then there's Claude, and Claude tends to like to work on your local laptop. There are pros and cons to each. But the point, from your perspective and my perspective, is that these are very similar overall approaches to the development problem, and we should think of them the same way: the human is now doing a managerial function and the agent is doing the coding. If you do that well, you can give these agents, even if they're single-threaded, just one agent, a fairly long-running task to accomplish, and it will go and work. Andrej Karpathy talks about his agents running 16 hours a day, right? That's not unusual anymore in 2026; a lot of developers have that experience. I start with that because it is in many ways the simplest use case. It's really a single-threaded approach to agents.
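As a rough mental model, the "LLM plus tools plus a loop with feedback" idea behind a coding harness can be sketched like this. This is a toy illustration, not any vendor's actual implementation; `fake_model` stands in for a real LLM call, and the in-memory `fs` dict stands in for the real filesystem:

```python
# Minimal sketch of the "LLM + tools + loop" idea behind a coding harness.
# `fake_model` is a stand-in for a real LLM call; a real harness would send
# the transcript to a model and parse a tool call out of its reply.

def read_file(path, fs):
    return fs.get(path, "")

def write_file(path, content, fs):
    fs[path] = content
    return "ok"

def run_agent(task, fs, model, max_steps=10):
    """Loop: ask the model for an action, execute it, feed the result back."""
    transcript = [("task", task)]
    for _ in range(max_steps):
        action = model(transcript)          # e.g. ("write", path, content)
        if action[0] == "done":
            return transcript
        if action[0] == "read":
            result = read_file(action[1], fs)
        elif action[0] == "write":
            result = write_file(action[1], action[2], fs)
        transcript.append((action[0], result))  # feedback closes the loop
    return transcript

# A scripted "model" that writes a file, reads it back, then stops.
def fake_model(transcript):
    steps = [("write", "hello.py", "print('hi')"), ("read", "hello.py"), ("done",)]
    return steps[len(transcript) - 1]

fs = {}
run_agent("create hello.py", fs, fake_model)
print(fs["hello.py"])  # prints: print('hi')
```

Everything the video describes, from Claude Code to Codex, is an elaboration of this loop: better tools, better context management, and a real model choosing the actions.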
Think of the agent as a stand-in for a person, an engineer, and you'll kind of get the idea. Now, you can of course run multiple agents at once, and some developers do. Peter Steinberger, when he was building OpenClaw, famously described having multiple agents running at a time. In his case it was Codex, and each would get a particular task done; it would take about 20 minutes and they'd check back in with him. So a lot of his day, as a manager of agents, was essentially managing these agents that were all doing their own single-threaded tasks. Just because I talk about it as a single agent doesn't mean developers view their work streams that way. Developers may view their work streams, and often do, as "I'm managing all of these single-threaded agents all day."

Now, if you're wondering what makes this work or not work, I'm going to give you a hint. The hint is decomposition. If you can get the work decomposed well, you can give that work to a bunch of single-threaded agents and you're going to get real far. And a lot of developers like that. They like the challenge. They like the task. They like to decompose. They like to take a big problem that's kind of gnarly, rip it apart, and say, "Okay, this chunk is really well defined. I'm going to give it to this agent. This chunk I'm going to give to that agent," etc. That's how a lot of work gets done in 2026. You have the developer look at the overall shape of the project, maybe with an LLM as a planner assistant. The developer confirms the breakout the LLM planner agent proposes, says, "Okay, let's start to break out this work," and then breaks it out into individual agent tasks. When you are doing that, I want you to notice something: you are already past the stage of spinning up an agent in the chat and just talking to it to make things happen. You may be working on different versions or different sections of your code at once. You may be using a worktree approach. Now, fundamentally, this is about task-scale projects, right?
Everything we've been talking about, the decomposition, individual developers working on this, all suggests you're giving the agent a task and the agent is acting for you. What happens when the work gets bigger? That's where we talk about a more complex variant of this coding harness, one that is really designed for projects. It's important to understand what that looks like, because so often when we want to do big work at companies, we tend to think of big work as linearly tied to the number of engineers who can hold bits of the project in their head. Increasingly, that's actually incorrect. What you want to do is look first at the agent side of things and say: the agent has to be able to understand and grok this work and figure out the right path forward, and we have to support the agent in getting that done.

Cursor has done a lot of work in public, writing it up, helping us understand how to do that. What you really need is a different way of handling a large set of agents and coordinating their work. Effectively, you're moving from a world where the human is the manager to a world where the agent is the manager. And in that scenario, and this is real, right? Cursor has done this across multiple real projects, from browsers to compilers, and actually coded millions of lines of code with this. What you have is an agent that plays the manager, an agent that acts as the planner for the work, and then a system of sub-agents that come in to grind on particular tasks as ordered by the planner agent. So instead of thinking of it as, you know, Cursor got some individual agent to code for weeks and that's how you got a browser, or whatever you may be imagining, that's not how it actually worked. What you actually have is short-running grunt agents, or execution coding agents, spun up by a planner agent to hit exactly one problem, solve it, and get that particular part of the job done. And how that works successfully is by making sure, effectively, that the planner can make notes. The planner agent has to be able to track tasks, keep things in memory, and understand whether a particular piece of work by an executor agent was done well or not.

Now you think, wow, this is complicated. Cursor actually tried to make it more complicated. They tried to add three levels of management, and it didn't work well. One of the things the Cursor team explicitly noted is that simple scales well with agents. You want to keep your harness, this whole system we're talking about for making the agent work well, pretty simple so it can scale effectively. And so I'm describing it as simply as possible precisely for that reason. Because if you don't understand how it works and it's a mystery to you conceptually, you're not really going to understand where to apply it, or where to go and dig in more if you think this is right for you.
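The planner/executor shape described above can be sketched in a few lines. This is a toy illustration of the pattern, not Cursor's implementation; the function names are hypothetical, and plain Python functions stand in for real LLM calls:

```python
# Toy sketch of a planner/executor harness: a planner decomposes a goal
# into tasks, spins up short-lived executor agents, and keeps notes on
# which tasks passed review. The "agents" are stub functions standing in
# for real model calls.

def planner_decompose(goal):
    # A real planner would ask an LLM to break the goal into tasks.
    return [f"{goal}: part {i}" for i in range(1, 4)]

def executor_run(task):
    # A real executor would write and run code; here it just reports back.
    return f"completed {task!r}"

def planner_review(task, result):
    # A real planner would re-read the diff or run the tests.
    return result.startswith("completed")

def run_project(goal):
    notes = {}  # the planner's memory: task -> (result, passed_review)
    for task in planner_decompose(goal):
        result = executor_run(task)          # short-running grunt agent
        notes[task] = (result, planner_review(task, result))
    return notes

notes = run_project("build the parser")
print(len(notes))  # 3 tasks, each executed and reviewed
```

The point the sketch makes is structural: the executors are disposable, and the durable state, the task list and the review notes, lives with the planner.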
The key to understanding the difference between these individual coding harnesses, like the ones Andrej Karpathy talks about, and the big long-running ones like the one Cursor is doing, is recognizing that individual coding harnesses are built for the mind of an individual developer. If you have a team of eight or 16 or 20 developers working on something, you have too much complexity in the room not to use a coding harness like Cursor did. You should be looking at project-level coding architectures rather than individual-level ones. That is one of the biggest unlocks, and it's very counterintuitive, because I see a lot of people who tell me, "We've had so much speed-up with AI. We have AI assistants. We have individual engineers working with four or five coding assistants at a time. It's incredible how much we get done." But if I surface this simple idea, that maybe instead of framing everything around the human at the center, we should frame it around how we can make it easy for the agent to do the work, since we're asking the agent to do all this work anyway, sometimes people look at me like I'm going crazy.
They're like, "What? Why would we do that? We see so much speed-up with them as individual assistants. Isn't that great?" I'm like, "It is great. That's great progress, but from a project perspective, all you're doing is speeding up the human work, and you still have all of the bottlenecks you had before. Only now it might be more complicated, because you have a lot more code review to do than you did before, and the humans are much busier because they're trying to figure out how to manage four different things at once when they used to be individual contributors." So maybe, with this much complexity, and the fact that it's really hard to parallelize all of this work across lots of developers in a big project, we should actually try to build something at team scale. That's really how you understand when you need to be at a level like Cursor's, where they're architecting larger multi-agent harnesses designed to do big work.

Okay, this brings me to dark factories, and I fully admit there's some blur in these definitions. There are some architectures for large projects that are effectively dark factories. But if you want to know the difference: when you are doing a dark factory approach, you have almost no human involvement from the point you put a specification in to the point where the system says "I've passed an eval and I'm done." And the reason you do that is that people have found, as they go farther on this agentic coding journey, that it is often easier and simpler to get the human out of the middle of the process altogether. Once you walk into this process, you want the human heavily involved at the top, doing some of the design, making sure this is what the customer wants, making sure the spec is really good, making sure intent is communicated clearly. And you want the human at the end, making sure that what was built actually matters, making sure it passes the evals, etc. But the less the human is involved in the middle, the less strain there is on the humans and on the whole process, because agents tend to push things through so fast that humans become bottlenecks in the middle. Dark factories are designed to get around that. They are designed as entire, complete systems that hit an eval at the end and iterate back automatically until the software passes the evaluation. And that's really the heart of it: you put in an evaluation, a test the software has to pass before it can be launched.

Now, if you're really bold, and dark factories are often bold plays, people will launch to production from there without having a human look at the code. I will be honest with you: the companies I look at tend to have an awareness of risk that is calibrated to actual production realities, and most of them are rightly uncomfortable with just trusting the agent and saying, "Yeah, yeah, we'll throw it into production. We hope it works well."
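Structurally, the dark-factory loop is just: spec in, generate, evaluate, iterate until the eval passes. A minimal sketch, with stub functions in place of real code-generating agents and real test suites (the pass-on-the-third-attempt behavior is contrived purely for illustration):

```python
# Minimal sketch of a dark-factory loop: spec in, software out, with an
# automated eval gating the exit. Stub functions stand in for real
# code-generating agents and real test suites.

def generate(spec, feedback):
    # A real system would have agents write code from the spec + feedback.
    return {"spec": spec, "attempt": len(feedback)}

def run_evals(candidate):
    # A real eval suite would run tests; here, contrived to pass on the
    # third attempt so the loop has something to iterate on.
    passed = candidate["attempt"] >= 2
    return passed, ([] if passed else ["tests failed"])

def dark_factory(spec, max_iters=10):
    feedback = []
    for _ in range(max_iters):
        candidate = generate(spec, feedback)
        passed, errors = run_evals(candidate)
        if passed:
            return candidate           # ships only after the eval gate
        feedback.extend(errors)        # iterate back automatically
    raise RuntimeError("never passed evals; a human has to step in")

result = dark_factory("build the billing service")
print(result["attempt"])  # 2: two failed iterations, then the gate opened
```

Notice there is no human anywhere inside the loop; the humans sit at the spec (top) and, in most real enterprises, at a review step after the eval gate.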
If you're an enterprise, you're typically having a human look at the code just to make sure there's some accountability there. It's actually something Amazon learned the hard way recently, when they called a bunch of their senior engineers and principal engineers into Seattle to talk about recent AI-generated incidents in production caused by junior engineers and what they were going to do about it. It makes sense to have a sophisticated engineering mind looking at the code at the end to make sure you're confident you got it right. But that being said, you should understand that dark factories are essentially all about pulling the human out of the middle, so humans aren't stressed and bottlenecked in the middle of a fast-flowing agentic process; you're just trying to get the evals done and get the software out the door. And it's like a dark factory, right? The famous dark factories in China are the ones where the lights are off. It's literally dark, and you're making stuff with automated robots all the way through. That's the vision. That's the metaphor we're using when we talk about agents in this system.

And you can see how that's so different from individuals using agents, right? If you give your agent a task and go make coffee for 20 minutes, that's not a dark factory. Similarly, if you're investing heavily in a coding harness, putting a multi-agent project together, checking on it obsessively all the way through, and giving it ongoing guidance when it doesn't go right, you're probably closer to a larger project-scale harness, and you have a fair bit of human involvement. And I admit it's a little bit of a blurry line. If you get your project harness to the point where it's very stable, and you can do large runs of the code without having to look at it in between until it passes the eval, you are getting very close to a dark factory layout.

So what I would say is: think of these as steps along a path toward humans being involved more and more only at the beginning and the end of the software process. If you're an individual, that can look like task-level autonomy for the agent. If you're an organization building project-level agentic engineering, it can look like the human being involved mostly at the beginning and mostly at the end, with some guidance in the middle. And then, if you're really sophisticated and you feel really good, you can have project-level engineering focused on those evals or tests, where you have human involvement from engineers and product at the top and then human involvement from engineers at the end.

Now, what about auto research?
Auto research is kind of a different beast. The three approaches to coding we just covered are all about producing code and working software. Auto research is not; auto research is about optimizing for a metric. It's actually a descendant of classical machine learning techniques. In machine learning, when you teach a machine something, all you're doing is trying to get it better and better at optimizing for a target. When I was teaching machine learning models how to move titles around in video, we were optimizing for the ability to cut letters out and reliably shift them. That sounds silly, but it was actually necessary to resize title artwork, etc.

Now, if you were doing auto research in the age of LLMs, you might be optimizing for different metrics. Tobi Lütke optimized his Liquid templating framework, which powers millions of Shopify shops. That's a case where there's a codebase to optimize against, and you're basically optimizing for a better runtime experience; you're optimizing for the code to run more smoothly in production. That's the metric you're using. Or you could be optimizing for something like how you tune models in production: are you tuning the weights of the models appropriately? That's something we actually got from Andrej Karpathy. He's the one who came out with auto research just a couple of weeks ago, and he used it on his own settings in his quest to auto-optimize his way toward effectively a GPT-2 level of scale. Now, you might think, GPT-2, who cares? It's GPT-5.4 right now. Well, what he's trying to do is demonstrate, as an independent thinker, that it is possible to auto-research your way through an LLM development chain, which is a really important piece of research. And you can use that same technology on any metric you want to optimize, as long as you have sufficient data points. So I've given you an example from Tobi and Shopify on running code, and I've given you an example for the deep LLM science nerds on optimizing your tunings. But if you're not any of those things, you can also use it to optimize conversion rates. Anything you can give it a metric for, in principle, you can auto-research against.

Now here's the difference. Yes, this is an agentic process. The LLM is essentially climbing a mountain by relentlessly experimenting; you can think of it as trying to reach the most optimal condition possible. Many experiments will be failures. Some will be successes. Humans will probably need to review the successes to ensure they're scalable. But this doesn't work the same way the software process does. I talked about dark factories; I talked about coding harnesses. This is not about producing working software. This is about using the power of LLMs to optimize a particular metric. So you have to be able to ask: is my problem software-shaped, or is my problem metric-shaped? Those are super different things. And if you can't figure out the difference between the two, you need to sit with your problem until you understand that it's either a rate, some measure you can optimize, or a piece of software you need to build. Those are usually pretty intuitive. Once I put it that way, people usually say, "Aha, I know what it is. It's one or the other. It's not both."
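The hill-climbing idea at the heart of auto research is simple enough to sketch: propose a variant, score it against the metric, and keep it only if the metric improved. Here is a toy version with a made-up objective; in real auto research an LLM proposes the variants, and the metric is something like conversion rate, runtime speed, or an eval score:

```python
import random

# Toy hill climb: repeatedly propose a variant, score it against the
# metric, and keep only improvements. In real auto research an LLM
# proposes the variants; here a random tweak stands in for it.

def metric(x):
    # Made-up objective with a peak at x = 3.0 (stand-in for conversion
    # rate, runtime speed, eval score, ...).
    return -(x - 3.0) ** 2

def hill_climb(start, steps=500, seed=0):
    rng = random.Random(seed)
    best, best_score = start, metric(start)
    for _ in range(steps):
        candidate = best + rng.uniform(-0.5, 0.5)  # propose an experiment
        score = metric(candidate)
        if score > best_score:                     # keep only improvements
            best, best_score = candidate, score
    return best

best = hill_climb(start=0.0)
print(round(best, 1))  # converges near the peak at 3.0
```

Most proposed candidates fail and are discarded; a few succeed and become the new baseline. That failure-heavy, metric-gated loop is exactly what distinguishes auto research from the software-producing species.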
Okay, now we come to orchestration. I've saved orchestration for the end because it's probably the most complicated one to set up and manage. That's one of the reasons there are lots of startups in the space: they're basically trying to optimize away that complexity for you. LangGraph is an example of an orchestrator. If you have a bunch of different jobs you want agents to do (this agent needs to pick up the ticket, this agent needs to go research for the ticket, this agent over here needs to go do something else, and then we have to close the ticket and comment on it along the way), you're basically handing off a bunch of things to agents, right? That's a customer success example, but you can imagine other kinds. If you're researching and then you're writing, those are two different things. And so you're looking at orchestration.
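That researcher-to-writer handoff can be sketched as a tiny pipeline. The stage names here are illustrative, and stub functions stand in for real LLM calls; frameworks like LangGraph exist precisely because real handoffs (what context to pass, how to recover from a bad stage) are harder than this:

```python
# Tiny sketch of an orchestration pipeline: specialized "agents" (stub
# functions here, real LLM calls in practice) with explicit handoffs.
# Each stage receives the previous stage's output as its context.

def researcher(topic):
    return f"notes on {topic}"

def drafter(notes):
    return f"draft based on [{notes}]"

def editor(draft):
    return f"edited: {draft}"

def orchestrate(topic, stages):
    """Hand work from stage to stage; the handoff IS the orchestration."""
    work = topic
    for stage in stages:
        work = stage(work)   # each handoff: decide what context to pass on
    return work

result = orchestrate("agent species", [researcher, drafter, editor])
print(result)  # edited: draft based on [notes on agent species]
```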
And orchestration is just a fancy way of saying handing off from A to B. Now, I want to be careful here, because if you're listening along you might say, "Hey, the Cursor example felt a lot like this. Isn't the planner agent handing off work to the executor?" Yes, that's true. But keep in mind, in Cursor's case that is all toward one unified goal: they're trying to build a piece of code, and the multi-agent approach is just the most effective way to do that over a long period of time. In this case, you're actually giving these agents really specialized roles. So if you're the person who got excited about giving agents different roles, you're really excited about orchestration, which is a small subset of what agents can do. If you're saying, "Okay, I want a really good marketing agent, and then I want a really good copywriting agent, and then a really good finance agent," you're doing orchestration. And orchestration takes a lot of work from people. You have to be thoughtful about how you hand off. What do you hand off? What is the context? What are the prompts? You're essentially optimizing all of these individual LLM bits in the chain so that you can effectively manage the handoffs along the way.

In my experience, when you start to talk about agentic systems, what you're really doing is talking about the bits of work where you can trust an agent to do something a human doesn't have to look at. And in the orchestration example, there are actually a lot of joints in the process that a human has to look at. That is one of the things that makes a lot of the orchestration approaches right now feel somewhat heavy: they require a lot of human involvement. Now, that doesn't mean they're not valuable. There are some tasks where you do need those specialized roles right now, and so it makes sense to have orchestration platforms like LangGraph for those tasks. The question is really whether the work you're doing on coordination matches the scale of the problem. If you're tackling 10,000 customer success tickets, it is clearly worth it to spend some time to get this right, let alone if it's millions or tens of millions. But if you're only going to do this for 1,000 tickets or 100 tickets, it might not be worth it. So when people talk about orchestration, I often ask about scale, because I'm like, is this really worth going after? Are you going to put the work into all of the prompts and all of the context management and this and that, and just not get the scale back? Or is it worth the value you're putting into it?

Let me close by giving you a cheat sheet, so you know which of these different kinds of agents to go after.
If you are optimizing for just what is in front of you, you should be using a coding harness, right? Your judgment is really the gold standard here. This is what Peter Steinberger did when he used multiple Codex agents to code OpenClaw. His judgment was the gate. That's a coding harness; that's the classical approach, and it's what Andrej does too. In that sense, it's the simplest approach, which is the one we started with in this video, and it should be the easiest one to understand. Now, at project scale, your judgment can still be the quality gate; it just looks a little more like Cursor's approach. It looks like having planner agents and executor agents working against an eval, but ultimately a human is still judging. Now, if you go even further, and your judgment is no longer the key thing, because you trust the agents, they have been tuned so well, the evals are so good, and you're confident they can hit production, or maybe your standards are lower.
Sometimes it's both. Then you might be doing dark factory, where all you're doing is making sure the intent and the specification are good, making sure the agents pass the tests honestly and then go to production, and putting a lot of work into monitoring, making sure the work being done in production is legitimate and the quality is there. That is dark factory work, and it's really the story of optimizing not against a task but against specifications. And it is possible to hybridize these. I talked about how you can do mostly dark factory and have a human check the evals at the end. I often recommend that, because I find you can get a lot of the value out of the middle part that's a dark factory and still get a human judgment at the end, in a place where it's really important.

Now, if you're optimizing against a rate or a metric, that's auto research. You're trying to figure out how to automatically use an LLM to run little mini experiments, on code, on LLM tunings, or maybe on conversion rates, to figure out how to make that metric better. And really, the sky is the limit. If you have a lot of data and you have a rate of some sort, in theory you can apply auto research. We're just at the beginning of using this. Andrej released the package a couple of weeks ago, but that's the principle, and you're going to see a lot more like it. I've already seen forks that make this very generally applicable and let you ask a question in plain English. So, it's coming.
Last but not least, if you're optimizing for workflow routing, you're really talking about orchestration. You're talking about something like CrewAI, or something like LangGraph. And really, what you want to do at that point is make sure it is worth it to do all of those handoffs.
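To make the cheat sheet concrete, the decision logic boils down to something like this. It's a paraphrase of the video's rules of thumb, not a formal taxonomy, and the input keys are made up for illustration:

```python
# Cheat sheet as code: a paraphrase of the four rules of thumb above,
# not a formal taxonomy. The input descriptors are illustrative.

def pick_agent_species(problem):
    """Map a rough problem description to one of the four agent species."""
    if problem.get("optimize_metric"):
        return "auto research"        # metric-shaped: hill-climb a rate
    if problem.get("workflow_routing"):
        return "orchestration"        # specialized roles, explicit handoffs
    if problem.get("trusted_evals") and not problem.get("human_in_middle"):
        return "dark factory"         # spec in, eval-gated software out
    return "coding harness"           # default: human judgment is the gate

print(pick_agent_species({"optimize_metric": True}))   # auto research
print(pick_agent_species({"workflow_routing": True}))  # orchestration
print(pick_agent_species({"trusted_evals": True}))     # dark factory
print(pick_agent_species({}))                          # coding harness
```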
So, there you go. That's my safari tour. Those are the four species of agents we've been able to see doing real work in the enterprise. Please do not confuse them. I see people proposing to use auto research to build software. Don't do that. I see people using long-running coding harnesses and saying, "This is the way I want to build and write a novel." No, don't do that; that would be an orchestration problem, or really, probably a human should do it. There are lots and lots of ways to get agents right. But part of the challenge is that we are now sophisticated enough that we have to be really specific about what agents do and do not do well, and how you configure this supposedly simple idea of an LLM, tools, and a loop into actual working configurations. And that's why I made this video. I want you to walk away and really, really understand that there are at least four different types of agents in the wild, in implementation today. Do not mix them up. Understand what you're building for. Cheers, and good luck with your agents.