You Don't Need SaaS. The $0.10 System That Replaced My AI Workflow (45 Min No-Code Build)
By AI News & Strategy Daily | Nate B Jones
Summary
Topics Covered
- AI Agents Lack Proactive Memory
- Memory Architecture Trumps Model Selection
- Platforms Trap Knowledge for Lock-In
- Build Agent-Readable Open Brain
- Compounding Context Creates Career Edge
Full Transcript
Your AI agent probably doesn't have a brain. And what I mean by that is it doesn't have a system that allows it to read and think through context that you have developed over months and years, and reliably come back and be proactive with it. I published a whole guide on the second brain last month. It was super popular. A lot of people built it. A lot of people improved on it. You can use Zapier. You can use Notion. You can use n8n. You can use an MCP server. You can use Obsidian. I have all of those pieces. But what I don't have is the agent piece, and that matters because in the intervening period, in the last few weeks, we are now at a point where agents are becoming mainstream. Anthropic is working on one. OpenAI hired Peter Steinberger, the inventor of OpenClaw. OpenClaw itself passed 190,000 GitHub stars and spawned over one and a half million autonomous agents in just a couple of weeks. We need a second brain system that is agent readable.
And so what I'm going to lay out here today is the architecture for what I am calling an open brain: a database-backed, AI-accessible knowledge system that you own outright, with no SaaS middlemen that can break or reprice or disappear. One brain that every AI you use, Claude, ChatGPT, Cursor, whatever ships next month, can plug into via MCP. You can type a thought in Slack, and five seconds later it's embedded, it's classified, it's searchable by meaning from any AI tool you touch or any AI agent that wants to touch it. The total cost, and yes, we've benchmarked this, is roughly 10 to 30 cents a month. I'm publishing a companion guide on the Substack to handle the step by step. This video is about why the architecture of an agent-readable system matters much more than the individual tools you choose, why the memory problem we're talking about here is secretly the bottleneck in everything you're doing with AI today, and why people who solve it for agents and themselves will have a compounding advantage that widens every single week.
So first, let's talk about the memory problem that is hiding inside your prompting. If you've been following my videos for a while, you know I keep coming back to one idea: the quality of AI output depends entirely on the quality of your ability to specify. That's not a nice-to-have principle anymore. That is the whole game. I laid out the full framework I see for prompting in 2026 in a video I did last week. From prompt craft through context engineering to intent engineering to specification engineering, that hierarchy is real. And the people who are 10x more effective than their peers have built context infrastructure that does the heavy lifting on all of those pieces, the context engineering and the specification engineering, before they have to type a single prompt. What I want to talk about in this video is how you take that abstract skill set and turn it into a memory system that gives you a leg up on everybody else. In other words, if you're going to do context engineering and specification engineering seriously, you need to invest in a memory system that is yours, that is agent readable, that makes calling and retrieving that context, and therefore specifying, easier.
The best prompt in the world cannot compensate for an AI that does not know what you've been working on, what you've already tried, what your constraints are, who the key people in your life are, or what you decided last Tuesday. And by the way, that is also the constraint when working with agents. They need that context, too. And right now, that's exactly what most of us are struggling with when it comes to AI. Every single time we open a new chat, we often start from zero. Every single time we switch from Claude to ChatGPT to Cursor, we tend to lose things, which is why we gravitate toward one of those systems more than another. Think about how much of your prompting is asking AI to catch up on what you know already. You're burning up your best thinking on context transfer instead of real work. A Harvard Business Review study found that digital workers toggle between applications nearly 1,200 times a day. I get tired saying that sentence. Every switch seems really small, but collectively this is devastating our attention. I have watched this context-switching issue play out over and over again, in my own life and in the lives of others, and what I keep coming back to is the insight that our desire to specify, to be clear with AI, is only getting higher, and it's demanding more of our memory systems, and our memory systems and memory structures are not keeping up.
Memory architecture determines agent capabilities much more than model selection does. That's widely misunderstood. And when you construct memory incorrectly, you're stuck re-explaining yourself forever, or you're stuck in a world where you know how to access memory and the agent doesn't. I believe we can make a stable memory system that is reasonably future-proofed, one that enables us to plug in new tools via an MCP server very efficiently, so we don't have to keep updating our system.
And yes, I want to acknowledge something. Claude has memory now. ChatGPT has memory now. Grok has memory now. Google has memory now. These features are getting better all the time. But think about what they give you and what they don't. Claude's memory doesn't know what you told ChatGPT. ChatGPT's memory doesn't follow you into Cursor. Your phone app doesn't share context with your coding agent. Every platform has built a walled garden of memory, and none of them talk to each other. There's a whole new category of products emerging in early 2026 specifically because platforms refuse to solve this, products like MemSync and OneContext. The problem is real enough to spawn an entire VC-backed industry.
So what you've really got is multiple AI tools getting upgraded all the time, new AI tools being added all the time to experiment with, and a thin, siloed layer of context that only works inside each of those individual tools. You know what? That's not really memory. That is five separate piles of sticky notes on five separate desks.
And now let's add autonomous agents into the picture. The agent category has absolutely detonated in the last few weeks, but the use cases that are shining, like the guy who got thousands of dollars off a car purchase, are shining because the agent has the ability to securely and safely access relevant memories, relevant context, from the user. Whereas agents that just guess context, or have to fill in the dots because you aren't able to provide them secure access to all of your systems, are not going to be nearly as useful for you. And whether we're talking about agents or tools, the part that should bother you even more is that these systems the corporations are designing are all designed to create lock-in. Memory is supposed to be a lock-in on ChatGPT, and ditto on other systems. So you've spent a long time building up history with a tool, and now if you want to try the latest other model, let's say you're on ChatGPT and you want to try Gemini or Claude or another model, you lose all of that context, not because the new model is worse, but because your context is trapped in the old one. And, oh by the way, all of that memory in those individual tools is not agent readable. So as we get to a world where autonomous agents are becoming more and more of a thing, the big corporations are betting that if they can trap you with memory, you will only use their agents, and they will get to keep you and your attention and your dollars forever. But your knowledge should not be a hostage to any single platform. And for most of us right now, frankly, it is. And that's shaping our entire AI future. We don't necessarily have a free choice between tools right now, because the product strategy of these large businesses is to keep you engaged, to keep you entertained. I've talked about how in many cases you're pushing for engagement with these models. One of the reasons ChatGPT-4o was so mourned and so grieved was that it was an engagement-optimized model, and people liked the engagement. It works. Ditto with memory. Memory is engaging. Feeling known is engaging. It works. It's smart product strategy. But you're smart, too, and you don't have to go along with that product strategy. And you might be thinking at this point: Nate, you made a video on the second brain. I can just connect it to my OpenClaw and I'm fine.
Absolutely, you can try that. But you're going to run into a structural mismatch that most people haven't noticed, one that explains why the current generation of note-taking tools needs a different, more structural memory layer underneath. The internet right now is forking. I've talked about that. There's the human web, with fonts, with layouts, with what you're reading. And there's the agent web that's emerging, with APIs, with structured data, built for machine-to-machine readability. That fork is happening to your memory architectures and your notes as well. Your Notion workspace, for example, is built for human eyes. It's built for pages, for databases, for views, for toggles, for cover images. It's beautiful for you. It's useless for an AI agent that needs to search by meaning, not by folder structure. Your Apple Notes are locked into an ecosystem. Your Evernote has a decade of accumulated clutter with no semantic structure. Your bookmarks are a graveyard of things you've meant to read. These tools were built for the human web back in the 2010s. They were designed for you to browse, to organize, to read. They were never designed with the expectation that AI agents would query them. That got bolted on later, much more recently. And the apps adding AI features today are mostly doing it as bolt-ons, like chat with your notes. Great. You have one AI that can kind of search one app. What about the other five tools you use every week? We're still in a world of separate sticky notes on separate desks. You've traded one silo for another. Every second brain app has been reaching for something that required a different layer entirely: infrastructure built for the agent web, not the human web. And that's what I want to focus on here.
Because if you can build infrastructure for the agent web, you are suddenly in a position to make a lot more human-friendly decisions about how you plug into that infrastructure. The infrastructure is yours. It's something your agents can plug into. It's something your chatbots can plug into, but you control and manage it. This frees you from having memory that only lives with one of these corporations and their cloud AI systems. You don't have to depend on ChatGPT memory anymore. It also frees you from having to depend on an individual SaaS company not changing a setting in order to keep your own second brain working. And ultimately, as agents get better, it frees you from having to do as much manual work to retrain a second brain. So this is me essentially giving you a sense of how the unlocking of agents is changing our perspective on memory, changing our perspective on prompting, and changing what we need to be digital citizens. Just as we needed a personal computer to be digital citizens over the 1990s, the 2000s, and the 2010s, we need our own memory architectures to be responsible AI citizens now. But we haven't really had a way to do that. And until very recently, until the last few weeks, we haven't had AI agents that would make it really practical.
Now we do, and now the world has moved, and now it's time to talk about it. So let's get specific. What am I proposing here? Instead of storing your thoughts in an app designed for humans, you should store them in infrastructure designed for anything: a real database, vector embeddings that capture meaning rather than just keywords, and a standard protocol that any AI can speak. I'm calling it Open Brain because the architecture is what matters, and you should not be forced to choose any given model. This is all possible because of MCP, the protocol shift I mentioned briefly above. It started as Anthropic's open-source experiment in November of 2024, but it has since become the HTTP-level infrastructure of the AI age. It's the USB-C of AI: one protocol, every AI. Your data is yours. It stays in one place, but every tool that speaks MCP can read it.
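For the curious, under the hood MCP is JSON-RPC 2.0: a client asks a server to invoke a named tool with arguments. Here is a minimal sketch of what a tool-call request might look like on the wire; the tool name `semantic_search` and its arguments are hypothetical, not taken from the actual guide:

```python
import json

# A JSON-RPC 2.0 "tools/call" request, the shape MCP clients send to servers.
# The tool name and arguments below are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "semantic_search",  # hypothetical tool exposed by the brain server
        "arguments": {"query": "career changes", "limit": 5},
    },
}

print(json.dumps(request, indent=2))
```

Any client that can produce this shape, Claude, Cursor, or something that ships next month, can talk to the same server, which is the whole point of the protocol.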
So, at a high level, I don't want to make you go and click somewhere. Let me show you what this actually looks like.
Your thoughts live in a Postgres database you control, not in somebody else's proprietary format. This is the most boring, battle-tested technology you can imagine. Postgres is not exciting. It's not deprecating. Postgres isn't chasing a growth metric. Postgres isn't VC-backed and needing to hit a billion-dollar unicorn valuation. It's just a standard way of storing data.
And you want that boringness, because everything else needs to plug into it. The nice thing about the database is that if you construct it properly, if you vectorize it, every thought you capture gets converted into a vector embedding, a mathematical representation of what it means that is immediately, natively AI readable. So when you ask, "what was I thinking about career changes last month," it can find your note about how you were considering moving into consulting, or moving into product, even if you never used the word "career" in the original thought. This is called semantic search, and it's a whole different universe from Ctrl+F.
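To make "searchable by meaning" concrete, here is a toy sketch of how semantic search ranks notes: each note becomes a vector, and the note whose vector points most nearly in the same direction as the query wins. The three-dimensional vectors are hand-made stand-ins for real embeddings, which typically have a thousand or more dimensions:

```python
import math

def cosine_similarity(a, b):
    # Angle-based similarity: 1.0 means same direction, near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings"; a real system gets these from an embedding model.
notes = {
    "thinking about moving into consulting": [0.9, 0.1, 0.0],
    "grocery list: eggs, milk, coffee":      [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # stands in for "what was I thinking about career changes?"

best = max(notes, key=lambda text: cosine_similarity(notes[text], query))
print(best)  # the consulting note wins, despite sharing no keywords with the query
```

That keyword-free match is the entire difference between semantic search and Ctrl+F: the query and the note never use the same words, yet the geometry still finds it.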
So here is what it looks like when you have Postgres hooked up to an MCP server. You type into a Slack channel: hey, I was talking with Sarah. She mentioned she's thinking about leaving her job to start a consulting business. She's been really unhappy since the reorg. Five seconds later, the system has stored the raw text, generated a vector embedding of the meaning, extracted the metadata (the people, the topics, the type, the action items), and filed all of it in a real database. Now any AI that you're working with can go see that. If you're in Claude working on a coaching framework: hey, search my brain for notes about people considering career transitions. Found it. If I'm in ChatGPT drafting an email, same search, same result. If I'm in Cursor building a tool and I need to remember a decision I made last week, hit the MCP server, and it's right there. One brain, every AI, persistent memory that never starts from zero.
Even if you start a new tool tomorrow that you've never touched before. So this has two basic parts, right? Capture runs through any tool you have open. You type a thought, it hits a Supabase edge function that generates an embedding and extracts the metadata in parallel, stores both in a Postgres database with pgvector, and replies in thread with a confirmation showing what it captured.
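Here is a rough sketch of that capture flow, with stubs in place of the real services: `embed()` stands in for an embedding-model call, `classify()` for an LLM metadata-extraction call, and an in-memory list for the Postgres table. All names are illustrative, not taken from the guide:

```python
from dataclasses import dataclass

@dataclass
class Thought:
    text: str
    embedding: list   # would be a pgvector column in the real table
    metadata: dict    # people, topics, type, action items

BRAIN = []  # stands in for the Postgres table

def embed(text: str) -> list:
    # Stub: a real system calls an embedding API and gets back ~1,500 floats.
    return [round(len(text) / 100, 3), round(text.count(" ") / 10, 3)]

def classify(text: str) -> dict:
    # Stub: a real system asks a small LLM to extract structured metadata.
    return {"type": "person_note" if "with" in text.lower() else "note"}

def capture(text: str) -> Thought:
    # In the real edge function, embedding and metadata extraction run in parallel.
    thought = Thought(text=text, embedding=embed(text), metadata=classify(text))
    BRAIN.append(thought)
    return thought  # the bot replies in-thread with this as the confirmation

t = capture("Talked with Sarah about her consulting plans.")
print(t.metadata["type"])  # person_note
```

The point of the sketch is the shape of the pipeline, not the stubs: raw text goes in once, and both a meaning vector and structured metadata come out and land in one place.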
The whole round trip takes under 10 seconds. Retrieval runs through an MCP server that connects to any compatible AI client. You have three tools: semantic search, which finds your thoughts by meaning; list recent, which browses what you captured this week; and stats, which shows you your patterns. You can hit this from Claude, from Claude Code, from ChatGPT, from Cursor, from VS Code, from anywhere: you can query your brain through an MCP server. If all of this sounds like Greek to you, the companion guide walks you through a complete setup. Copy paste, no coding, about 45 minutes.
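Conceptually, those three tools map to three simple queries over the same store. Here is a hypothetical sketch with an in-memory stand-in for the Postgres/pgvector backend; a real MCP server would expose these as registered tools, and the row shape here is invented:

```python
from datetime import datetime
from collections import Counter

# In-memory stand-in for the database; each row is (timestamp, topic, text).
ROWS = [
    (datetime(2026, 2, 2), "career", "Sarah is considering consulting"),
    (datetime(2026, 2, 3), "career", "Decided to skip the reorg meeting"),
    (datetime(2026, 2, 4), "tools",  "Tried wiring Cursor into the MCP server"),
]

def semantic_search(query_topic):
    # Real version: ORDER BY embedding distance; here we just match on topic.
    return [text for _, topic, text in ROWS if topic == query_topic]

def list_recent(since):
    # Browse what was captured since a given date.
    return [text for ts, _, text in ROWS if ts >= since]

def stats():
    # See your patterns: how much you capture per topic.
    return Counter(topic for _, topic, _ in ROWS)

print(stats())  # Counter({'career': 2, 'tools': 1})
```

Each function is a thin query; the value comes from every MCP-speaking client being able to call all three against the same rows.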
And you know how I tested this? I asked someone in my life to follow the guide before I showed it to you, and she has no coding experience whatsoever. I said, "Can you get to a point where you can set this up?" And she could, and it took her about 45 minutes. And I'm not kidding about the cost, because the total running cost, on the free tiers of, say, Slack and Supabase, which is what I'm describing here, is roughly a dime to 30 cents a month in API calls for about 20 thoughts a day. So you're going to spend more on coffee this morning than you're going to spend on the system this month.
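A quick back-of-envelope check on that cost claim. The per-token rates below are hypothetical placeholders, not quotes from any provider, but they show why light usage lands in the cents-per-month range:

```python
# Back-of-envelope cost estimate. All rates are hypothetical assumptions,
# chosen only to illustrate the order of magnitude.
thoughts_per_day = 20
tokens_per_thought = 200            # embedding input plus a short classification call
embed_rate = 0.02 / 1_000_000       # $/token, hypothetical small-embedding pricing
classify_rate = 0.50 / 1_000_000    # $/token, hypothetical small-LLM pricing

monthly_tokens = thoughts_per_day * 30 * tokens_per_thought
monthly_cost = monthly_tokens * (embed_rate + classify_rate)
print(f"${monthly_cost:.2f}/month")  # lands in the cents range
```

Even if you triple every assumption, you stay well under a dollar a month, which is why the free database tier plus pay-per-call APIs is the economical shape here.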
Here's why getting memory right at the fundamental architectural level matters beyond the nice feeling we get from building a cool tool. I love to build, and you can probably tell. People who love to build will love to build anyway, but this matters for everybody, not just those of us who like to experiment. We are in the middle of a massive shift in how AI integrates into our daily work. The models keep getting better at a terrifyingly fast pace, and you don't want to fall behind. Opus 4.6 shipped just a couple of weeks back. The agent market is growing, probably in triple figures this year. Three-person engineering teams are routinely outproducing teams 10 times their size. And we're finally seeing this explosion in AI productivity show up even in economy-wide metrics. Erik Brynjolfsson wrote in the Financial Times last month that US productivity grew roughly 2.7% in 2025, double the decade average, and he attributed a fair bit of that to AI and AI agents. But the key is, as I've called out before, AI adoption is not the same everywhere. If you're just talking with a single chatbot, I've said it over and over, you're not really adopting AI and working your workflows around it in the way you need to. The people getting those outsized results are not depending on better models to get there. They're actually restructuring how they work, with AI as a primary collaborator. But you cannot collaborate with something that has no memory of you.
Think about the difference between these two workflows. Person A opens up Claude, spends four minutes explaining their role, their project, their constraints, and the decision they're trying to make, and they get a good answer. Person B opens up Claude, and it already knows her role, her active projects, her constraints, her team members, and the decisions she made last week, because all of that lives, via the MCP server, in Open Brain. All of it is loaded up before she types a word. She asks a question, and she gets an answer informed by six months of accumulated context. If she wants to switch to ChatGPT for a different perspective, she'll get a different model, but she'll get the same brain, the same context, and the same answer quality. Every single tool will have the full picture for her. And the key is that this advantage will keep compounding. Every thought Person B captures makes the next iteration better. Every decision logged, every person noted, every insight saved is another node in a growing knowledge graph that every AI in the system can access. Person A, by contrast, is going to start from zero every single time. The gap between "I use AI sometimes" and "AI is embedded in how I think and work" is the career gap of this decade, and it comes down to memory and context infrastructure. And the gap is going to get wider as Person B continues to accumulate knowledge every week. The people who build persistent, searchable, AI-accessible knowledge systems will have AI that gets better at helping them over time, because it has more context to work with. Every thought you capture makes the next search smarter, the next connection more likely to surface. And that is a compounding advantage that you own, that the big companies don't own. Whereas the people who keep re-explaining themselves in every chat window are going to wonder why AI still feels like a party trick. It's the same tech with wildly different outcomes, and the variable here is your infrastructure.
And one thing I want to call out here: I've given you a simple example where you retrieve a clear answer in text in any AI tool you want via an MCP server. But MCP servers are not just for retrieval. If you construct an Open Brain, your MCP server can work in a lot of different directions and give you advantages you might not think of if you're used to memory living in a single tool. MCP means you can write directly into the brain from anywhere. I really mean that. You can write into Claude on the phone. You can use ChatGPT on the desktop. You can use Claude Code in the terminal. You can rig it up to talk to a messaging app. Any MCP-compatible client becomes both a capture point and a search tool. You're not locked into Slack or any other system. That's what open means. And then think about what you can build on top. It's easy to use MCP to build a dashboard that visualizes your thinking patterns over time, or a daily digest that surfaces forgotten ideas based on what you're working on. And you don't even need to write code to do that, because you can just ask the AI tool of your choice to retrieve the relevant slice of context from the MCP server and build something, because the data is stored in a way that is easy to plug into and easy to access from any tool out there. The ceiling is wherever you decide to stop building.
Now, I want to be honest: the metadata extraction isn't always perfect. The LLM makes its best guess to classify with limited context, and it will sometimes misclassify a thought or miss a name. That matters less than you'd think, because the semantic embeddings handle so much of the heavy lifting in retrieval. Semantic search works even when the metadata is off. The one real requirement for this to work is that you actually use it, because the system compounds. Every thought you capture makes the next search smarter and the next connection more likely to surface. But it needs input. You need to build the habit. You need to be dumping your thinking into the system and letting it do the rest.
Now, if you're a subscriber on the Substack, I've put together four prompts that cover the full life cycle. And I actually want to describe them in the video, because even if you're not a subscriber, you should understand how we can use prompts in the architecture of this system to think more deliberately and make the memory architecture fit our needs. The memory migration is the first one I'm going to suggest. You want to run this right after setup. It extracts everything your AI already knows about you, from Claude's memory, from ChatGPT's memory, from wherever you've accumulated context, and saves it into your Open Brain. Every other AI you connect then starts with that foundation instead of zero. So run it once and let it pull that stuff down.
I'm also building what I call the Open Brain spark, because I sometimes get writer's block. You want an interview prompt that discovers how the system fits your specific work. It asks about your tools, your decisions, your re-explanation patterns, your key people, and then generates a personalized list, organized by category, of what you should be putting into Open Brain regularly. Use it when you're staring at the Slack channel, or your messaging app, or ChatGPT, wondering, what do I type into Open Brain today?
I also put together quick capture templates. These are five sentence-long starters optimized for really clean metadata extraction, including a decision capture, a person note, an insight capture, and a meeting debrief. Each one is designed to trigger the right classification in your processing pipeline. After a week of capturing, you'll find you don't need them as much, because you'll develop your own patterns. But they're really useful for building the habit early, without having to think about how to send the system a coherent message that it's likely to classify correctly.
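A capture template in this sense can be as simple as a fixed leading phrase that nudges the classifier toward the right type. A hypothetical sketch, with template wording that is mine, not from the Substack prompts:

```python
# Hypothetical capture templates: a fixed leading token gives the metadata
# extractor an unambiguous signal about what kind of thought this is.
TEMPLATES = {
    "decision": "Decision: {what}. Context: {why}.",
    "person":   "Person note: {who}. {detail}",
    "insight":  "Insight: {what}",
    "meeting":  "Meeting debrief: {meeting}. Takeaways: {takeaways}",
}

msg = TEMPLATES["decision"].format(
    what="Ship the guide Friday",
    why="agent launch window",
)
print(msg)  # Decision: Ship the guide Friday. Context: agent launch window.
```

The structure does the disambiguation, so even a sleepy one-line Slack message gets filed under the right type.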
The weekly review is another one I put together: an end-of-week synthesis across everything you captured. It clusters by topic. It scans for unresolved action items. It detects patterns across days. It finds connections you missed. And it identifies gaps in what you're tracking. So about five minutes on a Friday afternoon becomes more valuable every week, because your Open Brain continues to grow.
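Mechanically, that weekly review boils down to a couple of group-and-filter passes over the week's captures. Here is a hypothetical sketch with invented field names standing in for the extracted metadata:

```python
from collections import defaultdict

# A week of captures; field names are illustrative stand-ins for the metadata
# the pipeline extracts.
captures = [
    {"topic": "hiring",  "text": "Ask Dana for a referral",      "action": True,  "done": False},
    {"topic": "hiring",  "text": "Drafted the role description", "action": False, "done": False},
    {"topic": "writing", "text": "Outline the memory post",      "action": True,  "done": True},
]

# Cluster by topic.
clusters = defaultdict(list)
for c in captures:
    clusters[c["topic"]].append(c["text"])

# Scan for unresolved action items.
open_items = [c["text"] for c in captures if c["action"] and not c["done"]]

print(dict(clusters))
print(open_items)  # ['Ask Dana for a referral']
```

An LLM doing the review adds the judgment layer on top (patterns, missed connections, gaps), but the raw material is exactly these kinds of grouped slices of the database.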
If we zoom back out: when this thing works, when you've got the Postgres database set up, you're using it from whatever messaging app you want, you're seeing the memory become consistent across all your AI tools, and you're realizing you do not depend on the proprietary, paid-for memory of the big AI companies, something happens that's a little bit hard to describe until you experience it. Your AI, in every single part of the system, whether you're using Claude or ChatGPT or both or Cursor or Grok, starts to know you. Not in the creepy corporate-surveillance way. In the "hey, we were thinking about this last week and it's relevant to what you're asking me now" kind of way. The way a great colleague remembers what matters. So every AI you use gets better. You're less afraid of trying a new AI, because you can just plug it into MCP and it finally has the context.
This is what an agent-readable world makes possible. And I want to call out something really special here. When I published the original second brain guide, I built it before the agent revolution went mainstream, which, again, was only about a month to a month and a half ago. It was useful for humans, and it was designed to solve a fundamental cognitive problem: we have trouble holding stuff in our heads, and we need to see patterns over time. LLMs can help us assess patterns. That's all still true, and you can use this Open Brain in that way. But when the agent revolution came through in the last few weeks, because, again, AI is moving that fast, what we need to move to is a second brain system that is more foundational. Something that enables both us and our agents to reliably read from a system that isn't SaaS controlled, that isn't proprietary-company controlled, that is frankly open-source-LLM friendly. And when we have that, we get two benefits. Yes, the agent can read it, which is in line with where we're going with agents and how quickly they are going mainstream, and that's the reason I'm making this video. But second, look at how much cleaner and clearer the human-readable part of this gets. We get downstream benefits that we did not get when we thought about the system from only a human-readable perspective.
Because if you think about the system from a human-readable perspective, you get something like what I described originally. You focus on SaaS-friendly solutions with graphical user interfaces that humans can easily read, because you want to make the system easy and accessible to build. And that's what I did originally. But if you're willing to get slightly technical and follow a clean step-by-step tutorial to get to something that is a true database, what you get is a future-proofed system that unlocks the human benefit of touching any AI system you may want to try in the future, without any additional effort. And so we humans reap a tremendous amount of value from the clarity that comes from a truly foundational, architected memory system. This reminds me of one of the larger lessons I've been meditating on in the AI revolution, which is that AI is forcing a clarity of thought in our work and our lives that has a tremendous amount of human benefit. Tobi Lütke has said that he thinks a lot of corporate politics amounts to bad human context engineering, which is a very provocative take, and I think it pops out here, because we need extraordinary clarity to work with AI agents. And when we develop that extraordinary clarity through memory architectures that are foundational, through good databases, through a clean MCP server, we get the benefit of cleanly and clearly being able to plug in and work with that memory system anywhere.
We do good context engineering for our human brains when we build the right context engineering for AI, which is kind of Tobi's point about politics. When we do good context engineering for agents, we happen to do good context engineering for people. And that makes people less likely to play politics. So the second brain you built, if you were one of the thousands of people who built it when I talked about it, was always reaching for this. It was reaching for a place where your thinking lives, where it's searchable by meaning, where it's accessible to any tool you use. Those tools solve the capture problem. They solve the organization problem. But what they didn't realize they needed to solve, because it wasn't really there yet, was the agent-readable problem.
Open Brain adds that foundational layer, not by replacing what you built, but by giving it an infrastructure underneath: a database, a protocol, your thoughts, every AI you'll ever use. You can build it in a morning over coffee this weekend. Yes, really, you. And your future AI, and your future self, will thank you for every thought you start to capture. Now, if you have already built a second brain, I'm also including a special migration guide so you can figure out how not to lose the thoughts you've been capturing and get them into a system that is more agent readable going forward. Best of luck. Don't be afraid of this being slightly technical. There have been lots of visuals all the way through this video helping you see what I mean, and you'll find more guides on the Substack if you're interested.
And honestly, I put enough visuals into this video that if you're not ready to hop into the Substack, that's totally fine. You should still be able to get there. You should be able to show this video to an AI and say, "Help me build this," and it should be able to do it.
Cheers.