How to Build a Personal Context MCP
By The AI Daily Brief: Artificial Intelligence News
Summary
Topics Covered
- Data Ready Is Just a State of Mind
- The Context Repetition Tax Is Unsustainable
- Your Personal Context Portfolio Is API Documentation for You
- Modular Over Monolithic for Living Context
- Build It with AI Step by Step, No Judgment
Full Transcript
In a world of agents, everything is about context. And today we are going to help you build your own personal context portfolio and MCP server. Today we have another episode in our build week series. And boy does this one cut to the heart of building. Right now we officially live in the agentic era, and agents, as we know, need context to do
their jobs well. And yet context is one of those things that is very simple to articulate and much harder to actually organize in a way that is useful. Now
this is obviously a big problem in the context of organizations. Michael Chen from Applied Compute recently dropped an article on X called "What to Expect When You're Deploying AI in the Enterprise."
He writes: at Applied Compute we spent the past 6 months embedding inside companies to deploy AI into production workflows, i.e. actually sitting in their offices, filing tickets, reading Confluence pages, fighting for access to data, and shipping agents into production that improve over time. There
is surprisingly little written about working with large organizations in the age of AI. So, this is our attempt to fill that gap. And big and blaring right at the front is number one: data ready is just a
state of mind. The gap between we have data and we have data in a format that an AI system can learn from is enormous.
It surprises everyone, even teams that have already wrangled internal data for incredible companies. Most enterprise data was never structured with AI consumption in mind. It's difficult to imagine a more challenging starting point for a data project, which, at its core, is what every agent deployment is: a hard data problem. Now, he's using the word data, but obviously in this case this is at least partially synonymous with context. One of the big differentiators between organizations that are leading and organizations that are lagging is that the lagging organizations tend to operate without their AI systems having access to context. In other words,
they're dropping Copilot on people's heads and hoping it all works out, which is very different than becoming an AI native organization. Now, there are lots of organizations who are working on the context problem for the enterprise. Just to take an example from the last 24 hours at the time that I was recording this episode: Notion, whose basically entire play for enterprise AI is a pitch that they already have your enterprise's context, announced database agents, which they describe as a team of little librarians in your database, keeping it up to date automatically using context from your pages, your workspace, and the web. So okay, we have an acknowledgement that context in the enterprise is tough, and we're even seeing a lot of work on the context that agents can provide each other around their tool use. Andrew Ng
recently wrote, "Should there be a Stack Overflow for AI coding agents to share learnings with each other? Last week, I announced Context Hub, an open CLI that gives coding agents up-to-date API documentation. In our new release, agents can share feedback on documentation. What worked, what didn't, what's missing. This feedback helps refine the docs for everyone with safeguards for privacy and security." So, Context Hub for agents is all about the context they need to use tools better.
And yet you might have spotted what all of those efforts don't have: an emphasis on the individual. Now recently
we had a moment where the challenge of the portability or lack thereof of personal context reared its ugly head.
In the wake of the Pentagon threatening and then following through on their designation of Anthropic as a supply chain risk, and OpenAI's quickly regretted decision to announce their deal with the Department of Defense on the same night, there was a big push over the course of the next couple of days to drop ChatGPT and switch to Claude. That was of course when Claude hit number one in the App Store for the very first time. Now into that maelstrom, the team at Anthropic released a feature to make it easier to import saved memories into Claude. Switch to Claude without starting over, they promised. And of
course, this is a big deal. If you have been investing in Claude or ChatGPT or Grok or Gemini or whatever system you use, over time it's learned so much about you that the idea of having to explain all of those things to a new LLM once again becomes a reason just not to switch. Now, Claude's approach to importing memory was pretty simplistic. In fact, all it was is a copyable prompt that Claude wrote that says, basically: I'm moving to another service and need to export my data. List every memory you have stored about me as well as any context you've learned about me from past conversations, etc., etc. Basically, it was a prompt that asked ChatGPT to write up everything it knew about you so you could hand that document off to a new chatbot. Not bad,
but there's got to be something more, right? Well, that's what we are talking about today. We're going to go through and talk about and build a personal context portfolio. In other words, a portable, machine-readable representation of who you are, so that in the future every AI agent, tool, or system you use knows about you coming in, and you are no longer dealing with memory- and context-based product lock-in. So the
problem, as we've discussed, is that every time you set up some new agent or some new Claude project or onboard some new tool (and presumably, if you're listening to this show, that happens more than infrequently), you have to re-explain yourself from the ground up: your role, your projects, your preferences, your constraints, even how you like to talk to the machine. And when that was a very occasional switch, maybe that was in the realm of annoyance. By the time you're dealing with three agents or five or 10 agents, though, it's completely untenable. And as you get into the world that we're going into, where every week there are going to be new types of agents and agentic surfaces that you're interacting with, it is going to become absolutely critical to have a way to get out of paying this context repetition tax. Now, importantly, the context repetition tax doesn't just waste time, it also degrades quality. And I
guarantee you that even if you have been willing to provide your context to a new agent you were working with, the sheer time and effort it takes to explain everything fully means that there was probably a lot that was left out. The
solution that I'm proposing is a personal context portfolio, a structured set of markdown files that together represent you as a context package.
Effectively, it's an operating manual for any AI that works with you that knows about your roles, your projects, your team, your tools, your communication style, your goals, your constraints, your expertise.
Effectively, it's API documentation, but for you, a single source of machine readable truth about who you are that any agentic system can read. Now, a
couple of design principles for this. One: obviously, this is going to be markdown-first. You might have just yesterday listened to the agent skills masterclass. And even if you haven't, you're probably familiar with this new primitive that is skills. Skills are effectively a folder of information that updates the knowledge base and context for any given agent, all rendered in markdown files. Every AI system on Earth can read markdown. It is the universal interchange format for context. And so the personal context portfolio is going to be markdown-first.
Second, we are going for modular, not monolithic. This is not going to be one giant about-me file. We have separate files and separate templates for separate parts of the whole that is you. This means that you can give different agents different pieces of what they need. It allows agents to grab what's relevant and ignore what's not. It also means, and this gets to principle three, that this is living and not static. This is not a thing you write once; it's a thing you maintain, or better, that your agents help you maintain. As projects change and priorities shift, the personal context portfolio should evolve with you. And again, because it is modular, it's not just that you'll change what's in this initial file set. You'll probably find reasons to expand the files that are actually in the portfolio. Now, obviously, the last piece, which is sort of implicit in the markdown-first principle as well, is that this is meant to be portable across everything: working with Claude, ChatGPT, OpenClaw, Gemini, and whatever else comes next. By being markdown-first, it is just files, and you can bring them anywhere. So, what are the files? I want to stress that this is not necessarily going to be comprehensive for everyone, or even the right breakdown, but I wanted to have a clean starting point that would be significantly better, like 10x better, than nothing. And so the
portfolio template that we've put together is divided into 10 different dimensions. The identity.md file is first. It's your name, your role, your organization, what you do, in a single paragraph. This is you distilled down into a page. If the agent can only read one file, you want it to be this one.
Next up is roles and responsibilities.md.
This isn't your job description. This is
your actual lived experience. It
explains what your job or your activities actually involve day-to-day.
It can be anything from what decisions you make to what you produce to who you serve to what your week looks like.
Current projects.md goes a level down.
These are the active work streams. This file contains the status, priority, key collaborators, goals, KPIs, and what done looks like for each. My guess is that this will be the file that changes
most often because presumably from week to week, what is a current project versus a past project versus an icebox project is going to change. Team and
relationships.md is the key people you work with, their roles, how you interact with them, what they need from you, what you need from them. When you've got agents prepping meeting notes or agendas
or one-on-ones, this is going to be one of the key files that they need. Tools
and systems.md is what you use, how it's configured, what's connected to what. Rather than agents running off and using whatever tools they think would be useful, this gives them a picture of your stack so they can make sure that what they're doing actually comports with the systems you already have.
Communication style.md. Maybe this one seems less important to you, but goodness gracious, for me at least, every time I interact with agents, I am always surprised at how much this one
matters. That could be because I am completely allergic to any hint of sycophancy or fluff or coddling or wavering. Effectively, there are a lot of things about the way that models on average communicate that I very much dislike. And so communication style.md can include everything from how you write, how you want things written for you, your tone preferences, your formatting preferences, and what you dislike. This is a file that is both internal-facing and external-facing. It impacts how the agent communicates with you, but it also helps make every output of the agent feel like yours. Goals and priorities.md is a level up from current projects. This
is about what you're optimizing for right now, whether the right frame of reference is this week, this month, this quarter, this year, or your career overall. It gives your agents the ability to weigh decisions and recommendations appropriately, viewing the work as a continuous whole rather than siloed in the context of any individual project. Preferences and constraints.md is the always-do-this, never-do-that file. This could be a very diverse set of things for different people. If you're using agents to help plan your travel, maybe this is dietary restrictions or time zone constraints. Maybe it's about tools you refuse to use, or strong opinions you have about formatting. Basically, this is all the stuff that, out of the box, an agent is going to get wrong most of the time unless you tell it how to get it right.
Domain knowledge.md is your expertise areas, your industry context, key terminology. These are the things that you know that a general-purpose AI doesn't. If you work in biotech, this is where the agent learns that you know what a phase 2 trial is and doesn't need to explain it. Now, this is another one that I think could be very expansionary over time. At the beginning, it might be just a log of what you know, but over time it might actually impart some of that so your agents in the future know it, too. Finally is decision log.md, the history of past decisions and the reasoning behind them. I actually think that this could end up being the most underrated file, because when an agent is helping you think through a new decision, knowing how you've decided things before is enormously valuable. So
that's the 10 files that make up the template of the personal context portfolio. But how are you going to fill this out? You, my friend, live in AI world. So you are certainly not going to write this by hand. My goodness. Instead, you are going to have the AI interview you to get it done. For each file, you're likely going to follow a pretty similar loop. First of all, if you're using something like Claude or ChatGPT, you'll probably want to create a project to house this all, so that the context of the process itself gets shared across the different instances of these interviews. And effectively, you're going to go through a loop of interview, draft, reaction, and revision, and so on and so forth, until you feel like you've gotten enough information to be going with. Now,
because we live in build world, I didn't just want to describe this all to you guys. I wanted to actually provide some resources. So, here are a couple. First of all, I've put up the personal context portfolio as a public repo on GitHub. This is going to have templates for all of those files that I just mentioned.
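As a rough sketch, a portfolio folder built from those 10 dimensions might look something like this on disk (the file names here are illustrative, not necessarily the repo's exact naming):

```
personal-context-portfolio/
├── identity.md
├── roles-and-responsibilities.md
├── current-projects.md
├── team-and-relationships.md
├── tools-and-systems.md
├── communication-style.md
├── goals-and-priorities.md
├── preferences-and-constraints.md
├── domain-knowledge.md
└── decision-log.md
```

Because it is just a folder of markdown, the same structure works as a repo, a project upload, or the backing store for an MCP server.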
And the templates include not only the ultimate output structure that you're going to want, but an interview protocol that you can hand your AI build partner. Each of the 10 files has that interview protocol as well as the output structure. There's also an overall interview protocol that you can use as you're setting up your project. If you want to get a sense of how this might look in practice, there are three synthetic demonstration examples: one for an entrepreneur, one for an executive, and one for a knowledge worker. There's also a folder called wiring, which gives some resources for turning this into a Claude project or an MCP or an API layer. And we'll come back to that in just a minute. So hopefully this makes it fairly easy to get up and going. And like I said, all this is available on play.brief.ai. And I might even put this one on the main section of the website. But come
on, man. We live in agent build world.
We can go a step further than this, right? Of course we can. So, for those of you who don't want to bother with all these messy templates and interview protocols and all of that, you are just going to use the personal context portfolio app that we built. This is exactly what it sounds like. It's got two sections: an interview, which is powered by Opus 4.6, and the portfolio that it's building persistently in the background. The interview is designed to never be fully done. It works through questions based on the overall goals, trying to fill out all 10 of those portfolio files, but it will engage with you for as long as you want. If you want to come back, it will continue to talk with you, adding more information in. Now, the cool thing about this is that rather than having to break this up into 10 different interviews, like you might have to if you were using a Claude project, when you answer one question, if it's relevant for different portfolio files, that's all going to be added at once. You can see, for example, when I explained what Superintelligent did, helping enterprises with AI strategy, it added notes to the identity file, the current projects file, and the domain knowledge file. This speeds things up.
Anytime you want, you can download your portfolio. And obviously this is of course totally private to you, completely free, and hopefully a faster leg up to get started. Now, honestly, given that this is just one episode, I should not have spent as much time as I did trying to get the actual interaction right. But I've got to say, I think this one is pretty useful. So you should go check it out. The only reason, by the way, that I'm not giving you a dedicated URL right now outside of the podcast website is that I'm not sure what dedicated URL I'm actually going to use for this. Now, once you've got your portfolio downloaded, the last piece of the puzzle is how you make it highly transportable. Now, to be clear, you don't necessarily need to do this step. If you host, for example, your own personal context portfolio on GitHub, many agents are going to be able to interact with that and use it. Plus, if you have the folder of markdown files, you're going to be able to drop that into any chatbot. But for the sake of exploring more advanced modes, let's now put your personal context portfolio into an MCP server. Now, for this, we're going to lean heavily on what I think is
the single most important advice that I give anyone about how to learn how to use AI, which is to lean on the AI as your tutor and build partner. I've been managing this whole endeavor as part of my AIDB training project on Claude, and I had gone through the entire process. I could tell, as I was transitioning from the part of the project where I was getting these templates up on GitHub to the part where I wanted to put my personal context into an MCP server, that I was exhausting Claude's context window. For me, that usually manifests as it getting short and kind of lazy. And so I had to write a handoff that was specifically about this MCP goal. And we dove in. And pretty much all the time that I spent on this was going back and forth with Claude to help me figure things out. Now, the first job of this was Claude wrapping its head around exactly what I wanted out of the experience: whether it was read-only or read-write, what the auth model was, whether it was a combined resource or the individual files. From there, it produced this massively long document with all the steps, which, looking back now, were the steps that I would ultimately go through, as well as this particular bunch of code that I would use alongside a readme and a couple other documents. Now, for the sake of both the podcast and my own purposes, I said, "This is 1,000% too complex. Walk me step by step through creating an MCP server, and I'll figure out how to explain it." And to put a fine point on this, AI is zero judgment. There is no risk of you looking or seeming dumb, because there's no one on the other end of the line to think that.
When you are trying to get something explained step by step, even if it tries to race ahead, demand that it go back and do things more simply. So in our case, from there, Claude got way basic. It reminded me first what an MCP server is mechanically: a program that responds to a specific protocol. An AI tool sends it a request saying, what do you have? And it responds with a list of resources.
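That two-step exchange can be sketched in a few lines of Python. To be clear, this is a conceptual illustration of the list-then-read pattern, not the real MCP SDK, and the URIs and file names here are made up:

```python
from pathlib import Path

# Hypothetical mapping from resource URIs to the markdown files
# in the portfolio folder (names are illustrative).
PORTFOLIO = {
    "context://identity": "identity.md",
    "context://current-projects": "current-projects.md",
}

def list_resources():
    """Answer the client's 'what do you have?' request."""
    return sorted(PORTFOLIO.keys())

def read_resource(uri, root=Path("portfolio")):
    """Answer 'give me this resource' with the file's markdown content."""
    filename = PORTFOLio[uri] if False else PORTFOLIO[uri]  # look up the backing file
    return (root / filename).read_text(encoding="utf-8")
```

A real server wraps exactly this logic in the protocol's transport and message framing; the heart of it is just listing URIs and returning file contents.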
The tool says, give me this resource, and it responds with the content. So of course, in our case, the AI tool wants to know more about you or your projects or your team, and the MCP server has all of those resources at the ready. Now, step two: it divided the way that an MCP server can run into the two categories of local or remote. Is this all just for things going on on your machine, or do you want yourself or others to be able to access it from anywhere? Ultimately, I wanted to do both, so we dove in. Now, once again, it immediately tried to not go step by step but to give me a whole bunch of information at once, and I had to remind it to slow down. This is the process that I would recommend you follow. Pull up Claude or ChatGPT or Gemini or whatever your LLM of choice is once you have this personal context folder, and have it walk you step by step through how to set it up, first locally and then on the web. And one thing to keep in mind as you're doing that is
that the vast majority of the time I spent on this was sharing screenshots of things that went wrong and asking it to help me figure it out. For example, this little message: one MCP server failed. A lot of the work is troubleshooting. Now, in that case, we figured out that port 3000 on my computer was already taken, so it was a relatively easy switch. But that's the type of thing you're going to experience as you go back and forth on this. Another small tip: one thing that I've noticed is that when Claude or ChatGPT are giving you some code that you need to run somewhere or copy-paste into Cursor or VS Code or something like that, once they've given you the initial block of code, they'll often say, now just change this one thing. I have found personally that a lot of the errors that I run into are accidents in the copy-pasting of the changing of that one thing. And so I will frequently say, even if it's repetitive, when you're asking me to change one line from this whole 77-line document, just give me the whole new 77-line document so I can copy-paste the entire thing at once. A couple of other errors we ran into: one of them was a file naming mismatch. And after that, pretty much things were running.
Finally, after about 10 or 15 minutes, we got to the point where I could say, "What do you know about my identity?"
And it was able to pull up the identity file. Ultimately, this was actually a very small amount of work. Almost all of the time was in the troubleshooting.
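Many of these hiccups yield to a quick, scriptable check. For instance, a port collision like the port 3000 issue above can be confirmed with a few lines of Python (a sketch of the idea, not anything Claude specifically generated):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when some process is already listening there.
        return s.connect_ex((host, port)) == 0
```

If `port_in_use(3000)` comes back True, pick another port for the MCP server rather than fighting over it.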
Now, to deploy it remotely, there were just a couple more steps. First, we had to create a GitHub repo. Next, we had to make sure all the portfolio files were copied into the project. We had to change a line or two in the server code. And then, step by step, it told me exactly what to do to get everything pushed up to GitHub. We were able to deploy it using Railway, which took basically no time at all. The jump from a local MCP server to something that was available on the web actually took less time than the local setup, just because we ran into fewer issues. My recommendation is that it's worth taking the time to work with an AI build partner like Claude or ChatGPT to go through this process, even if you're not sure how useful this particular MCP server will be. I do think that a lot of the value you're going to get out of the follow-along for this is going to be just in the creation of the files, which is of course why I spent most of my time building out the context portfolio interview agent. But it is a really great way, and a pretty simple and clear context, to learn how to use MCP. And so if you haven't yet, give it a try. Overall, though, that is how we go from endlessly repeating ourselves, telling AI about ourselves and our projects and our teams, to doing it once, allowing it to stay updated, and giving every agent and AI that you interact with access to the same pool of information. Hopefully, this was a useful one. Have fun this weekend trying it out for yourself. For now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always, and until next time, peace.