Full Course (Lessons 1-11) MCP for Beginners
By Microsoft Developer
Summary
## Key takeaways - **MCP: AI's USB-C Connector**: Think of MCP as the USB-C of AI, a universal connector that unifies how models access tools and data. Once something speaks MCP, your agent can use it without needing custom instructions. [03:03], [03:26] - **Token Pass-Through Forbidden**: Token pass-through is explicitly forbidden in the MCP spec because clients can bypass critical security controls, muddy the audit trail, and break trust boundaries between services. [11:02], [11:29] - **45% Consistency Boost in Support**: A global enterprise used MCP to unify customer support, building a Python server with resource registration, prompt management, and ticketing tools, leading to 30% drop in model costs and 45% bump in consistency. [33:03], [33:35] - **Prompt Shields Block Injections**: Microsoft's prompt shields protect against direct and indirect prompt injection attacks through detection, filtering, spotlighting, and delimiters that mark trusted versus untrusted data. [12:55], [13:35] - **Single Responsibility Per Tool**: Each tool should do one thing and do it well, breaking out mega-tools into small focused components to keep code cleaner, APIs predictable, and tools modular and reusable. [37:10], [37:49] - **Playwright MCP Automates Browsers**: Playwright MCP server lets AI agents control browsers to open web pages, click buttons, extract content, take screenshots, and run test flows just by describing what to do. [48:27], [49:01]
Topics Covered
- MCP: AI's USB-C Standard
- Servers Expose Resources, Prompts, Tools
- Defend Prompt Injection, Tool Poisoning
- MCP Cuts Enterprise Costs 30-45%
- Single Responsibility Per Tool
Full Transcript
[Music] Hey everyone, welcome. In this video, we're kicking off your journey into model context protocol or MCP for short.
If you've ever tried building a generative AI app that does more than just chat, you've probably run into some challenges. How do you connect it to
challenges. How do you connect it to real-time data? How do you call tools
real-time data? How do you call tools like a calculator or search engines? And
how do you keep it all scalable and maintainable? That's exactly where MCP
maintainable? That's exactly where MCP comes in. The model context protocol is
comes in. The model context protocol is an open standardized interface that helps AI models like large language models communicate with the outside world. Think APIs, tools, data sources,
world. Think APIs, tools, data sources, all working together through a consistent architecture. MCP allows your
consistent architecture. MCP allows your model to not only respond intelligently, but also take action. As AI applications grow in complexity, custom integrations
just don't scale. You end up with one-off solutions, brittle pipelines, and code that breaks whenever something changes. MCP fixes that by acting as a
changes. MCP fixes that by acting as a universal layer so your model can interact with any tool or resource in a consistent way. And what's really cool
consistent way. And what's really cool is that this standardization opens the door to building smarter, more agentic systems. You can plug in tools once and then reuse them across multiple models
or projects. Plus, it makes it way
or projects. Plus, it makes it way easier to extend functionality further down the line. Here are some of the key benefits.
Interopability.
You can work across different vendors and platforms. Consistency. Models
behave the same way with any tool.
Reusability.
You build a tool once and then you can use them everywhere.
And then faster development. There's no
more starting from scratch each time. At
a high level, MTP follows a client server model. You have an MCP host which
server model. You have an MCP host which runs the AI model, an MTP client, often your app, which sends requests, and an MTP server, which provides tools,
resources, and context your model might need. MTP servers manage things like
need. MTP servers manage things like tool registries, authentication, and formatting responses so the model can understand them. When the model needs
understand them. When the model needs help, maybe it wants to search the web or run some calculations, it talks to the server, which handles the rest. And
here's how it works. The client sends a user prompt to the model. The model
realizes it needs external help. It
sends a request via MTP to the server.
The server executes the tool, returns a result, and the model completes its response. It's simple, clean, and also
response. It's simple, clean, and also scalable. And if you're ready to give
scalable. And if you're ready to give this a try yourself, then good news.
There are MCP servers in Java, JavaScript, Python, and C. So, you can start building your own MCP servers in a programming language you're already familiar with. And here's where things
familiar with. And here's where things get exciting. MCP is being used in
get exciting. MCP is being used in enterprise data integration to connect models with internal tools and CRM.
Agentic AI systems where models autonomously decide which tools to use.
Multimodal apps combining text, images, and audio tools. And then there's real time data access. So responses are always fresh. Think of MCP as the USBC
always fresh. Think of MCP as the USBC of AI, a universal connector. Just like
USBC helped unify device charging, MCP unifies how models access tools and data. Once something speaks MCP, your
data. Once something speaks MCP, your agent can use it without needing custom instructions. This also means you can
instructions. This also means you can scale one model, many servers, each with different capabilities. can add a new
different capabilities. can add a new server and the agent automatically knows what tools are available. There's no
extra wiring needed. And for more advanced setups, both the client and server can have their own LLMs. This enables smarter feature negotiation and richer interactions. Think the way
richer interactions. Think the way Visual Studio Code negotiates capabilities with extensions. That's the
level of flexibility we're talking about here. MCP isn't just about building
here. MCP isn't just about building better apps. It's about building futurep
better apps. It's about building futurep proof ones. With it, you can reduce
proof ones. With it, you can reduce hallucinations by grounding your model in real data. You keep sensitive info secure and you give your model capabilities that it was never trained
for. To recap, MCP is a standard
for. To recap, MCP is a standard interface for AI models to use tools and access contexts. It makes your apps more
access contexts. It makes your apps more extensible, more consistent, and easier to maintain. and you can scale with
to maintain. and you can scale with confidence, adding new tools or servers without breaking things. Think about an AI app you want to build. What tools or
data would help enhance it? And how can MCP help you plug into those more reliably?
That's it for this chapter. In the next video, we'll start exploring the core concepts of MCP, breaking down what makes it tick and how it all fits
together. Don't forget to check out the
together. Don't forget to check out the SDKs over on GitHub and start imagining what you could build with MCP. I'll see
you in the next video.
Hey there. In this chapter, we're diving into the core of model context protocol.
If you've ever wondered how AI tools talk to external APIs or databases, then you're in the right place. MCP is what makes that possible and powerful.
MCP stands for model context protocol.
It's a standardized way for language models to interact with tools, data sources, and external applications.
Think of it like a translator between your AI model and the rest of your digital ecosystem. What makes MCP so
digital ecosystem. What makes MCP so special is its architecture. It's
modular, flexible, and designed to work with any programming language. Be it
Python Java JavaScript.NET you name it. Here's how it works. MCP
uses a client server architecture with three main roles. The host like VS Code or Cloud Desktop is where the user interacts. The client lives inside the
interacts. The client lives inside the host and talks to the server. And then
the server provides tools, data or prompts that the model can use. If
you've ever used an AI agent that could look up a document, call the weather API, or generate code templates, it probably use something like MCP under the hood. So, let's break it down. Hosts
the hood. So, let's break it down. Hosts
are where user prompts originate. They
manage the UI permissions and connect to servers. Clients handle the back and
servers. Clients handle the back and forth. They send prompts to servers and
forth. They send prompts to servers and return model responses. And then servers expose resources, tools, and prompts.
They're the workhorses doing the actual lifting. Servers can provide three kinds
lifting. Servers can provide three kinds of features. First, there's resources
of features. First, there's resources like local files, database entries, or external APIs. There's also prompts
external APIs. There's also prompts which are templates that guide AI behavior. And then there's tools which
behavior. And then there's tools which are executable functions that models can call like get products or fetch weather.
This is where MCP really shines. Tools
are like plugins for your AI. You can
define them, control their access, and use them to make your agent both smarter and more helpful. So here's a simple Python example. We define a tool called
Python example. We define a tool called get weather that takes a location and returns a mock forecast. In the real world, this might call a weather API and return structured JSON back to the
model. Now, let's talk about how all
model. Now, let's talk about how all these parts communicate. When a user makes a request, the host initiates a connection. The client and server
connection. The client and server negotiate capabilities. What tools or
negotiate capabilities. What tools or data are available? The model might request a tool or a resource. The server
executes it and sends back the result.
And finally, the client integrates everything into the model's response and the user sees the result. All of this happens using a structured message
format called JSON RPC. It ensures clear predictable communication between components. Whether you're using
components. Whether you're using websockets, standard input output, or server sent events, MCP builds on JSON RPC with added features like capability
negotiation, tool invocation and result handling, request cancellation and progress tracking, authentication and rate limiting, and most importantly,
user consent and control. Security is
baked in. Every tool call, every data access has to be approved. That means
users stay in control of what's shared, what's executed, and what gets exposed to the model. Want to build your own MTP server? Our curriculum provides examples
server? Our curriculum provides examples innet, Java, Python, and JavaScript. No
matter your stack, you can define tools, serve contacts, and participate in the MCP ecosystem.
So to recap, MCP is your bridge between AI and the rest of your digital world.
It's modular, secure, and built for realworld integration. Whether you're
realworld integration. Whether you're debugging in VS Code or building custom agents, MCP helps your models act on the world, not just talk about it. Here's a
challenge. Design a tool you'd want to build with MCP. What would it be called?
What inputs would it need? What output
would it return? And how would a model use it? That's it for this chapter. In
use it? That's it for this chapter. In
the next one, we'll discuss security.
We'll cover permissions, tool safety, and how to keep your data protected. See
you in the next one.
Hey there. In this chapter, we're discussing one of the most important topics in AI development, security. If
you're building with MTP, it's not just about making things smart, it's about making them safe. And trust me, MCP introduces some new security challenges
that you won't find in traditional software. So, let's talk about those
software. So, let's talk about those challenges and how you can defend against them. The model context protocol
against them. The model context protocol unlocks powerful capabilities by allowing AI systems to interact with tools, APIs, and data. But with that
powers comes new risk like prompt injection, tool poisoning, and dynamic tool modification. These threats can
tool modification. These threats can lead to things like data exfiltration, privacy breaches, or even an AI system executing unintended actions, all
because of something hidden in a prompt.
The good news, you can absolutely defend against them. But it starts with
against them. But it starts with understanding them. So, let's walk
understanding them. So, let's walk through the most common risk one by one.
Earlier MCP specs assumed you'd roll your own OOTH 2.0 authentication server.
That's not ideal for most devs. As of April 2025, MTP servers can now delegate off to external identity providers like Microsoft Inter ID, which is a huge
improvement. But even with this update,
improvement. But even with this update, token mismanagement is a real concern.
Some folks might be tempted to let the client pass its token straight to the downstream resource called token pass through. This is explicitly forbidden in
through. This is explicitly forbidden in the MTP spec because it introduces a mess of problems. Clients can bypass critical security controls. It muddies
the audit trail and it can break trust boundaries between services. The bottom
line only accept tokens issued specifically for the MTP server. If
you're using Azure tools like API management, Microsoft enter ID and the official MCP security guides will walk you through best practices. Now let's
talk permissions. MCP servers often get access to sensitive data, but if you're not careful, they might get too much access. For example, if your MCP server
access. For example, if your MCP server is meant to access sales data, it shouldn't be able to read all your enterprise files. Stick to the principle
enterprise files. Stick to the principle of lease privilege. Use arpback, audit your roles, and review them regularly.
Now, for one of the more AI specific threats, indirect prompt injection. This
happens when malicious instructions are hidden in external context like an email, a web page, or a PDF. When the AI reads that content, it interprets the
hidden instructions and boom, unintended actions, leaked data, and potential harmful content.
A related attack is tool poisoning, where the metadata of an MCP tool is tampered with. Since LLMs rely on that
tampered with. Since LLMs rely on that metadata to decide which tools to call, attackers can sneak in dangerous behavior through tool descriptions or parameters. This is especially dangerous
parameters. This is especially dangerous in hosted environments where tools can be changed after a user approves them, a tactic known as a rugpool. Okay, so what
do you do about all that? Microsoft has
a solution and it's called prompt shields and it's a gamecher.
Prompt shields protect against both direct and indirect prompt injection attacks. They include detection and
attacks. They include detection and filtering. This finds malicious inputs
filtering. This finds malicious inputs in documents and emails. Spotlighting.
This helps the model identify what's a system instruction versus external text.
Delimiters and data marking. This
clearly marks which data is trusted or untrusted.
continuous updates from Microsoft and it integrates with Azure content safety.
Let's not forget about supply chain security. When building AI apps, your
security. When building AI apps, your supply chain isn't just code. It
includes models, embeddings, APIs, and context providers. Before integrating
context providers. Before integrating any component, verify its source. Use
secure deployment pipelines. scan for
vulnerabilities and monitor for changes continuously.
Tools like GitHub advanced security, Azure DevOps, and CodeQL are key allies here. And remember, MCP inherits your
here. And remember, MCP inherits your environment's existing security posture.
So, the stronger your overall setup, the safer your MTP implementation will be.
Here are a few essentials to include.
Follow secure coding practices. Think
OWASP top 10 and OWASP for LLMs. Harden your servers. Use multiffactor
authentication and patch regularly.
Enable logging and monitoring and design with zero trust architecture in mind. So
to recap, MCP introduces new and unique security risk, but most of them can be addressed with the right controls and a strong security posture. and tools like
prompt shields, Azure content safety, and GitHub advanced security help make it easier to build responsibly.
In the next chapter, we're going to shift gears and get handson, walking through the end toend process of creating an MTP server all the way to
deployment. I'll see you there.
deployment. I'll see you there.
Hey there. Ready to build your first MCP project? In this chapter, we're setting
project? In this chapter, we're setting the stage for everything that follows.
Whether you're brand new to MCP or looking to sharpen your skills, this is where your journey begins. In this
chapter, we're going to start with setting up your development environment, followed by creating an agent, connecting a client, and streaming
responses in real time. Also, we're
pretty language flexible here. You'll
find examples in C, Java, JavaScript, TypeScript, and Python.
Here's a quick preview of what's ahead.
First, you'll create your very first MCP server and inspect it using the built-in inspector tool. Then, you'll write a
inspector tool. Then, you'll write a client to connect to that server. You'll
then make your client smarter by adding an LLM so it can negotiate with the server instead of just sending commands.
You'll learn how to run everything inside Visual Studio Code, including using GitHub Copilot's agent mode. Then
we'll introduce streaming with the server send events, followed by HTTP streaming, which is perfect for scalable realtime apps. You'll also explore the
realtime apps. You'll also explore the AI toolkit for Visual Studio Code to test it and iterate quickly.
And of course, we'll show you how to test everything thoroughly.
Finally, you'll deploy your MCP solution either locally or in the cloud. Each
lesson builds on the last, helping you to develop real world MCP skills as you go. You'll be working with official MCP
go. You'll be working with official MCP SDKs for each supported language. These
SDKs handle a lot of the heavy lifting so you can focus on building your service functionality, not worrying about protocol details. And yes, they're all open source. Before you dive in,
make sure your development environment is ready. You'll need an IDE or code
is ready. You'll need an IDE or code editor like VS Code, Intelligj, or PyCharm, the right package manager for your language, and any API keys for the
AI services your app will connect to. We
provided links and guidance throughout to help you get everything set up smoothly. So, what can you expect to
smoothly. So, what can you expect to walk away with? By the end of this chapter, you'll be able to build and test your own MCP servers. connect
clients with or without LLMs, stream content from server to client, and deploy your project to the cloud. It's a
lot, but it's the foundation for everything that comes next. Each
language also comes with a simple calculator agent to help you practice.
These aren't just hello world examples.
Each one gives you hands-on experience with tools, prompts, and resources. And
if you ever get stuck, we've got plenty of resources, sample apps, official documentation, and even full walkthroughs on Microsoft Learn. So
that's your starting point. By now, you should have a clear picture of what MTP is, how it's structured, and how to set up yourself for a success. In the next
chapter, we're going to shift from setup to real world usage, looking at how MCP is applied to practical scenarios and what it takes to build something useful
with it. I'll see you there.
with it. I'll see you there.
Welcome back. Now that you understand the core concepts of model context protocol, it's time to bring them to life. In this chapter, we're exploring
life. In this chapter, we're exploring practical implementation of model context protocol. What it takes to
context protocol. What it takes to build, test, and deploy MTP applications across realworld scenarios. So whether
you're an enterprise developer integrating AI into workflows or a solo builder prototyping your own intelligent assistant, this is where things get even
more hands-on. The real power of MCP
more hands-on. The real power of MCP isn't just in understanding how it works, it's in applying it. This chapter
bridges the gap between theory and practice, giving you the tools to implement MCP across multiple programming languages using official
SDKs built for C, Java, TypeScript, JavaScript, and Python. Each SDK
provides the building blocks you need.
There's simple MCP clients, full featured servers, and support for key MCP features like tools, prompts, and resources. You'll find example projects
resources. You'll find example projects and starter templates in the MCP samples directory, so you don't have to start from scratch. So, let's talk about what
from scratch. So, let's talk about what you're actually building. At the heart of every MCP implementation is the server. And the server is equipped with
server. And the server is equipped with three core features: resources, prompts, and tools. Resources provide context
and tools. Resources provide context like documents, structured data, or files. Prompts shape the interaction,
files. Prompts shape the interaction, guiding the model through templates or workflows. And tools let the model take
workflows. And tools let the model take action, calling functions, hitting APIs, or performing calculations. Think of it like this. Resources are what the model
like this. Resources are what the model knows. Prompts are how it's asked, and
knows. Prompts are how it's asked, and tools are what it can do. The MCP SDK repositories come with sample implementations in your favorite
language. In C, you'll see basic and
language. In C, you'll see basic and advanced server setups, including ASP.NET integrations and tool patterns.
In Java, you get Spring ready builds with reactive programming and type-S safe error handling. The JavaScript SDK supports both Node and the browser with
websocket streaming built in.
As for Python, it's async native with fast API or Flask support and integrates naturally with ML tools. So once you got your server running, what's next?
Testing and debugging.
MCP Inspector is your go-to tool for inspecting live server behavior. After
deploying your server, just connect via your API endpoint, list the available tools, and run them in real time. It's
like a live console for your agent.
Ready to go live? MCP servers can be deployed to Azure using Azure functions.
Even better, you can add Azure API management in front of your MCP server to handle rate limits and token off, monitor performance, balance load, and
secure your endpoints with OOTH via Microsoft Intra. With just a few
Microsoft Intra. With just a few commands using Up, you can deploy everything, function apps, API management, and all dependencies automatically.
And if you're wondering, can I test this locally before I ship it? Absolutely.
These examples are designed to work both locally and in the cloud, so you can iterate fast and scale later. The remote
MCP function samples show how to implement secure productionready servers in C, Python, or TypeScript, complete with network isolation, OOTH, and
support for GitHub copilot agent mode.
Before we wrap up, here are a few key takeaways. Official SDKs make it easy to
takeaways. Official SDKs make it easy to build MCP apps in your language of choice. Tools, prompts, and resources
choice. Tools, prompts, and resources are the building blocks of any MCP server. MCP Inspector and Azure API
server. MCP Inspector and Azure API Management help you test and secure your deployments.
Azure Functions let you scale your solution with just a few CLI commands.
And designing good workflows, well, that's where your creativity comes in.
Now it's your turn. For the exercise in this chapter, you'll sketch out your own workflow, choose the tools that you'll need, and implement one using the SDK of
your choice. In the next chapter, we're
your choice. In the next chapter, we're going to explore more advanced topics in model context protocol implementation.
I'll see you there.
Hey there, and welcome back. If you've
made it this far, congrats. you've built
a solid foundation in the model context protocol, but we're going to kick things up a notch because in this chapter, we're exploring advanced topics in MCP
implementation.
So, if you're looking to build scalable, robust, enterprise ready MCP applications, this is where it gets real. This chapter
is all about making your MCP projects production grade. We'll explore
production grade. We'll explore multimodal integration, scalability techniques, security best practices, and how to integrate with enterprise systems
like Azure and Microsoft AI Foundry.
Each of these areas helps MTP move from simple prototypes to serious infrastructure. Especially important for
infrastructure. Especially important for modern AI applications that operate at scale.
Let's start with multimodal capabilities. Think beyond text. What
capabilities. Think beyond text. What
happens when you want your MCP server to understand images, process audio, or generate video summaries? In this
lesson, you'll see how to incorporate multimodal response handling into your MCP architecture, enabling richer interactions in broader application scenarios. Whether you're integrating
scenarios. Whether you're integrating with tools like SER API or enabling real-time streaming responses, multimodal support is becoming a must-have.
Next up, scalability.
MCP servers aren't just for local testing. They're meant to be deployed in
testing. They're meant to be deployed in high demand environments. That means
your architecture should support horizontal scaling, container orchestration, and load balancing strategies. You'll explore patterns for
strategies. You'll explore patterns for scaling MCP services in cloud environments, and how to optimize for both performance and cost. Of course,
with scale comes responsibility, especially when it comes to securing your MCP server. Security is built into the MCP protocol, but real world deployments require more. This chapter
covers OOTH 2 flows for both resource and authorization servers, protecting endpoints and issuing secure tokens, authenticating users with Microsoft Inter ID, and integrating with API
management layers. These aren't just
management layers. These aren't just best practices. They're essential when
best practices. They're essential when your MCP server is part of a regulated or sensitive system. Enterprise
integration is another major theme.
You'll learn how to connect your MTP server with enterprise tools like Azure Open AI and Microsoft AI Foundry. These
integrations unlock features like tool orchestration, real-time web search, external API connections, and robust identity and access management. If
you're building agents that operate in enterprise ecosystems, these lessons will help you futureproof your approach.
This chapter includes a ton of hands-on samples from routing and sampling strategies to real-time streaming and even integrating with Azure container apps. And if you're up for the
apps. And if you're up for the challenge, there's an exercise that walks you through designing an enterprisegrade MCP implementation for a specific use case. It's a great way to
apply everything you've been learning.
Let's wrap with a few key takeaways.
Multimodal MCP systems allow for richer user interactions. Scalability requires
user interactions. Scalability requires thoughtful architecture and resource management. Security is non-negotiable
management. Security is non-negotiable in enterprise environment.
Enterprise integration brings MTP into alignment with real world AI workflows and optimization ensures your MCP server performs reliably at scale.
So whether you're working on your first enterprise project or just curious about what's possible with MCP, these advanced topics will give you the tools to build
with confidence. In the next chapter,
with confidence. In the next chapter, we're going to explore how to engage with the MCP community and how to contribute to the MCP ecosystem.
Hey there and welcome. In this chapter, we're going to explore one of the most rewarding aspects of working with the model context protocol, community and contribution.
Whether you're looking to file your first issue, share your own tools, or become a core contributor, this chapter will help you understand how to get
involved with the MCP ecosystem and why your voice matters. The MCP community is more than just maintainers and documentation. It's a growing network of
documentation. It's a growing network of developers organizations tool builders, and users who are all working together to shape how intelligent applications interact with models. At
the core, you'll find core protocol maintainers like Microsoft and other orgs that evolve the spec. tool
developers who create reusable packages and utilities, integration providers, companies using MCP to enhance their own platforms, endusers, the developers
building apps powered by MCP, and of course, contributors, community members like you, helping improve the ecosystem.
The official community lives in a few key places. First, there's the MCP
key places. First, there's the MCP GitHub organization, and then there's also the specification site. And then finally, they're also in
site. And then finally, they're also in GitHub discussions, issues, and pull requests. But there are also
requests. But there are also communitydriven channels like tutorials, blog posts, language specific SDKs, and open forums. If you've ever wanted to share your insights or find
collaborators, those are great starting points. So, how exactly do you
points. So, how exactly do you contribute to MCP? You don't need to write a brand new protocol extension for your first try. Contributions comes in many forms. Whether that's contributing
documentation, answering community questions, or resolving bugs. So, let's
walk through a few common paths. You
could contribute code to the core MTP protocol, like adding support for binary data streams in C. This might mean defining new interfaces, handling stream
metadata, and returning results in a consistent testable way. If you're more into back-end reliability, you might squash a bug in the Java validator or improve how nested schemas are handled.
And if you love building tools, Python is a great place to start, like the CSV processor tool that filters, transforms, and summarizes data based on a model's
request. Not a software engineer? No
request. Not a software engineer? No
problem. Some of the most valuable contributions are documentation, tutorials, translations, and testing.
Creating sample apps or improving error messages helps the entire community grow. Let's say you've got a great idea
grow. Let's say you've got a great idea for a tool. Whether it fetches thought quotes, translates text, or gets the weather forecast, you can create a
reusable MCP tool, package it for others, and then publish it to a package registry just as you would with any other open-source library. So, let's
look at a few ways that might work innet. That might be a Nougat package
innet. That might be a Nougat package like MCP finance tools. In Java, a Maven artifact like MCP weather tools. In
Python, a Pi package like MTP NLP tools.
Each tool defines its name, parameters, schema, and behavior, and can be registered, reused, and even discovered through community built registries.
Speaking of registries, imagine contributing a whole service that helps a community find tools. This fast API based MTP tool registry is one example of how developers are building
infrastructure around the protocol, not just within it. So what makes a good contribution? Well, it starts with
contribution? Well, it starts with starting small. Fix a typo, write a
starting small. Fix a typo, write a test, answer a GitHub discussion question. From there, follow the
question. From there, follow the project's style guide, document your changes, and submit focused pull requests. And remember, collaboration
requests. And remember, collaboration isn't just about the code. It's about
communication.
Whether you're opening a PR or reviewing someone else's, prioritize clarity, correctness, and completeness. Be
thoughtful about version compatibility.
And always, always document breaking changes. MCP is still growing and your
changes. MCP is still growing and your feedback shapes the protocol. The truth
is, anyone can contribute to MCP and everyone benefits when you do. If you're
ready to make your mark, head over to the GitHub repository, explore open issues, and find a way to get involved that suits both your skills and your
interests. In the next chapter, we're
interests. In the next chapter, we're going to be exploring how early adopters have leveraged model context protocol to resolve realworld challenges and drive
innovation across industries. I'll see
you there.
Hey there. In this chapter, we're exploring how early adopters are using the model context protocol in the real world. This isn't just theory anymore.
world. This isn't just theory anymore.
MCP is helping solve real problems in finance healthcare enterprise automation, and even browser automation.
So, let's walk through what we can learn from the folks who are putting MCP into production.
From customer support bots to diagnostic assistance, companies are using MCP to standardize how AI models, tools, and data all work together. MCP creates a
unified interface that can connect multiple language models, enforce security policies, and maintain consistent behavior across complex systems. Let's take a look at a few case
studies. A global enterprise used MTP to
studies. A global enterprise used MTP to unify their customer support experience.
The result, a single interface for multiple LLMs, centralized prompt templates, and robust security controls.
They even built an MTP server in Python to handle support inquiries, complete with resource registration, prompt management, and ticketing tools. This
led to a 30% drop in model costs and a 45% bump in consistency. In healthcare,
MTP helped one provider integrate general and specialist models while maintaining full HIPPA compliance.
Using a C# MTP client, they implemented strict encryption, auditing, and seamless EHR integration. The result,
better diagnostics, less context switching and more trust from physicians. A financial institution used
physicians. A financial institution used MCP to standardize risk models across departments. Their Java based server
departments. Their Java based server featured SOC compliant access controls, version control, PII reduction, and audit logging. They saw a 40%
audit logging. They saw a 40% improvement in model deployment cycles.
Now, if you're thinking, "Cool, but how do I build one of those?" Don't worry.
We have a selection of hands-on projects that you can try right now. Here are
three ways to get your hands dirty with MCP.
First, we have a multi-provider MCP server. This route requests to different
server. This route requests to different model providers based on metadata. Think
OpenAI, Enthropic, and local models all under one roof.
For the next project, we have enterprise prompt management. Design a system to
prompt management. Design a system to version, approve, and deploy prompt templates organizationwide.
As for project 3, there's a content generation platform. You can use MTP to
generation platform. You can use MTP to generate consistent blogs, social posts, and marketing content with tracking and review workflows. Each of these teaches
review workflows. Each of these teaches you critical MCP skills from routing logic and caching to prompt versioning and API design. MCP is evolving fast and
here's where it's headed. Multimodal
support for images, audio, and video, federated infrastructure for sharing models securely, edge computing support, and even marketplaces for templates and
tools. These trends are shaping how MTP
tools. These trends are shaping how MTP will power everything from tiny IoT devices to enterprise AI marketplaces.
There's a growing list of open source projects you can explore. For example,
there's Playright MTP Server, which lets AI agents control browsers. There's also
Azure MTP, a fully managed enterprise ready MTP server. There's also the Foundry MTP playground, which is great for prototyping and experimenting. And
then there's tools like NL web, which turns websites into natural language endpoints for AI assistance. Each one
shows a different angle on what MTP can do and how it's being used to drive innovation. Early adopters are proving
innovation. Early adopters are proving that MCP isn't just a protocol. It's a
foundation for building scalable, secure, and consistent AI systems. If you're building with large language models, you don't have to reinvent the
wheel. MCP gives you the structure to do
wheel. MCP gives you the structure to do it right and now you've seen how others are doing just that. In the next chapter, we're going to explore advanced
best practices for developing, testing, and deploying MCP servers and features within production environments. I'll see
you there.
Hey there and welcome. In this chapter, we're exploring best practices in building robust, scalable, and maintainable MTP servers. So whether
you're creating a tool or deploying to production, these practices can help ensure that your implementation is reliable, secure, and easy to work with
over time. So let's break things down
over time. So let's break things down step by step. Let's start with architecture. One of the most important
architecture. One of the most important principles to follow is single responsibility.
Each tool should do one thing and do it well. This keeps your code cleaner, your
well. This keeps your code cleaner, your API is more predictable, and your tools easier to test and maintain.
Instead of creating one mega tool that tries to handle forecasts, alerts, history, and more, you should break it out into small focused components. This
makes your tools more modular and reusable across workflows.
Next, prioritize dependency injection.
Tools should receive services like database clients, APIs, or cache through their constructors. This makes them
their constructors. This makes them easier to test and more configurable for different environments. You'll also want
different environments. You'll also want your tools to be composable. That means
designing tools that can feed into one another to create more complex workflows. Think of them like Lego
workflows. Think of them like Lego bricks for your server. A well-designed
schema is a gift to both your model and your users. Always provide clear
your users. Always provide clear descriptions for your parameters. Define
constraints like minmax values or allowed formats and keep your return structures consistent. This helps the
structures consistent. This helps the model understand how to use the tool properly and reduces unexpected errors when tools are invoked. Error handling
should be thoughtful and layered. Catch
exceptions at the right level and provide structured responses with meaningful error messages. Avoid
crashing on the first problem. Make it
clear what went wrong and ideally how to fix it. You can also implement retry
fix it. You can also implement retry logic for transient issues like timeouts or temporary service failures using exponential backoff patterns.
Performance matters especially in production. Use caching to avoid
production. Use caching to avoid repeated expensive operations. Adopt
asynchronous patterns for input outputbound task and throttle tool usage to prevent overloading your system. This
is especially critical for tools that call external APIs or process large data sets. A little optimization goes a long
sets. A little optimization goes a long way. Security is non-negotiable.
way. Security is non-negotiable.
Validate all inputs. Check for empty strings, enforce length limits, and guard against injection attacks. Make
sure users are authorized before accessing protected resources. And if a tool might expose sensitive data, redact it by default unless explicitly
requested, and only if the user is authorized. Now, let's talk about
authorized. Now, let's talk about testing. Every MCP server should include
testing. Every MCP server should include unit tests for each tool and resource handler, schema validation tests, integration tests for the full request
response life cycle, end to-end tests that simulate real modelto tool workflows, and performance tests to evaluate how your server behaves under load.
Don't just test the happy paths. Test
edge cases, error scenarios, rate limits, and more.
When designing tools, lean on established patterns.
Chain of tools. One tool feeds into the next. Dispatcher
next. Dispatcher routes requests to specialized tools.
Parallel processing. Run multiple tools at once for speed. Error recovery. Try
fallback tools if the primary fails.
Composition. Combine smaller workflows into larger ones. These patterns
increase flexibility and help you build workflows that scale and recover gracefully.
Let's recap the essentials. Design each
tool with a single focused responsibility.
Use dependency injection to improve testability.
Write clear schemas with strong validation.
Handle errors gracefully and log them meaningfully.
Optimize performance with caching, async patterns, and throttling.
Protect your tools with strict validation and authorization.
Test at all levels, unit, integration, end to end, and underload.
And finally, use common workflow patterns to organize complex behavior.
As you've seen, following MCP best practices means thinking holistically about architecture, security,
performance, testing, and user experience. In the next chapter, we're
experience. In the next chapter, we're going to explore realworld case studies that demonstrate practical application
of MCP in various enterprise scenarios.
I'll see you there.
Hey everyone. In this chapter, we're diving into something a little different. Rather than introduce a new
different. Rather than introduce a new concept or diagram, we're going to be exploring just what happens when MCP is
actually put to work. This chapter is packed full of realworld case studies that demonstrates just how versatile and powerful the model context protocol can
be in enterprise settings. So why case studies? Because theory only gets you so
studies? Because theory only gets you so far. Once you understand the
far. Once you understand the fundamentals of MCP, it's incredibly helpful to see how other teams are applying those principles, how they're
solving actual business problems, streamlining workflows, and connecting AI to the real world.
Let's kick things off with the Azure AI travel agents reference implementation.
This one is all about multi- aent orchestration, a full travel planning app where each AI agent plays a specific role, searching destinations, comparing
flights, and recommending hotels. It
combines Azure Open AI, Azure AI search, and MCP to create a secure, extensible, and enterprisegrade experience. Think of
this as your blueprint for building coordinated AI systems that work across data and tools.
Next up, a workflow automation scenario.
Updating Azure DevOps items based on data from YouTube. It sounds simple, but it's powerful. Using MCP, this setup
it's powerful. Using MCP, this setup extracts metadata from videos and automatically updates work items in Azure DevOps. The takeaway, even
Azure DevOps. The takeaway, even lightweight MCP implementations can eliminate repetitive tasks and ensure data stays consistent across platforms.
How about accessing live documentation through the terminal? The real time documentation retrieval example shows how a Python client connects to an MCP
server to stream relevant Microsoft Docs in real time right in the console. It's
great for developers who prefer the command line and want fast contextual answers without leaving their dev environment. Now for something
environment. Now for something interactive, a web-based study planner powered by chain lit and MCP.
Users input a topic and time frame, for example, AI 900 certification in eight weeks, and the app builds a personalized weekly study plan in real time with
conversational responses.
This one's a great example of how MTP can enable adaptive learning experiences in the browser. If you're a VS Code user, you'll love this one. The NE
editor docs case study shows how MTP brings Microsoft Learn Docs right into your code editor. search, reference, and insert docs into Markdown without ever
switching tabs. And when paired with
switching tabs. And when paired with GitHub Copilot, it creates a seamless AI powered documentation workflow inside
your editor. Finally, there's the APIM
your editor. Finally, there's the APIM MCP server walkthrough.
This case study shows how to build and configure an MCP server using Azure API management.
You'll see how to expose APIs as MCP tools, set rate limits, apply policies, and even test your setup directly from VS Code. It's a great entry point if you
VS Code. It's a great entry point if you want to start hosting your own MCP server using Azure infrastructure. So,
what do all these examples have in common? They're proof that MCP isn't
common? They're proof that MCP isn't just a framework, but a toolkit for building real scalable AI first solutions. So whether you're creating a
solutions. So whether you're creating a multi- aent travel assistant or streaming documentation to your terminal, MCP is the connected tissue
that links your models, data, and tools.
These case studies are meant to inspire you and help you recognize patterns that you can apply to your own projects. Here
are the key takeaways.
MCP works across a wide range of scenarios from simple automation to complex multi- aent systems. It integrates cleanly with Azure tools,
open AI models, and web or desk environments.
Reusable components and architectural best practices can help you move faster.
And finally, you don't need a huge project to get started. Even lightweight
use cases can show a return on investment quickly. All right, now that
investment quickly. All right, now that you've seen MCP in the wild, it's time to get hands-on again. The next chapter introduces you to a four-part lab which
will provide hands-on exercises for connecting an agent to either an existing or a custom MCP server with the AI toolkit. I'll see you there.
AI toolkit. I'll see you there.
Hey there and welcome. In this chapter, you're going to be introduced to the AI toolkit for Visual Studio Code extension. Your skills will
extension. Your skills will progressively build in learning how to use the AI toolkit in the context of creating an agent that's connected to
tools from either an existing or a custom MCP server. Let's start with module one. Module one is all about
module one. Module one is all about getting familiar with the AI toolkit extension in VS Code. Once installed,
you get access to a full AI development environment right inside your editor.
You'll start with the model catalog where you can explore over 100 models from OpenAI to GitHub hosted models.
Whether you're doing creative writing, code generation, or analysis, there's something for every use case. Then
there's the playground. This is where you test your prompts and tweak parameters like temperature, max tokens, and top P, helping you understand how
different models behave. And finally,
you'll build your very own custom agent using agent builder. You define its role, personality, parameters, and even tools it can use. Once you've mastered
the basics, model 2 introduces model context protocol. Think of it as the
context protocol. Think of it as the USBC of AI. MCP lets you connect your agents to external tools and services in a standardized way. You'll get hands-on
with Microsoft's own MCP server ecosystem, which includes integrations for Azure, Dataverse, Playright, and more. The highlight, you'll build a
more. The highlight, you'll build a browser automation agent powered by the Playright MCP server. This agent can open web pages, click buttons, extract
content, take screenshots, and even run full test flows just by describing what you want it to do. And you'll configure all of this directly from the agent builder, selecting playright from the
MTP catalog, assigning tool capabilities, and designing intelligent prompts that drive web automation task.
Now that your agent can use external tools, it's time to level up. Module 3
gets into the nitty-gritty of custom MCP server development. In module 3, you'll
server development. In module 3, you'll build your own MCP server from scratch using the AI toolkits Python templates.
Your project, a weather MTP server that responds to natural language questions like, "What's the weather in Seattle?"
You'll use the latest MTP SDK, configure advanced debugging with MTP Inspector, and run your server live alongside your agent inside VS Code.
You'll also learn how to structure an MTP server project, upgrade dependencies, set up launch configurations and background tasks, and test your server using both agent
builder and the inspector.
It's a professional-grade dev workflow that prepares you to create and debug any kind of custom tool your agent might need. And finally, in module 4, we put
need. And finally, in module 4, we put it all together with a real world use case. You'll build a GitHub clone MCP
case. You'll build a GitHub clone MCP server that automates the steps developers often do manually. Cloning a
repo, creating directories, and opening the project in VS Code. This project
includes smart validation and error handling, osaware logic to launch VS Code or VS Code insiders, integration with GitHub copilot and agent mode, and
a clean user experience driven entirely through natural language prompts. It's
the kind of intelligent developer tool you can actually use in your day-to-day work. In just four modules, you'll go
work. In just four modules, you'll go from installing the AI toolkit to building production ready MCP servers that makes your agents truly powerful.
We can't wait to see what you create.
Loading video analysis...