One Year of MCP — with David Soria Parras and AAIF leads from OpenAI, Goose, Linux Foundation
By Latent Space
Summary
Topics Covered
- MCP Evolved from Local to Enterprise Remote
- Separate Auth Server Unlocks Enterprise Scale
- Streamable HTTP Fails Horizontal Scaling
- Progressive Discovery Solves Tool Bloat
- Tasks Enable Long-Running Agent Research
Full Transcript
Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.
>> Hey, and here we are, finally joined in the studio for the first time. Welcome back, David, from Anthropic/MCP.
>> Yeah, hey. It's nice to finally talk to you in person. Last time, like a year ago, it was over VC, and this is way more fun.
>> I watched it back. It's been a crazy eight months, and I think we just celebrated the one-year anniversary of MCP.
>> Yes, at least of the public announcement. And also last night or yesterday was the Agentic AI Foundation launch.
>> Yeah, that was nice. It was a nice event. It was nice to see the Anthropic office.
>> Very good food, I would say. In terms of my food bench, Anthropic does rank over OpenAI. [laughter]
>> At least that's what we have going for us.
>> Awesome, man. Do you want to give just a quick overview of what's happening with MCP and how you're donating it to the foundation and then we'll do kind of like a one-year recap of the protocol itself and then we'll have the rest of
the leads from the foundation join us to do more of the high level.
>> Yeah, yeah, that sounds good. I mean, where are we at at the moment, right? A year ago we launched it, and then we had this crazy adoption over the last year, which honestly felt like an eternity. We had this crazy growth and adoption, initially through Thanksgiving and Christmas, very early, with a lot of builders building MCP servers. Then you had the first big clients coming in, like Cursor and VS Code, and then you had this inflection point around April with Sam Altman and Satya and Sundar all posting about MCP and that they're going to adopt MCP at Microsoft, at Google, at OpenAI. That was really the big inflection point.
>> Yeah.
>> But in all of that time, you also had to do a lot of work on the protocol itself, right? We launched originally as basically local only — you could build local MCP servers for Claude Desktop. Then in March this year we moved into how you can do remote MCP servers, really connecting to a remote server, and introduced the first iteration of authentication. In June we revisited that and improved it quite a bit so that it works better for enterprises in particular, and we were very, very lucky that in that time from March to June we were able to have absolute industry-leading experts, people who literally work on OAuth itself, help us with some of the pieces and how to get them right. Then we focused a lot on security best practices and that type of work. And now I feel we have a really solid foundation, and at the end of November we just launched the most recent iteration of the protocol — finally the next bigger improvement, which is long-running tasks, to really allow for deep-research-type tasks and maybe even agent-to-agent communication. So I think we're just stepping into this territory now where, okay, we have really solid foundations, we have one more big primitive we want to add, we want to make a few more scalability things work, and then we're going to get into a phase where it probably becomes a bit more stable. And so yeah, it's been an absolutely crazy year, man.
>> You did say agent-to-agent, and there is an A2A protocol. I'm curious, when the Agentic AI Foundation got formed, was there any discussion about any of these other protocols being a part of it? You know, Sean already wrote a post called "Why MCP Won."
>> One of my favorite posts.
>> Maybe it was early — it was before Sam and all the other guys.
>> Yeah. Yeah. You were right.
>> Well, I mean, I think it was just obvious that was going to happen.
>> Yeah. So we of course have conversations around what else is in the market — there are payment protocols that are interesting, and so on. But when we wanted to start a foundation, we wanted to make sure of two things. First, we wanted to start small. For us it's the first time we at Anthropic have an open source foundation, so this is all new to us; we really wanted to start it small, making sure we're learning along the way and are able to shepherd this in the way we feel is best for the industry, together with OpenAI and Block. The second part is that we really wanted to see things that have a lot of adoption, or are de facto standards, at least on the protocol side. I don't think any of the other protocols are quite there yet. But of course, if they get there, then we're super open, as long as they're complementary to what's in the foundation. On the application side we're a little bit more flexible and more open, but on the protocol side I think we really want to make sure the foundation doesn't encompass five protocols for the same communication layer. So yeah, there was discussion, but I think for now we just want to start small.
>> Is there a double hat that you wear now with the foundation, or are you more focused on MCP?
>> I am still mostly focused on MCP. It's a bit of a double hat. I think people need to understand that the foundation part is mostly just an umbrella to make sure the projects under it always stay neutral, and I think that's really the most important part to understand, because the rest of it is, okay, how do we use the budget of the foundation for events — things that are quite dry. The technical parts of MCP stay the same: in the way we govern MCP, nothing has really changed, and so that's still my job as the lead core maintainer, shepherding the processes, shepherding the protocol forward. Beyond that, the additional role is that I'm also going to be on the technical steering committee of the foundation, which will figure out what projects we want to have in the foundation. If someone comes to us with a project, the people that have projects in it will decide: is this something we would want, is this something we feel is well-maintained, has a lot of adoption, is not going to go away? We want to make sure the foundation has super interesting and important projects and is not a dumping ground, the way some foundations might have ended up.
>> That's true. So we're going to meet some of the others later, but maybe we'll just focus back on MCP development. You covered a lot — there have been four spec releases.
>> That's a lot.
>> Yeah. Some people may have missed some of them, is what I'm saying. And I think it's really interesting how you've continued to work on really important parts. I always think it's very hard to follow up a major success with a sequel, because it's hard to repeat that impact, but I think every single time you've actually managed to focus on something important.
>> Yeah.
>> So maybe we'll start with the March one, which is streamable HTTP, and the auth spec, right? I don't know if you want to highlight any others, but we'll just catch people up on that stuff.
>> Yeah. That was such an important one.
>> It was the number one requested thing.
>> Yeah. It really opened up this remote thing, and we already knew back in November and December that the next big thing would be: how can you do this remotely, and authentication is quite important. One of the things I think people very rarely notice about MCP is that MCP is very prescriptive in each layer; other protocols are not like that. For example, if you want to do authentication and the client and the server don't know each other, you need to do OAuth, right? We wanted, very early, to have one way to do things, and so we really focused on: what does this mean, how do we build a protocol that has the streaming properties we require, and then how do we do authentication? In the first iteration of authentication I think we did an okay job, but we got some aspects wrong, and most of them honestly were just me not understanding enterprises well enough. But then again, I think the strength we have with MCP — and the one thing, if anything, I'm proud of — is building a community of people that can come together and help me figure [ __ ] out, because I have my set of experiences of what I'm good at, and enterprise authentication turns out is not one of them. There are way better suited people for that, and so that's March.
>> I saw you post that, but I didn't really dig into the details. Was it the typical SSO type of authentication issue?
>> The main issue is that in OAuth there are two components. There's the authorization server, which gives you the token, and then there's the resource server, which takes the token and gives you the resource in return. In the first iteration of our authentication spec, we combined them together into the MCP server, which, if you were building...
>> Unusable, yeah.
>> It's kind of usable if you build an MCP server as a public server — you're a startup, you're building a server for yourself, you want to bind this to the accounts you already have — that is completely usable. The reality in enterprises is that you don't authenticate with the server itself, you authenticate with some central entity: you have some identity provider, an IDP. Most people don't even notice that's happening; all they know is, oh, in the morning I'm going to log in with Google and then get access to all my work stuff, right? But that's effectively the IDP. And if you combine these into the same server, you just can't do this anymore. So all we needed to do is say: okay, the MCP server is a resource server, and here is how you get the token from the authorization server. We have opinions on how you should do it, but it's kind of separated. That's what happened in the June spec, where we separated this out and worked through all of these things — how do you do dynamic client registration, and other aspects which were also part of the March spec. We can talk about that; that's a whole other story of us actually pushing the boundaries of what OAuth can do, because we're trying something very unique with MCP. But yeah, that was the big part in March — the first iteration of the authentication spec — and then fixing it in June. Yeah.
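For readers following along, here is a rough sketch of the separation David describes: the MCP server acts only as an OAuth resource server and points clients at an external authorization server via protected-resource metadata. The header, paths, and field names below are illustrative of the June-spec direction, not quoted from it.

```typescript
// The MCP server never issues tokens; it only validates them and tells
// unauthenticated clients where the authorization server lives.

// 1. An unauthenticated request gets a 401 pointing at resource metadata.
function handleUnauthenticated(): { status: number; headers: Record<string, string> } {
  return {
    status: 401,
    headers: {
      "WWW-Authenticate":
        'Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"',
    },
  };
}

// 2. The metadata document names the external authorization server(s),
//    which in an enterprise is typically the corporate IDP.
const protectedResourceMetadata = {
  resource: "https://mcp.example.com/mcp",
  authorization_servers: ["https://idp.example.com"],
};

// 3. Once the client has a token from the IDP, the MCP server only checks it
//    and serves the resource.
declare function validateWithIdp(token: string): boolean; // hypothetical helper

function handleAuthenticated(token: string): { status: number } {
  return { status: validateWithIdp(token) ? 200 : 401 };
}
```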
>> What's the state of agents authenticating on my behalf? Because even today, with OAuth, I still have to, you know, log into Linear and whatnot.
>> OAuth is for the most part a very human-centric protocol. It just tells you how you obtain a token if you don't have a token. Once you have a token, it actually doesn't matter — you just pass it as the bearer token. So we're not very prescriptive about what agent-to-agent authentication, or authentication on behalf of agents, would look like. There are ideas we're looking into, and I don't have all the specifics, but we're not prescriptive there in the same way we are elsewhere. Technically, the moment you have a token — which might be bound to a workload identity or something like that — you can still pass it to the MCP server; we're just not telling you how to obtain it yet. So people do this, and they can, particularly when they're within an enterprise and have a somewhat closed ecosystem. But if the client and the server don't know each other, we just don't have a good solution for now.
>> Yep. And then on the remote side, you went from local servers to SSE and then streamable HTTP. Any learnings you want to call out there? Any regrets, or learnings for others?
>> Ah man, transport. That one discussion has never stopped, since the very beginning of last year. We literally just spent the last two days at the Google offices with a bunch of senior engineers from Google, Microsoft, AWS, Anthropic, OpenAI, asking: what do we need to do here to really make this solid? When we look at March, we wanted to get a transport going that basically retains a lot of the properties we had from standard IO, because we really believed — and I still believe this today — that MCP should also enable agents, and agents are inherently somewhat stateful; there's some form of long-term communication going on between the client and the server. So we always looked for something like that. We also looked into alternatives — okay, what happens if we do WebSockets, for example — and we found a lot of issues with doing a proper bidirectional stream. So we asked: what is the right middle ground between something that can be used in the simplest form, where people just want to provide a tool, but that can be upgraded to a full bidirectional stream if you need it, because you really have complex agents communicating with each other? That's the intent streamable HTTP was born with. And I think there's something that, in retrospect, we got right and something we got wrong. I think we got right that we are really leaning just on standard HTTP. We got wrong that we made a lot of things optional for the clients to do. The client can connect and open this return stream from the server, but it doesn't have to. And the reality is, no client does it, because it's optional. So a lot of the bidirectionality goes away, and features like elicitation and sampling are just not available to servers, because the client doesn't have that stream open — the client implements the minimum viable product: ah, I don't have to do it. So that became an issue. I think there are lessons there.
The second part of the lesson is that the way we designed the transport protocol requires some form of holding state on the server side. That is fine if you have one server, but the moment you scale this horizontally across multiple pods, in containers or something like that — well, now if you get a tool call and then an elicitation and an elicitation result, you might hit two different servers, and you need to find a way for those two servers to somehow bring that result together. You effectively need some form of shared state — Redis, memcached, whatever usually pops up — and that's kind of okay; we've seen this done in PHP applications and Python applications. But it's not fun at scale, and we know that some companies — the Googles of the world, the Microsofts of the world — are doing MCP at a scale where I can't tell you the numbers, but it's in the millions of requests. So now it becomes a problem, right? So now we're sitting here asking: how do you build an iteration of the protocol that keeps these principles — make it as simple as possible for simple MCP servers, but allow the full spectrum of really bidirectional streaming if you need it — and also makes it scalable? I think we're just about to find the right solutions, but it's just complicated, because a lot of the technology today doesn't really offer this. People either do the simple thing, like REST, or they do a full bidirectional stream with WebSockets or gRPC and so on, and we need kind of both.
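As an aside, here is a minimal sketch of the horizontal-scaling workaround David alludes to: when the pod that issued an elicitation is not the pod that receives the client's result, a shared store (Redis, memcached, etc.) bridges the two. The store interface and message shapes are hypothetical, not part of the spec.

```typescript
// Hypothetical shared pub/sub store (e.g. backed by Redis).
interface SharedStore {
  publish(channel: string, message: string): Promise<void>;
  subscribe(channel: string, onMessage: (message: string) => void): Promise<void>;
}

declare function sendToClientStream(sessionId: string, msg: unknown): Promise<void>;

// Pod A: sends the elicitation down whatever stream it holds for the client,
// then waits for the result to arrive on the shared channel, since the
// client's HTTP POST with the answer may land on a different pod.
async function requestElicitation(store: SharedStore, sessionId: string, question: string): Promise<string> {
  const result = new Promise<string>((resolve) => {
    void store.subscribe(`elicitation:${sessionId}`, resolve);
  });
  await sendToClientStream(sessionId, {
    jsonrpc: "2.0",
    id: 1,
    method: "elicitation/create", // server -> client request
    params: { message: question },
  });
  return result;
}

// Pod B: receives the client's result over HTTP and forwards it to whichever
// pod is waiting on it.
async function handleElicitationResult(store: SharedStore, sessionId: string, result: string): Promise<void> {
  await store.publish(`elicitation:${sessionId}`, result);
}
```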
>> What's it like to be in a meeting like that, where you have all these impressive companies and everyone is senior and everyone has an opinion? Is it fun?
>> Yeah. I get to work with some of the best engineers in the industry. It's insane.
>> Okay. Well, who decides?
>> Usually we're trying to get to consensus. The reality is, technically I decide at the end of the day, but I think that's more of a formalism. What you're trying to do is really narrow down what the real problems are that we all agree on, what the things are where we don't necessarily agree, and then, within those bounds, build the best solution. It takes a while, it takes a lot of iterations, but honestly it's so much fun, because you get to see these unique problems from the companies. You see some of the identity of the companies in the problems themselves, right? Google has a different set of problems than Microsoft, and a lot of it comes from their ways of building things, and the problems from Anthropic look different from the problems from OpenAI. But what I love about all of this is that sometimes you step back and realize you're sitting in a room with all these competitive companies, but you're actually building something together. I love that — I've been in open source for like 25 years.
>> Yeah, I love this kind of stuff. When a standard works, this is the ideal.
>> And these people are all amazing. I just learn so much from all my peers. I'm very grateful to be in this situation.
>> Yeah, this reminds me of the IETF standards process.
>> Is there some discussion about how this works as a private group versus something more traditional?
>> It's an interesting one. It does look a little bit like the IETF, but the IETF is slightly different. The IETF is an open forum where everybody can go, and the result is that the IETF is very consensus-based and, by accident — not necessarily because they want to be — quite slow in its processes, which is very good in many ways.
>> It cannot be undone, right? Once it's up, it's up.
>> Yeah. For example, the OAuth 2.1 spec has been in the works for three or four years and they're just not done with it, right? That's the pace at which IETF standardization works — these things can take a long, long time. I think that's good for certain pieces, but AI at the moment is so fast-moving that you're somewhat forced to find a smaller group. That's why we run MCP as a really traditional open source project, with a core maintainer group of about eight people that basically decides everything, with input from everybody else — we get input, people can make suggestions, and a lot of the changes don't come from the core maintainers, but they are the ones that decide. It's a middle ground of being somewhat consensus-based but also somewhat a bit of a dictatorship, which can be good if you want to move fast, which MCP wants to do at the moment.
>> How do you balance the influence of model improvements with how you shape the protocol? Because obviously you have Anthropic and OpenAI doing post-training on these models to make them better at tool calling, and you have preferences on the shape of the protocol, versus people who are not aware of how you're structuring that. Do you share some of these things? Does the protocol influence some of the model post-training, or vice versa?
>> I'm not 100% familiar — I'm a product person, I'm not fully familiar with everything we do on the research side, for sure. But it influences the post-training in the sense that we're making use of things like the MCP Atlas that we have in our model card, making sure we take this large set of tools in the wild and make sure our models work with them. But the primitives of the protocol are actually very rarely influenced by model improvements. There's a sense that we do anticipate the exponential the models are on in terms of improvement, and that we rely to some degree on mechanics you can put into the model training. Let me get more concrete. For example, people have had long conversations around context bloat from MCP servers. That happens because MCP opens up the door to a lot of tools, and if you naively take all the tools and throw them into the context window, you just get a lot of bloat. It would be the equivalent of taking all the skills, all the markdown files, and throwing them all into the context — you would also have a lot of bloat. But we always knew that you can do something like progressive discovery, and that's a general principle: you can give the model some information and let the model decide to gain more information, right? And of course here is some of the foresight we have because we are the big model companies: we know that we can train for this if we want to, and what the training does is just optimize it. The model can do it in principle already — any model that does any type of tool calling can do it — but if you train the model for it, it's just better at it. So these things go hand in hand in a way. But at the end of the day, the general mechanic of progressive discovery is just inherent to any model that can do any type of tool calling.
>> That makes sense.
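A minimal illustration of the progressive discovery idea described here — expose a cheap index plus meta-tools, and let the model pull full schemas only when it decides it needs them. All names and shapes below are hypothetical.

```typescript
type ToolSummary = { name: string; oneLiner: string };
type ToolDefinition = { name: string; description: string; inputSchema: object };

declare function loadToolsFromConnectedServers(): ToolDefinition[]; // hypothetical

const allTools: ToolDefinition[] = loadToolsFromConnectedServers();

// What goes into context up front: short one-liners, not hundreds of schemas.
const toolIndex: ToolSummary[] = allTools.map((t) => ({
  name: t.name,
  oneLiner: t.description.split("\n")[0].slice(0, 80),
}));

// Meta-tool 1: the model asks "what tools exist for X?"
function searchTools(query: string): ToolSummary[] {
  const q = query.toLowerCase();
  return toolIndex.filter((t) => (t.name + " " + t.oneLiner).toLowerCase().includes(q));
}

// Meta-tool 2: the model pulls the full schema only for what it will call.
function describeTool(name: string): ToolDefinition | undefined {
  return allTools.find((t) => t.name === name);
}
```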
>> Yep. And I think the context rot point is important. Then there's the MCP versus code mode thing, and it's like, well, if Anthropic says code mode and Anthropic made MCP, maybe that is the best way?
>> To be fair, the blog post never actually called it code mode.
>> That's fair. But people call it that — we call it programmatic MCP and others call it code mode — and at the end of the day, here's the interesting part. First of all, MCP is a protocol between the AI application and servers, right? So the model is technically not involved in MCP. Now you have an application going: I have a bunch of tools, what can I do with them? You can do the naive thing — okay, I have tools, I'll throw them into the tools list for the model and call them — but you can be more creative with it. Models are really good at writing code. What if I take these tools, treat them just like API calls, give them to the model, and now the model generates code? What you're effectively doing is the composability the model would have done anyway — call tool A, get the result, go back to inference to call B, then combine them into call three. All you've done is let the model optimize it in advance and put it into a bunch of code that is executed in a sandbox: call one, put it into two, put the results into three, get a result. It's an optimization at the end of the day. But the benefits of MCP — having authentication done for you, having something that is suited for the LLM, something that is discoverable and self-documenting — those have not gone away. That's still MCP for you; you're just using it a different way. So I'm always a little bit confused when people go, "Does this mean MCP is useless?" No — it's just a different use. And I think you will see evolutions as we get better at how we use these models and as the infrastructure around them matures. Once you can assume that most AI applications will have some form of sandboxing for execution, you can do a lot more fun stuff like that. But I don't think the value of a protocol that connects the model to the outside world is gone because of it. I see it purely as an optimization, honestly — a token optimization.
>> That makes sense.
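A rough sketch of the "programmatic MCP" pattern being described: the model writes one small script that composes MCP tools, and the application runs it in a sandbox, instead of doing one inference round per tool call. The tool names and both helpers are hypothetical; auth, discovery, and documentation still come over MCP.

```typescript
// Helpers assumed to exist in the host application (not real SDK calls).
declare function callMcpTool(name: string, args: Record<string, unknown>): Promise<any>;
declare function runInSandbox(code: string, bindings: Record<string, unknown>): Promise<unknown>;

// What the model might generate: one script instead of three round trips.
const modelGeneratedScript = `
  const issues = await tools.call("linear_search_issues", { query: "login bug" });
  const details = await tools.call("linear_get_issue", { id: issues[0].id });
  return await tools.call("slack_post_message", {
    channel: "#eng",
    text: "Triage summary: " + details.title,
  });
`;

// The application executes it with MCP-backed bindings; the composability the
// model would have done across inference rounds happens inside the sandbox.
async function runCodeMode(): Promise<unknown> {
  return runInSandbox(modelGeneratedScript, { tools: { call: callMcpTool } });
}
```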
>> This is a good time to bring up skills. [laughter]
>> Skills is a more recent concept.
>> Yeah.
>> I only bring it up because it's mentally linked in my mind to progressive disclosure and to adding preset code scripts and all that. Skills can also create skills, which is very fun.
>> Yeah. Well, I think a lot of people are trying to place MCP versus skills. Obviously they're not overlapping, but how do you view it?
>> Yeah, I agree — I think that's the interesting part: they're not overlapping. They solve different things. I think skills are super great, and they've really been built from the principle of progressive discovery. But the mechanism of progressive discovery is universal to anything you can do with a model. What skills do is give you the domain knowledge for a specific set of tasks — how should the model behave as a data scientist, or how should the model behave as, I don't know, an accountant or whatever. MCP gives you the connectivity to the actual actions you can take with the outside world. So I think they're somewhat orthogonal: skills give you this domain knowledge, which is kind of vertical, and MCP gives you this horizontal of, okay, give me that one action. And of course skills can take actions, because you can have code and scripts in there, and that's great, but there are a few interesting aspects I think people miss. The first one is that you need an execution environment — you need a machine. That's perfectly fine if you run something local like Claude Code, or a CLI; in those scenarios where you have an execution environment, these things make a lot of sense. Or if you have a remote execution environment, it also makes a lot of sense. But you still don't get authentication in that regard, and so what MCP brings is the authentication piece. It also brings the piece where an external party — for example, if you have a Linear MCP server — can improve the server, and you don't have to deal with that in your skill; it's not fixed in place. And the third part is that you don't necessarily need an execution environment, because the execution environment is effectively somewhere else, on the server. So if you build a web application or a mobile application, these things work better in some of those regards. So I think they are orthogonal for the most part, and I've seen some quite cool deployments where people use skills to express different functions — the accountant, the engineer, the data scientist — and then use MCP servers to connect those skills to the actual data sources within the company. I think that's actually a really fun model, and that's the closest to how I think about this.
>> Yeah. So MCP is the connectivity layer, I think, is the word you chose.
>> The communication layer.
>> Communication layer. Yeah. So architecturally, I'm wondering: is there an MCP client inside each skill, or is there a shared client that can discover skills?
>> We do that as a shared one. I think you technically want more shared ones, because the more shared you have, the more you can do discovery things — connection pooling, automatic discovery. Even in a skill, you might just very loosely describe what you want, and I can look into the registry I have access to and get an MCP server for you, right? These are things you can do. But I think both work at the end of the day; these are things to experiment with.
>> I do want to highlight, for people who might have missed it: you say "we do" this and that — actually I think nobody understands enough how much Anthropic dogfoods MCP. I only understood this when I watched John Welsh do his talk, where he was like, "Yeah, we have an MCP gateway. Everything goes through this."
>> Yeah.
>> Can you say more about that?
>> Yeah. I mean, we use both, right? We use a lot of skills internally, and we use a lot of MCP servers internally, because obviously you want to make it very easy for people to deploy MCP, and you want some form of integration with your IDPs and so on. So we have a gateway that we've built custom for ourselves, and you just deploy your MCP servers to it.
>> And it's all internal apps?
>> It's all internal stuff, yeah. Some of them are technically external things, but in the absence of them offering a first-party one, we have our own — we have a Slack MCP server, which I love to use to have Claude summarize my Slack for me. There's quite a lot of usage for that. We even have an MCP server for things like the biannual survey we run about how we feel about the company, about the future, about AI, about safety — and you can ask questions about the results, which is really fun.
>> Is it your team maintaining it?
>> No, we maintain the gateway. One of the fun parts is that when we started MCP — before we even open sourced it — it was born out of the idea of: I'm in a company that is growing like crazy, I'm on the development tooling side of things, and I will grow slower than the rest. How can I build something that they can all build on for themselves? That's really the origin story of MCP. So it's fun to see, a year later, that that's what's actually going on: people build MCP servers for themselves. I probably don't even know 90% of the MCP servers at Anthropic, because they might be in research and I might not even see them, or I just don't know, because people build them for themselves.
>> But do they host it themselves? Is there a remote?
>> They effectively have a command to launch it, and it just launches in a Kubernetes cluster for them. So it's partially managed.
>> Yeah, that's good infra for anyone at a large company to build — any platform infra. There are platforms that offer that to you.
>> For us, from a security perspective, we want to build these ourselves. But, for example, Jeremiah, who built FastMCP, has a company that offers FastMCP Cloud, which is a little bit like that: just two commands and you have a running instance of an MCP server that talks streamable HTTP. And a lot of enterprises use things like LiteLLM as a gateway, where they can just launch standard IO servers, attach them to the gateway, and the gateway does all the authentication, all the hard parts of MCP, for them. So there are a lot of ways to do this, but that's the good infrastructure you really want to have: make it trivial, make it one command to launch an MCP server that was a standard IO server, and suddenly it's a streamable HTTP server with authentication integrated, and you as the end developer only had to do the standard IO part.
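A hypothetical sketch of that gateway pattern — the team ships only a stdio MCP server, and shared infrastructure fronts it with streamable HTTP and central auth against the IDP. None of these helper names refer to a real product or API.

```typescript
import { spawn, ChildProcess } from "node:child_process";

interface GatewayRegistration {
  name: string;             // how the server shows up behind the gateway
  command: string[];        // the stdio server the team actually wrote
  requiredScopes: string[]; // enforced centrally against the corporate IDP
}

const registration: GatewayRegistration = {
  name: "team-analytics",
  command: ["python", "analytics_server.py"],
  requiredScopes: ["analytics:read"],
};

declare function tokenHasScopes(token: string, scopes: string[]): Promise<boolean>;
declare function forwardToHttpResponseStream(sessionId: string, data: string): void;

// One child process per MCP session; the gateway bridges HTTP <-> stdio.
const sessions = new Map<string, ChildProcess>();

async function handleHttpMessage(sessionId: string, token: string, jsonRpcMessage: string): Promise<void> {
  // 1. Central auth: the team's server never sees tokens or the IDP.
  if (!(await tokenHasScopes(token, registration.requiredScopes))) {
    throw new Error("401: token rejected");
  }
  // 2. Lazily start the stdio server for this session and forward the message.
  let child = sessions.get(sessionId);
  if (!child) {
    child = spawn(registration.command[0], registration.command.slice(1));
    sessions.set(sessionId, child);
    child.stdout!.on("data", (chunk) => forwardToHttpResponseStream(sessionId, String(chunk)));
  }
  child.stdin!.write(jsonRpcMessage + "\n");
}
```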
>> Yeah, I love calling that stack out, because people will take that and actually put it into their companies. Otherwise the alternative is chaos, reinventing everything. Shout out to Jeremiah — actually I did invite him to do a workshop on FastMCP at my New York summit. He recently wrote a very good blog post about how a lot of the MCP usage we're actually seeing is internal to companies.
>> And that's actually what we see at the moment too, which is really cool.
>> Like in what companies?
>> Internally, in big enterprises, you see MCP everywhere, and it's actually growing way faster than you would think, because it's mostly internal to companies, without people seeing it.
>> About discovery: you launched a registry. There were registry companies, there were gateway companies, and the official registry now has other registries putting their own MCP servers into it.
>> You need more registries, man.
>> [laughter] Just one more, bro. One more.
>> Yeah. What's the registry to rule them all?
>> Any learnings from that — launching a registry for a new technology? Smithery is one example, right? If you go on the official registry, there are all these Smithery AI MCPs that you have to authenticate through them, so it's kind of a pass-through registry in a way. How do you see this shaking out?
>> We saw a lot of these different registries come up, and we really felt there was a need for basically an npm or PyPI kind of approach, where there's one central entity that everybody can publish an MCP server to. That's really where the official registry came from. And we really wanted to make sure we're encouraging the ecosystem to have a common standard for what these registries speak, because we want to live in a world where a model can auto-select an MCP server from a registry, install it for the given task you have at hand, and you just use it, right? It should kind of feel like magic, but for that you need some form of standardized interface. That was really the inflection point: we started quite early working with the GitHub folks, even in April, and then I got distracted [laughter] with other things, like authentication, and worked on that. What I want to see — and I think this is slowly where it's heading — is a world where we have the official registry where everybody can put their MCP server, but this is the equivalent of npm, which has the exact same problems as npm: everybody can put things there, you basically don't know what to trust and what not to trust, you have supply chain attacks. These are just fundamental properties of public registries. That's why we have this concept of subregistries, which the Smitherys and others can hopefully build, where they filter and curate on top of it. That's really the world we want to live in. I don't think we're quite there yet, but we're slowly getting there — the GitHub registry is curated off of, or at least speaks the same format as, the official registry. So what we want is that you as a company can have an internal registry that is a secured form of the official one, plus maybe your own servers. That's the one you trust, it speaks the same API as the official one, and if you have VS Code or anything else that wants to talk to a registry, you just point it at yours and you're good to go. That's really what we want to do.
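The subregistry idea above boils down to mirroring the official catalog, filtering it to an allowlist, and serving the same API shape so clients can be pointed at the internal registry unchanged. A sketch, with an assumed endpoint path and response shape rather than the official API:

```typescript
const OFFICIAL_REGISTRY = "https://registry.modelcontextprotocol.io"; // official registry base URL
const ALLOWLIST = new Set(["com.example/github", "com.example/linear"]); // servers your org trusts

interface ServerEntry {
  name: string;
  description?: string;
  // ...remote endpoints, package info, etc.
}

// Periodically mirror the upstream catalog and keep only vetted entries.
async function buildCuratedCatalog(): Promise<ServerEntry[]> {
  const res = await fetch(`${OFFICIAL_REGISTRY}/v0/servers`); // assumed endpoint path
  const upstream: { servers: ServerEntry[] } = await res.json();
  return upstream.servers.filter((s) => ALLOWLIST.has(s.name));
}

// Serve the curated list under the same path, so a registry-aware client
// (VS Code, an agent, ...) can simply be pointed at the internal URL.
async function handleListServers(): Promise<Response> {
  const servers = await buildCuratedCatalog();
  return new Response(JSON.stringify({ servers }), {
    headers: { "content-type": "application/json" },
  });
}
```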
>> It's interesting, because npm in a way is almost like a download gateway, you know? I'm not really using npm for discovery that often — I don't go to npm and search for packages; I find them in other ways. I'm interested whether you see discovery as a core piece of the registry, or whether you still assume there's going to be some other way that the agent discovers servers.
>> I do think discovery is important in the model world. But I think that's where it's different from npm, because we're building something AI-first, and we can assume there's an intelligent model that knows what it wants. That's something that didn't exist before, right? If you were to build modern package management systems with models at heart, maybe you would take a similar approach: here's what I want to build, figure it out, I don't care what packages you install, just do it, right? That's the equivalent at the end of the day. But again, with a public registry you should probably not do this, because it's a dumping ground for everybody. You want to do it against a curated, trusted registry.
>> I like your phrasing that the model knows what it wants.
>> Yeah.
>> Because there's a dream that agents can use the MCP directories to discover new servers and install them for themselves. That seems...
>> Very AGI, if it works.
>> Yes.
>> But it may not work. And I wonder what needs to happen in order to do that.
>> I do think we need a good registry interface, on one hand. And then the second part is just that we need to build for this and see what works and what doesn't.
>> We need trust levels, maybe.
>> You definitely need trust levels. You might need some form of signatures. For example, one of the ideas — I'm not sure if we're going to do it, just a random idea — one of the ideas I always had is that you can attach signatures from different model providers that have scanned an MCP server and say, we trust this. Here's the signature from Anthropic that these tool descriptions are safe, and here's the signature from OpenAI that these are trusted by us, and then you can decide.
>> Wow. So distributed code signing.
>> Maybe. [laughter] It's not really distributed — it's more central in a way — but I think this is the kind of stuff you will require. In the simplest form, where you'll probably see it first is in scenarios where you have inherent trust, like internally at a company, because they will use a private registry. They're effectively using private registries already for npm and for PyPI, and they will also do it for MCP servers. In there you have implicit trust, and then you can just search.
>> Yeah. Right.
>> And I think that's really the interesting ground where we want to experiment — and we have our internal registry, effectively, because when you launch an MCP server via John's infrastructure, it gets registered, right? So we need to go and experiment with that too.
>> Okay. I actually wanted to also ask: you started running some events. Over in London you had the agents hackathon, and you had the Dev Summit that you posted about on your timeline. I just wanted to get anecdotal stories of stuff you learned as you saw the community spring to life.
>> So we had two big summits this year. We had the MCP Dev Summit in San Francisco.
>> And the one in London too. Yeah.
>> And the one in London. I think you learn a few things. The one thing that's very hard to get otherwise is these stories about how people use it internally in their companies, where you see some of the struggles but also some of the success stories. One of the interesting bits which I really loved is that, particularly in London, you had a lot of financial people there, because it's clearly a financial hub — the whole conference was actually in the financial district — and I learned about the kinds of things you need to enforce because you have legal contracts, because of financial regulations. These were things I did not know before, and I learned a lot about what a communication layer like MCP needs to look like when you have constraints that don't exist in the normal development world. I'll give you an example: if you are in financial services and you're exposing some data, that data might be coming from a third party, and you must guarantee that you attribute that third party. That's a legal contract, right? When the client displays this data, it must tell you it came from that third party. These are constraints that don't really exist in the normal model world, but in the financial industry they're legally enforced. So you go, okay, how will this work in a world with MCP? That's when we started creating the financial services interest group that Bloomberg is heading up, to figure out what a client must do if it wants to speak to a financial services MCP server, for example, and what needs to be respected. That's the kind of thing you only learn on the ground at conferences, talking to people. The other thing you see is just how many people are building, and the excitement and creativity that some people bring to this, which I just love — and from areas you didn't expect. I loved the folks at Turkish Airlines who just built the Turkish Airlines MCP server: you can search for flights and stuff like that. That was fun. I love when people bring really creative things to the MCP ecosystem, and I love these community gatherings, because you meet things that are a little bit outside of your bubble, you get some input, and there's a lot of learning there. So we're going to repeat it — we're going to do it in New York in April, I think, or March, something like that, and then we're going to do it again six months later. I absolutely love that.
>> Any good sampling use cases that you found?
>> Not so much.
>> Okay.
>> Yeah — last time we talked about sampling a little bit, man. One thing I learned about sampling is that everyone wants to use tools with sampling — tools that are not exposed via the MCP server. When you want to do sampling, you want to have a set of new tools that you only use during that sampling call, and we just had no ability to do that. We fixed this in this iteration, and so we hope to see a bit more in the way of sampling use cases. You will find, every now and then, an MCP server that does it. But MCP servers have moved from being more local to being more remote, and in the remote case it's probably always better for you to bring an SDK, because you have full control: you can deploy it, you can call an API, maybe even charge someone. In the local case, sampling is really powerful, because you're shipping something to a lot of people and you don't know what model they have configured or what application they plug it into — it might be VS Code, it might be Claude Desktop, right? In those cases sampling is useful, but clients just don't support it. So sampling is one of those things I'm still sad about. I still think it's a very powerful idea, but you've got to win some, you've got to lose some, you know?
>> No, no, but you're also upgrading it.
>> My hopes are still up.
>> In some ways, when you get it right, this will be the real agent-to-agent protocol.
>> Yes, yes.
>> Are most of the use cases that you see still data consumption? That's been my use case for MCP mostly.
>> Yeah, it's context.
>> Getting data. The most "action" MCP takes for me is updating a Linear task status. Have you seen very complex MCP taking-action workflows, or are people still mostly using it for context?
>> Most people use it for context. I think that's the vast majority of usage.
>> It is in the name: Model Context Protocol.
>> Yeah. And Nick Cooper from OpenAI always keeps telling me, rightfully so, that the name MCP was probably a little bit poorly chosen, because it feels like it restricts it a little bit — which I agree with. It's mostly data use cases. I've seen people doing deep research via it; I think people expose agents via it, and those are a little bit more complex, but it's not super common. People have experimented with it — the deep research use case is a good one, and that's not too uncommon, where people do custom research. But beyond that, most of it is really data.
>> Beyond data and deep research, you now also have this new aspect where people expose UI components through MCP UI — or MCP Apps, as it's going to be called in the future.
>> And I think that's super promising and really quite fun. You actually see that a lot now with ChatGPT apps, and with MCP UI in general.
>> Yeah. And you have tasks in the latest spec. I'm curious, because if most use cases are context and then you build tasks, it's almost like people are not really using it that way yet. So I'm curious how you designed it and what you expect people to use it for.
>> We designed tasks because people come to us and say: we really want long-running operations, which is basically agents. We want a long deep-research task that finishes in an hour; we want tasks that might not finish within a day, right? People have awkwardly tried to do this with tools — and you can, because tools are effectively just an RPC interface at the end of the day — but it gets awkward very quickly, because now the model needs to understand, oh, I need to poll this, and it's just not fun. It's not a first-class primitive, and you run into a lot of limitations. So it comes from the fact that people want long-running agents. That's something we heard from so many areas, and saw people trying to do, that we really felt we needed to do something around tasks — on GitHub issues from big companies, everybody was saying we need this; long-running operations are really top of mind. So I really think we're going to see a lot of it now, but it's a little bit early to say how well it's going to go, because it just landed in the SDKs, and it needs to land in the clients, and then we're going to see more of it. But I think you will see a lot of the custom research use cases, among others.
>> Yeah, I'm very bullish on tasks. I think it was very important to get right. Basically every orchestration protocol needs a sync version and an async version.
>> Yeah, exactly. Any design choices you want to call out, where there were two directions and you picked one, in the overall design of tasks?
>> Yeah, in the design there was a lot of conversation. Some people were like, okay, is this just asynchronous tools, or do we do a different primitive? In the end, it was important for me — my litmus test was always that it needs to be able to support hypothetically exposing something like Claude Code, or any other coding agent, as an MCP server. A pure asynchronous tool call would just not do that. You want some form of operation that can, for example, return intermediate results over the long term. We want: okay, I got to this result by calling this tool, this tool, this tool; I had this other input; I did this; and now this is the result. That's really what you want to expose. Tasks are early and don't do that just yet, but they're built in a way that is generic enough to support it eventually. So that was the main constraint. The other constraint was making sure it's not a copy of tools, where you could think, okay, we just do tools again with slightly different semantics. Instead, what it does is: you can create a task by calling a tool with a certain set of metadata fields, and then it automatically creates a task. The task itself is just the concept of a container — you do something asynchronously, from starting here to ending here, and the thing we're doing inside is a tool call. That opens the door to later plugging in other things, maybe even other tasks.
>> Like observability as well, potentially, which is obviously going to be important.
>> So that was really the design goal, which makes it a little bit more abstract and a little bit more complicated to implement, but that goes away because the SDKs just do it for you, and there you just kind of async-call this and return something.
>> There you start to overlap with other async things, like tRPC in JavaScript land, or whatever Go protobuf stuff the Go people have.
>> Yeah. At the end of the day, it's designed like a classic operating system interface: you create a task, you poll it until it's done. Then you can make an optimization — which we're going to do in the next round, which we didn't get around to — where instead of having to poll every minute or hour or whatever interval you choose, the server can call you via events, like a webhook or something, and go, "I'm done," right? That's the optimization, but the actual core interface is always that the client can poll. That's how file system operations on an operating system can work: you poll — has the file changed, has the file changed — but you can also use a modern kernel interface, like inotify or io_uring or something like that, that tells you, oh, I'm done.
>> Great.
>> The file has changed.
>> There's a trick I learned where servers can hold the HTTP connection open until it's done, and then they terminate, and that's the signal to the caller.
>> Yeah, which we do not necessarily want to do, because it might take a few days, and I don't know what people...
>> It's very irresponsible, but it's cool.
>> Yeah. There are plenty of ways. I think we were just going to go the webhook way, honestly.
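A sketch of the task pattern as described: a tool call kicks off the long-running operation, and the client polls until a terminal state, with a webhook as the later optimization. The method names, the task flag, and the status values are illustrative JSON-RPC shapes, not quoted from the spec.

```typescript
type TaskStatus = "working" | "input_required" | "completed" | "failed";

interface TaskHandle {
  taskId: string;
  status: TaskStatus;
  result?: unknown;
}

declare function jsonRpc(method: string, params: object): Promise<any>; // hypothetical transport helper

// 1. Kick off a long-running operation (e.g. an hour-long deep research run)
//    via a tool call that is marked as task-creating.
async function startResearch(query: string): Promise<TaskHandle> {
  const res = await jsonRpc("tools/call", {
    name: "deep_research",   // hypothetical tool
    arguments: { query },
    task: true,              // assumed "run this as a task" marker
  });
  return { taskId: res.taskId, status: res.status };
}

// 2. Classic OS-style interface: poll until the task reaches a terminal state.
async function waitForTask(taskId: string, intervalMs = 60_000): Promise<TaskHandle> {
  while (true) {
    const task: TaskHandle = await jsonRpc("tasks/get", { taskId }); // assumed method name
    if (task.status === "completed" || task.status === "failed") return task;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```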
>> Tasks are really interesting — we basically had to invent this when we did the Devin API at Cognition. I think it's also an interesting reinvention of: well, everyone is going to need some kind of long-running operation, and when you're calling an agent you need this too.
>> Yeah. But the interesting part for us — and this is what MCP always tries to do — is that MCP tries to encapsulate what people are currently trying to do, and we do not want to be prescriptive about what you're supposed to do a year from now. We don't know. We don't predict. We did tasks because people said, we need this now, right? We needed this basically six months ago, and we're like, okay, I guess now it's time to do this — instead of trying to be predictive of the future. That's why we're trying to keep the protocol somewhat minimal, and I think to some degree we've achieved this, although other people would say there are already too many primitives in the protocol.
>> One minor thing: let's say it's a super long-running task.
>> Yes.
>> Lots of messages go back and forth. Anthropic was actually kind of a leader in context compression, or compaction, let's call it, and I think a lot of the other labs are doing the same thing. Is there a way to handle that, or do we just statelessly cut context and it's fine? Do you need a full log of everything that happens, or do we just toss stuff out?
>> Yeah. Right. No, you don't. This is the thing: we're still very early in the industry. We're learning a lot about what the model needs and what it does not need. Even today, some agents start to drop tool call results after a few rounds because they don't need them anymore, and I think that's very good. So besides compaction, you will see better mechanics for understanding what you need and what you don't need. For a long asynchronous task, you might have a setup where the model sees something for a while, but once you get the result you just drop everything else. Or you might call a small model, like a Haiku model, and ask, "Out of all this, what should I retain? Tell me." The AGI-pilled approach would be to just let the model figure out what it needs to retain. So you can see both worlds, and I think there's just lots to learn. There isn't one answer yet, because we're still figuring these things out and improving. Compaction is a good step, probably the most obvious one, but I don't think it's the last step either. If you pay more attention to it, particularly if you think about what you could train a model to do here, I think we get to much better ways of doing that. But they're all independent from how you obtain the context. MCP, I always say, is an application-layer protocol: it's how you obtain the context. How you select the context is the application's problem, and that's the problem all the agent applications will have at the end of the day, and there will be a lot of different techniques. A year ago everybody would have told you it's RAG-style stuff, which now is apparently dead, and now we use models and we use compaction. So I don't know what's going to happen a year from now.
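As one illustration of the "ask a small model what to retain" idea, a client-side compaction pass could look roughly like this; `callSmallModel` is a placeholder for whichever cheap completion API you use, and the prompt and thresholds are made up for the sketch.

```typescript
// Sketch: after a tool result arrives, ask a small, cheap model what is worth
// keeping before the full output is dropped from context. `callSmallModel` is a
// placeholder for your provider's completion API.

declare function callSmallModel(prompt: string): Promise<string>;

interface ContextItem { role: "tool" | "assistant" | "user"; content: string }

async function compactToolResult(item: ContextItem, maxChars = 100_000): Promise<ContextItem> {
  if (item.role !== "tool" || item.content.length < maxChars) return item;
  const summary = await callSmallModel(
    `From the tool output below, list only the facts the main agent still needs:\n\n${item.content}`,
  );
  // Keep a short retention note in place of the full output.
  return { role: "tool", content: `[compacted] ${summary}` };
}
```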
>> Cool. Around MCPs, another question I had: how do you see them being used by developers to build AI apps, versus being a protocol for AI consumers to plug things in? I think that's one of the main things people get wrong, where it's like, "Well, I can just use a REST API, why do I need MCP?" To me it's almost like it's not really for developers to use; it's for people using AI tools to just plug things in.
>> I get the comparison with REST APIs quite a lot, and I think it's interesting, because there are really two problems. The first one is that REST does not tell you what to do about authentication. The second part is: people already complain to me about tool bloat, but have you looked at the average OpenAPI spec length? If you put that into a model, you will have a lot of bloat there too. Actually way worse. And funny enough, when people try to map things one to one, the model often gets slightly confused, because you have search-by-name, search-by-ID, search-by-something, and suddenly you have five tools that look very similar to each other, and the model goes, which one do you want? I have no clue anymore. So, anyway, that's a side note on REST versus MCP. But I do think I want MCP to live very much on the consumer-focused side, without it being something consumers have to know about.
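A toy illustration of the tool-bloat point above: mapping REST endpoints one-to-one yields several near-identical tools, while a single consolidated tool gives the model one obvious choice. The tool names and schema are invented for the example.

```typescript
// Toy illustration of the tool-bloat point. A one-to-one mapping of REST
// endpoints yields several near-identical tools...
const mappedFromRest = [
  { name: "search_customer_by_id",    description: "Search a customer by ID" },
  { name: "search_customer_by_name",  description: "Search a customer by name" },
  { name: "search_customer_by_email", description: "Search a customer by email" },
];

// ...while a single consolidated tool gives the model one obvious choice.
const consolidated = {
  name: "search_customer",
  description: "Find a customer by id, name, or email",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "id, name, or email to look up" },
    },
    required: ["query"],
  },
};

console.log(mappedFromRest.length, "tools collapse into", consolidated.name);
```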
Well, then, what I want is a world where you go to your application, you say, "do this," and it just does the thing and connects to the right services. That MCP is under the hood is a detail that maybe only the developer needed to know about, because that's the communication channel, but at the end of the day you just get the task done. I actually prefer a world where my mom does not need to know what MCP is if she wants to use Claude. But I do think it's very focused on that pluggability of an external service, and in that regard it's more on the consumer-focused side. There are still use cases for developers in general, first of all as builders, but also, I still love my Playwright MCP server, man.
>> Yeah. Yeah.
>> Well, the Chrome DevTools one, the new Chrome one, is like the new meta.
>> I also understand, for developers, the draw of something like Claude Code locally, you know, things that can be approached better that way to some degree, and that's okay.
>> I'm curious about the MCP Apps UI, with what you're talking about, where every client, like ChatGPT, has their own, right? So if I'm used to the MCP app of this product, but then I go into another client, there's a different version that they curate. It's kind of a different experience. So I'm curious how you feel about that. Especially now that you have OpenAI in the foundation, do you feel like all of this will be MCP-backed in the same structure?
>> There are two influences. MCP UI existed as a project, which had a lot of really good ideas; OpenAI took some of them and really improved upon them. And one thing we just announced three weeks ago on the MCP blog is that we're actually working with both of them together to build a common standard. So we're really hoping we're getting back to a world where you build for one platform and you can use it across all of them, or you build for...
>> Write once, run everywhere.
>> Or you build for ChatGPT and you might be able to use it in Claude or in Goose or whatever program of your choice implements this.
But the general promise, I think, is this: if you think about a modern AI application, everything is very text-based, and that's okay, it's nice, but there are things that as a human you're just way better suited to do visually. The most basic example: you want to book a flight, seat selection. Imagine doing seat selection in text: "Here are the 25 seats you have available." Nobody [ __ ] wants to do that, right? I have no clue where these seats even are.
>> A text-based drawing.
>> Yeah. Of course you want an application that you can select with, or it might be a theater you want to book, or something like that. It's so obvious that you want some form of application and user interface that the model can navigate and interact with, but that you as a human can also interact with at the same time. That's what we're looking for. It's just this next iteration of building richer interfaces, because the pure text interface is somewhat limited and there are very natural things. You'll see this in music production, of course, or you'll have certain brands who deeply care about presenting their interface. Shopping is a good example, man. Shopping has twenty years of A/B testing on what's the best way to sell you something, and shopping interfaces are actually super complicated. You just want a way to display that to the user so it's familiar to them and they can interact with it. That's what MCP apps is at the end of the day.
>> Yeah. And technical direction-wise, is the iframe the way?
>> Yeah, it's an iframe. You are serving basically raw HTML over an MCP resource. It goes into an iframe, and then it talks to the outside via postMessage over a specific interface. And because it's raw HTML and you're not loading external content, you can analyze it in advance if you want to, for security. And because you have an iframe, the external application speaks across a very clear security boundary.
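The host side of that embedding could look roughly like the sketch below: the raw HTML served over an MCP resource goes into a sandboxed iframe via `srcdoc`, and the embedded app talks to the host only through `postMessage`. The `fetchMcpResourceHtml` helper and the message shape are assumptions for illustration, not the MCP Apps spec.

```typescript
// Rough sketch of the host side: render server-provided raw HTML in a sandboxed
// iframe and talk to it only via postMessage. `fetchMcpResourceHtml` and the
// message shape are illustrative assumptions, not the MCP Apps spec.

declare function fetchMcpResourceHtml(uri: string): Promise<string>;

async function mountMcpApp(container: HTMLElement, resourceUri: string) {
  const html = await fetchMcpResourceHtml(resourceUri);

  const frame = document.createElement("iframe");
  frame.sandbox.add("allow-scripts");   // no same-origin access for the embedded page
  frame.srcdoc = html;                  // raw HTML, nothing loaded from the network
  container.appendChild(frame);

  // The embedded app reaches the outside world only through this channel.
  window.addEventListener("message", (event) => {
    if (event.source !== frame.contentWindow) return;
    // e.g. { type: "tool-call", name: "select_seat", arguments: { seat: "12A" } }
    console.log("message from MCP app:", event.data);
  });
}
```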
>> Yeah. And this has been in browsers forever. I'm scared of it only because I hate CORS issues, and iframes always have CORS issues.
>> Yeah. But again, this does not load anything external, or at least it shouldn't, right? There probably are restrictions that we'll iterate and iterate on, and in five years maybe it has [laughter] 25 CORS headers and whatnot. But we're starting small again, with pure raw HTML; you should probably not have external references, so you don't run into these issues. But you're right.
>> And can I inherit styles in this iframe?
>> I think you need to put it inline.
>> Yeah, like, you will want it. I feel like this is really minor, but UI people care about this.
>> Which is, inside of ChatGPT it should look like ChatGPT.
>> Yeah.
>> And inside of Claude it should look like Claude.
>> I think that's a very good question. I 100% agree with you. Brands and others will deeply, deeply care about this.
>> Designers will, 100%. [laughter]
>> Yes. That's something we need to figure out, and that's why we need to get it out of the door, see how people use it, and then iterate on it.
>> That's why I don't think it should be an iframe long term. I don't know what the solution is, but we need a new kind of iframe that allows some permeability for this stuff.
>> Well, I think that's sensible. Yes.
>> Um, well, I don't know.
>> But the other solution to the problem is the AGI-pilled approach: I just give it a tool that says "give me a style," and the model can call you and tell you what you're supposed to look like.
>> Okay. Should an MCP app know where it's being used, what the parent application is? You know what I mean?
>> It might be that the application also exposes tools, right, that the model is free to call.
>> Right. Right. Okay. So maybe a standardized interface for passing down styles.
>> Yeah. Maybe. I don't know, but it's a very big question. Let me ask the team; I'm mostly not directly in there, I'm not in the weeds of doing everything there.
>> Yeah. It was a little bit of a surprise to me. I never really paid any attention to MCP UI, and then suddenly you all adopted it. I was like, okay, I guess this is part of MCP now. And it went from a purely back-end concern to now front end.
>> It's also notable that it's technically an extension to MCP. It's not MCP proper. That's a pure technicality, because...
>> It's a governance thing, right?
>> Yeah. It's mostly: if you are a client that can render HTML, then you might want to consider implementing it, but you're still an MCP client if you don't. And the reality is, your average CLI agent can't do it, so they will never do it, and I think that's fine.
>> Are there any other extensions that are similar?
>> We've got to look into financial services as an extension. You might end up in a world, a year from now, where there are clients that have certifications, that get a signature saying they're financial-services MCP clients, and they can prove it to the server, and only then does the server allow connections, because it knows they're respecting the data contracts you put in place. You will see this everywhere if you want to deal, in the long run, with public servers and public clients that handle HIPAA data, healthcare data. You will have to have guarantees.
>> Isn't that just part of auth, or OAuth?
>> Not necessarily. I'll give you an example: the client might have five servers installed, and if there's one healthcare server, that healthcare server might tell you, "You are not allowed, in this session, to use any of the other MCP servers, because the data I'm giving you cannot leave you." You must guarantee that this data doesn't go anywhere else, because it's HIPAA data, because it's financial data, whatever it might be. This is a good example, and that might be some of the enforcement you need to do.
>> Because, yeah, you don't want your social security number or your healthcare data showing up somewhere it shouldn't by accident, right?
>> Awesome. We're going to transition and have the rest of the AAIF group join, but any final call to action? People who should join your team, people who should contribute to the MCP spec, or anything else?
>> I think the most important part is still building with MCP on a day-to-day basis: for people to just go out and build really good MCP servers. We see a lot of mediocre MCP servers and, I mean, some very, very good ones. Just building good MCP servers and looking at how to use them, I think that's super important. The second aspect is that we're a fairly open community, and we're running it as a traditional open source project that is based purely on what people are able to put in, in terms of effort and time. So just be an active part of it: give us feedback, be in the Discord channel, talk with us, give us ideas, while also helping us implement the TypeScript SDK, the Python SDK. We're always looking for new SDKs. We have active Go SDK development, but we don't have a Haskell SDK; if you're a Haskell developer, maybe you want to write that, right?
>> Yeah. There you go.
>> So there's a bunch of stuff you can do to be part of it. Don't underestimate how much you can just be part of the community, but also go and build. There's so much opportunity now, particularly to build amazing clients, now that we understand progressive discovery better and now that we understand code mode better. There's just this next iteration of clients to build, and the next iteration of servers to build, that I'm looking forward to people doing.
>> Yeah. My last question or call-out: I wanted people to hear directly from you. I sense the energy; I'm very excited by everything that you're doing. But a lot of people are anxious about MCP joining the Linux Foundation. They're like, "Oh, is this Anthropic taking its eye off the ball?" Can you address those concerns?
>> Yeah, I love that you asked me that. I can totally see why people think that, but it's actually quite the opposite. The commitment of Anthropic is the same. We still have the same people helping with the SDKs. We're still super committed to MCP in our products. I'm still the lead core maintainer. Nothing has actually changed. What the foundation is really about is two things. Number one is making sure that the whole industry knows that this will stay open forever, that this cannot be taken away. There have been, I would probably never do this, but there have been histories of companies taking an open source project and suddenly making it proprietary again.
>> We have protocols that are proprietary. Look at HDMI; look at the problems with HDMI on Linux.
>> What's wrong with HDMI?
>> [laughter] HDMI 2.1. The HDMI Forum does not want to allow AMD to develop open source Linux drivers for HDMI 2.1. Really, look it up.
>> Wow.
>> So, you know, people keep a very close tab on that kind of thing, and what this does is say: no, this is now owned by a neutral entity. It will always stay open. You can use the word MCP; nobody's going to sue you over it. A lot of it is just giving the ecosystem and the industry the confidence that this stays neutral. I think that's important. The second part is that, if anything, the thing I'm most proud of is that I think we have set the tone for open standards in the industry, and we can now use that momentum to build community in a space where people can come and bring really well done, well supported, well-maintained projects and have them be part of this foundation. I think that's the other part. But the funny part is, our bar for the foundation is going to be that a project needs to be really well maintained. It's not about taking the eye off the ball; that's exactly what we don't want, and we will not do that ourselves. MCP is still core to the product and still super important to Anthropic, and we're still just as committed as we've ever been.
Amazing.
>> Awesome. Thanks for joining, David.
>> And we're here in the studio with core team members of AAIF. It's the biggest panel we've ever had on the podcast. So, welcome, guys. Maybe we go left to right, introduce everyone, and also identify the voices for people listening on audio.
>> I'll start. I'm Jim Zemlin. I'm the CEO of the Linux Foundation. I've been working there 22 years, and I was the person who helped facilitate the launch of the foundation, but I take no credit for any of the technology work that's to my left.
>> I'm Nick Cooper from OpenAI. I've been there just over two years now, I think. At OpenAI I'm generally head of a lot of protocol things, very interested in the open ecosystem, and our representative for AAIF as well as a core contributor to MCP.
>> Got it. What's another protocol that might fall under that umbrella?
>> AGENTS.md, and in general not just the protocols but also the product experiences where OpenAI products intersect with other SaaS providers and other systems.
>> I'm David Soria Parra. I'm a member of technical staff at Anthropic. I'm the co-creator of MCP, and at Anthropic I mostly lead all the MCP efforts.
>> Great. And I'm Brad. I'm a principal engineer at Block. By day I build AI products, and by night I work on open source like Goose; I'm the original author of Goose.
>> It's great to see everybody come together. When I heard the news I didn't really expect it; it wasn't on my bingo card. So maybe let's have a little bit of inside baseball. You obviously have OpenAI and Anthropic, and yesterday at the launch event you were joking about how you didn't know that the two companies even talked to each other. So how did the conversation start?
>> The conversation started out of two things. The first one is that on the MCP side, we always knew we wanted to find a neutral home for MCP, to make sure the industry understands that this stays open, that this is something safe to adopt. Very early in the process we were looking around at what to do about this: should this be a project in a foundation? Should this be inside its own foundation? Those are the common patterns you see for this kind of work. And we got approached by our friends at Block to discuss, because they were looking into donating Goose, I think, at the time, and so there was a question around doing something together. Then we approached OpenAI, and they were very welcoming and very open to the idea as well, and it slowly formed. The time frame of this is a few months; these things do not happen out of thin air in a week. So just a lot of conversation: what do we want to do, what are the constraints we want to have, what is the thing we want to build? And of course we were looking for where to put this kind of thing, and that's where the Linux Foundation comes in, as, I think, the biggest foundation of its kind, certainly with decades of experience helping companies through a process like this and building what is technically called a directed fund within the Linux Foundation to build these kinds of things out.
>> I think David basically told the whole story from my side as well. We saw this need to connect systems, and then MCP gained very large developer traction, and we at OpenAI were very excited to use it and then contribute and actively participate in this. From my point of view it was always very natural that this would grow into something bigger and move to a neutral place. MCP has always been a foundation for communication between agents and context. In a similar way, the Agentic AI Foundation is, well, a foundation, but also a starting point where I really look forward to other contributions, starting with Goose and our own AGENTS.md, where we're really open to a lot of technical contributions to build out a full agentic ecosystem.
>> I'm curious, Jim. I've been to Linux Foundation events before; I've spoken at them. Is MCP so early that, like, how do you even structure it? I'm curious because so many of the technologies that the foundation supports are core pillars of infrastructure and the internet. This is probably the youngest technology that you've brought in as a foundation. What are the goals for it?
>> Yeah, I mean, I think what's interesting here is that even though it's young...
>> I think AI years are kind of like dog years.
>> Absolutely.
>> Do you use this metaphor?
>> Yeah, totally. This is why I run three conferences a year.
>> Yeah, exactly.
>> You can't do annual.
>> I think last night someone was asking, "What do you see a year from now?" And I'm like, well, if I dial the clock back a year, would I have anticipated where we're at right now? There's no way. So I think part of the thing with MCP is that we're just living at this kind of dog-years velocity. In the past, things took a lot more time to coalesce.
What is clear is that a lot of people are adopting MCP. You see it in commercial products that companies are rolling out. You see a lot of usage in the enterprise already, and there's still a ways to go in terms of the technology becoming mature. But the same thing held for internet protocols; those took a little longer, and the internet matured over time. The thing I'm most excited about is that it's becoming clear MCP will be a key protocol for this technology movement. And I think David and these folks were all pretty wise to realize that if internet protocols had been owned by a single entity, we'd still be calling it America Online. It wouldn't work. And I think this has got all of the underpinnings to be a huge movement. At the Linux Foundation, we ask three questions for every project. Will this be meaningful and impactful for industry and society? The second question: do you need more than one organization to collaborate to do it? Otherwise, you don't need us.
>> In this case, clearly we've got that. Then three: can we get the resources and build an ecosystem around it? And with 50 companies on day one, there's a huge set of folks in line to participate and join. My LinkedIn inbox, and I'm sure you guys' are too, has been full within 24 hours: how do I participate, we want to contribute, how do I get in there? I've never seen that kind of inbound interest starting any project at the Linux Foundation in 22 years.
>> How do you pick? You've got all these people reaching out, and, you know, there's good and mediocre.
>> It's a really good question. How to pick, how we expand the foundation itself from a governance standpoint, but also the technical contributions and how the foundation can best support them, that's really top of mind for me. The first thing is we need to define some structure and work out how to bring it all together. But even before those details, there's such value in establishing this one forum that people can come to, even just having a list of eager technical participants and potential opportunities. That's a huge opportunity in front of us, to distill what's truly meaningful to developers, users, and everyone. And I very much appreciate the Linux Foundation acting as a sort of galvanizing rod for this attention.
>> Brad, on the Block and Goose side, it's interesting, the involvement and the engagement that you guys have had. What was your calculus in joining the AAIF?
>> So for us, in developing something like Goose, the thing I see as being part of this umbrella is that it's the most concrete piece. You can actually download Goose and use it, in a way that you...
>> You can't download an AGENTS.md. [laughter]
>> Right. What do those parts do together without something that actually connects them, a real client? I think there's a lot of value in that, because when you get into the protocol space, you want to add things to it, but you have to actually show what it's enabling, why you're making the protocol wider, and putting it into a reference implementation shows you, oh, it's giving this value to people, very concretely. For example, there's a spec for MCP apps that is brand new, and we've been working with MCP UI...
>> For Goose.
>> Yeah, so Goose has been kind of a day-one partner with the MCP UI team.
>> Oh, I didn't know that.
>> And now that we have MCP apps, we've opened an issue today about how we're going to get that into Goose. That's something where, when you hear something abstract like that, what does it mean for the server to send an iframe to the client, I think Goose is a place where you can see it: okay, you're going to build a dashboard, or you're going to have this enhanced chat experience. So this is somewhere I think we'll collaborate more and more, to say this is what it looks like and to take some of these abstract things and make them real.
>> I think the other tidbit here, maybe going back to the history of both MCP and Goose, is that Goose was the first open-source agent interface, or agent, that reached out to us and worked with us to integrate MCP. And I think Brad is actually, technically, the first non-Anthropic contributor to MCP ever, on like day two or something, very, very early. So this goes all the way back to November last year, to the partnership of having MCP inside Goose.
>> Yeah, we had a version of Goose, you can go check the GitHub history, that was there a little bit before MCP came out, and we were sitting there with a plug-in ecosystem going, this is awful. [laughter] Why would anyone come develop a plugin just for Goose? But we saw all these opportunities, so we started talking to Anthropic, and we were like, I think there's a space here for a protocol, and they were like, well, let me tell you about... And it was really cool to see, you know, the Zed thing, no, we didn't, we reached out before we heard the Zed thing. And so we were like, okay, yes, we want to pile onto something that has a chance of succeeding, because as a client, it's an ecosystem, right? The more people using it, the more value you get, and you get more value as a client than as a server, because your servers are going to work with any client, and as a client you have this giant library of servers. That's been a big part of what Goose does: it is a coding tool, people use it as a coding tool, but you can turn off the code part and just connect to any MCP server. So it can be operating, you know, a science experiment, I've seen that, or just Google Docs or whatever. And I think that kind of shows you how MCP goes beyond just the coding space.
>> Yeah. I think as well, the fact that it's concrete is so important. For all these standards, there's a long history of standards throughout computing where people proactively write a standard, and then when it's actually tried out it has problems.
>> Yeah.
>> But for MCP and all these new agentic standards we're coming up with, we really want demonstrated utility. The most common thing on the core committee is: there's a proposal, and we come back to people saying, have you tried it out? Does it work? But the protocol is about communication, so if you're trying something out, you need collaborators, and you need concrete open source projects like Goose, and you need clients, and you need a variety of servers, because it's only with that sort of open ecosystem that you can meaningfully understand if this is actually going to work.
>> Yeah, I totally agree with that. I think the worlds of standards and open source development are just merging; you sort of co-develop these things together. I was trying to figure out whether David is the Vint Cerf or the Linus Torvalds of agents, and I think it leans a little more Vint Cerf, and then maybe Goose is a little more Apache web server, and my whole Linus part kind of falls apart at that point. But you do need something substantive to try the protocols out on, in order to make sure you know how to improve them. It's that feedback loop that's so critical.
>> OpenAI also has a coding agent that is open source. What's the thinking there? Apart from, well, would Codex ever be donated to AAIF, or do we just not know yet?
>> I think the short answer is we don't know yet. But
it's sort of a feedback loop: in this open ecosystem, we don't want too much alignment all on one implementation, one thing. There's real value to users and developers, or whoever the participant is, in active competition in some parts. So there's this balance: we need openness to foster collaboration and experimentation, but I would like to see a variety of coding agents, and each one might deliver unique value and be free to explore independently. So it is a careful balance. There's a bit of a tastemaking approach: contribute the things that benefit from being open, AGENTS.md as an example, where you open up any GitHub repository and it has this file that works the same way. If everyone did their own thing there, that's very low value, potentially even damaging, so there's value in commonality. Whereas for actual concrete implementations and projects, it's great to have reference implementations or experimental grounds like Goose, but I really favor a huge variety of them, because that way we'll see what comes out to be the best.
>> Is there a road map for what you want to add? For example, the Agentic Commerce Protocol, which ChatGPT already uses, but that's not a part of it. There's no model as a part of the foundation. Do you already have a road map, or, like you said, are you just going month by month, seeing what people are using and what should be in there?
>> I think we don't have a road map in the sense of projects lined up, but what we have is principles by which we will select
projects, to some degree. The effort here is mostly around sitting together, now that the foundation is created, and evolving these principles as people come to ask us about the projects they'd want to put in, and then developing the foundation further as time goes on. At the moment, the most important part is that we have the principles in place, and then you go and have the conversations with people who want to be part of this foundation. One principle that really comes to mind for me is composability. I often use the analogy of Lego blocks: agentic systems are a sum of many, many parts. Something I hope the foundation can evolve to do is have these interoperable, composable bits that all work together. So we don't have a road map of future contributions, but I welcome all contributions that play nicely with other contributions and really create this potentially flexible, open agent stack, not a universal agent, but an agent that suits everyone's purpose or need.
>> Yeah, it [clears throat] it's tricky. It's hard, and these guys have the harder job of being early in the innovation cycle. You don't want to restrict innovation by saying, "Oh, well, this is the one, versus that one." But you also don't want to let every single random thing into an organization like this. And so I do think you need this, tastemaking is a good way to describe it, where a group of, you know, elite architects and developers, folks like the three people sitting next to me, are curating. Some things can work, some things might not, but there needs to be a process, which I think we'll define, to do that curation. That happens via tastemakers, and it's essentially a technical effort, not something where a committee of folks from vendors get together and say, well, my product should be in this road map and that guy's product should be in this road map. That tends to not be very successful.
>> Yeah. Which leads to this other principle: we really want projects in the foundation that have a lot of traction, that are well-maintained, that are very, very healthy. I think that's super important to us.
>> Right. Yeah, I think you're looking for something that has already found a niche and is established, because you don't really want to be pushing a speculative architecture. You really want to be embracing something that already works. A lot of the stuff we're talking about, like payments, or, I really enjoy the interface-to-model architectures, I think those are really interesting, but it's not yet obvious that that pattern needs to exist. So that's something where we can go see it, try to make it work in some projects, and then bring it back if it really has a role.
have a a project with adoption. It's
well-maintained. It's healthy. What's
the benefit that I get from um donating it to the foundation? I
>> mean, I can start that, but I'd love to hear from these guys as well. I I think what you all technology is an implicit futures contract right and so you know
if there's technology that has traction and uh that traction sort of wants to be built upon having that technology at a neutral place like the agentic AI
foundation where the whole industry is making decisions about how to invest and and when I mean when I say investment I don't mean like becoming a member of the foundation because you don't need to
become a member to participate on the technical side. It's decisions about,
technical side. It's decisions about, hey, I'm going to, you know, assign 10 of my company's engineers to co-develop this with your organization, the
contributing organization. And, you
contributing organization. And, you know, the that's a a way that we can all essentially co-develop together. And
that will provide better support, more development velocity, higher code quality because more people are participating in it. And that's a massive incentive if you know you want
your technology to actually be used and adopted in industry and get more feedback and kind of a a positive feedback loop of great project gets great products in the market. That
market feedback then you know allows companies to make money off of them.
They then pay engineers to improve the project. Better products, more profits,
project. Better products, more profits, better project. And you know that's the
better project. And you know that's the incentive uh which is a pretty high one.
>> I could add a technical spin, which is that none of these things are built in a vacuum. All these projects build on lessons and learnings, or practical code, from other projects, and that's a big opportunity. Any technical contribution will bring its own unique value to the foundation, and at the same time it gets to learn the lessons that all the other participants in the foundation do. I found it really valuable over this past year working with David and others on the MCP committee: it's actually that communication that makes our ideas more robust and makes the implementation better. We can be sure it's secure and safe and actually works. This requires communication, and the foundation is the natural town square for that, in a way.
>> One last angle on this: if you're working on a standard or a protocol, this is such an obvious decision, right? The value in the protocol is about how many people are adopting it, so being part of this gets you that reach. But I will say, as someone who's working on a client and not a protocol, I think there's value there too. We want this to be part of the foundation because we develop these ideas together, to your point, and that makes it better. We're donating Goose because we think it's going to make it a higher-quality tool.
>> I actually have a follow-up question on the LF side. The LF has many other funds and organizations, including the LF AI & Data Foundation, as well as dedicated ones like PyTorch. I guess, why a new foundation?
>> Well, because everyone's special. [laughter] No, I think the way we look at this space, and I'll put aside the projects in semiconductor tech and operating systems and so on, but in AI, we think of it sort of like how the market has evolved. It started with tools like PyTorch and the transformer tech that is used to create LLMs. The Linux Foundation kind of took a pass on the frontier model world, because in open source, the entry ticket to the world of frontier LLMs is a computer, a connection to the internet, some intelligence, $2 billion worth of GPUs, and a ton of data. It's harder for consortiums to do that kind of work, so, pass. Then you look at how reasoning models have come along: in the inference world things need to be scalable, and now you've got interesting technology, vLLM, Ray, things like that. They have to be deployed on something; Kubernetes is sort of that. These are all distinct components, and agents are a distinct enough set of technology that it merits its own community.
>> Separate from data.
>> Yeah, because a PyTorch dev isn't really doing a ton of stuff in agent land, right? Somebody working on Docling maybe is a little more adjacent, but not quite the same as somebody who's working on transformer tech or vLLM. So they are logical categories. Sometimes stuff comes in over time and we sort things out later. Early on in the telecommunications sector we had a software-defined networking effort, a network function virtualization orchestration effort, a whole bunch of stuff, all separate entities. And I said, let's just bring all these things together, because the technology is now mature; we're taking all this money in, but we don't really need the resources anymore because the market's already mature. It took me a year to get all these companies to decide to bring all these things together and not pay all these separate fees and have all these separate orgs. I have a little folder in my inbox that says, you know, convincing people not to give me money. But in this world, I think it's a different kind of audience. I think it's narrow enough, and specific enough to agents, that it merits its own entity.
>> I think as well it dovetails somewhat with the earlier thought that there's a tastemaking aspect to this.
>> For these organizations to be effective, they really need a focus, something that brings them together.
>> And ultimately, you can imagine an alternative where we snowball and there's only the Linux Foundation as this über, do-everything, remotely-connected-to-a-computer thing, and that wouldn't be that effective. So there's tastemaking here as well, which is: we want to be focused on agentic systems and how they connect together, hence the Agentic AI Foundation. But everything's about growth and evolution. So there's a possibility that later down the line we recognize some natural affinity, we have something new, something old, and then they can be brought together, but the focus helps at the beginning.
>> What are going to be the actionable outcomes? Obviously you have the funds to direct. I know a lot of what the Linux Foundation does is events; there's also, eventually, things like certification.
>> Yeah. What's the split of the foundation investments? Is a lot of it going back to the different projects individually? Is it about the community building? And for people who have not been involved, from the outside it can look like this is just a nice blog post and a bunch of logos, but in reality, how are things going to be actioned?
>> Yeah. I mean, 50 companies coming in to fund a bunch of blog posts seems like overkill.
>> Right? Exactly.
>> So I think there are a couple of things. One, the intellectual property assets are now owned by this entity. That entity is responsible for making sure that IP is managed effectively, that licenses are complied with, that intellectual property problems are dealt with. Some funding goes to that. There's a leadership function: to help bring consensus across the industry and within developer communities, you have to have a special kind of someone to do that, and I think they need to be technically knowledgeable but humble enough to know that the community is the one who makes the technical decisions. So they sort of lead through influence, to help people organize things effectively. You hire some people to do that. You hire people to do developer outreach and community engagement, because you want more developers coming into the community, so funding goes to that. And then there's a huge convening function. The Linux Foundation hosts 50,000-plus virtual meetings a year, so I think we're probably one of the largest users of Zoom, and I know for sure we're the largest Slack user in the world. That convening function is critically important: make it as seamless and easy as possible to convene. And then, yeah, we hold events, because, to your point, developer engagement face to face, being the town square where you physically get together, means something. You guys have been to KubeCon; we have easily 10,000 people that come to that conference twice a year, and in Europe this summer there were 13,000 folks who come in and exchange ideas, and the core maintainers get together and make real decisions. And the last thing we spend resources on, and you can go check these out for some of our other projects, is a whole platform that enables maintainers to look at their community and understand things like: what's our velocity, how many developers are we adding, what's the social media scuttlebutt around this project, what are leading indicators of adoption, how is our security doing, do we have good practices around application security? Those are all things we invest in to help make these communities better and more commercially adopted, so that we get that positive feedback loop where adoption begets more investment in the form of developers providing input, and that virtuous cycle kicks off. That's where the funding goes.
>> I put that question into our doc because it says it's a directed fund, so my cheeky question was, well, what are you directing it to?
>> Yeah. [laughter] So "directed fund" gets into the nerdiness of this. The reason we structure it that way is that somebody has to own everything; the Linux Foundation is actually the ownership vehicle. And remember, we separate technical governance from the governance of how money actually gets spent, because we don't want the sort of pay-to-play aspect of technology that tends to screw everything up. So the directed fund is really: real stakeholders who really care about this tech put money in and use it to help build the market and the community and all the things I just talked about, and just let developers do what they're super good at: get together, solve tough problems, be tastemakers. That's something that we separate.
>> Yeah. I think there's a great essay by Rich Hickey, who created Clojure, about how open source is not about you: just because something is open source, I don't owe you a response to your issue or your pull request. I think some of the worry people sometimes have about these groups is, well, if you're not part of this thing, am I supposed to also listen to you and implement the thing that you said? So I think that's going to be a super interesting dynamic in a technology that is this new. Because there's so much venture money in early-stage companies, and obviously the foundation model labs have raised so much money that they need to stay on top of it, there's a lot more pressure, I think, from the community to try to be a part of it, to put their stake in and say, yeah, we contributed that, or whatnot. So I think it's unique compared to, for example, the CNCF, where the hyperscalers are kind of organized around the clouds, we all know what those workloads look like, and nobody's really trying to influence it; there's not an OpenAI-preferred thing versus an Anthropic-preferred thing.
>> But it wasn't always so.
they weren't in the cloud business and they wanted to be in that business Amazon was you know hosting virtual machines on EC2 and they were the deacto leader they said we will give away
Kubernetes which was kind of the Borg and they renamed it Kubernetes uh to the Linux Foundation and we've never run a virtual machine.
We think containers are a better way to scale cloud applications, we'll give you this tech and it'll be helpful to us if the entire industry adopts containers and Kubernetes as the way to build and
deploy applications. So that was the
deploy applications. So that was the strategy out of Google. Uh and they contributed some serious IP that we all know today is awesome. But at the time,
remember MSOS was still a thing, >> like Paz was still a thing, right? like
you know Hioku, Cloud Foundry, even OpenStack you like virtual machines were still kind of a thing. It wasn't clear what the abstraction layer for cloud
computing was. But once the market
computing was. But once the market started sort of piling on to Kubernetes, you know, like, oh, now Microsoft joined cloudnative computing foundation.
They're investing in Kubernetes and creating Kubernetes services. Oh, wait,
Amazon's now investing this. Then the
consensus was really coming and built up here. I think there's a somewhat similar
here. I think there's a somewhat similar situation here with the caveat of saying like 10 times faster right like just day
one so much momentum around MCP so much interest in this and then also 10 years of CNCF to sort of teach the developer community and the vendor
community how to do this well where you know investment is not mutually exclusive to great technical outcomes I think has been super positive. So, I
think this is going to move super fast.
>> Awesome. We don't want to keep you guys too long; I'm sure you've been on a media tour this week. Maybe from each of you: what's one thing you look forward to in the new year from the foundation?
>> So I don't really know what it's going to look like, but I really look forward to the next step. As David mentioned, it's been months of development and discussion to bring us to this, and there's this sense of, I guess, relief and achievement: you made a foundation, we're collaborating, we created this open space, it's great. But what next? I'm super excited for the next technical contribution, for the first AAIF event or night or conference or whatever form that ends up taking, because there's another world where bodies and foundations are created and then eventually they get forgotten. This is not that. This is really a beginning, and I want to see it be healthy and grow, and I just don't know what comes next. So I'm most excited to see that in the new year.
>> Yeah. I think I'm most excited, if I really take a neutral look at what just happened in the industry with creating this: you have Google, Microsoft, Amazon, Block, Bloomberg, Cloudflare, OpenAI, Anthropic, just the platinum members, creating a foundation. That is actually quite cool and quite substantial. And now we're at this starting point of, what can we do with this? And to Nick's point, there are a lot of things we don't know yet and things we need to figure out. For Anthropic, this is the first big foundation we're creating, and we have to learn a lot here. But it's just such an interesting starting point, and I'm super excited, the way you are when you start something new, about what you can build with it. It's building something I'm not familiar with, so I'm super excited to learn about this and to see what we can do with this quite unique vehicle: really driving the agentic AI open source community forward, and focusing on where some of these companies, who are very competitive with each other, have common ground and where we can build things together that benefit and uplift every user, every developer, every builder in the market significantly. That's what I'm really excited to see.
>> I definitely agree with both. I think there's a lot of opportunity to figure out what the structure does. But let me give you something more specific that I think is already coming up: I want to see how agents become asynchronous. I'm really tired of reading through chat sessions, and I want this to be a thing where I can have twenty agents working for me and actually see that come together. I think MCP is starting to approach that answer, and then we want to figure out how to make those reference implementations and show people how they can actually get another order of magnitude out of what AI can do for them.
>> You don't enjoy pressing yes every five...
>> The approve-every-three-seconds thing.
>> Bypass, bypass.
>> Dangerously skip permissions.
>> Yeah.
>> I'm with you on that one. I think what I look forward to is the success stories: the organization that implemented agentic technology in that way, and hearing how it really impacted their business. I'm looking forward to stories about MCP startups that made a ton of money. I'm looking forward to stories like, in CNCF this year, CVS Pharmacy joined the Cloud Native Computing Foundation, a pharmacy company that's really a user and adopter of technology, sort of the late majority. I think we're going to start seeing organizations really use this tech impactfully and provide feedback back to the community. The potential of the technology, I don't need to tell this crowd how huge it is, but we'll start to see it truly manifest. That is going to be cool.
>> Well, thank you all so much for joining and congrats on the launch.
>> Thank you. Thanks for having us.