Secure Enterprise AI at Scale: The MCP Gateway | Shannon Williams (Obot.ai/Rancher Labs)
By AI with Arun Show
Summary
## Key takeaways

- **MCP: LLM's Universal Tool Protocol**: The Model Context Protocol tells an LLM what tools it has available and how to use them, like searching Gmail or Outlook, acting like HTTP for connecting apps, data, and tools to chatbots and agents. [03:00], [05:41]
- **Obot.ai's Open-Source MCP Gateway**: Obot.ai delivers an open-source MCP gateway that runs on Kubernetes, managing MCP servers with proxy controls for access policies, guardrails, filters, and full auditing to securely connect users and agents. [01:50], [02:06]
- **Enterprise MCP Security Fears Slow Adoption**: Enterprises worry about malicious MCP servers scraping data or misusing API keys, leading many to ban them or restrict them to authorized ones only, slowing adoption despite developer enthusiasm. [07:28], [08:04]
- **Open Source Fuels Rapid Enterprise Growth**: Open source lets organizations inspect code, deploy behind firewalls for free, and iterate fast with community feedback, as proven by Rancher Labs competing against giants like Red Hat starting with just four people. [10:21], [12:54]
- **Future: MCP-Powered Company Agents**: MCP servers will evolve into full agents with their own LLMs, passing entire user queries through to deliver business-aligned responses like suggesting cheaper tickets, becoming more vital than websites. [21:46], [24:08]
- **Unleash AI on Business Users Now**: A common mistake is limiting AI to developers; give sales, marketing, finance, and legal full LLM capabilities to build automations and agents, rewarding fast adopters who ride the innovation cyclone. [30:08], [31:20]
Full Transcript
[Music] Hello and a warm welcome to the AI with Arun show. What if your company could connect every AI model, database, and enterprise tool securely, all through one open-source gateway? Can one protocol become the foundation for how enterprises safely adopt AI at scale? To address this and much more, we are joined by Shannon Williams, co-founder and president of Obot.ai, a company pioneering open-source infrastructure for secure and scalable enterprise AI. Before Obot.ai, Shannon co-founded Rancher Labs, one of the most successful Kubernetes management platforms. So Shannon, a very warm welcome to the show.
>> Hi Arun, thanks for having me. It's great to be here.
>> Excellent. Shannon, let's get started. So explain to our viewers in simple terms what Obot.ai does and why it matters for companies using AI.
>> Yeah, Obot delivers an open-source MCP gateway. MCP is the Model Context Protocol. If you're not familiar with it, it's an incredibly fast-growing approach for connecting apps, data, and tools to LLMs. At Obot, we've built an open-source project that sits at the control plane level for managing what is an explosion of MCP servers in lots of organizations. As teams have looked to build MCP servers and connect them to all their apps and data, it opens up a lot of questions: who can use them, what kind of audit is necessary, what kind of security is necessary around them, and how do we get them into people's hands quickly so they can easily find these MCP servers, provision them, and use them. Obot is a software platform that solves that problem. It makes it easy for organizations to set up an internal MCP gateway to run MCP servers on their own infrastructure. Obot runs on top of Kubernetes: you deploy it on a Kubernetes cluster, and it leverages Kubernetes to run all these MCP servers on demand. Then it connects all of those MCP servers to your users, your agents, to anything that needs to call them. It implements a proxy and a control plane so that you can define who's allowed to access which MCP servers, so that you can put in place filters or guardrails to ensure the MCP servers are only sending out what you want them to, and you can audit all of it. It lets you track those conversations and understand what people are asking of your MCP servers to ensure they're performing correctly. So it really solves a lot of the problems an organization needs to address if they're going to adopt MCP as a standard way of connecting their apps to the users who are consuming LLMs, their chatbots, whatever they're using for chat internally. That's what we're addressing with Obot. We've made it open source, so it's free and easy to use for anybody. It's only been in the market for a couple of months, but it's really taken off like crazy.
>> Excellent, thank you. Shannon, explain to our viewers who may not be familiar: what is MCP in layman's terms?
>> Sure. The Model Context Protocol was really only introduced this year, but it is a protocol, a way of communicating, built so that you can tell an LLM what tools it has available to it and how to use those tools. Most MCP servers are relatively simple. They often connect to databases or application APIs, and the good ones provide all the context necessary for the models to understand how to use the tool. So for example, let's say I want to build an MCP server that sits in front of an app like Gmail. I would want tools available in that MCP server that could do things like write an email, send an email, read an email, search my emails, delete an email. Once I had those, I could talk to any kind of chat client, your Claude desktops, your ChatGPTs, and say, "Hey, search my email for a message from last week about the new shoes I ordered and find out when they're coming." By giving it that context of "look in my Gmail," it's going to know to call the tools it has available, one of which is a Gmail tool, and see that it has the ability to search. Inside of that context, the server tells the model, "Here's the kind of information I need to do a search." And with that context, the LLM can say, "Oh, cool, I can search this email." It would have a different tool to search Outlook, another to search the web, another to write posts to your WordPress blog or update your X feed, if you had API access to X. So you can build tools to connect to really anything, and they don't have to be APIs, though a lot of them are. You could build tools that talk to databases or applications; you can even build an application whose only front end is an LLM, delivered as an MCP server. There are all sorts of ways people are leveraging this MCP technology to put functionality in the hands of an LLM. What's nice about it is this: you could always integrate an LLM with an API through different types of agent frameworks, but now you have a repeatable approach that is supported by most of the clients and most of the builders. You can find one of these things and use it over and over again, and other people can use it too. If you find a good one that works with Chrome, you can use an MCP server to drive your browser, go to a website, and do something on it. MCPs have taken off since Anthropic developed this technology. It's been growing like crazy, and today there are tens of thousands of MCP servers out on the internet. There are thousands of official MCPs coming from vendors. This MCP protocol is turning out, in a lot of ways, to be a bit like HTTP, the protocol we use to serve websites: something engineers can use to put functionality in the hands of chatbots and AI agents and users consuming ChatGPT or Claude or Gemini or any other kind of chat. So MCP seems to be gaining a lot of traction as a standard that's going to allow us to build the agentic web, the agentic platforms that I think a lot of us envision AI turning into.
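To make the Gmail example concrete, here is a minimal sketch of the kind of tool descriptors such a server might advertise to a client. The tool names, fields, and schemas below are illustrative, not from any real Gmail MCP server; the general shape (a name, a description, and a JSON Schema for inputs) follows the pattern MCP uses so the model knows how to call each tool.

```python
import json

# Hypothetical descriptors a Gmail-style MCP server might return for a
# tools/list request: each tool pairs a description with a JSON Schema
# telling the model exactly what arguments a call requires.
GMAIL_TOOLS = [
    {
        "name": "search_emails",
        "description": "Search the user's mailbox and return matching message summaries.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms, e.g. 'shoes order'"},
                "after": {"type": "string", "description": "Only messages after this ISO date"},
            },
            "required": ["query"],
        },
    },
    {
        "name": "send_email",
        "description": "Compose and send an email on the user's behalf.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
]

def tools_list_response(tools):
    """Build the result payload an MCP client would receive for tools/list."""
    return {"tools": tools}

print(json.dumps([t["name"] for t in tools_list_response(GMAIL_TOOLS)["tools"]]))
```

The point of the schema is the "context" Shannon describes: the model reads `required: ["query"]` and knows it must extract search terms from the user's request before calling the tool.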
>> So Shannon, in the enterprise context, it looks like every app or every database they use could potentially be on MCP. So there are a lot of connection calls being made. How do you ensure that the data and AI systems are secure and well managed?
>> Well, a lot goes into that, obviously, but one of the key elements is that most of these MCPs today are being written by engineers. They're being written, put into GitHub, and run like any other piece of technology. Teams are writing them, deploying them into the apps they're building, and that's it. They're kind of buried. So organizations are relying on their existing app development processes to make sure those are secure, and for the most part, the ones being built by your app teams probably are well built; they're calling APIs and using effective API keys and things like that. The problem is there are loads and loads of MCPs being built out in the wild that organizations are finding coming in the door. So not only do you have the ones your teams are building, which are kind of buried in GitHub, you also have all the MCPs that people are finding themselves and running on their local Claude desktop, or running in their VS Code, or connecting to whatever they're using to consume AI. And so organizations are concerned. Their concern is: what if some of these MCPs aren't friendly? What if they're malicious? What if they're scraping my data when I send it? What if they're using the API keys people give them to make unauthorized calls to my applications? As with many big, booming technologies, there's a concern that this is the kind of technology that opens up a lot of security holes. So IT organizations at most big companies have done things like ban MCPs: "You cannot use MCP servers," or "You can only use authorized MCP servers if you come to us." And so adoption has actually been pretty slow in the enterprise. You see it growing like crazy among developers and people in their personal universe, but in most enterprises, MCP adoption is very early and very slow. The idea of putting it in the hands of business users who might want to talk to their email, their Salesforce, their marketing automation systems, their websites, their HR systems, all sorts of different things, hasn't moved quickly at all. That's really been the impetus for organizations to say, "We probably need to put a management plan in place. We need to be able to secure these. We need to be able to know about them. We need to be able to audit them." That's been the driver for gateways: effectively a firewall for all these MCP servers. Something that can run them, make it obvious to employees which MCPs are authorized ("come here, get the authorized MCP"), and then allow the organization to set policy: these MCPs are available to this group and those to that group; some of the functions we want to make available to some teams but not others; we need to audit this, keep a history of it, and give our auditors access to double-check that only the correct type of information is going out through an MCP server. All of that risk, combined with FUD and the potential for real exposure, has slowed down adoption of MCP in a lot of organizations. Tools like the Obot MCP gateway allow organizations to take that on head-on, define what's approved, and then accelerate adoption, which is what we all would like to see happen. If you're using an LLM and just starting to see its potential, you can imagine how powerful it gets when you have all sorts of application integrations at your fingertips. All the things you actually do at work, you can have the LLM take on, and it unlocks a lot of potential productivity improvements.
>> Yeah. So for all my viewers, this is something you'll want to take note of. If you are interested in LLMs and using their power, Shannon has a great product in the MCP gateway, which you could leverage to make it accessible to all of your enterprise users. So, thank you, Shannon. You alluded earlier to your technology being open source. Why did you choose to make it open source, and how does it benefit developers and enterprises?
>> You know, the reality is that in this ecosystem, open source is far and away the best approach to getting your product in front of the most people. And as a multi-time entrepreneur, I've found over and over again that open source allows us to iterate infinitely faster than closed source. It means that when we have an idea, we can put it out to the market, get feedback, and get improvements and enhancements as the community adopts it. There are two things about Obot that differentiate it. A lot of different tools are coming out that are trying to provide an MCP registry or a place to find MCPs. What makes Obot quite unique is, one, it's software. I think almost everything else out in the market is a SaaS service, and most of those are really geared around the public community consuming MCPs, as opposed to how an organization would run MCPs. Two, in addition to being software, it runs on your infrastructure. Running on your infrastructure means you can deploy it behind the firewall. If you're a bank, a government agency, or any large company, you can put this into your closed walled garden, which is important because you might want to connect to things that are also behind that closed wall, and you might need MCP servers that can connect there as well. Being able to run in the most secure environments was pretty important to us, so software was the right choice for Obot. But secondly, in this ecosystem we've found, over and over again, that the easiest way to reach organizations like that, the financial services companies and the big government agencies and the people who are enormously security-conscious, is to make the product open source. Being open source allows them to look right into what we're doing, take the product, use it for free, and decide it's the right product for them. Sometimes that means we never hear from them; they take the product and run. Rancher was a great example of that. Rancher is used today by thousands and thousands of organizations, most of whom we at Rancher Labs, the software company, never even had a relationship with. They ran Rancher for their own purposes. But at the same time, we offered a lot of great enterprise technology around Rancher, and support for Rancher, and that worked out beautifully. We built an excellent business that grew really fast, was enormously profitable, and was able to compete with companies so much bigger than ours. At Rancher, when we started, it was just four of us, building up momentum, trying to get to market, competing against companies like Red Hat. Open source gave us that opportunity. It allowed us to get momentum and come out ahead by having the best product. We could compete at the product level, and that's the approach again: compete at the product level, be open source, let people use it, and see how it gets adopted.
>> No, that's a great strategy and a very smart one at that.
But talking about large enterprises, Shannon: many of them use all kinds of software, right? There could be cloud platforms, there could be old legacy software. So how does your product work with all of those old systems, the new systems, and the somewhat newer systems?
>> Yeah, the cool thing about MCP is it's actually pretty straightforward. It's not a particularly difficult thing to create. An MCP server is, in a lot of ways, a description of functionality. It's a piece of code that describes to an LLM what it can do, and then it has calls out to APIs or systems or whatever it needs to execute against. So you can build an MCP for anything. And we're seeing that already: MCP servers are not just being built for your browser and local tools, they're getting built for anything: DoorDash and AWS and Amazon shopping, even controlling your iMessage on your laptop. You can build MCPs for almost anything that has hooks, and inside an organization, everything has hooks; that's just the nature of an organization. These days there are very few applications that exist in such a silo that you don't have a way to communicate with them. With MCP, the power is really in the protocol. We don't have to do anything special to make MCP work on old applications or old systems, or new systems for that matter. Now, we do have to help organizations build MCP servers, and we do a lot of work with organizations to teach them best practices. MCP servers are definitely not all created equal. There's a lot of nuance in building good MCP servers and providing layered information for the model to understand the tools available to it. Sometimes just pumping all the tools you have on an API into a model is not the right thing; in fact, most of the time it's not. A good way to think about building MCP servers is to think about how users use the product. Think about the functionality a user would call; the LLM is probably going to try to do the same types of things. That might require multiple steps. It might require a little bit of underlying coding to take multiple actions as one thing. But if you do that right, your MCP servers will be a lot more effective and a lot more accurate, because that's the risk with MCP servers: the real question is, does the model use the tool correctly? So testing is important, sampling is important. What's the actual outcome you get when you call the MCP server? Does it do what you thought it would do? Those are the areas where we tend to help a lot. One thing we do is ship with a lot of pre-built MCPs, some by us, a lot by vendors, for platforms like Office 365 or Salesforce or Jira, things very commonly used by organizations. Some of those have good MCP servers; a lot of them don't. Today there's no good Office 365 MCP server and no good Google Workspace MCP server, so we've had to build those things. So have others. If you look at the new tools coming out of OpenAI for ChatGPT, they've had to build their own tools because there really aren't tools yet from those vendors. And it makes sense: if you look at Office 365, there are just thousands of API endpoints you could call. Getting it right isn't all that easy. You've got to put some thought into building an MCP for something like Outlook. So there's a lot of work we've done to build MCP servers, which are available in Obot, and we do a lot of work with companies trying to figure out a strategy for building those, to help them build and scale the building of MCP servers.
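The advice about taking "multiple actions as one thing" can be sketched as a task-level tool that wraps two lower-level calls. Everything here is hypothetical: the stub `api_*` functions stand in for real API endpoints, and the in-memory `crm` dict stands in for a real system.

```python
# Instead of exposing every raw API endpoint to the model, one tool
# performs the multi-step flow a user would actually carry out:
# look up the contact, then log the call against them.

def api_find_contact(crm, name):
    """Stub for a 'find contact by name' API endpoint."""
    return next((c for c in crm["contacts"] if c["name"] == name), None)

def api_log_activity(crm, contact_id, note):
    """Stub for a 'create activity' API endpoint; returns the new record id."""
    crm["activities"].append({"contact_id": contact_id, "note": note})
    return len(crm["activities"])

def log_call_with_contact(crm, contact_name, note):
    """The single MCP tool the model sees: one action, two chained calls."""
    contact = api_find_contact(crm, contact_name)
    if contact is None:
        return {"ok": False, "error": f"no contact named {contact_name}"}
    activity_id = api_log_activity(crm, contact["id"], note)
    return {"ok": True, "activity_id": activity_id}

crm = {"contacts": [{"id": 7, "name": "Dana"}], "activities": []}
print(log_call_with_contact(crm, "Dana", "Discussed renewal"))  # {'ok': True, 'activity_id': 1}
```

Exposing one `log_call_with_contact` tool rather than `find_contact` plus `create_activity` gives the model fewer ways to sequence things wrongly, which is exactly the accuracy concern raised above.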
>> Yeah, and that sounds like a lot of work. Shannon, what are some of the challenges you faced while trying to make the platform easy and practical for real-world companies and enterprises?
>> Yeah, there are a lot of different challenges we run into. Start with auth. OAuth by itself is probably one of the trickier parts of this, because the experience you want is for the user to log in once and have that auth flow all the way through, so they're able to call the applications they want to talk to and, obviously, only get access to their own resources there. If I have access to Salesforce and you have access to Salesforce, it doesn't mean we can see the same things or do the same things; we might both have pretty limited access but be using the same MCP server. So one of the key elements is implementing OAuth and passing it through as we go from you as a user working in a chatbot, saying, "Hey, update my opportunity in Salesforce for this deal I'm working on." We need to tell Salesforce that you're you and you want access, have you authenticate, and then establish that connection and pass it through. These are things we had to work very hard on. Building a proxy that sits between MCP servers was another interesting challenge, because MCP servers really only talk to MCP clients. As we built this proxy, we had to ask: what is a proxy in this case? Well, it's actually an MCP server on one side and an MCP client on the other, talking to one another. So when you're calling an MCP endpoint, you're hitting the proxy first, then getting authorized and passed through. There were a lot of technical challenges to work through. Our engineers spent a lot of time on auth and on making sure the proxy was accurate and fast and could pass through everything necessary, and those continue to be areas we keep working on and improving. I think you're going to find the auth stuff just gets easier and easier over time. There's a lot of interesting work going on in the community around MCP to drive this forward, and I think we're going to see a lot of improvements there in the next year.
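The per-user auth passthrough problem can be illustrated with a small sketch. This is not Obot's implementation or any real OAuth flow; the session store, token store, and `proxy_call` shape are invented to show the idea that two users sharing one MCP server must each act only with their own downstream credential.

```python
# Sketch of auth passthrough: the proxy resolves the caller's gateway
# session to a user, then attaches that user's own downstream token
# before forwarding the tool call. No shared or service-wide key is used.

SESSIONS = {"sess-ava": "ava", "sess-raj": "raj"}            # gateway logins
DOWNSTREAM_TOKENS = {("ava", "salesforce"): "sf-token-ava"}  # per-user grants

def proxy_call(session_id, server, tool, args):
    user = SESSIONS.get(session_id)
    if user is None:
        return {"error": "not logged in to the gateway"}
    token = DOWNSTREAM_TOKENS.get((user, server))
    if token is None:
        # The user must complete the downstream auth flow first.
        return {"error": f"{user} has not authorized {server}"}
    return {"forwarded": {"server": server, "tool": tool,
                          "args": args, "authorization": f"Bearer {token}"}}

print(proxy_call("sess-ava", "salesforce", "update_opportunity", {"id": 42}))
print(proxy_call("sess-raj", "salesforce", "update_opportunity", {"id": 42}))
```

Ava's call is forwarded with her own token; Raj, who shares the same MCP server but hasn't authorized Salesforce, is bounced into the auth flow instead of silently reusing someone else's credential.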
>> Yep, that makes sense, Shannon. So let's talk about the impact and the future vision. How do you think AI assistants and AI agents will change the way companies work over the next few years, say one to four years?
>> Oh, I think it's going to happen really fast. In the next year, you're going to see an explosion of a new kind of agent that is pretty universal for users, and I think it's going to use the MCP protocol, interestingly. One of the interesting things that's happened is that a lot of companies saw MCP and said, "Oh, fantastic. This is a way we can put our technology into our users' hands and let them use LLMs to leverage it." The first approach they took was building an MCP server for their product. If I have a ticketing system, I build an MCP server. Now you can attach that to your model and say: search for tickets, buy tickets, do transactions. For a lot of companies, that was a really good first step, and they were pretty excited by it. The interesting thing, though, is that it kind of disintermediates them pretty fast. If you're a software vendor and you want to be part of your users' journey to AI, you want to be there. There's no doubt you want to be in their chat client; you want to be an MCP server. But think about how that flow works right now. The user says, "Look for tickets for a Warriors game," and if all I have is an MCP server with a bunch of functions, one of those functions might be "search tickets." The MCP server says, "You want to search tickets? I need this string of information so I can search." The client, my ChatGPT, says, "Okay, let me build that search. He wants to know about Warriors tickets. He wants to go this weekend, so those are the dates. Maybe he's looking for something in this price range; there's the price range." It submits that query to the MCP server, which calls an API and gets a response, and I come back and say, "Cool, there are tickets available this weekend on this ticketing platform for 300 bucks." Which is good. But from the vendor's side, if I'm the ticketing platform, all I got was a query. I know there was a query, and maybe the query was tied to a user, but I don't know what that user actually asked. The ask went to the user's LLM; it went to ChatGPT, to Claude, to whatever they were using, and that's what figured out what to call. In the long run, it means I can only respond with data: here are the tickets and what's available. I can't respond with any conversational logic of my own, which means I can't suggest that the next day the tickets are a lot cheaper. I can't try to provide an excellent customer experience to that user. All I am is an API endpoint. So there's another approach that is really just theoretical right now. Nobody's really implemented it, but a lot of people are talking about it, and we're actually working with a bunch of people to implement it using a project called Nanobot: build your MCP servers as agents themselves. In that model, instead of providing an MCP server that exposes ten functions, ten tools that it puts in the hands of the model, you build an MCP server that introduces maybe one function. That one function is something like "query" or "chat," and it says, "Hey, I'm a chat that can help you with anything to do with tickets." So when I say, "Hey, check my ticketing system to see if they have any tickets for the Warriors," it actually passes that whole query through to the MCP server on the other side. But this MCP server, instead of just being a collection of tools, is actually an agent, and it has its own connection to an LLM. It can read and comprehend that entire query and go through its own system prompt: when a customer is looking for tickets, query to see if they're available, but also check for other things, like, "Hey, there are some tickets that are a little more expensive but a really good deal; maybe they'd want those." Or, "There are some tickets available next week." Or, "We can't find anything at this level, but if you're looking for something fun to do, there's a concert that same night." The ticketing agent is a little different thing from the ticketing MCP server, because it has a goal, and its goal is aligned with the company's goal. When I want to talk to the company, I'm probably okay with that answer, because I need to know about the Warriors tickets I want to get. But if it came back and told me, "Hey, I can get you tickets the next night a lot cheaper," I might be excited to get that information. What we're finding is that organizations don't want to be disintermediated from that flow. They want to be part of the conversation between the user and the MCP server, and to do that, they have to build something more like an agent. I think this is true for almost every SaaS company and almost any web services company: they want to ensure they're delivering an awesome client experience, and they want to be able to shape that conversation toward their own business goals. So my expectation is that we're very quickly going to see the emergence of universal agents for organizations, heavily engineered and very important to their companies, almost as important as their website, maybe more important going forward, in the sense that when I reach out to Ticketmaster from my client, Ticketmaster wants to be a full participant in that conversation. They want to know not just who I am and the query details; they want to know what I'm looking for, with as much context as possible. They want to be able to ask qualifying questions and suggest more things. They want to deliver an awesome experience.
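The contrast between the two shapes of MCP server can be sketched as follows. This is purely illustrative, not Nanobot's design: the keyword matching inside `agent_style_chat` stands in for the vendor agent's own LLM call and system prompt.

```python
# Tool-style server: many narrow functions; the user's LLM decomposes the
# request and the vendor only ever sees structured arguments.
TOOL_STYLE = ["search_tickets", "buy_tickets", "refund_ticket"]

def agent_style_chat(query):
    """Agent-style server: a single 'chat' entry point receives the whole
    user query, so the vendor can apply its own logic to the reply, e.g.
    volunteering a cheaper night, instead of returning raw search data."""
    if "warriors" in query.lower():
        return ("Saturday seats start at $300; Sunday has similar seats "
                "for $180 if your dates are flexible.")
    return "Tell me which event you're looking for."

print(len(TOOL_STYLE))  # 3 narrow tools, no view of the user's intent
print(agent_style_chat("Any Warriors tickets this weekend under $250?"))
```

The business-goal alignment lives entirely on the vendor's side of the one `chat` tool, which is exactly what makes this pattern attractive to companies that fear being reduced to an API endpoint.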
And they don't want to just do it, you know, from a little clicky like chat in their website. They want to do it
their website. They want to do it everywhere that I'm chatting. So, they
want to be on ChatGpt. They want to be on Claude. They'd like to be in Gemini
on Claude. They'd like to be in Gemini and Siri. They want me to be able to
and Siri. They want me to be able to say, you know, at ticketmaster at StubHub, you know, whenever I'm asking a question about tickets. And the in fact that what I want is probably the same thing, too, because what I want to be
able to say is like, you know, hey Gemini, I want to go to the Warriors games tomorrow night, but I don't want to spend more than 250 bucks. Check all
the ticketing sites and have it send out my query, go look at everything, come back to me and say, hey Shannon, looks like based on, you know, all of the conversations, these are the different offers that are available to you. you
know what what do you think about these seats? What do you think about these
seats? What do you think about these seats? So, I want to be able to talk to
seats? So, I want to be able to talk to multiple things or have Siri, chat, GPT, Gemini, whoever my my client is talk to all these things for me and they want to
be part of those conversations as much as possible. So, I think we're heading
as possible. So, I think we're heading towards a world of very heavily developed, very powerful kind of universal agents that are the main front end for engaging with clients. And it
And it makes sense, right? If you think about websites and phone lines and every other way that companies have tried to use technology to let us talk to them, they eventually put a lot of focus into those. They want to be present. They want to offer an excellent experience. They want to put a lot of stuff in front of you. But nothing is better than just talking to a person, right? A super knowledgeable, incredible person. It's why all the work that banks have done for the last 60 years to replace the client manager and the teller with ATMs, with websites, with mobile apps, with phone lines, with automated phone lines, all of that in theory is about to get wiped away by simply rebuilding the teller in an amazingly powerful way as an LLM-powered agent and suddenly giving it to people everywhere they are.
Whether they're in your bank, whether they're in a browser, whether they're next to their phone, they can just say, "Hey, Citi, how much money is in my bank? What's going on with this transaction? How much am I spending on blank? Can you figure out how to cancel Netflix?" Whatever it is that you want to do. And so I think you're going to see the rise of really powerful, really good AI agents really quickly, and all the clients are going to support them, because they're all supporting MCP, and so that protocol is going to be how we deliver those powerful agents.
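The "protocol as delivery mechanism" point comes down to a small JSON-RPC 2.0 surface: an MCP client asks a server to run a tool with a `tools/call` request. As a rough sketch (the tool name `search_tickets` and its arguments are hypothetical, invented for this ticketing example, not from any real MCP server):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request; MCP messages are JSON-RPC 2.0."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only.
msg = make_tool_call(1, "search_tickets", {"event": "Warriors", "max_price_usd": 250})
parsed = json.loads(msg)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # search_tickets
```

Because every client (ChatGPT, Claude, Gemini, Siri) speaks this same envelope, one server-side agent can serve all of them without per-client integration work.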
>> Yeah, wonderful. I mean, that sounds like a lot of exciting work coming, supported by MCP, and MCP is probably becoming the new HTTP or HTTPS.
>> It really is. That's what's so powerful. I think about this all the time: all an engineer needs is a defined protocol, and they can almost move the world. That's why I think it's remarkable that MCP wasn't really built to be an agent framework, and it's not an agent framework, and yet all the agent frameworks are going to support it as the front end, and all of the frameworks are going to let you build agents that can talk MCP, because the clients are going to support it. We've already demoed this. We're right now working with two or three companies on building these next-generation agents that are themselves MCP servers, using nanobots. So I think there's going to be a real "holy crap" moment: if we don't move quickly, we're going to get left behind by competitors who are dominating our industry by having by far the best AI assistant for it.
>> Yeah, Shannon, you're right. I saw some research that one agentic AI year is equal to seven internet years, right? That's how fast this is moving: seven times faster. So companies don't have time. They just have to move.
>> It's much easier to build this stuff, too, because the APIs are all there. We've already digitized the processes, so building a new front end to those isn't any harder than moving from one mobile app to another. And luckily, what's amazing is that LLMs are so rich in their understanding of human interactions that creating an agent like this really doesn't even require a lot of specialized model development or specialized model training. You're almost better off using a standard LLM combined with a rich set of tools. So these next agents are really going to be a whole bunch of MCPs that align with the kinds of things people want, framed as capabilities to a super MCP that's delivered by the enterprise or the company to the client. And everything will be a client, so everything will be able to talk to these MCP servers.
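The "super MCP" idea, one agent-facing catalog routing to many underlying capabilities, can be sketched as a thin dispatcher. This is a toy illustration under assumed names; the two handler functions stand in for real backend MCP servers, and `SuperMCP` is a hypothetical design, not Obot.ai's implementation:

```python
from typing import Callable, Dict

# Stand-ins for backend MCP servers: each capability is just a callable here.
def ticket_search(args: dict) -> str:
    return f"tickets under ${args['max_price_usd']}"

def account_balance(args: dict) -> str:
    return f"balance for {args['account']}"

class SuperMCP:
    """Toy aggregator: one tool catalog for the client, many backends behind it."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[dict], str]] = {}

    def register(self, name: str, handler: Callable[[dict], str]) -> None:
        self._tools[name] = handler

    def list_tools(self) -> list:
        return sorted(self._tools)  # the single catalog a client would see

    def call(self, name: str, args: dict) -> str:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](args)  # route to the owning backend

gateway = SuperMCP()
gateway.register("search_tickets", ticket_search)
gateway.register("get_balance", account_balance)
print(gateway.list_tools())                                    # ['get_balance', 'search_tickets']
print(gateway.call("search_tickets", {"max_price_usd": 250}))  # tickets under $250
```

The design choice matters for the gateway story earlier in the episode: because every call funnels through one `call` method, that is the natural place to hang access policies, guardrails, and auditing.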
>> Yeah, fantastic, Shannon. This has been a fantastic conversation, but we'll move on to the next segment, which is a rapid-fire round.
>> Fire away.
>> All right. So here's my first question: what is one AI technology that you admire most, outside of Obot.ai?
>> Rapid fire. I love the coding assistants. I am not a developer; my co-founders are all developers, but I have lived in technology my entire life, and for the first time ever I can build software, and I'm building amazing software for myself, things I'm super excited about. We're in the early days, and there are days of high levels of frustration, but the ability to code up applications in minutes and hours that would have taken me tens of thousands of dollars in consulting fees to have someone else create for me? I am just loving it. I think the coding agents are incredible. v0 from Vercel and tools like it are the things I'm using regularly. So, I love them.
>> Yeah, you're right. I use Replit in my personal projects, and it's fantastic what these tools can do. All right, so moving on to the second one: what's a common mistake companies make when setting up AI systems?
>> I think the most common mistake right now is to limit them so much to just developers. People are completely missing the impact that putting AI tooling in the hands of business users is going to have, and the organizations that figure that out first are going to have a massive advantage over their competitors. Your salespeople, your sales managers, your sales ops people, your marketing ops people, your lawyers, your internal operations groups, finance: they don't just need specialized AI tools, or "oh, there's an AI tool in Salesforce that's good enough for the sales people." They need all the same capabilities you're putting in the hands of developers. They need to be trained to use them, and they need to be unleashed. Let them get creative. Let them build automations. Let them build agents. Let them build assistants for themselves. Let them go crazy, and then share these things everywhere internally. Promote the people who create cool things and share them with other people. Give out massive bonuses to the people who adopt AI the fastest. You know, if you're not consuming tokens right now at an excitingly high rate, you're probably getting left in the dust by your competitors. Use all the AI tools, get everything into the hands of your users, and see what comes out of it, because this is an innovation cyclone, and innovation cyclones are where companies get destroyed, or they explode into whole new industries they weren't even expecting. So use it aggressively, and put it in more people's hands.
>> I like that phrase, "innovation cyclone." I'm going to borrow it, Shannon, and I may or may not give you credit, so pardon me for that.
>> I'm sure I'm borrowing it from someone else, so don't feel like you need to.
>> Awesome. So, moving on to the third one: if you had to pick one of speed, safety, or openness, which is the most important in AI platforms?
>> Oh, it's got to be safety. I think right now we have to focus on safety, with speed a very close second; you have to balance those two things, and openness, I think, is the least important of the three. Openness makes sense for some organizations and doesn't make sense for others, but safety first, speed second, and openness last. As an open source software company, I love building open source, but for selfish reasons: I build open source because it helps us grow faster and helps us put product in people's hands. Openness by its nature is certainly valuable in some cases, like open government, or open governance for shared things like protocols. There are things where being open is useful, but in and of itself it's not an intrinsic value to me compared to things like safety and speed.
>> Yeah, I love that answer, because particularly in today's world everybody is looking for safety.
>> All right. And that's how trust is built. All right, so moving on to the fourth one: what is one myth about AI security you wish everyone understood better?
>> One myth about AI security. I think the biggest myth is that we already know where the attack vectors are coming from. The truth is, this space is moving really fast, and there are a lot of companies out there pitching very comprehensive security tools and audits for AI, and I think a lot of those are creating a very false sense of security. The truth is, this cyclone we're in is moving really quickly. What you need right now is to put a lot of faith in your CISOs, to give them the resources they need, to empower them, and to let them make decisions. Organizations want to look outside; they want to assume they can solve this stuff with technology. But a lot of the risk is going to be process. A lot of the risk is going to be what people do with these tools. So lean on the people you already have deployed and trust to focus on security. Don't believe there are AI security experts out there who suddenly understand this way, way better than your team does. Most of the fundamentals here are the same as always: man-in-the-middle attacks, claimed-identity attacks, people injecting code into things. The truth is, I think your security teams are well positioned, with a lot of the existing tools and investments they've already made, to secure this layer of technology, because it's built so much on the last layer of technology, right? It leverages so much of the existing authentication and authorization streams. OAuth just connects to Okta and Entra and other auth systems; it doesn't reinvent OAuth. I saw a company the other day that's like, "we're reinventing OAuth for the AI era." And it's like, well, that might work, but I think the existing investment in tooling here is excellent and just needs to be applied. It doesn't really need to be thrown out, with a whole new auth layer developed. We've got OAuth. We've got a lot of the right tools. OAuth 2.0 is a good framework; it gives us almost everything we need. Focus on the things that work, and drive it forward.
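The point that OAuth 2.0 already covers this layer can be made concrete: the same bearer-token flow that protects a REST API can front an MCP endpoint. A minimal sketch, assuming a standard RFC 6749 client-credentials grant against an existing identity provider; the client ID, secret, and scope below are placeholder values, not from any real deployment:

```python
from urllib.parse import urlencode

# RFC 6749 section 4.4: client-credentials token request body (form-encoded),
# POSTed to the identity provider's token endpoint (Okta, Entra, etc.).
def token_request_body(client_id: str, client_secret: str, scope: str) -> str:
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })

# Once the IdP returns an access token, every MCP request just carries
# the standard Authorization header; nothing about auth is reinvented.
def auth_header(access_token: str) -> dict:
    return {"Authorization": f"Bearer {access_token}"}

body = token_request_body("mcp-gateway", "s3cret", "tickets.read")  # placeholders
print("grant_type=client_credentials" in body)  # True
print(auth_header("abc123"))                    # {'Authorization': 'Bearer abc123'}
```

This is the reuse argument in miniature: a gateway that already validates bearer tokens for web APIs can apply the identical check to MCP traffic.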
>> Yeah, that makes sense. All right. And then the last one, the fifth one I have for you: what do you think is a must-have skill for someone building reliable AI systems?
>> A must-have skill for reliable AI systems? Well, I think curiosity is probably the must-have skill for everything to do with AI right now. Be curious. How does this work? What's actually happening behind the scenes? Where does this data go? Who would we be giving this to? What else can you do with it? Ask the questions, the same questions you would have asked when mobile was taking off. And trust the senior people in your org. They've seen it before. I'm always amazed how often people say, "Oh, there's a new thing, we've got to go find a whole new branch, a whole new trench of people, to come in and take on the problem." It's so similar, right? The knowledge that your graybeards like me have, people who have been around security for 25 years, who have been dealing with web application firewalls, dealing with identity, dealing with these types of challenges before: trust them, talk to them, ask them the questions, and they will provide you with most of the answers you need. But be curious about what's happening; that will tell you a lot as you try to understand what the potential risks are.
>> Wonderful. Thank you, Shannon. That brings us toward the end of the show. Thank you so much for your time.
>> Arun, it was great being on here, man. I appreciate the invite, and you do a great job with this.
>> Yeah. Thank you. Thank you, Shannon.
>> Have a great one.
[Music]