The Implement AI Podcast #71 - The 10 Hidden Layers (Explained)
By Implement AI
Summary

## Key takeaways

- **Capacity architecture deconstructs roles**: Break roles into measurable, repeatable units of productive work (tickets, CRM updates) before assigning agents. What looks like four agents often turns out to be four teams with eight elements each. [06:26], [07:41]
- **Human-in-the-loop handoffs are essential**: Design clear handoff points and escalation parameters for edge cases, with a human-in-the-loop portal to oversee AI output. Protocols define who does what between humans and agents. [08:52], [09:05]
- **Knowledge must be structured and updated**: Agents need structured, validated, current data in accessible repositories to avoid going static; flooding the context with everything causes inaccuracy: garbage in, garbage out. [10:45], [11:04]
- **AI cost per ticket undercuts payroll**: In a recruitment-firm example, AI ran at £160 in credits versus £2,800 in payroll cost, paying only for actual work done. This enables variable-cost scaling over fixed FTE expenses. [20:59], [21:11]
- **Insurance agents saved €2M and doubled profits**: An agentic team on Slack cut human review from 70% to 30%, saved €2M in hiring, lifted NPS by 25%, and doubled underwriting profitability. Digital twins cloned experts for 24/7 participation. [40:44], [48:09]
- **Slack audit trails beat black boxes**: Agent discussions on Slack provide immutable audit trails showing the flow of logic for regulators, unlike unrecorded human meetings or opaque model inputs. [46:02], [46:32]
## Full Transcript
There are all kinds of risks: cyber attacks, not having enough skills, making the wrong bets, all that sort of stuff. But the risk that I ended up deciding was the biggest was the strategic risk of not adopting agentic AI quickly and effectively enough. So that is what I focus on.

Welcome to the Implement AI podcast, the podcast where we explore the impact of AI on your business. I'm Piers Linney, alongside my co-host and co-founder of Implement AI, Dr. Alec Shukler. We cover the real-world applications and impact of how AI can be practically applied to drive growth and efficiency in your organization. We cut through the jargon to focus on actionable strategies and use cases to highlight the transformational power of AI. Let's dive into today's conversation. I'm Piers.
>> And I'm Alec.
>> Right, traveling again. Where have I been? Good grief, I'm worn out. We did a great event with Manchester Growth Company at PwC's offices in Manchester, which is the same building as our lawyers', where we did our event. That was really good, actually; got some good feedback from that. And then I was down in Lisbon. We actually got together as a whole team, even people that live quite far away; we all got to Lisbon. We've got an office there now. It's great to get a team together, isn't it? And the amount of ideas that came out of it.
>> Of course, when you spend two days live with everyone, it's really good.
>> Yeah. So we're doing quarterly team events in Lisbon. It also gets me out of the weather.
>> So that was quite good.
>> It's raining today. We timed it perfectly.
>> And then next week I'm at the British Franchising Association national event. That's going to be interesting. I think there's a huge opportunity in franchising to use AI; it's a bit like M&A in a way. You've got the franchisor to create an AI-first model and then distribute that down amongst the franchisees. Huge opportunity there. A couple of things we're going to talk about: we've started a series of webinars, every Thursday at 11:30. If you go on LinkedIn you'll find the links, and we're going to build a page on the website as well. If you want to hear about our AIOS and what we're up to, and learn a bit more about our product (we've got a product now, which we'll talk about a bit later), you can join a webinar. It's every week now, so feel free to join; it's discovery, basically. Alex, who's our marketing and growth lead, will take you through that.

So we've got an interesting show today; we've got another guest on. We're going to go through a post, actually; quite a few of our podcasts are based on some of our posts, and this one covers the 10 layers you really need to get right between just thinking you can stand up an AI agent (people think it's easy, and we'll touch on the dev projects we've come across which are doomed to failure) and actually having AI that adds value and does something useful. So we're going to go through Alec's post and talk about these 10 layers: what does it take to have a digital worker, a digital team, basically?
>> Yeah. Then we'll be joined by Simon Torren. He's actually one of our partners. If you don't know yet: if you're a technology consultant, or a small or big MSP, and you're talking to your clients about AI, then the question is, well, how do I deploy it? We've got an introducer partner program, which is growing pretty quickly actually, and next year we're building a reseller program. Simon is one of our partners and we'll let him introduce himself, but he's got 30 years of experience in strategy and innovation, and he's working with clients to help them implement technology-driven growth strategies, with a particular focus on agentic AI. So I'll have Simon on a bit later. If you don't know us, just make sure you go on the website and sign up to our resource center. Lots of resources there. It's
all been updated as well. But let's get into it, Alec. So these are 10 hidden layers that kind of determine whether your AI is going to be a gimmick, something you've stood up because you've been told to by your board or your team, or because you think you need to, versus creating something which is fit for purpose: AI, and especially agentic AI, that is actually going to deliver value. And we see this all the time, right? These things don't just come out of thin air. We don't just sit around thinking this stuff up. Everything we write about, everything we talk about in this podcast, is literally born from experience. Sometimes real, painful experience.
>> It's just that, you know, we know quite a lot, and then we engage with organizations that know their business far better than we do, and when you have that sort of interaction we learn so much about how to implement this technology.
So let me go through the list and then we'll cover off each one. The first one is capacity architecture; that's a massive one. Workflow engineering, so agents to humans, humans to agents, that kind of machine-human partnership. Knowledge infrastructure. The integration layers, which can be really complicated; and this is where people trip up, but I won't go into it here, we'll come back to it. Quality assurance, that's a big one; people have got concerns about AI. And then monitoring: performance monitoring, reliability engineering, cost optimization across the whole piece, governance frameworks, and then change management, because sometimes you've got to change the way you operate your business to implement this technology.

So let's start with the first one, capacity architecture. Before automation, you've got to understand what capacity actually means to your organization, haven't you? So if you think you're going to increase it, it has to be measurable.
>> Exactly. So let's just start there, basically. Like you said, Piers, most businesses think: okay, I've got ChatGPT, I can write a prompt and I see something; therefore, can it handle my accounts or can it do my sales? And there's a big difference between just an API and some prompts, and somebody who's in your department, or running a department by themselves, doing the work. So we always say: you've got your human team, and you've got your org chart for that. But the first point we're talking about here, capacity architecture, is that the AI team that would complement, augment, or even replace certain departments would have a different org chart. This is what we call capacity architecture, because you need to understand the primary focus of that department and that person. Imagine, for example, one support person's role is handling support tickets, answering a few phone calls, and looking things up in the system to see if customers are logging in and using the different features. Those are different areas and different jobs. So each of those needs to be designed as a different agent workflow, or a digital worker which looks after those things. And this is what we call capacity architecture, because you need to understand what capacity actually means to your organization. Each role needs to be broken into measurable, repeatable units of productive work: outcomes, documents, emails, phone calls, what it looks like, tickets created, CRM updates and which fields, all that kind of stuff. This becomes the scaffolding and the blueprint for assigning those agents. What happens all the time is someone says, "Oh, I've been talking about an AI project somewhere else, and I need four agents: Bill, Tom, Sue, and James." And we look at that and see it's actually four teams, and each team might have eight different elements, because they still want quality and optimization for each element of the flow. So we need to make sure we've got all those things in place. Capacity architecture is the first bit: deconstructing what the role is into what the digital workers are going to do. Over to the second one.
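To make the "measurable, repeatable units of productive work" idea concrete, here is a minimal sketch; the role name, unit types, and volumes are illustrative, not figures from the episode:

```python
from dataclasses import dataclass, field

@dataclass
class WorkUnit:
    """One measurable, repeatable unit of productive work."""
    name: str            # e.g. "support ticket resolved"
    monthly_volume: int  # how many units the role produces per month

@dataclass
class Role:
    """A human role deconstructed into its units of work."""
    title: str
    units: list = field(default_factory=list)

    def capacity(self):
        """Total units of productive work per month."""
        return sum(u.monthly_volume for u in self.units)

# A "support" role is rarely one job: each unit may become its own agent workflow.
support = Role("Support Agent", [
    WorkUnit("support ticket resolved", 330),
    WorkUnit("phone call answered", 120),
    WorkUnit("CRM record updated", 400),
])

print(support.capacity())  # 850 units/month across three distinct workflows
```

The point of the structure is the deconstruction itself: one org-chart box becomes three separately measurable workflows, which is the blueprint the agents get assigned against.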
>> Yeah. I'll just say on that one: it's the old saying, isn't it? If you can't measure it, you can't manage it. And that's a key part of it. The next one, then, is workflow engineering. What we often see is you've got your workflows, and in the old world you kind of automated your workflows, and people think that's what you do with AI, don't they? But it's not always the case. Sometimes you have to change the workflows.
>> Exactly. If anyone says that you can fully automate something end to end, they haven't really thought through the whole department, because I'm talking about a department, a digital co-worker that can run things and do things in your business. There are two things that need to happen. If you have any kind of worker, there needs to be a manager. So who is the person overseeing the output of the AI? Straight away there needs to be a place where that person is able to see the output of what's going on. And the second thing is there are going to be edge cases: people that need more support, or situations the process you've already got doesn't handle. So there needs to be a clear handoff. If you don't design in handoff points and escalation parameters, these things don't happen by themselves, right? What we always find is you actually have to design the workflow, and we pretty much always have a human-in-the-loop portal as part of our deployments. What that means is the AI can be doing its thing, and you can see it doing its thing, but you can also see when a human might need to step in or do something. That's completely normal, because if you're trying to operate at scale with digital workers, that's really key. So you need protocols which define who does what, the human and the agentic system, and that's where orchestration begins. These are the things designed into our AIOS, and the approach we take.
>> And we'll come back to reliability engineering as well, but it's quite rare... it's different with a voice agent, because that's real time. Unless it's real time, it's pretty rare, isn't it, that agents are doing work and something is leaving the building without a human at some point being involved. You'll see more and more of that over time.
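The handoff-point idea can be sketched as a simple escalation gate. The confidence threshold and review queue here are illustrative assumptions standing in for a human-in-the-loop portal, not details of any particular product:

```python
REVIEW_QUEUE = []  # stands in for a human-in-the-loop portal

def handle(task, draft, confidence, threshold=0.8):
    """Return the draft if the agent is confident; otherwise escalate."""
    if confidence >= threshold:
        return {"task": task, "status": "sent", "output": draft}
    # Edge case or low confidence: the handoff is designed in, not accidental.
    REVIEW_QUEUE.append({"task": task, "draft": draft, "confidence": confidence})
    return {"task": task, "status": "escalated_to_human"}

r1 = handle("where-is-my-order", "Your order ships Friday.", 0.95)
r2 = handle("refund-dispute", "Proposed refund: £40", 0.55)
print(r1["status"], r2["status"])  # sent escalated_to_human
```

The design choice is that escalation is an explicit, observable path: nothing leaves the building on the low-confidence branch without landing in a queue a human can see.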
The next one, and this is a big one, is the knowledge infrastructure. People always say it's all about the data, and it's true: agents can't do a lot unless they can access data that's structured, validated, relevant, and updated, and it's the right data.
>> I'll say it the other way around, actually: agents can do a lot if they have access to the right data.
>> You're always positive.
>> I am, but that's how we designed our AIOS, right? It's got the central data repository, it's got the CRM, so it can access everything. So what is knowledge infrastructure for agents? Well, why do people not hire a university graduate and just say "crack on"? They train them, right? They say: this is how we answer the phone, this is how we do things, this is our policy for this, this is our policy for that. So you have to think of the knowledge infrastructure as the training and the guidance for how agents are going to do different work in your business. And the thing is, the world is changing and updating. For example, you might have offers on this week in your business, or a new directive that's come out, or a new product you want to promote, or a seasonal change. If you don't have accessible repositories which can be updated with that information, your agent is static, and that's not useful to anyone. So it's very important to make sure you've got access to structured and validated knowledge within the system, vectorized in a way the agent can operate on; and it's also the memory. Now people think, "Oh, I'll just put everything in there." That's not good, because you'll flood the context window of the agent and it won't be accurate; it'll forget things. It's a bit like someone becoming an amnesiac. You actually want to give it only good-quality information and strip away the rest. How that's done is part of a process that we guide. But it's very important to understand: it's garbage in, garbage out. You want the best information in there. And the second thing is that every time the agent is communicating or interacting with someone, it's generating insight. That needs to be stored so it can be remembered and acted on. That's why we've got long- and short-term memory in our systems. You want to think about it that way, because you need that foundation, that context engineering, to make decisions, understand things, and even go back over time and look for opportunities to improve.
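A minimal sketch of the "curate, don't dump" point: rank knowledge snippets by relevance and recency, and keep only what fits a context budget. The keyword-overlap scoring is a naive stand-in for the vectorized retrieval described, and all names and numbers are illustrative:

```python
def build_context(snippets, query_terms, budget_chars=500):
    """Pick the most relevant, most recent snippets that fit the budget.

    snippets: list of dicts with "text" and "age_days".
    """
    def score(s):
        overlap = sum(term in s["text"].lower() for term in query_terms)
        freshness = 1.0 / (1 + s["age_days"])  # newer knowledge wins ties
        return overlap + freshness

    chosen, used = [], 0
    for s in sorted(snippets, key=score, reverse=True):
        if used + len(s["text"]) > budget_chars:
            continue  # flooding the context window makes the agent forget
        chosen.append(s["text"])
        used += len(s["text"])
    return chosen

kb = [
    {"text": "This week's offer: 20% off onboarding.", "age_days": 1},
    {"text": "Refund policy: refunds within 30 days.", "age_days": 90},
    {"text": "Office plants are watered on Mondays.", "age_days": 400},
]
ctx = build_context(kb, ["refund", "offer"], budget_chars=90)
```

The irrelevant snippet never makes it in, which is the whole point: the budget forces a choice, and the scoring decides that choice instead of "put everything in there."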
>> Yeah. But also, what people forget, and we've seen this in the dev projects we're up against when we're bidding for business: they haven't got any memory. I find it amazing. They've got no concept or thought about short-term memory, long-term memory, or the exponential growth of the information you're going to use. You're recording all your phone calls, you're analyzing them: where's it all going? What are you doing with it? It amazes me. And I'll tell you another thing: how do they have identifiers for accounts? What is your data structure? What are you actually remembering? Can one record have multiple attributes? Can it be classified under different things? If you're trying to build something from scratch and you haven't actually thought about all this stuff and how everything passes to each other, you're going to have too many blind alleys, too many dead ends, and almost booby traps. You need to take this very, very seriously.
>> And the thing about AI, unlike humans, is that as knowledge grows exponentially and you collect more and more information, it can all be used if it's structured properly, whereas with humans you can't access what somebody said on WhatsApp two years ago while you're on a phone call, and with agents you can.
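One way to picture the identifier and attribute questions raised here is a tiny memory-record schema. The fields, tags, and account IDs are illustrative assumptions, not the actual data model of any system discussed:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    """One remembered fact, keyed to a stable account identifier."""
    account_id: str                # stable identifier, not a display name
    text: str                      # what was said or done
    channel: str                   # e.g. "phone", "whatsapp", "email"
    tags: list = field(default_factory=list)  # one record, multiple attributes

MEMORY = []

def remember(rec):
    MEMORY.append(rec)

def recall(account_id, tag=None):
    """Pull everything known about an account, optionally by classification."""
    hits = [r for r in MEMORY if r.account_id == account_id]
    if tag:
        hits = [r for r in hits if tag in r.tags]
    return hits

remember(MemoryRecord("acct-42", "Prefers morning calls", "phone", ["preference"]))
remember(MemoryRecord("acct-42", "Asked about bulk pricing", "whatsapp", ["sales", "pricing"]))
remember(MemoryRecord("acct-7", "Raised a billing complaint", "email", ["billing"]))

# Unlike a human on a call, the agent can surface the old WhatsApp note.
print(len(recall("acct-42")), len(recall("acct-42", tag="pricing")))
```

The stable `account_id` is what lets a two-year-old message and today's phone call resolve to the same customer; without that identifier decision, the "exponential growth of information" is just noise.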
Right, next one. So that's knowledge infrastructure, and then the other one, which is something we get asked about a lot, is the integration layer. In the AIOS you've got different layers: you've got memory, you've got integrations, you've got the agents obviously, and you've got the task engine. The integration layer is often how you access existing data. We get asked a lot: can agents be integrated with existing systems, and new systems, and each other, and, in our case, humans?
>> The answer is yes, but you have to go even deeper than that. All of our digital workers can do real work, which means updating your CRM, pulling invoices, understanding different things, creating documents, whatever you want. But for it to do that, let's break it down: we've got more than 600 connections, and each connection has potentially 80 to 150 possible actions. Let me give an example: Gmail. You've got an API for Gmail, and there are probably about 150 things it could do. It could draft an email, create an event, delete something, update the name of something, add something to a thread. If you don't have the right instructions and the right details about how you're managing those things, the agent will get confused and the context memory will be flooded. So you actually have to design how these things are going to be used within the agent itself. It's not just a question of connecting to a thing and doing that thing. I'll give a real example. People think, okay, we connect to a calendar, read everything in there, and then update it. But if someone's got many events in their calendar, with very deep descriptions of each event, you could flood the context window and forget all the conversations. So this engineering and integration has to be thought about very, very carefully, because nothing happens by accident.
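A sketch of the point about curating what each agent can touch: rather than wiring all ~150 actions of a connector into every agent, register only the subset a workflow needs. The connector and action names are illustrative, not an actual integration catalogue:

```python
# Full catalogue of actions a connector could expose (illustrative subset).
GMAIL_ACTIONS = {
    "draft_email", "send_email", "create_event", "delete_message",
    "rename_label", "add_to_thread", "archive_thread", "list_messages",
}

class AgentToolbox:
    """Expose only the actions a given workflow actually needs."""
    def __init__(self, allowed):
        unknown = allowed - GMAIL_ACTIONS
        if unknown:
            raise ValueError(f"unknown actions: {unknown}")
        self.allowed = allowed

    def call(self, action, **kwargs):
        if action not in self.allowed:
            # Refusing here keeps tool instructions small and context focused.
            return {"ok": False, "error": f"{action} not enabled for this agent"}
        return {"ok": True, "action": action, "args": kwargs}

# A support-triage agent only ever drafts replies and files them to threads.
triage = AgentToolbox({"draft_email", "add_to_thread"})
ok = triage.call("draft_email", to="customer@example.com")
blocked = triage.call("delete_message", message_id="123")
```

Fewer exposed actions means fewer tool descriptions in the prompt, which is one practical way to avoid the context flooding described above.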
>> Yeah, I'll leave that there. And the next one (again, another thing about AI, and we're going to come back to reliability, which is slightly different, isn't it?) is quality assurance, QA. Something I'm always banging on about; in Lisbon everyone got bored of me talking about it. It's about having those checks and balances, just so you're confident in what you're doing. And I always say at events that people tend to compare AI to Nirvana or the Almighty, right? All magic. And that's fair enough: you want it to be 100% perfect, and you want to work towards that. But compare it to the alternative, which is us, and we're pretty fallible. So, talk about quality assurance.
>> The thing you have to understand is that quality assurance is very different for each type of agent and each kind of output. There are three things here. One is accurate information and a confidence interval. For example, say I wanted a researcher to find things: there might be very weak signals on the internet, or even false information (surprise, surprise, there are liars on the internet). So you need almost a scoring standard for the confidence of that piece of data. But then you also have to have protocols to actually validate that piece of data in different ways. The other things are in terms of the output: is it holding the right structure? Is it doing things the right way? What if the tool failed, or the connection didn't happen? How have you captured that? Because it sounds wonderful, but I look at other people's videos where they basically release a thousand marbles down a flight of stairs, and you just get chaos afterwards. So if you don't have clear paths, guardrails, all those elements in there, you're just going to get a mess.
>> Performance monitoring. This is kind of what it's all about, really: the ROI. You can have qualitative metrics as well, but it's about having metrics, and, as your article puts it, productive capacity units versus FTEs. How do you track productivity? You want performance monitoring somewhere, and I also don't see this built into development projects.
>> Yeah, this is actually quite interesting, and very important, because it's about having measurable contributions to business outcomes: number of support tickets completed, number of customers reactivated, number of appointments booked with your senior sales consultant, whatever it is. The point is to measure the outcome you're trying to achieve; it could be, for example, the number of qualified potential leads that can be reached out to based on research. But what we often find with some businesses is that they haven't even thought through what quality looks like, what good looks like. We've had real examples where they said, "But the AI is not doing anything our humans don't do," and we were just reiterating that that's correct, because it's doing it without the person needing to do that step. But then there were other elements where they said, "Oh, but this is not correct," and that's because they hadn't clearly defined for their own team what the issue was. So when you're trying to figure out whether something is a good use for an AI agent or not, there's a framework that we give and recommend to our customers. The key thing is you need to know what good looks like, and what you're actually trying to get done, so you can then say: yes, my agent, my digital worker, has now done 40% of the capacity of my out-of-hours work, for example. I had another real call: they've got hundreds of locations across the UK, and they were saying their team spends so long listening to voicemails. I said, why do they even listen to voicemails? Why doesn't the AI just answer the overflow call, transcribe everything, and give the insights directly? There are so many ways to improve the performance of the AI, and of your people as well.
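The "productive capacity units versus FTEs" idea reduces to simple arithmetic; the ticket counts and headcount below are illustrative, not figures from the episode:

```python
def capacity_share(agent_units, fte_units_each, fte_count):
    """Fraction of the team's productive capacity the agent now covers."""
    team_units = fte_units_each * fte_count
    return agent_units / team_units

# Illustrative: 5 humans each complete 800 tickets/year; the agent does 1,600.
share = capacity_share(agent_units=1_600, fte_units_each=800, fte_count=5)
print(f"{share:.0%}")  # 40%
```

Without first defining the unit (tickets, reactivations, appointments booked), there is nothing to divide, which is exactly the "know what good looks like" problem described above.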
>> The next one is reliability. This is important. Obviously you've got infrastructure, but there's also the way the agents work. Think about it: when you have humans, they might be tired, they might have a bad day, they might be a bit unreliable now and then, but they're there. If your technology isn't doing what it should do, or it goes down, then you have nothing there. So reliability is going to be really, really important, starting from the infrastructure layer. That's more reliable these days, but we saw recently that AWS went down, didn't it? So do you have to have backup systems on different platforms? It depends what your agents are doing.
>> Look at it this way: our agents are always connected into different people's systems, doing different things, different pieces of work. Now imagine the other system goes down. Say the CRM system they were connecting into, or some work-management system, goes down; or say the API limit gets overloaded, because the customer didn't have a big enough subscription for their API usage with their current provider, and those calls start failing from our agent. There has to be somewhere that's captured, so you know what wasn't completed and what needs to be completed. These are things people haven't even thought of, but it happens every day if you're doing hundreds of thousands of executions a day. So reliability can't be assumed; it has to be engineered. You need fallback systems: what happens if OpenAI goes down? What's the backup footing for that? What happens if this piece isn't working? What will you do? Which edge cases will you handle? You have to understand all those elements, and without that, it's just not solid.
>> I think a lot of this as well... you mentioned the word there: edge cases. That's what we tend to have a lot of experience of. It's like support, isn't it? You've got your "where's my order?", which is quite straightforward, but there are lots of edge cases that a lot of platforms, and the way people are building agentic AI, just don't deal with. They're just focused on the average, the middle of the bell curve, really.

The next one's cost optimization. We get asked a lot about cost. We have, for example, credits; it's like a variable cost. I always say that (obviously you need ROI) a variable cost is always better than a fixed cost, assuming it doesn't add up to more than the fixed cost. Humans and teams have lots and lots of fixed costs. AIs don't have to have that, and they're also only costing something when they're actually working.
>> Exactly. So cost optimization is actually linked to point number one, capacity architecture. Imagine you've got a team of 10 people in customer support, looking after 4,000 customers a year, and let's just say that's 40,000 tickets a year. Then each person is doing 4,000 tickets a year. So you take their salary and divide it by 4,000 tickets: that's the cost per ticket. If you look at AI, its cost per ticket will be in credits, which will be a fraction of that. When we looked at a real example with a recruitment firm, it was £160 in credit cost versus nearly £2,800 of payroll cost, because you're just paying for the actual work that's being done. So the point is you can really optimize the cost, and even unlock new opportunities, by being really clear on what you're getting. That gives you the ability to balance capability and cost per unit. And then this is where you have an infrastructure cost advantage, where you can start saying: you know what, I will offer a lower cost to my customer and higher money to my contractor, and it means I've got a business-model advantage, because I've got my own virtual BPO center which I can scale up, and I don't need the fixed cost of a building and all the people and everything like that.
>> Governance frameworks. So you've now got new decision layers, risk management, RACI, risk matrices; you've got to start thinking about that in a slightly different way as well, haven't you, as you start to introduce these agents?
>> Yeah, you absolutely have to have this, because of what's going to start happening if you don't. You've got these decision layers: who's approving the action, what prompt was required for it, what are the rules for deciding different things, and what are the escalation paths and audit trails. So imagine you're reviewing a hundred different CVs. We've been hiring for two different roles recently, and I use AI as part of my screening to evaluate job fit versus skills, and even the conversation we have, and everything is scored. Now, if everything's scored objectively and someone turns around to me and says, hey, why wasn't I hired for this role? I believe I'm the best candidate. I can share, if they wanted, exactly what their scoring was compared to others and what percentile range they fell in, because that's going to happen when you start operating at scale: why is this like this, why is that like that.
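A screening record like the one described might be structured so that every score carries its own decision context. The field names and weighting below are assumptions for illustration, not the hosts' actual schema:

```python
# Hypothetical shape for an auditable screening score; the field names and
# 50/50 weighting are assumptions for illustration, not the actual schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningScore:
    candidate_id: str
    job_fit: float          # 0-100, scored against the role description
    skills: float           # 0-100, scored against required skills
    rubric_version: str     # which scoring rules were in force
    approved_by: str        # the human decision layer
    scored_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def overall(self) -> float:
        return round(0.5 * self.job_fit + 0.5 * self.skills, 1)

record = ScreeningScore("cand-042", job_fit=81.0, skills=74.0,
                        rubric_version="v3", approved_by="hiring-manager")
# Because every decision carries its rubric version and approver, the
# "why wasn't I hired?" question can be answered from the record itself.
print(record.overall())
print(asdict(record)["rubric_version"])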
Basically, you've then got the ability to show the audit log of how it happened. The thing is, if you don't have that built into your system, and that's a fundamental element of the AIOS, that it's got all those things within it, then you're flying blind, right? And what if someone changed your system prompt without you knowing it?
>> And the last one, the big one, is change management, isn't it? What we often see is people deploy digital workers, they've got a workflow, and then almost as a second-order effect they go, oh, hang on a minute, now we've got so much more volume, so much more capacity, we've got to go and change something else, and also what the human co-workers do or how they do it. So sometimes you have to start thinking about how you change your business from soup to nuts. But what I think you should be doing anyway is looking at how we go from where we are to AI-first.
>> Exactly. You have to train the team to start thinking that way, but you also have to give them a clear vision that they're going to be brought along that journey: their department's going to be upskilled, going to be AI-first. Before you hire a new person, think: how could we use AI to do this, either with a digital colleague or with some kind of custom internal workflow? That's the way to do it. But the key thing here is mindset and upskilling, because if you don't have that in place, the way people do things can be very varied, incoherent, and unstructured. When AI is doing it, you want to make it very clean, so that 40 to 50% of the flow is handled by AI and anything beyond that the people will do. So it's very important to bring people along that journey, because this is all about augmenting departments. That's quite a lot of stuff there, and we've obviously just shown a few elements, but there's a lot to think about before you try to turn on digital workers.
>> Well, it's really important, as Alex said, that you don't go down these blind alleys. We see a lot of different organizations across lots of different sectors, we bump into lots of problems and we solve them, and that's how we understand this stuff. There are three things you have to think about. One part is actually having the system connections and technology, best-in-class, to do it. That's why we've built AIOS with everything from computer use to analyst agents to interactive agents to voice, plus the data structure and the integrations. That's the tech stack. The second thing is what we just talked about, these 10 layers. And the third thing, which is pretty critical, is people who can actually make this happen; that's our solutions team. Without all three of those, you're going to have challenges, because you're going to fall down. Welcome, Simon
Torrance to the podcast. We've got more and more guests joining us now. I always get people to introduce themselves; people introduce me at events a lot and they get it wrong. So introduce yourself, Simon. It's great that you're a partner, one of our introducer partners. Give a bit of your background and why you've got this expertise in agentic AI.
>> Yeah. So I've been an adviser to companies for about 20 years now on strategy and innovation, and I've always focused on fundamental business model innovation: the way firms can change how they create and capture value. A few years ago I was asked by some large financial institutions to run a think tank on AI and its impact on the financial industry, and I got fascinated with AI. I'd been focused on other types of technologies in the past, but I hadn't really deep-dived into AI until a couple of years ago, and it was an epiphany moment. It opened my mind, particularly when you started to look at really advanced forms of AI; agentic AI was just starting to be talked about back then. I got so excited about it that I created a new business called AI Risk, about 18 months to two years ago.
>> When I first saw your company name, I thought, why?
>> Because some of the financial institutions I was dealing with were insurance companies, so they're thinking about risk: the risk to their customers from AI, all the risks that it creates, but also the risk to them and the way they operate. And then you look at all the different types of risks.
There's all kinds of risk to do with cyber attacks, with not having enough skills, with making the wrong bets, all that sort of stuff. But the risk I ended up deciding was the biggest was the strategic risk of not adopting agentic AI quickly and effectively enough. So that is what I focus on. Now all I do is help companies, big and small, work out how to exploit agentic AI to change their business model for competitive advantage. It's a bit like what you do, and it's a fascinating space; it's very early-stage still, as you know. What I've done is collect around me people right at the cutting edge, like Implement AI, people who have been doing agentic AI, not just talking about it, for the last couple of years. There aren't many. We have done some incredible deployments, like you have, and we're taking that to traditional companies who are not super tech-savvy, helping them transform with agentic AI.
>> Actually, you made a point there which wasn't on our list, so it's number 11: it's not just cyber, is it? It's security, prompt injection, lots of things people aren't aware of that introduce quite a lot of risk to AI systems deployed by people who don't fully understand what they're doing.
>> Or haven't built those in at foundational levels, right? You can't just add that on top.
>> And what I find, so I spend a
lot of time, particularly at the moment, working with financial institutions, because insurance companies and banks are perhaps the most exposed to AI in terms of automation and augmentation: their business is pure information, pure knowledge work, and AI, as we know, is very good at knowledge work and analyzing lots of data. And what I find, I don't know if you have the same finding, I'm sure you do, is that most leaders of very significant financial institutions do not appreciate what agentic AI really is and what it could be for them. I did an event, I won't mention names, for one of the very large accountancy consultancies, and it was a room full of FTSE 350 chairs. We were talking about AI, and I was a little bit provocative about where it's going and what it can do, as I normally am. There was a linear correlation between the response to me and their understanding of AI: the ones that had never used AI said, I will never do that. And then there were some people in the room who are very well known in technology who were like, yeah, he's right. It was really interesting.
Well, I see this tension around the senior commercial executive. Let's just take insurance, the sector I'm focusing on a lot at the moment. Insurance sounds really boring, but it's fascinating, because everybody needs it but nobody wants it. And there's what's called the insurance protection gap: the gap between what cover people and businesses really need and what they actually have. Why does it exist? Because the industry has not been able to satisfy that demand; it's too expensive, too complicated. But the size of that protection gap is nearly the same size as the whole existing insurance industry today. It's the untapped opportunity.
>> I was presenting at a financial adviser conference in September, and I'd put together, as a demonstration, how from a single person's name, and maybe a company they're involved in, you could pull together any property associated with them, their age, potential net worth calculations, what's going to happen over the next 10 to 15 years based on their current trajectory, what financial products they're going to need, all that stuff. And the thing is, if someone's a serious operator, they actually want those things brought to them: you're missing this, you're missing that. It's personalization of the experience at scale. We always talk about revenue, capacity, and experience; this is experience, right? If you can communicate to particular customers in a particular way, you can massively increase the value you deliver to them, and it doesn't actually cost more to send that email or that SMS. It's just the messaging in it. The difference between a £50 note and a £10 note is the ink, basically, right?
>> It's very important, because think about the benefits to society as well. Those protection gaps: if a natural catastrophe happens, or a cyber attack, or people can't pay for healthcare, who picks up the tab? The government. So you've got this opportunity, this non-consumption if you like, unmet demand that's out there. The industry is unable to satisfy it because it's inefficient and can't work out how, and so the government picks up the tab. And that industry, by the way, insurance, has been flat as a percentage of GDP for 20 years. The executives are remunerated on achieving 3 to 5% growth per annum; if they do that, they're happy, they get their bonus. But
I see this bigger opportunity. If we think about agentic AI, what I see is that many companies see AI, if they understand agentic AI at all, as an efficiency game: we can do the same with less and make more profits that way. That's one way of thinking about it, and I'd say it's the predominant way today. How I see it, and I think you share the same view, is that it's a growth game. It's doing more with more. You have your human workers, and then you have multiple times that number of AI workers, and collectively you have the operational capacity to do a lot more, to address that non-consumption, that value to society, and in return create value for your company. And of course you're creating a growth story where it's not necessarily about reducing the size of the workforce. I think that's the exciting thing, but it's very difficult. I guess, as you say, Piers, you have the same experience: when you talk to people about this, a lot of them don't understand what is even possible, and I'd love to come on to some examples that bring it to life in a second. When they haven't seen what is possible, they sort of reject the story; those who have seen what's possible can appreciate it and think, let's get on with it.
>> Yeah, we always say there's no historic frame of reference. I always use cloud, because with cloud you'd seen a server in a room, whirring away with an air conditioning unit, and you understood what it did, so cloud was that delivered in a slightly different way. But you've never seen a call agent make three and a half thousand phone calls in 3 hours and update the CRM perfectly every single time.
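The throughput claim comes down to concurrency, which can be made concrete with a short sketch. The call itself is simulated here; a real deployment would hit a telephony API, and the concurrency figure is an assumption:

```python
# Sketch of why an AI caller compresses 60 minutes of conversation into
# roughly a minute of wall-clock time: the calls run in parallel. The call
# itself is simulated; a real deployment would use a telephony API.
import asyncio

CONCURRENCY = 60            # assumed number of simultaneous lines

async def make_call(contact: str) -> str:
    await asyncio.sleep(0.01)       # stand-in for the real call duration
    return f"{contact}: CRM updated"

async def campaign(contacts: list[str]) -> list[str]:
    # All calls run concurrently, so wall-clock time is one call, not N calls.
    return await asyncio.gather(*(make_call(c) for c in contacts))

results = asyncio.run(campaign([f"contact-{i}" for i in range(CONCURRENCY)]))
print(len(results))          # 60 conversations in roughly one call's duration
```

A human agent is the serial version of this loop: one line, one conversation at a time.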
>> For one person, it takes 60 minutes of their time to make 60 minutes of conversation, assuming they're on the phone the whole time. The AI could do 60 minutes of conversation in 6 minutes, or even 1 minute, or 30 seconds, because it can do it in parallel, and people can't get around that. It's faster horses against motor cars.
>> No analogy is perfect, but go for it. So go through, obviously you're out there doing some of what we do, go through your experience: where you think the market is, what you're seeing, the opportunity for AI, and then some of your examples that bring it to life.
>> Yeah. So what I do is agentic AI strategy and implementation. As we said, people don't know what they don't know. They might have heard about this topic and really know very little. The IT department is often a bit more savvy, because they listen to the kinds of things we talk about, but they're not the ones making the strategy decisions and the investments, of course. So what I do is help leaders understand that agentic AI is about transforming your business model and creating a new workforce, and crucially, as part of that, I demonstrate and show them things that have worked, are working, and can work in the future. That helps wake them up, educate them about the art of the possible today, and then they're in a frame of mind of thinking: right, this is quite significant, we need a proper plan to work out what to do, rather than, as you see a lot, some random testing in pilots here and there that never gets anywhere. That's the notion of pilot purgatory: we do lots of these things and don't get anywhere.
So I'll give you the example I tend to use that wakes people up. This is a case study we did 18 months ago, and it's still, I claim, probably the most advanced commercial deployment of agentic teams: teams of AI agents working in an unstructured environment to manage a commercial operation.
>> And that was 18 months ago, with tools from 18 months ago.
>> Yeah, this is why it's so interesting. In fact, I got a call from one of the AI strategy leaders at Google in San Francisco the other day. I'd written about this, and he said, I have never seen that; we've got things like it in our lab, but I've never seen a commercial deployment. So let me describe it, because this one really wakes people up. And as you say, Piers, sometimes the IT
people can latch on to this while the commercial people are like rabbits in headlights going, oh my god, is this true? But it's an insurance example: a non-insurance company that wanted to create an insurance business complementary to its core business, an affinity service, and that's about growing your business. You've got your existing customers, they buy a particular service, so let's offer them some insurance on top to add extra value for them and profits for us. They tried to hire insurance people to run the operation, but they were in a part of Europe where insurance people didn't want to work; insurance people are quite expensive, and they just couldn't get them. So the CEO said to my colleague, "Is there a different way of doing this?" My colleague is a serial entrepreneur. He was a chief innovation officer for a large financial institution, and crucially he has a PhD in artificial intelligence from the 1990s. He came at artificial intelligence back then in a slightly different way from the ML big-brain approach: he's an expert on what we call artificial life. It's about trying to replicate some of the more natural, biological ways that intelligence operates; I'll come back to why that's important in a second. My colleague had a bit of time on his hands, and he said, okay, let's try an experiment. This was about two years ago; we started implementing about 18 to 20 months ago. He said, let's see if we could create a whole operations team out of AI agents only, and he did it. They now have agents that are actuaries, customer service reps, GDPR experts, process experts, et cetera. They all operate on Slack: like we interact on Slack, they interact on Slack, and that means they can interact with the human workers in the human call center on Slack as well, so they can work together. You need a place where that can happen effectively.
A ticket comes in, or a problem, a customer question. If it's a more complex request, a team is instantly formed by a facilitator agent who says, we've got to solve this very quickly. It's like having a meeting when your company has a big problem: get the right people together in the room, discuss it, come up with a solution, and make it happen. That's what they do, and they do it incredibly fast. They also do it incredibly well, because we have QA agents monitoring what they do, and we've created a threshold mechanism so that the action isn't taken until a certain threshold is reached. Underlying all of that, because it's not a structured process and it could be any type of ticket or request, my colleague, through his artificial-life background, created a swarm algorithm that manages this. It's like the way flocks of birds or ants work, or like humans: we put our little brains together, and they make quite a big brain, but no one can do everything themselves. These little brains work together, they discuss it, we created a mechanism by which they come to a conclusion, and then the action is taken very quickly. And given it's insurance, it's regulated.
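A threshold mechanism of the kind described can be sketched minimally: several specialist agents each score a proposed action, and it is only executed once their combined confidence clears a bar. The agents here are stubs with fixed scores, and the mean-vote rule is an assumption; the real system's swarm algorithm is not public:

```python
# A minimal sketch of a threshold mechanism like the one described: several
# specialist agents each score a proposed action, and it only executes once
# enough of them agree. The agents are stubbed with fixed confidences; in
# the real system each would be an LLM-backed specialist on Slack, and the
# aggregation rule here (a simple mean) is an assumption.
from typing import Callable

Agent = Callable[[str], float]   # returns confidence in [0, 1]

def quality_agent(action: str) -> float:
    return 0.9                   # stub: QA agent is satisfied

def gdpr_agent(action: str) -> float:
    return 0.8                   # stub: no personal-data concerns

def actuary_agent(action: str) -> float:
    return 0.7                   # stub: pricing looks sound

def decide(action: str, agents: list[Agent], threshold: float = 0.75) -> bool:
    """Take the action only if the mean confidence clears the threshold."""
    score = sum(agent(action) for agent in agents) / len(agents)
    return score >= threshold

approved = decide("issue policy for ticket #123",
                  [quality_agent, gdpr_agent, actuary_agent])
print(approved)   # mean confidence 0.8 clears the 0.75 threshold
```

Raising the threshold is the dial that trades autonomy against caution, which connects directly to the human-review rate discussed next.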
>> Well, how does regulation come into it? And where must there be a human in the loop somewhere?
>> Yeah. Let me do human-in-the-loop first, then we'll come to regulation, and then I'm going to come on to innovation after that.
>> Human in the loop, so the regulator can ring them up. What's the reason?
>> Initially we had a lot of humans in the loop. We had humans participating in the discussions, because the human call center, the human operations team, know the customers and the market very well, and they would interact with the agents: well, you need to remember this, what about this, this is how we really do it. And at the end, before the action was taken, a human had to review it. It used to be 70% of actions reviewed; now it's gone down to 30%, because the agents are actually really good. We've tested it: they don't make many mistakes, far fewer than the humans, in fact. Let me just stick on this human-in-the-loop bit before I come on to regulation. What we found is the human workers said, I can't keep up with these agents, because they don't take any breaks, they don't sleep, they keep working all night, and they work really fast. Could you clone me, create a clone of me, such that I can participate in these discussions without actually having to be physically there? So my colleague looked at all the interactions of the particular human agents that wanted to participate, their Slack history, their email history, and we created an agent, a digital twin of those humans. They now participate as part of the agentic team; they have been turned into an AI agent participating in that team.
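One way such a "digital twin" might be assembled is to distil a colleague's message history into a persona prompt for an agent. Everything below is a hedged sketch: the `summarise` function is a placeholder for whatever LLM call the real system used, and the name and messages are invented:

```python
# A hedged sketch of building a "digital twin" persona prompt from message
# history. summarise() is a placeholder for an LLM call; nothing here
# reflects the actual implementation discussed in the episode.

def summarise(messages: list[str]) -> str:
    # Placeholder: a real system would ask an LLM to extract style,
    # preferences, and domain knowledge from the history.
    return " / ".join(sorted(set(messages)))[:200]

def build_twin_prompt(name: str, slack_history: list[str],
                      email_history: list[str]) -> str:
    knowledge = summarise(slack_history + email_history)
    return (
        f"You are a digital twin of {name}. Participate in team "
        f"discussions as they would, drawing on this distilled history: "
        f"{knowledge}"
    )

prompt = build_twin_prompt(
    "Maria",   # hypothetical colleague
    slack_history=["Always check the renewal date first."],
    email_history=["We quote SMEs within 24 hours."],
)
print(prompt.startswith("You are a digital twin of Maria."))
```

The point of the pattern is that the twin carries the colleague's accumulated judgment into discussions they cannot physically attend.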
>> That's quite interesting, Simon, because we're working on something similar. You can clone humans, you can clone customers, and then start to look at them in ways you couldn't before. And what I think is really interesting, we've not quite gone there yet, is that once you do that, the agents can start to look at how the humans are performing and manage that.
>> Yeah. The agent dynamics are really interesting as well, because they are given objectives and they don't stop until they achieve that objective. They are very forceful, and I'll give you one good example of that. At one stage we wanted one of the agentic teams to do some innovation on the product, because in insurance you need to get loss ratios quite low to make a profit, and we said to the agents: your task is to reduce loss ratios. How can we do it?
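Loss ratio, the metric the agents were asked to drive down, is claims paid out as a fraction of premiums earned. The figures below are made up for illustration:

```python
# Loss ratio: incurred claims divided by earned premiums. A lower ratio
# means more premium is kept as underwriting margin. Figures are invented.
def loss_ratio(claims_incurred: float, premiums_earned: float) -> float:
    """Incurred claims divided by earned premiums."""
    return claims_incurred / premiums_earned

before = loss_ratio(claims_incurred=780_000, premiums_earned=1_000_000)
after = loss_ratio(claims_incurred=550_000, premiums_earned=1_000_000)
print(f"{before:.0%} -> {after:.0%}")
```

Collapsing this ratio is what later doubled underwriting profitability in the case study.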
They struggled with that to start with, so we thought, well, part of the reason is they don't have actuarial understanding. So we created an actuary bot in one week. An actuary normally has 30 years of experience, school, university, training; we got 30 years down to one week. But this is the important point: it's not that we needed a full human actuary, we just needed actuarial input, the rules-based thinking.
>> And you were doing that 18 months ago, so the models weren't as good at maths as they are now.
>> All we had to do to create this actuary bot in one week was take information that was on the web. That's all it was, because of course in the company there was no insurance knowledge. But when you take that bit of knowledge and combine it with the knowledge and ways of thinking of the other agents, they came up with a number of novel innovations to reduce loss ratios and increase profitability. Now, interestingly, some of them were semi-illegal and unethical.
>> Look at the screen. It's just generating ideas.
>> Why have we got high loss ratios? We've got people who make lots of claims, so let's get rid of the people who make the claims.
>> It's the paperclip maximizer.
>> It is exactly that. Or the one about, how do we reduce climate change? Kill all the humans. So in this case, and this is another really interesting example of how you train the agents, rather than going into the system and re-engineering everything, all we did to deal with the unethical or semi-legal ideas was post a picture of the company values onto the Slack channel and give them a PDF of the law. That's all it was. You didn't have to retrain them; you just showed them: now comply with this.
>> That was one of the things you mentioned earlier, the guardrails. You've got to be really clear about where the four corners of the box are, because as you just say, AI will find novel ways to operate outside of them.
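A toy version of that guardrail step can show the control flow: before an action is taken, a compliance check screens proposals against stated values. A real deployment would use an LLM judge with the values document and the law as context; the keyword screen and banned phrases below are invented purely to make the structure concrete:

```python
# Toy guardrail: screen proposed innovations against stated values before
# any action is taken. The banned phrases are invented; a real system would
# use an LLM judge with the values document and the law as context.
BANNED_INTENTS = ("get rid of customers", "deny all claims", "drop claimants")

def violates_values(proposal: str) -> bool:
    text = proposal.lower()
    return any(phrase in text for phrase in BANNED_INTENTS)

def filter_innovations(proposals: list[str]) -> list[str]:
    # Keep only proposals that pass the values/compliance screen.
    return [p for p in proposals if not violates_values(p)]

ideas = [
    "Reward low-risk behaviour with premium discounts",
    "Drop claimants who file frequently",   # the paperclip-maximizer idea
]
print(filter_innovations(ideas))
```

The design point is that the filter sits between idea generation and execution, which is exactly where the QA and GDPR bots described next operate.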
>> Exactly. So before action is taken, we've created QA and GDPR and other bots who monitor what's going on. It's a bit like a human meeting: there'll be someone in the corner of the room thinking about regulation. They may not say anything, but when they need to, they pipe up.
>> Well, come on to the regulatory aspect then, because at Implement we're not trying to create a regulated box or a medical device; we kind of do everything else. Other companies are raising billions of pounds doing that. But on the regulatory aspects: you only need one issue in that kind of agentic waterfall, one hallucination or problem somewhere that's missed, and then the human even misses it on the way out of the building, and you've got a problem.
>> Yeah, I think there are two aspects to that. One is obviously putting in place the QA agents that are monitoring things, so you minimize that risk. The other thing, in terms of compliance, is that the regulator, in insurance and other sectors, requires to know how decisions are made. The great thing about agentic is it's all recorded, all logged. It's an immutable record and a perfect audit trail. If the regulator asks for information about how we did something, we press a button and send them 5,000 pages.
>> Whereas with an input into an AI model, no one really knows how it works.
>> But this has got a Slack audit trail. Without that, you don't have it, because it's showing the thinking.
>> You have input and output, but not always why the output looks like that.
>> Well, exactly. Except that on Slack, you see all their discussions.
>> Yeah.
>> You see all their discussions. It's all there, all recorded. So you can show: this is the logic flow, this is how they got to that answer. Whereas humans get in a room, no one's recording it, and you don't know how things are done.
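The "immutable record" property can be sketched as a hash-chained log: each entry commits to the one before it, so any retroactive edit breaks the chain. This is an illustration of the idea only, not the actual mechanism behind Slack's message history:

```python
# Sketch of an immutable audit trail as a hash chain: each entry commits
# to the previous one, so any retroactive edit breaks verification.
# Illustrative only; not how Slack actually stores message history.
import hashlib
import json

def append_entry(log: list[dict], agent: str, message: str) -> None:
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {"agent": agent, "message": message, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any tampered entry invalidates the chain."""
    prev = "genesis"
    for entry in log:
        body = {k: entry[k] for k in ("agent", "message", "prev")}
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "facilitator", "Ticket #77: forming team")
append_entry(log, "qa-agent", "Proposal clears threshold")
print(verify(log))            # the untouched chain verifies
log[0]["message"] = "edited"  # retroactive tampering...
print(verify(log))            # ...breaks the chain
```

This is the structural difference from an unrecorded human meeting: the record is not just complete, it is tamper-evident.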
>> Which but which by still as well.
>> Yeah. We've managed a lot, and there's lots more to do around this when you get into more complex types of work.
>> And that was 18 months ago, right? The technology's moved on. If you think about the complexity of building solutions like that, which you could do on our platform, a lot of people would look at that and run a mile; they're worried about deploying a voice agent to make or take phone calls and that's it, or a knowledge base with a chatbot over it. So you're right up here in terms of complexity on that continuum. But given that was possible then, it's even more possible now. And we can start to wrap up here: where do you think people are? What do you see as the blockers? Because if what you're saying was doable then, it's much more doable now. Why aren't we seeing more of this?
>> It's that great phrase, isn't it? The future is already here; it's just not evenly distributed. That classic phrase. Things that are possible are just not pervasive at the moment. And it depends who you're showing it to; that example is very extreme and sophisticated. But if you're speaking to a visionary leader who wants to do something different and get the benefits, then, and I'm working with a number of companies, they say, "Yeah, I want that tomorrow." Part of the reason is the results. I didn't mention them, but going back to that case study: the company saved €2 million; they didn't have to hire 15 people. That's a pretty good saving. They got better NPS scores than their competitors, by 25%, because the agents respond quicker; that's a big metric as well. They collapsed loss ratios and doubled underwriting profitability; that's a pretty big tick in the box. And finally, human employee satisfaction rose, because the company was doing more and they were contributing more. In fact, the people the digital twins were made of got a pay rise as a thank you. So I can talk about the theory, but I can show what's happening when I show those metrics.
>> Yeah, those are what people are judged on. But that's even more my point then. Now you can show the design, you can show what the path of the possible is, and now you've got the metrics we were talking about earlier: you can show it's measurable, that productivity is up, revenue is up, cost is optimized, capacity is up, the experience is better. So again, why isn't everybody doing this?
>> It takes time, firstly, and they don't know what they don't know. I'm focusing on the insurance industry for now, but I'm talking to very senior executives who've never seen that before.
I show it to them and the the visionary ones say, "I want it." Yeah. Now, what
happens? They say, "I want it." And
then, of course, they get into the organizational treacle. So, you know, the COO says, "Oh, but we're very busy doing other things," or, "We don't have the resources. We don't have the budget." And then the IT people say, "Oh, this is very dangerous, and there's all kinds of security issues." So, for bigger companies, you've got that sort of organizational inertia that you've just got to break through, be patient, and push on. But what I always say to
people is that that's why you need a strategy. You need to show that if we did 10 of those types of deployments, plus a whole stream of much simpler deployments, you want a portfolio, not just the most advanced, and that collectively it is going to deliver X by Y time. And that is why it's worth changing things and activating things now. Unless you do that, it all sounds too difficult: why bother, let's just do a few tests. So, we're all in this sort of business of implementing AI, but just to wrap up, what's the advice you would give any organization about how to approach implementing and deploying this technology? So, firstly, to
understand what agentic AI is, how it works, and the different types of it. That's number one. Secondly, to appreciate and accept that this is about changing your operational capacity. You've got your human workforce and your digital workforce; collectively, you can do more with more. Just understanding those two key aspects is a big step forward. That's step one.
Step two is creating a proper strategy. It doesn't mean months and months of analysis; you can do it in weeks, literally, because we use agents to do the strategy with companies. Create a strategy hypothesis that demonstrates what we could achieve, and then, like good entrepreneurs, test and iterate. Test and iterate fast, but within a strategic framework, not in a random, unstructured way. So
that's what I tend to suggest to people, rather than diving in with pilots and tests and so on. That's the way of doing it properly, if we believe that agentic AI is about delivering superior commercial performance over the next three years, which is how I tend to position it. And what's your one line then, so we can make a nice neat short to promo the podcast? What's your one line in terms of what AI means and the power of it? I'd say: superior commercial performance. That was fascinating. Thanks for joining us.
We'll talk offline again about various leads and ideas you've got as well, especially in the insurance sector. But thanks for sharing that. Pleasure. No, really good to see you. Yeah,
>> Thank you. Cheers.
>> Well, that's it for this week. We've kept it just under an hour, I think, just about. Great talk with Sim. He's one of our partners. So if you're interested, if, like Simon, you're a consultant, you're in strategy, or you're building managed services, there will be solutions you can deploy on our platform; work with us. If you're interested, go to implementai.io/partners and you can download the partner document there. Do register and access our resource center; we're adding more documents over time. And if you're interested, join Alex on our webinars, Thursdays at 11:30; details will be on LinkedIn, and you can learn all about the AOS. So that's it for this week. We won't be doing much traveling over the next week. We're
coming up to Christmas. We're now working hard on deploying customers on our platform, which is a really exciting time for us. Thanks for joining. See you next time.
Thank you for joining us for this episode of the Implement AI podcast. If you're interested in learning more about how we can assist you and your business in leveraging AI for growth and efficiency, visit our website at implementai.io. Don't forget to subscribe to the Implement AI podcast on Apple Podcasts, on Spotify, or wherever you listen, to stay updated on future episodes. Thank you for listening, and we'll see you next time.