
Cybersecurity’s Year in Review: ClickFix Attacks, Vibecoding Vulnerabilities & Shadow Agents

By IBM Technology

Summary

Topics Covered

  • AI Agents Automate Kill Chains
  • Protect AI from Social Engineering
  • Vibe Coding Risks Malware Injection
  • Trust Drives Cyber Failures
  • Sessions Become New Perimeter

Full Transcript

this idea of vibe coding. Yeah, it's great until it's not. The AI, the code that it writes, you know, is getting better. It didn't used to be great, but it's getting better and it's going to keep getting better. I like to say, don't bet against AI.

>> All that and more on security intelligence.

Hello and welcome to Security Intelligence, IBM's weekly cyber security podcast where we break down the most interesting stories in the field.

Now, today we're doing something a little bit different. Uh instead of breaking down stories from the past week, we are breaking down stories from the past year with the help of some of

our favorite panelists. Of course,

they're all favorites. These six just said yes when we asked them to join us.

And here with me today is Patrick Austin, staff writer for IBM Think and the news correspondent for this podcast.

Patrick, thank you for stepping in front of the camera with me today.

>> Thanks for having me, Matt. Uh, happy to be here.

>> Absolutely. Now, the way the show is going to work is we have three segments for you all today. We've got Michelle Alvarez and Jeff Crume on the year in AI and data security. We've got Dave Bales and Nick Bradley on the year in incident response. And we've got Suja Viswesan and Sridhar Muppidi with a broad survey of some of the big lessons, trends, and innovations of 2025. Now Patrick, you talked to Suja and Sridhar. Can you

maybe give us a little taste of what your conversation covered?

>> Of course. Yeah, um, Suja, Sridhar, and myself touched on the topic of cyber security software and cyber security in the past year. It's been a very, uh, busy year in terms of cyber security, especially when it comes to AI-powered, um, cyber security incidents. We discussed, um, the proliferation of AI agents. We discussed shadow AI, um, and the havoc it can wreak on a corporation, potentially. And we discussed innovations and responses that companies can take in the next year when it comes to combating, uh, cyber attacks.

>> Absolutely. And that kind of, you know, really funnels into what Michelle and Jeff and I talked about, which was some of those gaps and edges in AI security, right? Like we talk a lot about, you know, how we have to protect ourselves against the AI and deal with shadow AI and stuff, but we also talked a lot in my conversation with Michelle and Jeff about the need to protect AI from people, right? Like this is a new kind of technology that can be socially engineered in a way that other things can't. So, we covered that. And

along the way, we also touched on vibe coding, cyber security sensationalism, quantum threats, and of course, IBM's Cost of a Data Breach report. Can't look back on the year without touching that one. And then with Nick and Dave, we dove into a kind of veritable carnival of cyber incidents. You know, we talked about the Tea app and ClickFix and the Shai-Hulud worm, Scattered Lapsus$ Hunters, all kinds of stuff. And a big theme that emerged for this year and next year across both of my conversations, and I'll be interested to hear if you felt like this theme was in

yours, too, was this idea of trust. Who

has it? Who shouldn't have it? How do

you give it out? Did that come up in any of your conversations at all?

>> Absolutely. I think that maybe was one of the most, uh, pivotal or integral points of the conversation. Um, giving that trust to an AI agent or system and, you know, knowing that you can trust it to do its job properly without any sort of, um, malicious, um, intent. Um, we talked about observability and resilience as well, um, and how it is super important to be able to see what AI is doing behind the scenes, so you know where exactly your data is going and how it's being treated. Um, it's a very huge, um, problem and one that I think, you know, companies will take steps to solve in the next year.

>> We can only hope, you know. But yeah, so our watchword for this year, as we look back and look forward, is going to be trust. And so without further ado, let's see what our experts had to say.

There was no shortage of AI security stories this year. In fact, there has been no shortage since ChatGPT burst on the scene in 2022. The impact of AI on security has been top of mind ever since. But 2025 did feel like a year where we started to reflect a bit more on that coverage. We started to poke and prod at the discourse and wondered, are we doing this right? Here today to talk about the year in AI and data security.

With me, Michelle Alvarez, manager, X-Force Threat Intelligence, and Jeff Crume, distinguished engineer, master inventor, AI and data security.

Folks, to kick us off, I was hoping to open the floor to you to tell us what do you think were the biggest stories or most interesting trends or moments in AI and data security this year. Michelle,

I'll go to you first. What really caught your attention this year?

>> Absolutely. I think we have moved from, you know, contemplating our attackers using AI in attacks to verifying it, right? I believe in our X-Force Threat Intelligence Index that we published in February of 2024, we had said we had not seen confirmed use of AI. Now we can erase that, right? We know that they're actively using AI. So, so much has been around: are attackers leveraging it, how are they leveraging it, um, to what extent are they leveraging it, right? And also, um, how can we use AI to defend against attacks, generally speaking, regardless of whether AI is being used?
>> Absolutely. We have definitely seen the AI threats go from being this thing we talked about, like, oh, it could happen, to, like, they're happening now, right? And I think about that, uh, Anthropic story, very recently at the time of recording this anyway, where they busted that spy ring that was using Claude Code to basically run an automated campaign of espionage. So, like, we are seeing these AI threats actually take place now.

Jeff, how about you? What kind of stood out for you this year?

>> Yeah, definitely the same kind of thing.

AI has gone from being a more theoretical threat to a real threat. And in particular, we've seen in the second half of the year, I think, the emergence of agents, AI agents, as a way to amplify the attacker's capabilities. It amplifies risk for us. Now, I'm not against agents. Agents can do some really great stuff if they're in the right hands and they're under the right control with the right governance and so forth around them. But the reality is, and we've seen these stories breaking lately here toward the end of the year, that agents have not only emerged, but they're actually able to automate and run the entire kill chain independently, autonomously, from beginning to end.

We've had, you know, where it was easier for attackers to do their thing by clicking on a tool and not having to understand the details. Now we've got where it does the whole thing beginning to end, and that is a trend I don't expect to see abating. That's going to continue.
>> And, you know, I also think about how this year didn't just show us how attackers were using AI and how defenders were using AI, but it also showed us how we have to protect our AI, right? And I'm paraphrasing IBM's Suja Viswesan with this, actually, but it's something that she said in a recent podcast, which was: as much as we need to protect ourselves from AI, we need to protect our AI from people, right? And

one of the things that jumped out to me in that, uh, kind of realm was these agent session smuggling attacks that Palo Alto uncovered a little while back. I don't know if you folks remember this, but basically they found out there was a vulnerability in the agent-to-agent (A2A) communication protocol that basically allows you to set up a malicious agent to socially engineer a legitimate agent

without the end user even knowing. And

this kind of thing again brings to my mind that need to protect uh our AI as part of our systems. I'm wondering if you have any thoughts on that. Do you

think we focus on AI protection enough?

Michelle, I'll start with you.
>> Matt, I think securing AI itself is often an overlooked aspect of AI security. Uh, most of the attention, of course, has been on how AI is being used in attacks, uh, or how it can be used in defenses.

Uh, things like we've seen, of course, AI phishing, deepfakes, um, and we're definitely leveraging AI, uh, in order to detect attacks and identify and contain them. But, uh, just as important, and what we're hopefully trying to emphasize during our discussions with clients, is how important it is to secure AI applications and the infrastructure and the data flows themselves. Uh, because we do anticipate this to be an attack surface that continues to grow as AI adoption increases, and therefore there will be an incentive for attackers to target AI technology.
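To make the session-smuggling defense discussed here a bit more concrete, here is a minimal sketch of the kind of check a host application could run before letting one agent act on another agent's instructions. Every name in it (the peer IDs, the `task` field, the function itself) is hypothetical, for illustration only; this is not a real A2A protocol API.

```python
# Illustrative sketch: before a local agent acts on an instruction that
# arrived over an agent-to-agent channel, check the sender against an
# allowlist and refuse mid-session changes of task scope. A smuggled
# session often tries to widen the task ("also export the customer
# table...") without the end user ever seeing it.

ALLOWED_PEERS = {"billing-agent", "search-agent"}  # hypothetical known agents

def accept_instruction(sender_id: str, session_task: str, instruction: dict) -> bool:
    """Return True only if the peer is known and the instruction stays
    inside the task the user originally approved for this session."""
    if sender_id not in ALLOWED_PEERS:
        return False  # unknown agent: possible smuggled session
    if instruction.get("task") != session_task:
        return False  # scope change without user approval
    return True

# A legitimate peer staying on task is accepted...
print(accept_instruction("search-agent", "find-invoices", {"task": "find-invoices"}))  # True
# ...while the same peer switching tasks mid-session is rejected.
print(accept_instruction("search-agent", "find-invoices", {"task": "exfiltrate-db"}))  # False
```

The point of the sketch is only the shape of the control: identity plus scope checks applied on every inter-agent message, not just at session start.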

>> And speaking of that expanding attack surface, uh, something that I've been thinking about a lot, and I'm not the only person, is the kind of rise of vibe coding, right? We saw that, uh, term coined this year back in February, uh, by OpenAI co-founder Andrej Karpathy. And it's kind of really taken everybody by storm now, vibe coding, right? You don't even need to know how to code. You just tell the AI what you want and it spits it out. But that opens up some new vulnerabilities in our coding process, doesn't it? And Jeff, I was wondering if I could get your take on vibe coding from this security perspective. Do you think it introduces kind of new flaws in the attack surface? How do you feel about this?

>> Oh, yeah. So first of all, the AI as an attack surface, that to me is a really important question. It's a presentation I've been doing at conference events all over the country, uh, for this whole year, a presentation called AI: The New Attack Surface, and that's one of the things I drill into. And this idea of vibe coding, yeah, it's great until it's not. The AI, the code that it writes, you know, is getting better. It didn't used to be great, but it's getting better and it's going to keep getting better. I like to say, don't bet against AI. I keep

hearing people make predictions about, well, AI can do this, but it can't do that. Well, uh, most of those

that. Well, uh, most of those predictions, we've in fact seen AI learn to do those things. So, even though the code's not perfect now, you know what?

Nobody, no humans are writing perfect code either. So, that can't be our

code either. So, that can't be our standard. However, to your to your

standard. However, to your to your question though, as we get less and less involved in the writing of code, then it

means if the AI were to be subverted in some way to inject malware into our code, when are we going to recognize it?

You know, who is going to know that that's happened? So, that means if we're going to have an AI creating our code, we're going to need other AI looking over that code. And we've got to make sure that neither of them have been poisoned or are under the control of another system. Um, it's not just the hallucinations that we have to care about. It's some of the intentional things. So if I wanted to be, uh, really sneaky, uh, I would just infiltrate the AI model you're using to write your code, poison it, and then have it do my injects for me. And then, um, good luck trying to find that. So that's, uh, that's going to be, uh, certainly another thing. Uh, but we'll adjust as we always have.
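One concrete shape the "other AI (or tooling) looking over the generated code" idea can take is a pre-merge gate that flags generated code for human review when it contains patterns worth a second look. A minimal sketch follows; the trigger list is a hypothetical starting point, not a vetted ruleset, and real pipelines would use proper static analysis rather than regexes.

```python
# Sketch of a pre-merge review gate for AI-generated code: return the
# reasons a snippet should be routed to a human reviewer before merge.
import re

REVIEW_TRIGGERS = [
    (r"\beval\s*\(", "dynamic code execution"),
    (r"\bexec\s*\(", "dynamic code execution"),
    (r"\bsubprocess\b", "spawns external processes"),
    (r"requests?\.(get|post)\s*\(\s*['\"]http", "hard-coded network call"),
    (r"base64\.b64decode", "decodes embedded payload"),
]

def flag_for_review(generated_code: str) -> list[str]:
    """Return human-readable reasons this generated snippet needs review."""
    return [reason for pattern, reason in REVIEW_TRIGGERS
            if re.search(pattern, generated_code)]

snippet = "import base64\nexec(base64.b64decode(blob))\n"
print(flag_for_review(snippet))  # ['dynamic code execution', 'decodes embedded payload']
```

The design choice being illustrated: the gate doesn't try to decide whether the code is malicious, it only decides whether a human (or a second model) must look before the code lands.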

>> Yeah. I think that, you know, don't bet against AI is a really good point, right? Because I think about when, again, ChatGPT, you know, came out in 2022, it was really impressive, but people kept saying, oh, but you know, look at the videos it makes, they're bad, right? And, oh, you can tell. Look at how far we've come now, right? So you apply the same thing to the cyber attacks, and it's like, yeah, obviously you don't want to bet against the AI. But sometimes I wonder if we bet maybe too much on the AI, because another thing that's happened this year is we've seen an increasing kind of skepticism among some cyber security professionals about how we cover stories,

particularly when it comes to AI. And the thing that comes to mind for me there is, I don't know if you folks remember, but back in the spring MIT put out this paper about how 80% of ransomware attacks involve AI somehow. And then, uh, in the fall, some cyber security researchers, including Marcus Hutchins and Kevin Beaumont, started to poke some holes in that methodology. They dug in and they realized that the paper was, I don't want to use too strong a term, but it wasn't as good as they claimed. And that paper got pulled down. And this is a paper that was widely cited after it came out. So I'm

wondering if we have a sort of sensationalism problem in the way that we cover AI and security. Jeff, I'd

like to get your take on that. Do you

think we're dealing with some sensationalism here?

>> So, we've got AI that hallucinates, and now we've got security people that are hallucinating and making up stuff as well. So, now maybe we can't just blame the AI for these kinds of problems. But, uh, yeah, I mean, there's always been an issue with this. Um, there's a balance where we want to make people aware of what the threats are. But if

you go too far down the FUD road, the fear, uncertainty, and doubt, and play that angle up too much, well, then people start to tune out, because it's like the little boy who cried wolf.

First of all, nobody wants to hear that there's a wolf, but okay, if there is, we need to tell them. But if we exaggerate these claims, and we've been hearing apocalyptic claims, you know, for as long as I've been doing security, and that's decades now. Um, and, you know, the one thing that all these exaggerated claims have in common is that, you know, they're not well researched. They're

probably well intended. I'm assuming

most people have good intentions, although I'm sure some don't. But in the era of AI, the most important skill is critical thinking.

Whether you're using AI to generate that information or not. And I just read a story this morning that a colleague sent to me: a major consulting firm doing a contract, and they had used AI, apparently, to build some of the case in the report that they were giving to their client, and it turned out the AI was hallucinating sources. So yeah, we can't just trust everything that comes out, just the same way that we haven't been able to just trust everything that comes out of the internet. So we've got

to have, and I know this will be a shock: not everything on the internet is true. So just remember that. So everything that does come out, we need to put that through a critical thinking filter and say, okay, is it possible that something was reduced by a thousand percent? No,

mathematically that doesn't work. So,

we're going to have to look at that again and, uh, make sure that we keep our heads fully engaged. And, um, you know, look, we need to know about the warnings, but we don't need to exaggerate them. In fact, I would argue we don't need to exaggerate any of this stuff, because it's bad enough if you just tell the accurate story.

>> Michelle, I'm wondering if you have any thoughts on this kind of sensationalism issue that's been maybe plaguing us for, as Jeff says, quite some time. It's not

unique to AI, but what have you seen this year?
>> Yeah, absolutely. To add on to what Jeff said, basically we want to make sure that we are cautious and vigilant but not paranoid. Sometimes that's easier said than done. And oftentimes, as security practitioners, we also have a responsibility to not just say, "Hey, there's a thing," but also to say, "There's not a thing." And that's really going to depend on an organization's cyber threat landscape. It's not one-size-fits-all, right? It's going to depend on your industry, where you operate geographically speaking, and so

if you see something in the news that might fit your profile, maybe you should raise your, uh, red flag, right? Um, but also partner with a threat intelligence provider that can help you sort of make that decision and prioritize what are really the threats out there. And sometimes we just don't know, because all the facts are not on the table when something is first reported.

>> And it's not paranoia if everyone really is out to get you. [laughter]

>> What about if every AI is out to get you? Um, no. I feel like, uh, no, we can't end this segment without a look at the Cost of a Data Breach report, because this is, you know, IBM's annual report, dare I say a landmark report. It comes out, everybody looks at it, and they should.

It's got tons of great information on what data breaches are like for organizations right now. So to round out the segment, and luckily enough I have two people who probably know this report better than anybody else, I was hoping I could get your takes on, you know, this year in data breaches. What do you think were the biggest kind of takeaways from this report this year? And what should organizations kind of keep in mind as we head into 2026? And, um, Jeff, I'll start with you. What do you think?

>> So I think there were a lot of interesting things that we could pull from this. Um, this is the first report where we really started seeing some AI-related information come out, and, you know, it wasn't great. You know, we're finding on the order of 60% of organizations have no AI governance and security policy in place. Okay, that's not a good thing if this is going to be something that's core to the business and we have not yet defined how it's supposed to operate. What are the boundaries? What are the ways that we ensure that it's operating within those boundaries? Well, then we're just asking for a mess. Um, here's how I think of it: I like to run. I've got

a race tomorrow. I hope I know where the finish line for that race is. But if they, you know, just, uh, fire the gun and say, "Everybody run until we tell you to stop," I'm not running that race. So, this is what it's like if your organization has not defined "this is what success looks like for us." Without governance policies, without security policies in place, you don't know what it looks like. You're running and you have no idea what it looks like in the end. And, by the way, I want

end. And and by the way, I I I want Michelle to answer and then I I want to come back and say one thing about what I I actually built a time machine, went into the future and was able to read the

future cost of a data breach report and I want to tell you what I found in that.

>> I can't wait to hear about your trip to the future. Michelle, give us your take. What did you pull away from the report this year?

>> I can't top that. And Jeff, you've got so many things going on between your race tomorrow and jumping into the future, but I do have to second, Matt, your point about this being a landmark report. And I think one of the things that makes this a landmark report is that every year we can count on it telling us what are the top things that increase the cost of a breach and what are the top things that reduce the costs. Uh, and great news, guys: globally speaking, on average, uh, the cost of a data breach went down. That's the first time in five years. So what are organizations

doing right? And that's always what we hear from CISOs and the C-suite: what are other organizations getting right? We want to do the same thing. Well, now we have, and have had for over a decade now, this report that tells us that. And what I think is the big takeaway is the, um, increased use of AI and automation to detect breaches faster and to contain them quicker, which then equates to a reduction in costs.

Now, of course, regionally speaking, um, we have some areas where we're going to find higher costs because of, uh, regulatory costs, like in the US, um, which is understandable. But overall, on average, costs have been coming down, um, and that's some really great news to share this year.

>> Now, future man Jeff, tell us what you found.

>> Yes. Yes. So, I built my time machine and I jumped into the future, and I actually found a copy of the Cost of a Data Breach report, and what it said was: now that we have quantum computers, people have been breaking our crypto and reading all of our secrets, and the data breaches are occurring because quantum computers can read the stuff that we didn't make quantum safe. And so, it's a cautionary tale. Now, the only thing I wish I had done is, I forgot to look at a calendar while I was in the future. So, I can't tell you exactly which year the report was from, and it didn't have a date on it. So, I don't know exactly what year that's going to happen, but I'm just going to say that the die has already been cast. One day, we're going to have a Cost of a Data Breach report where one of the contributors to data breaches will be the cracking of crypto. Um, and that's something we really haven't had to worry about in the past. So, there you go.
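The practical first step against that harvest-now-decrypt-later scenario is a crypto inventory: find which long-lived secrets still sit behind quantum-vulnerable public-key algorithms and migrate those first. Here is a deliberately tiny sketch of that classification exercise; the asset names are made up, and the algorithm lists reflect my understanding of current NIST guidance (RSA/ECC-family broken by Shor's algorithm at scale; ML-KEM/ML-DSA/SLH-DSA as the PQC standards), so treat them as a starting point rather than an authority.

```python
# Hedged sketch of a crypto inventory pass: classify which assets are
# still protected by quantum-vulnerable algorithms, so migration can be
# prioritized for data that must stay secret for years.

QUANTUM_VULNERABLE = {"rsa", "ecdsa", "ecdh", "dh", "dsa"}   # Shor-breakable at scale
QUANTUM_SAFE = {"ml-kem", "ml-dsa", "slh-dsa", "aes-256"}    # PQC standards / large symmetric keys

def migration_priority(assets: dict[str, str]) -> list[str]:
    """Return asset names (sorted) still relying on quantum-vulnerable algorithms."""
    return sorted(name for name, alg in assets.items()
                  if alg.lower() in QUANTUM_VULNERABLE)

inventory = {"vpn-tunnel": "ECDH", "code-signing": "ML-DSA", "archive-backups": "RSA"}
print(migration_priority(inventory))  # ['archive-backups', 'vpn-tunnel']
```

In a real program this inventory would be built from certificate stores, TLS configurations, and key management systems rather than a hand-written dictionary.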

>> That's an extremely good point. I think that for a long time the kind of harvest-now-decrypt-later threat was just kind of a fun thought experiment, but we have seen a lot of quantum advances this year, and we are getting pretty close to it. Uh, that is the end of our segment. That's all the time we have for today. I want to thank you, Jeff and Michelle, both for being here, not just for this segment, but for all the expertise you've shared on our podcast over the past few months. And I

really hope I see you both again on the show many many times in the new year.

And uh one last thing before we go, Jeff, I know you have a video coming out with some uh predictions for 2026. You

want to tease that for the audience here before we end?

>> Yeah, sure. So, this has become now an annual, uh, thing for me, coming out with a video on the IBM Technology YouTube channel, um, where, first of all, I make these predictions. So I look back and see if my predictions were correct from the previous year, and this is about the third or fourth year that I've done this now, and then go looking forward into the future. And so I've

actually touched on some of those here.

Um AI is going to be a huge part as it was last time uh in terms of the predictions that I think will be coming.

Now, since I have the time machine, you know, I can actually go into those and see what they are, and I know they're all going to be 100% accurate. So there's that.
>> Oh, I love it. I can't wait to see it. Uh, but again, thank you both, folks, for being here today.
>> My pleasure. Thank you.

>> What was the biggest cyber security incident of 2025?

Not exactly an easy question to answer, considering that any given week is bound to give you at least one candidate. I mean, off the top of my head, we had the JLR attack. We had the Tea app data leak. We had, uh, Scattered Lapsus$ Hunters doing all kinds of things. And so it's really difficult to narrow it down. But also, I'm not the cyber security expert.

I'm just a humble podcast host. And

today I do have two experts with me, familiar faces to those who have been watching the show. We've got Dave Bales and Nick Bradley, both of X-Force Incident Command and hosts of the Not

the Situation Room podcast, which means they are very well positioned to survey what has happened in 2025 and tell us what were the biggest incidents, the biggest stories to pay attention to. So,

I'm going to open up the conversation to you folks here, and maybe we'll start with you, Nick. When you look back at everything you folks have covered this year, or seen happen, what sticks out to you? What are the stories that you feel like defined 2025?
>> That is not so easy for 2025. You know, usually when we have a given year, there's some given nasty event that happened that everyone will remember for all time. This one doesn't really have one specific one. But I guess if I were to just reach into my grab bag and pull out the ones that are going to stick with me for this year, it's ClickFix, supply chain attacks, and then everything AI.

>> That makes a lot of sense. Yeah, I was going to say those three are the ones that come up constantly through just about every weekly open source intelligence review. One of those is going to be in there somewhere. And then if we want to talk about, you know, not so much security events, but things that really affected us, there's the "too big to fail" that proved us wrong, between AWS and Cloudflare. So, uh, there you have it.

>> Absolutely. Dave, how about you? When you look back at the year, what are the ones that kind of pop out to you?
>> Shiny happy spiders. [laughter] That's what we took to calling them. Uh, the Shiny Lapsus$, uh, Hunters, uh, was probably the biggest story to me of the year, because it just went away and it came back and it went away.

>> They threatened to go away a couple of times and then they just came right back anyway.

>> Promises, promises. I think the whole thing was just meant to sow FUD, you know, fear, uncertainty, and doubt. I'm pretty sure that's all it was for. Don't ever listen to anything they say.

>> Yeah. I mean, you know, I remember when they announced their kind of retirement, I was like, "Oh, maybe they are actually done." And you two both were like, "Yeah, don't hold your breath." And sure enough, you were correct about that. The very next day, I feel like they were like, "We're back."

You know?

>> It pretty much was at that level, right? It was like they just decided to pout because they had some of their infrastructure taken away. Like, fine, we quit. But we don't. Not really. [laughter]

>> They were taking their ball home, you know.

>> This time we're not going to attack hospitals. Except we did. [laughter]

>> It's like you said, Nick, there's so many different things that popped up, and you mentioned ones that were also floating around in my head, too, right? First up is ClickFix, right? Because I feel like you're right. I saw this everywhere, and we even saw, you know, evolutions of it, like I think FileFix was one, and there might have been one called JackFix, or maybe I made that up. Let's dig into ClickFix a little bit.

You know, what made that such a, I don't know, an important story for you this year, aside from the fact that it just showed up everywhere? What are your thoughts there?

>> So, it was crazy successful, and I don't really get why, right? Because there's a lot of steps to it. It's not that easy. I mean, what is ClickFix, right? For anybody that might be sitting there going, "Please explain to me, what is it?" ClickFix is a type of cyber attack where you trick the user into running the malicious commands on their own computer for you. In some cases it'll start with, like, a fake error message or security alert that, you know, convinces them to take action, and then gives them a way to fix it, and it may go all the way to providing them the malicious script to go run on their own machine. And people do it. [gasps] So I'm not sure why it was so successful. That's an answer I can't give you. But I can tell you that it was very successful.
>> Yeah, there was more success in that than I thought there was going to be as well, because, like Nick said, it was 30 steps to get to one issue, and you kept looking at it going, uh, when are these people going to fix this?

>> It relied on having people just, like, run scripts on their computers, which most users don't even know what that means, right? It's like, the fact that it worked so well, that people are just willing to open up a part of their computer they never go to and just copy-paste something because, I don't know, they watched, like, a YouTube video with instructions that said they should do that. It's a little bit crazy to me, you know. Um, I just, I don't know. It feels like maybe our security education doesn't work super well. But has it ever worked super well? You know what I mean?

>> Just when you think it's safe to go back in the water, right, Matt? Because, I mean, we're thinking we finally got people to figure out: don't click on stuff. Don't click on stuff. Don't click on stuff. But run malicious code for the bad guy? I'm all about that. [laughter]

>> Well, you know, I mean, technically, you're right. They're listening. They're not clicking. They're hitting Ctrl+V. That's a little bit different, you know. So [laughter] now we have to tell them: don't click on things, don't press buttons, maybe just don't do anything. No, I'm kidding. Um, but yeah, ClickFix was an interesting one for that very reason. I wouldn't expect such a complicated social engineering scam to work so well, but it did, right?
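Since the ClickFix lure ends with the victim pasting a command into the Run dialog or a terminal, one common defensive angle is watching process-creation telemetry for an interactive shell launched straight from a desktop application with obfuscation flags. Here is a rough sketch of that idea; the field names and pattern list are illustrative, in the spirit of EDR telemetry generally, not any specific product's detection.

```python
# Sketch: flag a shell spawned directly from a desktop app (the typical
# result of a user pasting a ClickFix payload) when its command line
# carries common obfuscation or download-cradle markers.

DESKTOP_PARENTS = {"explorer.exe", "chrome.exe", "msedge.exe"}
SUSPICIOUS_ARGS = ("-enc", "-encodedcommand", "iex", "mshta", "frombase64string")

def looks_like_clickfix(parent: str, child: str, cmdline: str) -> bool:
    """Flag a shell launched from a desktop app with obfuscated arguments."""
    cmdline = cmdline.lower()
    return (parent.lower() in DESKTOP_PARENTS
            and child.lower() in {"powershell.exe", "cmd.exe"}
            and any(token in cmdline for token in SUSPICIOUS_ARGS))

print(looks_like_clickfix("explorer.exe", "powershell.exe", "powershell -enc SQBFAFgA"))  # True
print(looks_like_clickfix("explorer.exe", "powershell.exe", "powershell -File backup.ps1"))  # False
```

A real detection would also weigh clipboard provenance and the user's normal behavior, but the parent-child-plus-arguments triple is the core signal.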

Now, you also mentioned supply chain attacks, which, yeah, I feel like every single week there was some kind of crazy supply chain attack that we saw. We had the Salesloft Drift breach, which turned into the Gainsight breach, et cetera, et cetera. Shai-Hulud is back now, again, ripping through npm registries or whatever. So, let's talk a little bit about the supply chain angle here. You

know, the state of supply chain security. I mean, did we see more

security. I mean, did we see more attacks of that kind this year or was it just they were more prominent? You know,

why did why did it stick out for you folks? Well, I think it was inevitable

folks? Well, I think it was inevitable because we were watching continuously multiple supply chain attacks be it

GitHub repo, npm what you know whatever the the storage medium is. It was the the supply chain kept getting hit over and over and we just saw these stories

building and building until eventually finally shy hallude manifested itself out of this and now we have the self-propagating malware that's taking advantage of the supply chain attack.

So, I it was coming. It was It was meant to be. We saw it on the horizon and then it

be. We saw it on the horizon and then it showed its ugly head, >> and it's not going to go away. It's it's

it's going to stick around. It is going to be probably one of the larger attack surfaces in 26 to look out for because

it's it's relatively easy to do and it's a it's a target base that is going to be around. supply chain isn't going

around. supply chain isn't going anywhere, so it's always going to be available to attack.
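One defensive habit the panel's point implies is refusing to trust a registry artifact just because its name matches. As a minimal sketch, and not any specific package manager's implementation, here is the lockfile idea in miniature: an artifact is accepted only if its digest matches a hash pinned at review time (the package name and bytes below are made up for illustration):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(name: str, data: bytes, pins: dict) -> bool:
    """Accept an artifact only if its digest matches the pinned one.

    Unpinned names are rejected outright: "new" packages get review,
    not automatic trust.
    """
    expected = pins.get(name)
    if expected is None:
        return False
    return sha256_hex(data) == expected

# Pin recorded when the dependency was first reviewed (hypothetical package).
good = b"module.exports = function leftPad(s, n) { /* ... */ }"
pins = {"left-pad-9.9.9.tgz": sha256_hex(good)}

print(verify_artifact("left-pad-9.9.9.tgz", good, pins))                       # True
print(verify_artifact("left-pad-9.9.9.tgz", good + b"\nstealTokens()", pins))  # False
```

Real lockfiles (package-lock.json, poetry.lock, go.sum) do this at scale; a Shai-Hulud-style trojaned re-publish changes the bytes, so the pinned hash no longer matches.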

>> It was a surface that people trusted too much, if I could say it that way. They felt like they were in a safe environment, everybody, you know, that's contributing here, I guess, to the GitHub repos or whichever repo we're talking about. There was too much trust, and then it came back to bite us.

I'm glad you folks brought up this trust angle, and this idea of the attack surface not going away, because it hearkens back to something you said, Dave. We recorded an episode yesterday, and I asked whether our developers are the new front line, and you said, "No, no, they're not the new front line. They're the new targets of these attacks, right? Like, they're who we're going after." Wondering if you could expand on that angle a little bit for our listeners who maybe didn't hear that episode, but I also just think it's a very important thing to say.

>> Yeah, the developers have a reputation to uphold, and when they get hit with these, uh, Shai-Hulud attacks, for example, the developer's reputation is what takes a hit, and with Shai-Hulud they're impersonating the developer. So now you've gotten to the point where you don't know who to believe anymore. Uh, the developer can say, hey, I fixed the code, but did that actually come from the developer? You don't know. So now developers have to work extra hard to rebuild their reputation, to regain that trust that they lost because of something they had absolutely no control over.

>> Thinking back then about all of these stories we've just discussed, you know, your ClickFixes, your supply chain attacks, your Scattered Lapsus$ Hunters, I know it can be difficult, but if you could kind of distill all of this down to some key themes for the year, maybe things like developers as the new targets, I don't know, but what do you think the key themes are that we need to carry forward with us into 2026? Any thoughts there? Let's start with you, Dave.

>> Watching out for the largest breach in history. [laughter] I'm not kidding. Every single week we got one more story that was the largest breach in history. And, you know, just some examples: the Chinese surveillance network was a 631-gigabyte database theft, um, labeled the largest breach in history. And then we had one that was 2.9 billion records. That was the largest breach in history. And then we had one, oddly enough, that was, like, 1.6, uh, terabytes, or billion records. That was labeled the largest breach in history, but obviously 2.9 is larger than 1.6. So we can't figure out what the largest breach is actually going to be, but it's going to come back.

>> Don't forget Alliance Health, Dave. That one was almost 1.5 million Social Security numbers.

>> And that's a big number.

>> Yeah, you don't have to put a B behind Social Security numbers for that one to be important. You put an M behind that one and it's still a big number.

>> The other thing I want to throw out, uh, that we didn't mention, is browser extensions, because browser extensions were a bit of the soup du jour this year as well. We saw those come up a lot, and, uh, I almost forgot to mention it, but it still builds on the same issue we just saw with the supply chain compromises, right? Because it builds on expected trust: people enabling browser extensions to help them with whatever they're working on, not realizing that you have just granted permission to something when you really don't know exactly what it does. And that one ends up stealing credentials, stealing data, stealing more PII, and the circle goes round and round.
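The "what did I just grant?" problem is at least checkable before install: Chrome-style extensions declare what they want in manifest.json. A hedged sketch (the risk list is my own illustration, not an official taxonomy, and the extension named below is hypothetical) that flags the broad grants worth a second look:

```python
# Permissions that let an extension read or touch far more than one site.
# This set is illustrative, not an official ranking.
RISKY = {"<all_urls>", "webRequest", "cookies", "history", "clipboardRead"}

def audit_manifest(manifest: dict) -> list:
    """Return the sorted risky permissions a manifest requests."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return sorted(requested & RISKY)

# A hypothetical "video downloader" asking for much more than it needs.
manifest = {
    "name": "Totally Fine Video Downloader",
    "permissions": ["storage", "cookies", "webRequest"],
    "host_permissions": ["<all_urls>"],
}
flags = audit_manifest(manifest)
print(flags)  # ['<all_urls>', 'cookies', 'webRequest']
```

The point isn't that these permissions are always malicious, only that a download helper requesting cookies on every site deserves the due-diligence pause the panel is asking for.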

>> Yeah, you want to get that YouTube video downloaded, you want to pull the sound from it, you install the browser extension, and all of a sudden you're on a botnet. It hearkens back to something we were talking about again yesterday, which is that we use all these things as proxies for trust sometimes, right? Where we're like, "Oh, I downloaded this off the, you know, the extension store. It must be fine." You don't know that. You know what I mean? Just because it's there. I was reading today about, you know, this ring of hackers who would upload a perfectly legitimate, uh, browser extension. It passes the checks. It gets in there, and then afterwards they sneak some bad code into it, and it goes undetected. You download that, they're in your browser now, and you trusted it because you thought it was legitimate. So I feel like that's another important lesson from the year: you have to do your personal due diligence, right? Like, you can't just assume you can trust these things, no matter where they're coming from.

>> You know, Matt, there's a word that keeps coming up here, and we keep saying it, and that word is trust.

It's so much implied trust. And before this started, I decided I was going to go jump on our YouTube channel and look at our podcasts and see which were the most popular ones of the year, right? And the very first one that popped up as the most viewed was our episode called Who Spilled the Tea? And that one's about the Tea app that was used for rating dates. I won't get into the details. Rating, that's with a T, rating dates. And that one was breached. And so everything that people were pouring into that app was disclosed. Too much trust again. Because in most cases, that's something most people probably wouldn't share with anybody. But here they were just sharing their heart out about how this date went last night, whether it was good or whether it was bad. And now that's out for the world to read. And let's see, the other ones: the next one is That's Private, or Is It? And that one was about the Alliance data breach. Again, uh, you trusted them to handle your data, and they didn't handle it so well. Uh, let's see, other episodes really quick, because I know time is probably short. Uh, we had an episode called Space: The Final Frontier, and that one was about patch management failures, and that had all to do with the year-old SharePoint vulnerability that blew up. They chained a couple of vulnerabilities together. That one got ugly, and come to find out those vulnerabilities had been disclosed, multiple of them, for well over a year and just not patched. And then, lastly, out of the top four was Digital Escorts. And that one is not what you think. That was about allowing contractors in other countries, less secure and untrusted countries, to access US infrastructure, because, again, we're trusting them.

>> That's going to be the running theme through '26: trust. And think about this. We're talking about breaches that happen to companies that have security measures in place. Think about what this is doing to the public at large. They don't have the same training that we do. So I think it's up to us as security professionals to get as much information out there as we can, doing things like this show, uh, and letting people know: hey, you think that you can trust your computer, but you really can't. You really need to stay on your toes and follow all of this advice that's being given to you through news outlets, through podcasts, through, uh, readings and papers and things. I worry about the trust, uh, that the public gives their computers more so than I do, uh, the security companies, or, you know, just companies in general who have a security posture. It's going to get worse before it gets better.

>> Absolutely. And I think that's an incredible way to wrap up this segment. I want to thank you guys both for being here, not just today, but for appearing on the show so many times, and I hope we have you back a ton of times in the new year, because you guys are great fun. And this is my personal pitch to all of our viewers: please go watch Not the Situation Room. Give them a follow if you haven't seen it yet. If you like what we do here, you'll love what they do. They have even less of a filter than we have.

>> True enough. Thanks, Matt.

>> Thanks, Matt. Appreciate it.

>> Hello, Suja Visen and Shridhar Mupiti. Thank you for joining me today. I'd love to talk about the year that was, um, you know, 2025, and the string of high-profile cybersecurity incidents that we've seen, which have been having some long-lasting effects on various companies, um, and just the world in general. Um, Shridhar, I would love to start with you, um, and just get your take on the past year in terms of cybersecurity and how you think it went, in general and maybe even specifically.

>> I think for me, um, this was a year of silent AI sprawl, right? Not just for the attackers but also for the defenders. If you look at it from one perspective, we had some really big incidents, like Salt Typhoon, which hit not just, uh, you know, the government side but also the private sector, like the telcos, right? On the other side, um, if you look at the defenders and folks like us, uh, the proliferation of AI has absolutely exponentially increased, from vibe coding to being able to use AI for productivity. So that was really, uh, heartening to see, but at the same time it worries me that some of these innovations are, um, ahead of security policies.

>> Yeah, sure, I can see that perspective for sure. Um, Suja, how about you?

>> I mean, see, there are two sides of the coin. With vibe coding comes vibe hacking, right? And then free trials are becoming zero-day vulnerabilities these days. What we have seen, we have seen anything from a big ransomware attack, like the one that happened to Jaguar Land Rover, to a Dutch, uh, employee at a windmill going and installing crypto mining stuff as part of it and then doing things. So we have seen both ends of it. All kinds of crazy things that we could have never imagined. I think that's what technology always does. It always throws a curveball at us and then keeps us on our toes.

>> That's a pretty smart way to start, uh, mining some crypto. Just go straight to the source. [laughter]

>> The other thing, Patrick and Suja, right, deviating slightly from the AI story: we've seen this idea of identity as the next perimeter. I'm beginning to see the session as the new perimeter, right? We're beginning to see things like, you know, the Salesloft and Salesforce situation, or Gemini, or even, um, EchoLeak, right, where just establishing authentication and a session is no longer sufficient, right? Um, attackers are stealing these tokens or credentials and reusing them in ways they're not supposed to be used, and hence causing some devastating results. Right. So we'll see that more and more in the coming years. But to me, you know, I wanted to talk about the session as the new perimeter right now.

>> Sure. Yeah. Um, I mean, I would love for you to expand on that, Shridhar, if you can. Um, you know, what you mean by the session being the new sort of perimeter that we need to establish in terms of security.

>> I think what I mean by this whole, uh, emphasis on session hijacking, right: if you look at the Salesloft, um, attack, the user has authenticated and established a regular, a legitimate, session. And in general we have been used to static security, where you authenticate once, you establish a session, you assume that things are okay, and then you start conducting business. But that may not be okay moving forward, because attackers are now stealing those session credentials, right? And they're able to replay them, um, and use them in ways that they're not designed for. Hence, we need to start thinking about more dynamic security than static security. Right.
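One way to make a stolen token less replayable, sketched here as an illustration rather than any product's actual mechanism, is to bind the session to something the attacker can't easily copy (a device or TLS fingerprint, say) and verify that binding on every request, not just at login:

```python
import hashlib
import hmac
import secrets

# Per-deployment signing key (illustrative; a real service would keep
# and rotate this in a secrets store).
SERVER_KEY = secrets.token_bytes(32)

def issue_token(session_id: str, client_fp: str) -> str:
    """Bind a session id to a client fingerprint with an HMAC tag."""
    msg = f"{session_id}|{client_fp}".encode()
    tag = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def validate(token: str, client_fp: str) -> bool:
    """A replayed token fails unless the fingerprint also matches."""
    session_id, _, tag = token.partition(".")
    msg = f"{session_id}|{client_fp}".encode()
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

token = issue_token("sess-42", "fp-alice-laptop")
print(validate(token, "fp-alice-laptop"))  # True: the legitimate client
print(validate(token, "fp-attacker-box"))  # False: stolen and replayed elsewhere
```

This is the "dynamic security" idea in miniature: the check moves from one-time authentication to a property re-verified per request.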

>> Sure. I'm thinking of, um, you know, stuff like zero trust protocols, to make sure that you are who you say you are, and things like that.

>> Exactly. Right.

>> Suja, you know, I'm wondering if you feel the same way. Do we need to get more granular and maybe more frequent in checking up on our session activity and seeing who's doing what, um, you know, all the time? Or do you think we should maybe keep it the way it is? I maybe think you're on the former side. [laughter]

>> It definitely needs changing, right? With AI browsers, how we browse is changing; how we conduct business is changing, from the consumer perspective as well as the enterprise perspective, because I might log in and give access to my agent to go conduct business. What if somebody poisons that agent and is then able to go do it? Because people are doing it all the time, like, find me the best deal, right? So holiday scams are going on. So people can steal the credentials. Credential stealing is not something new. But what Shridhar is talking about is that in today's world you are delegating it to something else, an AI, an agent. When you're delegating, then somebody can steal it. And when somebody steals it, well, again, cybersecurity is not a question of if, it's a question of when. When it happens, do we have our defenses in place to protect our businesses as well as individuals?

>> It seems, Shridhar, you mentioned earlier the year of AI for all, essentially, um, and Suja, you were mentioning AI browsers and agents. Um, it seems like we are maybe getting a little ahead of our skis in terms of giving these AI agents a lot of power and permissions without establishing serious or hard guidelines for them, and we've seen that taken advantage of in the form of AI agent hacking and agent hijacking, essentially. Um, do you think that there is hope for the AI agent to become more secure and more helpful in response to these attacks against them?

>> We don't have a choice. I don't think it's about hope. We need to make sure that these are secure, because if you had asked me five years back whether autonomous cars would be a reality, I would have said no, no way they can coexist with humans. Now we are taking them instead of, like, an Uber, and driving around. So security needs to come into play. Technology is always going to be running fast, and we need to make sure proper guardrails are in place. And we have done that with data, right? When social media and everything came in, the regulation always came later to catch up. The security comes later. It's constant learning. So that's why I talked about the question of when. When it happens, are we able to make sure that we are secure? Are we able to reduce the blast radius? When it happens, are we able to observe and know what happened, so that we can prevent it from happening in the future? Because without making mistakes, you cannot make innovation. So there is always going to be risk associated with it. But are we able to catch it fast enough, before it becomes too late?

>> I think Suja captured it really well. I mean, my big takeaway, Patrick, for this year is that our controls are predominantly designed for a world where tools and identities change very slowly, right? Um, but that is no longer true, right? And that obviously requires new innovation and all that, but that's the biggest observation for this year.

>> I would love to talk about the external pressures that companies might be facing when it comes to cybersecurity and cybersecurity preparedness, um, with a lot of, you know, government and global regulation changes when it comes to cybersecurity. How do you think that is affecting companies who want to stay at the forefront of the cybersecurity landscape but also maintain their ability to be properly regulated and adhere to government regulations or global cybersecurity policy?

>> I think if you look at the whole geopolitical situation, right, like you said, there are big attacks that are not only impacting the government entities but also the private sector. We've seen that with some of the large telcos with Salt Typhoon, right? So that basically means that the blast radius is not just limited to the state; it is extending to the private sector as well, right? So that kind of creeps into, you talked about, policy. And so what that means is that some of these policies will impact the private and public partnerships, right? If there are policies that are helping cybersecurity, that's awesome, but if they're not, then that's going to slow things down. That doesn't mean that we give up on security, right? That basically means that we just need to think about how we learn to treat these things as the norm and build architectures and systems that are resilient to such changes, right? So that's the way I would think about it: take these external pressures and external volatility as the norm, and then design the system to accommodate for that.

>> I think, see, with these government regulations and everything, things are always going to be changing, right? It's seasonal; it will keep going up and down. So for us, the bigger thing is how we get creative in these ways. What I see in the security industry is security and observability really coming together, because the CISO and the CIO and the IT departments are coming together to combat this. Because when you are deploying agents, when you are deploying AI in your enterprise, you need to be thinking about, hey, what is happening, that is one thing, and then how did it happen, what kind of processes were run to get us to this. So when these things come together, then the budget becomes bigger. It's not just about the CISO budget now; the CIO budget comes in, the automation budget comes in, to think about how we protect and at the same time make people more productive, right? We will get very creative when these things happen, but the mission doesn't go away, because when a cybersecurity breach happens, it's not just about, oh, somebody's data got lost or something bad happened. It's about the reputation of a company. You lose people's trust.

>> I like what you said there, Suja, right, the observability. Because, to me, one part that I was trying to say is that, okay, change is the norm, right, and you have to figure out how not to rely on a stable policy and endless hiring forever and just react. The other part that you're saying, which is really important, is that observability budgets could increase, from one perspective. The other thing is also, we don't know what we don't know, right? So having that observability helps in identifying what we don't know very quickly, and in being able to react with precision and accuracy, right? So I like what you said.

>> You know, in the next year we're going to see a lot of new emerging threats, um, targeting companies, you know, public- and private-sector businesses, and even consumers. Um, Suja, I'd love to know what you think these threats might be, um, and whether or not you think we are currently prepared to deal with them.

>> I think one of the things I would say is shadow agents and shadow AI, because we are all using it day to day, and every new employee that comes in from college, they're all AI literate. They are used to using these agents. So they're going to be using them. How do enterprises prepare themselves so that we don't inadvertently cause harm to ourselves, like this guy who was crypto mining using a windmill? So that is one thing. The other part that I think about is that static encryption is dead. With quantum computing becoming a reality very soon, are enterprises prepared to be agile when it comes to their crypto posture? Shridhar?
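The crypto-agility point boils down to keeping the algorithm choice in configuration rather than hard-coded. A toy sketch of that registry pattern, using stdlib hash functions only as stand-ins for whatever approved (eventually post-quantum) primitives an organization adopts:

```python
import hashlib

# The primitive is looked up by name, so "rotating" to a newer algorithm
# is a policy/config change, not a code rewrite. sha3_512 here is only a
# stand-in for some future approved primitive.
ALGORITHMS = {
    "sha256": hashlib.sha256,
    "sha3_512": hashlib.sha3_512,
}

def digest(data: bytes, policy: dict):
    """Hash data with whatever algorithm the current policy names."""
    name = policy["hash"]
    return name, ALGORITHMS[name](data).hexdigest()

policy = {"hash": "sha256"}
algo_before, d_before = digest(b"customer-record", policy)

policy["hash"] = "sha3_512"  # the "agile" rotation: one config change
algo_after, d_after = digest(b"customer-record", policy)

print(algo_before, len(d_before))  # sha256 64
print(algo_after, len(d_after))    # sha3_512 128
```

An enterprise that can't answer "where is each algorithm used, and how do we swap it?" has the static posture Suja is calling dead.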

>> Yeah. No, I think I just want to double-click on the agent part, Suja, a little bit more, right? Because, you know, the shadow AI and shadow agents are already kind of here this year, right? But think about the impact of that: the fact that an agent may inadvertently move data without our knowledge may cause a data loss we didn't expect, right? Imagine an agent which is happily doing some work, helping optimize a, um, workflow, right? And as a part of that, it needs to move data from cloud to on-prem to different SaaS applications, and behind the scenes the data is being moved and manipulated in ways that we didn't plan for. So, as a result, you may see data exposure we didn't plan for, right? That's what worries me.

The other part, um, just double-clicking on that agent, is the identity piece, right? Who has access to what, and what have they done with it? Simple sentence, but it's hard to go and prove the accountability, right? Because, as you've seen with both EchoLeak as well as with Gemini, it was a simple situation in EchoLeak where the chat was used to provide some cryptic instructions, and behind the scenes the agent did what it's supposed to do. It went and used a RAG pattern to go get all the HR data, encoded it in ways that you cannot even stop with a gateway, and exfiltrated it. Right? And that happened because the agent was trying to do the right thing, obtaining privileges and expanding what it can do to go get the data in a rightful way. Right? So those accountability issues will be front and center.

Um, and then I completely agree with Suja on static encryption, right? Crypto is becoming so brittle that it is an operational and a strategic risk; it's not just a theoretical one, that, hey, quantum is going to come and take my job away. It is more of an operational thing that we have to worry about.
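The accountability question the panel keeps circling ("who has access to what, and what have they done with it") is, mechanically, an audit-trail question. A minimal sketch, with hypothetical agent and resource names, of per-agent action logging you could query after the fact:

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def record_access(agent_id: str, resource: str, action: str) -> None:
    """Append one attributable event per agent action."""
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "resource": resource,
        "action": action,
    })

def actions_by(agent_id: str) -> list:
    """Answer 'what has this agent done?' from the trail."""
    return [(e["resource"], e["action"])
            for e in AUDIT_LOG if e["agent"] == agent_id]

# A workflow agent doing its job, plus the hop you'd want to catch.
record_access("agent-hr-summarizer", "hr/records", "read")
record_access("agent-hr-summarizer", "external/webhook", "write")  # exfil-shaped
record_access("agent-billing", "invoices/2025-12", "read")

print(actions_by("agent-hr-summarizer"))
```

Real systems would sign and centralize these events, but even this shape turns "the agent did something with HR data" from a guess into a query.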

>> I would love to, uh, touch on the innovations and responses to these adversarial attacks coming in the next year. You know, you talk about shadow AI and managing these agents operating maliciously behind the scenes. Um, what are you most hopeful for in the coming year in terms of improving our defenses and maintaining that, uh, sort of proper cybersecurity posture? Uh, Suja, I can start with you.

>> We talked about crypto agility; that is going to be the new resilience. We need to make sure that the systems are resilient for the new world of computing and AI. So I would say resilience: are the systems resilient enough to take these hits and then still operate as well as they should? That is the innovation that I'm really excited about. I would also say this AI-powered threat intelligence, because human beings get tired looking at so many things now. How do we use this tool to make it much more, uh, well, you cannot fight agents with humans. You have to fight agents with agents. So we need to make sure the defense agents are ready to change based on what attack surface, what attack, is coming through.

>> I mean, listen, I've been in cybersecurity for more than a couple of decades now, right, and I've always worried about insider attacks. I think what I'm most excited about is the innovation that is going to help protect the AI, right? Being able to treat every agent like we treat humans, treat them as first-class citizens, so that you identify them, you provide the right level of delegation, you ensure that we have fine-grained authorization, and so that you can drive accountability, right? Especially as agents are drifting, and agents have autonomous behavior, driving that accountability, I'm excited for that. The other piece I'm also excited about is innovations in observability, right? Observability is not a new topic, but agents are autonomous. Agents also drift. So there will have to be new types of observability that come up in the next year or so that provide observability for the agent behavior: not just understanding, you know, the shadow agents and the shadow AI, but also what they have done, with the appropriate agent behavior, so that we can then figure out, how do I minimize that data exposure, how do I drive accountability, how do I answer my simple question of who has access to what and what they've done with it. Um, and that will help even with the crypto agility Suja was alluding to. The notion is, it's not about the next new crypto cipher, for we've seen that since time immemorial, right? It's about being able to change that dynamically, right? Whether it's the regulation saying it, whether it's operational requirements, all that is something which is really exciting for 2026.

>> Cool. Yeah, it sounds exciting. I'm excited to have an AI agent, uh, coworker that I won't see at the, uh, at the office party.

>> But I want to add one thing, Patrick, where Shridhar was talking about treating agents as first-class citizens, right? What I would say is, treat them like humans. The bigger problem is, look, everybody talks about the threats of AI, what agents can do. The bigger thing is humans, because how do we prevent humans from harming these AI and agents? Because that's what happens in most of these cases. They're just doing their job, the AI and agents. They're doing what we tell them to do. But how do you prevent the bad actors from manipulating them and then going and doing that? That's where the cyber defense comes into the picture big time. So the thing that we need to be working towards is, how do we prevent humans from harming AI and agents doing good stuff? It's about agent and human cohabitation.

>> You just ask the agent to get their boss's permission. That's all. That's my contribution. Shridhar, Suja, thank you so much for your time and for talking about, um, cybersecurity for the upcoming year. Um, thank you for your time.

>> Thank you.

>> Thank you.

>> Okay, so that's all the time we have for today. I want to thank all of our panelists: Nick, Dave, Jeff, Michelle, Suja, and Shridhar. And thank you, Patrick, for stepping in front of the camera. And, you know, I hope we'll see some more of you on the show next year. What do you think?

Maybe?

>> Yeah, why not? That sounds great. Thanks for having me. I had a real blast.

>> Wonderful. I'm glad we did, too. And I want to thank the viewers and the listeners for sticking with us over the past few months, for giving us a chance to start a new cybersecurity podcast. And I hope to see you all in the new year. Let's go ahead and stay safe out there.
