Social media and the age of AI misinformation | Aishwarya Reganti | TEDxJacksonville
By TEDx Talks
Summary
Topics Covered
- Fake Pentagon Image Triggered Real Market Panic
- AI Image Generation Exploded in Just Five Years
- AI Plus Social Media Is a Deadly Combination
- Detection Accuracy Is Collapsing as AI Improves
- Seeing Is No Longer Believing
Full Transcript
May 22nd, 2023. It was a typical Monday morning, with people easing back into their routines. But then something completely unexpected happened. A report surfaced on Twitter, supposedly from Bloomberg's news account, and it read, "The Pentagon is on fire," with this image of thick black smoke rising into the sky. It was terrifying. And of course, with such a major event unfolding, social media erupted. This image was shared so many times that it reached hundreds of thousands of people in a matter of minutes. And what followed was the dramatic response from the stock market. It took a nosedive, erasing billions of dollars in value in just a matter of hours.
Now, here's the catch. None of this was real. The image was generated using artificial intelligence, or AI, and Bloomberg's news account turned out to be fake. But who would do something so terrible? Turns out that our world has no shortage of bad actors. Their motives stretch far and wide, with some of them just wanting to make a quick buck, others wanting to sway political opinion, and still others who just enjoy creating chaos and watching things fall apart. In the Pentagon incident specifically, these bad actors left no trace. But the local media and authorities stepped in pretty quickly, and the panic died down soon. And yet this incident reveals something far more unsettling about the world we live in today, where the boundary between reality and deception is getting increasingly unclear.
Misinformation has been a persistent problem, with bad actors eager to spread it. But just over the past few years, two powerful technologies, artificial intelligence and the widespread use of social media, have advanced and overlapped at an unprecedented rate, making misinformation spread wider, faster, and more convincingly than ever before. Essentially, this overlap gives bad actors a disturbing edge. It allows them to serve us the information that they choose. It's ironic, really, that we even call it a social media feed, because we are being fed their agenda bit by bit, scroll by scroll.
I am a researcher working on identifying AI-generated misinformation on social media, and my work harnesses the same AI technology for the right purpose: to identify and flag misinformation. And let me tell you, just in the past few years, it's gotten significantly harder for us researchers to tell apart AI-generated and real content. But don't worry, I'm not here to give you another media literacy lecture today. You've probably heard enough of it, and you're even bored at this point. As someone who's deeply involved in this space, I just want you to see how far artificial intelligence has come and the challenges that lie ahead as it merges with social media. We're entering a new era where seeing isn't believing anymore, because this powerful overlap is designed to show you what you already want to believe. Let me repeat that and let it sink in. This powerful overlap is designed to show you what you already want to believe. And as consumers of this modern technology, we must be aware that this phenomenon is only starting to take off and must be approached with informed caution.
Artificial intelligence, or AI, has been around for a while now, but generative AI, the kind that both understands and generates information, has only truly taken off in the last five years. And this has happened because of the lower costs of computing as well as significant advancements in research.
Now, back in 2019, if you asked a typical AI chat system to write a story about a dragon who discovers a secret kingdom, this is what it would look like: "The dragon went to the kingdom. It was a secret. The dragon saw the kingdom." Blah, blah, blah. I mean, it's almost like a kid trying to rephrase the same sentence over and over again, hoping that no one realizes they're lost. Fast forward to 2024. Today, if you ask a typical AI chat system exactly the same question, this is what it would generate. Notice how the storytelling is super immersive and has such great attention to detail, even with just the naming of the dragon. Archonis feels like a character from Game of Thrones, right? Super impressive.
Let's take this up a notch. This is how AI-generated images looked back in 2019. And I know most of you are having a hard time telling what these are. They're supposed to be images of dogs. Take a closer look and maybe you'll find one. Even back in 2019, there were slightly better image generation systems, but they required deep education in the space to use and were also expensive. Now, fast forward to 2024. Today, this is what AI-generated dogs look like. High definition and super cute. And you can literally generate images like these for free. And no, you don't need to be a software programmer to do it. It's funny how all of these AI capabilities can seem so impressive until we encounter unsettling incidents like the Pentagon one.
Not just that: just in the past two years, there have been so many incidents of AI-generated misinformation, like this image that went viral last year, showing how AI could disrupt the KYC, or know-your-customer, process, which is used for ID verification by a bunch of different institutions. Both the woman here and the ID are AI-generated. And if you've been following the 2024 elections, you've probably seen tons of AI images like this one. This one specifically shows President-elect Trump being pursued by the police. Not just that, AI can now generate fully realistic short videos, something that wasn't even possible just two years ago.
Let's just pause on this and move to the other piece of the puzzle: social media, the engine that fuels the spread and amplification of misinformation. Now, as of January 2024, 70% of the US population uses some form of social media, and notably, 54% of the population uses social media as their source of news. What that means is that more than half of the population in the United States uses social media as their source of news.
And social media platforms have thrived largely due to one major factor: personalization. Simply put, it shows you exactly what you want to see and believe.
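The personalization idea here can be sketched as a toy ranking function. The profile fields, weights, and scoring formula below are illustrative assumptions of mine, not any real platform's algorithm: the point is only that ranking by existing affinity surfaces content a user already agrees with.

```python
# Toy sketch of an engagement-based personalization ranker.
# All fields and weights are illustrative assumptions, not any
# platform's real recommendation system.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # topic -> affinity score in 0..1, inferred from past behavior
    interests: dict[str, float] = field(default_factory=dict)

@dataclass
class Post:
    post_id: str
    topic: str
    base_engagement: float  # e.g., likes/shares normalized to 0..1

def rank_feed(user: UserProfile, posts: list[Post]) -> list[Post]:
    """Order posts so those matching the user's existing affinities
    come first: 'showing you what you already want to believe'."""
    def score(p: Post) -> float:
        affinity = user.interests.get(p.topic, 0.0)
        return 0.7 * affinity + 0.3 * p.base_engagement
    return sorted(posts, key=score, reverse=True)

user = UserProfile(interests={"politics": 0.9, "sports": 0.1})
posts = [Post("a", "sports", 0.8), Post("b", "politics", 0.4)]
feed = rank_feed(user, posts)
print([p.post_id for p in feed])  # ['b', 'a']: affinity outweighs raw engagement
```

Note the design consequence: even a post with lower overall engagement outranks a more popular one when it matches what the user already believes, which is exactly the edge the talk says bad actors exploit.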
Now, social media platforms gather your information, like age, gender, location, biases, preferences, beliefs, and everything else that you might not even know, to fuel personalization algorithms that show you the content you're most likely to engage with. Now, really think of it. We have this amazing technology to create content, and social media as the platform to deliver it to the right audience, those who might be influenced. This is a deadly combination. Today, almost anybody with the intention can use cheap, accessible AI systems to create misinformation. It can be textual or graphical. And social media platforms amplify this misinformation and show it to targeted audiences at a large scale. It's no wonder that misinformation was ranked the top global risk for the next two years by the World Economic Forum.
Now, here are the pressing questions. How should we tackle misinformation, and who should really take up the responsibility? And should we tackle misinformation itself, or the bad actors who are crafting it? Turns out that going after the source, in a world where anonymity is so easy to maintain, can be super challenging.
Tackling misinformation at the content level proves to be a far more practical approach. Now, AI companies as well as social media platforms have their own sets of initiatives to combat misinformation, and independent AI researchers like myself have been working on reverse AI systems to combat misinformation. We're also developing watermarking methods, which involve embedding hidden markers in AI-generated content so that it can be identified early on. Over the past four years, I've had the privilege of working with some of the best minds in academia to host a global workshop where we invite submissions for AI systems that can detect misinformation. We have made some great strides, and some of the systems we built were super robust. But here's the worrying part: just last year, and more so this year, our detection accuracies have gone down significantly.
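The watermarking idea mentioned above, hiding a detectable statistical marker in generated content, can be sketched with a toy word-level scheme. Real research systems bias a language model's token probabilities during generation; this hash-based version, with an assumed shared key, is only a minimal illustration of the principle.

```python
# Toy sketch of statistical text watermarking: a generator that prefers
# a keyed pseudo-random "green list" of words, and a detector that
# measures how many words fall on that list. Purely illustrative; real
# schemes operate on a model's token probabilities.
import hashlib

KEY = "demo-key"  # assumed shared secret between generator and detector

def is_green(word: str) -> bool:
    """Pseudo-randomly assign roughly half of all words to the green list."""
    digest = hashlib.sha256((KEY + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words in the text that are on the green list."""
    words = text.split()
    return sum(is_green(w) for w in words) / len(words) if words else 0.0

def looks_watermarked(text: str, threshold: float = 0.8) -> bool:
    """Ordinary text lands near 0.5 green; a generator that favors
    green words pushes the fraction toward 1.0."""
    return green_fraction(text) >= threshold

# A watermarking generator would keep only green candidates while writing.
candidates = [f"word{i}" for i in range(50)]
marked_text = " ".join(w for w in candidates if is_green(w))
print(looks_watermarked(marked_text))  # True: every kept word is green
```

A detector holding the key can then flag text whose green fraction is too high to be chance, while humans never notice the marker; the weakness, as the talk's falling detection accuracies suggest, is that heavy paraphrasing can wash the signal out.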
This only goes to show how hard the problem of misinformation is becoming, and how easy it is becoming for AI to generate misinformation. Now, as researchers and companies work on combating misinformation, one thing is clear: this problem is here to stay, and it's exploding exponentially.
And regardless of how good detection systems get, the problem can never be fully solved unless all of us, the consumers, play an active part. Luckily for us, today we're still at a point where AI-generated content can be identified if we just take a closer look. In the first image, the lamp post looks kind of off: it's both inside and outside the fence. And although the building might look like the Pentagon, the windows and the shape are completely different. And in the Trump example, it turns out that the police are supposedly chasing him while not even looking at him. Clearly, that's not how real chases work. You can also use popular news verification websites to identify whether a piece of news is indeed true.
Now, while all of these solutions might work in the short term, we're slowly approaching a phase where it might be super hard to tell the difference. And keep in mind that these bad actors are often as smart as, or smarter than, we are. They craft content with such precision that it might be hard for us to question it. We are so caught up in how this content speaks to us that whether or not it's true becomes secondary. And that is why it is so important to change our mindset. Seeing is not believing anymore. We need to move to a mindset of actively questioning, verifying, and analyzing the content that we see.
We're entering a new era, and the choice is ours: to be passively influenced by synthetic, targeted content, or to take control and question what we consume. The choice is still truly ours. Thank you, Jacksonville.
[Applause]