
AI writing is "bad"... so now what?

By Mina Le

Summary

Topics Covered

  • AI Writing Criticism Is Misplaced
  • AI Exposes Broken Systems, Not Broken People
  • AI Depletes Cognitive Capacity
  • Humanness Outweighs AI Quality

Full Transcript

- This video is brought to you by Squarespace, an all-in-one platform for building a brand and growing your business online.

Happy New Year.

Okay.

By the time this video comes out, it will be long past new year, but this is the first video of the year.

So first things first, thank you all so much for supporting me and watching my videos over the past six years.

It's been an absolute blessing, especially as I've grown a lot and subsequently changed a lot over that time.

Welcome back, everyone, and today we're gonna be rating Disney Princess dresses.

I'm just so grateful for anyone who's been here and continues to be here.

- Thank you. Thank you all of my dear, dear friends.

- [Patron] Oh.

- Some project based updates.

I recently started uploading shorts.

Uh-oh.

I specifically have launched a series called Iconic Costume that I'm really excited about, because it's a callback to my fashion history origins.

Check it out if you're interested.

I also cross-post it on Instagram and TikTok, but it's here on YouTube too.

With this short form series, I plan to dissect a cool movie or TV costume with each new episode.

I also run a Substack blog that I'm working to pair more directly as a resource for additional content for my videos, as well as to continue to share updates on what I'm reading and watching.

Okay, let's get started.

(birds chirping) (cat meowing) So I decided not to do a 2026 ins and outs list, but I did have lots of fun looking through everyone else's.

One of the major things I noticed on people's outs lists was generative AI and specifically AI writing.

As I said, I'm on Substack, and I feel like every day people on there are AI-investigating someone else's writing, trying to perform a gotcha when they see too many em dashes, or just using "this sounds like AI" as a dunk on someone's skills.

A part of me fully understands the anxiety.

I am a writer, and so I'm worried all the time about AI normalization in the writing space.

And I'm also a researcher, so I'm also worried about the inaccuracies that ChatGPT touts as facts.

And also I pay for some Substack subscriptions, and I'd be really upset if I was unknowingly paying for someone who's just generating some bullshit.

I don't wanna get into the debate about AI overall because there's a lot of misinformation and propaganda circulating about it.

I have friends who are worried about AI taking their jobs.

I have friends who tell me how important AI is in the medical field.

I also know the water usage and environmental impact is much more complex than what the headlines say.

Hank Green released a really good video about that recently.

I'm leaving the science part to the people who work in science.

In this video, I'm purely looking at LLMs, AKA large language models that generate text such as ChatGPT, and the impacts they have on the brain, our education and our capacity to make art, because that's what I know about.

I don't wanna bury the lede too much.

I'm against using LLMs for writing.

However, at the same time, I think that "AI produces bad writing" is actually an overinflated criticism.

And I think trying to dunk on someone's AI coded writing is unproductive, and we need to reframe the way that we talk about it.

So let's get started.

Attention.

There's a new color theme on my Squarespace website.

You know, every so often I'm like, "Let's change it up.

I'm a little bored."

And thank goodness I'm using Squarespace because they make it so easy.

There's just so many customization options, color palettes, and fonts available.

I spent a few fun hours just kind of like tinkering and figuring out what aesthetics I am into at this very moment.

Of course, design is only one part of having a website.

So to aid you in growing your business, Squarespace also offers plenty of analytic breakdowns.

You can see who's checking you out, how long they're spending on your page, where they're based, et cetera.

And if you wanna take your business to another level, Squarespace offers storefront capabilities to sell anything from physical products to digital downloads to memberships.

Check out Squarespace for a free trial, and when you're ready to launch, go to squarespace.com/minale

to get 10% off your first purchase of a website or domain.

- [Announcer] Begin the day with Able Mable.

She'll wake you at your preset time.

(cheerful music) - Can a robot write a symphony?

Can a robot turn a canvas into a beautiful masterpiece?

- Can you?

- Okay, the first thing is, I think we all need to get on the same page about what the hallmarks of AI writing are.

Sam Kriss wrote a viral article called "Why Does AI Write Like That?" for The New York Times last month, which I think sums it up really well.

Em dashes; sentences that follow the phrasing "it's not X, it's Y"; the verb "delve," which in 2022 appeared in only roughly one in every 10,000 abstracts collected in PubMed but by 2024 had shot up in usage by 2,700%; and the rule of threes.

Kriss describes the rule of threes using the example of a feel-good story about an abandoned baby that was circulating on Facebook and LinkedIn, a story that he suspects of being AI-written.

Here are a few sentences from the story, "No family. No calls.


Just silence.

She was too young. Too single.

Too inexperienced."

He also puts it this way, "When AI wants to be lightheartedly dismissive of something, it will almost always describe that thing as an X with Y and Z.

If you ask ChatGPT to write a catty take down of Elon Musk, it'll call him a Reddit troll with wifi and billions.

Tell Grok to be mean about koala bears and it'll say that they're over-hyped fur balls with a eucalyptus addiction and an Instagram filter.

A lot of the time, one or both of Y or Z are either already implicit in X (which Reddit trolls don't have wifi?) or make no sense at all.

So how did these AI chat bots come up with this way of writing?

According to Kriss, em dashes are present in a lot of high quality writing that AI is trained on, and so AI overproduces them when it's creating sentences.

There's a technical term for this called overfitting, which is defined as a model that fits too closely or even exactly to its training data.

As for the word delve, Kriss hypothesizes that AI can generalize one region's dialect and apply it globally.

So for example, in Nigeria, delve is used a lot more frequently in speech.

Kriss writes, "Robot behavior might actually just be another human culture, refracted through the machine."

Nathan Lambert also adds, in his Substack post "Why AI Writing Is Mid," that once a word or phrase becomes part of an AI chat bot's linguistic repertoire during training, models might continue using it because it's the least demanding way to accomplish a given task.

And in general, for all the weird stylistic decisions that AI makes, Lambert suggests that it's because style isn't a leading training objective.

It's harder to measure and determine what good style is.

People disagree about what good writing is all the time, and probably most importantly, style can't be optimized well.

Good writing takes longer to process because there's more friction involved.

He also notes that the training exploits subtle signals for sycophancy, including length; therefore, answers are encouraged to be longer.

This results in AI often coming across as unnecessarily verbose.

As for the machine-like, uncanny vibe we get when we read AI writing, it's in part the forced neutrality.

Most people associate good writing with having a strong voice, but AI is trained to be factual and broad.

Good writing also tends to be more colorful and writers will often exaggerate facts to get their point of view across.

For example, I'll pull up my electronic copy of Nora Ephron's "I Feel Bad About My Neck" to make this point.

Nora Ephron is honestly one of the best writers, in my opinion, and even if you disagree with that statement, I think what can be said is that she had a very strong point of view.

"I Feel Bad About My Neck" is a collection of essays she's written, and here's a line from one of them titled "Serial Monogamy: A Memoir."

"Just before I moved to New York, two historic events occurred: the birth control pill had been invented and the first Julia Child cookbook was published.

As a result, everyone was having sex, and when the sex was over, you cooked something."

Okay.

Is the first Julia Child Cookbook being published a major historical event?

For some, maybe, but I'd say that an AI bot might peruse some statistics of how many newspapers were covering which events and rank this cookbook severely below JFK getting inaugurated, the first astronaut traveling to space, and the Bay of Pigs invasion.

But Ephron picks this event because the rest of the essay is about how she became obsessed with cooking, specifically her obsession with Chef Michael Field, and how this era coincided with her getting a divorce.

All these experiences are interesting, personal, and subjective, and not something AI is able to experience, because AI is not a human.

Sam Kriss goes on to say that it's AI's lack of humanness that prevents it from being able to create metaphors derived from sensory experience.

He writes, "No AI has ever stood over a huge windswept view, all laid out for its pleasure or sat down hungrily to a great heap of food.

They will never be able to understand the small strange way in which these two experiences are the same."

As a result, AI resorts to using cliches to be descriptive such as with overhyped fur balls with a eucalyptus addiction and an Instagram filter.

What I found interesting, though, is that Max Read argued in his Substack post "Will AI writing ever be good?" that AI companies haven't tried to make an AI that's a good writer, in part because they're not economically incentivized to do so.

Maybe it actually would be possible to create an AI chat bot that makes good writing if programmed right?

And that's why I'm less compelled to fixate on the "AI writing is bad and therefore we shouldn't use it" argument.

Sure, I'd love to read less machine written drivel, but humans can also be naturally bad writers and that doesn't mean that they shouldn't write.

I came across this TikTok posted by user Chickyvan some time ago that I loved.

- There might be cases where ChatGPT is doing a better job than a person can.

We should reject it anyway.

This question of "is ChatGPT writing bad?" seems like it's almost the wrong question to me, because it seems like it's trying to get at whether human beings have any kind of market advantage over the robot.

But we as the anti-AI people, the leftists, the humanists, whatever, we believe that human rights and human dignity come above the priorities of the market.

So like, is ChatGPT writing bad?

It doesn't matter.

It should be rejected anyway.

- I totally agree.

I believe that bad human writing is still better than good AI writing, and that just comes from my own ideologies on artistic integrity and also the importance of friction, something that we are gradually losing.

I define friction, by the way, as the obstacles we have to face to complete tasks.

For example, I have to make a vet appointment for my cat, and the vet clinic I wanted to schedule an appointment with only books over the phone.

This is friction for me because I've become used to booking all my appointments online without having to talk to a human.

However, 100 years ago, it would probably be amazing to be able to find a clinic and their phone number online.

Friction back then would be driving around town, looking for a clinic and probably sitting in a waiting room for an available appointment.

When it comes to writing, while most writers are not using AI to generate finished work, it turns out that a lot of them are actually using AI tools to reduce friction.

According to a study by Gotham Ghostwriters and Bernoff, which surveyed 1,481 people who hold a wide variety of writing-heavy jobs, 61% reported using AI tools.

That doesn't totally surprise me, because a lot of writing jobs aren't the most creative.

However, I was surprised to read that even when it comes to novelists, while "only 11% of fiction writers use AI to create a publishable text, 42% say that they use it to brainstorm, search, or find the right words or phrases."

While I understand that for some people, using ChatGPT to search things feels equivalent to searching on Google or using thesaurus.com, in my opinion, over-relying on AI for research also dilutes the quality of the actual written work, or at least it dilutes the voice you have when you write.

We can't divorce research from writing, we can't divorce friction from writing.

For instance, the way that I outline a video on the topic of AI writing is going to be different than the way another commentator or essayist on this platform would.

The way that I outline a video comes organically from the resources that I read first, the way that my own experiences relate to the topic at hand, my own analytical ideologies, and the discussions I have with my co-writer. The best points I can come up with tend to arrive outta friction, from hours of staring at blank screens, of debating on the phone, of running into dead ends in the research.

And I don't wanna sound like I'm stroking my ego.

Most video essayists on the platform do this.

That's what makes all our voices on here so different.

And also why I don't care if someone makes a video on the same topic as me, because unless they're straight up plagiarizing, the way they come up with their conclusions will probably be unique.

Like, for instance, my next point relates to my experience covering fashion history, something that is personal to me as an individual and probably not a topic that ChatGPT would suggest linking into an essay about AI.

So here we go.

One of my biggest pet peeves about ChatGPT is how surface level the research is and how it's often unsourced.

Whenever I look into fashion or costume history, I have to go deep into out of print books that may have never been scanned onto the internet, and then I'm following that book's list of citations and its bibliography to try to find an obscure magazine scan from 1943.

Years ago (you qualify for a senior discount if you remember this video), I did a video on Anastasia, specifically a costume review of it, and I had to consult with a Russian historian because a lot of the historical references are not translated into English, and they're also in some obscure database only accessible if you have the right academic credentials.

Meanwhile, there's tons of misinformation about fashion history online, a lot of, you know, old wives' tales and whatnot that are reposted onto blogs and even into outdated academic articles.

And ChatGPT is synthesizing all of these things when it comes up with answers.

Last year, BBC and EBU published a study that found that around 45% of AI news queries to ChatGPT, MS Copilot, Gemini and Perplexity produced errors.

Also, you know, we toss around that phrase "history is written by the winners" all the time, and if ChatGPT is not properly sourcing where it gets information from, maybe the facts about Chinese culture you're searching for were sourced from a sinophobic conference paper from 1961.

Of course, maybe you don't need to be totally accurate when writing a fictional story, but I should mention that from that Gotham Ghostwriters survey, the heaviest AI users are content writers, AKA writers who definitely should be fact-checked and edited, especially when they're representing brands and organizations.

These include thought leadership writers, PR Comms professionals and content marketing writers.

At the same time, I'm not surprised this is the biggest demo of AI users, because these professions produce work at higher outputs.

(upbeat music) In my video "You Don't Need to Be Productive," I talked about productivity propaganda in general, and within that video I mentioned the marketing of AI tools as boosting your productivity, and I argued that it's a facade, a repackaging of inbox zero theory.

Oliver Burkeman talks about this in a chapter of "Four Thousand Weeks."

He explains how inbox zero is this mystical concept that knowledge workers try to achieve, but the reality is that the minute you clear your inbox, the replies will start coming in.

So you're back to dealing with 100 correspondences.

Plus if you gain a reputation for being fast at answering emails, suddenly you become the designated point of contact where people will choose to email you over your slower coworker, thus you end up creating more work for yourself.

It's a false promise that you can get ahead of work.

AI falsely promises a similar thing.

Unless your boss is reducing your hours, you'll eventually be expected to increase your output beyond human capabilities.

Rather than sending 100 emails in a day, you should be sending 300.

I was thinking about how smartphones have totally infiltrated daily life, where now it feels almost impossible to navigate the world without them.

Almost every banking platform requires two factor authentication.

You need to download apps to access services or to enter venues.

It's hardly a choice now.

And I'm just concerned about AI becoming ubiquitous in the same way.

Also at this moment, AI tools are not good enough to effectively streamline work.

A study from the MIT Media Lab found that 95% of organizations see no measurable return on their investment in AI technologies, and this is because employees use AI tools to create low-effort, passable-looking work that ends up creating more work for their coworkers to interpret, correct, and eventually redo.

But I understand the pull to use AI tools.

Of course, people are overworked and looking for any lifelines available.

A 2025 Microsoft and LinkedIn survey found that workers are using AI because they're feeling overwhelmed.

68% of people say they struggle with the pace and volume of work and 46% feel burned out.

The study additionally reported that 85% of emails are read in under 15 seconds and the typical person has to read about four emails for every one that they send.

I don't have an email job.

I'm not expected to answer 200 people per day or do highly repetitive tasks.

I'm very grateful for that.

And while again, I think the promise that AI will streamline work is a false promise, I also think that the way some creative workers talk about AI infiltration is just too narrow-minded.

Most of the people I know who are so against AI usage are artists and writers and people who value the humanness of creating these things because that humanness is integral to the medium itself.

But if you work for a company that you don't care for and you don't care about any of the people you have to correspond with and you know they're reading your email in under 15 seconds anyway, like it's not a high form of art, writing an email, then why wouldn't you use AI?

The problem is within the system of work itself; AI is just exposing these problems. It's the same case with education.

I read about professors lamenting over reading college papers that are clearly written with ChatGPT.

- I'm a brand new professor and I am already exhausted with the use of AI.

- However, Julia Birdsall wrote for the University of South Florida St. Petersburg's paper, "The Crow's Nest," about what the actual attitudes of the student body are, which is that they are overwhelmed.

She explains in one example, "A senior who chose to remain anonymous uses AI in all five of their classes.

'For the classes that don't matter for my career, I kind of straight up just use it for everything.

I don't really learn anything, which is bad I know, but I don't have the time,' they said.

The student works two jobs outside of school and also runs a campus club.

Their schedule leaves little room for free time or even sleep.

'If ChatGPT didn't exist, I don't know how I would do it,' they said."

If colleges in the US weren't so expensive, then this student wouldn't have to work two jobs, which I'm assuming they're doing because they need to pay for their education.

When I was a student, there was also this implicit need to busy yourself with clubs and extracurriculars so that you could appear well-rounded when applying for prestigious internships or for jobs post-graduation.

For many students, college isn't an institution just for the love of learning.

It's about becoming a superhuman on paper so that you can have more advantages when entering the job market.

I would argue that high school is already like this.

Guys, like, I'm trying to be really vulnerable here: I have never been a STEM girl.

Science totally went over my head, and its corresponding subjects were the bane of my existence.

So as an example, my high school offered Biology, Honors Biology and AP Biology.

You could choose to take either Biology or Honors Biology freshman year, but if you wanted to take AP Biology, which isn't mandatory, then you had to do Honors beforehand.

I took Honors Biology and it was really hard.

So I had no intentions of taking any AP science classes.

But then my parents took me to this SAT prep lady.

I didn't end up enrolling in her program; it was too expensive.

But I interviewed with her in my sophomore year, and she looked at all my transcripts and said, "You need to take AP science classes so that you can look more well-rounded for college applications."

Mind you, I knew I would never study anything science related in college.

AP biology took over my life.

I spent countless more hours on that subject because it wasn't something that I naturally could understand well.

So the additional downside was that the classes I did enjoy (history classes, for instance), I didn't have enough time to really invest myself in, which made me just kind of hate learning in school altogether.

With that said, if AI existed when I was in high school, I probably would've used it all the time for my science classes.

I was already using Wolfram Alpha and SparkNotes to complete my other homework, not because I'm against learning, obviously, but because I felt like I was in survival mode.

I just didn't have time to do everything, especially when AP biology took so much of my time.

So in my opinion, AI doesn't ruin learning.

It just exposes the problems that already exist in the education system.

Education researcher and professor at SUNY Buffalo, Tiffany Noel backs this up.

She told a high school student reporter, William Liang in an interview, "AI didn't corrupt deep learning.

It revealed that many assignments were never asking for critical thinking in the first place.

Just performance.

AI is just the faster actor. The problem is the script."

Liang concludes and I agree with his assessment, "We're taught that grades matter more than understanding, so if there's an easy shortcut, why wouldn't we take it?"

When I was reading about the history of education in the West, I thought it was really interesting how the model was literally based on factories.

Joel Rose traces the history of the factory model classroom for "The Atlantic."

He describes the model as a publicly funded system where in every American classroom, groups of about 28 students of roughly the same age are taught by one teacher usually in an 800 square foot room.

This model was introduced to America by education reformer Horace Mann inspired by what he saw when he visited Prussia in the 1840s.

This model was apparently designed to build a common sense of national identity.

But as Rose explains, Mann's vision also made sense for the industrial age in which he lived.

The factory line was simply the most efficient way to scale production in general, and the analogous factory model classroom was the most sensible way to rapidly scale a system of schools.

Factories weren't designed to support personalization, neither were schools.

Leland and Kasten explain in their article "Literacy Education for the 21st Century" how this school system was intent on civilizing young people, with the primary goal of preparing them for cog jobs in the factory.

As H. Kliebard wrote in 1971, "Our schools are, in a sense, factories in which the raw products, children, are to be shaped and fashioned into products to meet the various demands of life."

Leland and Kasten also write that the public school system reinforced the status quo: "Since the content of what an educated person should learn was assumed to be universal, all learners received the same curriculum and were expected to achieve the same understanding.

An empowered learner in this model was both unwanted and dangerous."

Grades, school ranking, test scores, curriculums, these all attempt to systemize education when in my opinion, intelligence and learning capability cannot be so easily quantified.

I watched Park Chan-wook's new movie, "No Other Choice" the other day, and it was one of my favorite films of the year.

The premise is that this father, driven by desperation to take care of his family and to save his house from foreclosure, starts to kill off the other candidates for a job he applies for.

I thought it was interesting the way he chooses his victims. He prints out all their resumes, lines them up, and then circles achievements that have bested his own.

He's effectively killing people off based on how they appear solely on paper.

But the irony is that his friend has implied how important the interview process is, and yet he doesn't know how any of these men performed in their interviews.

His story is a parable for many reasons, one of which is the lie of meritocracy, which leads us into quantifying ourselves and hoping we can reach the highest score, because we're told this will lead us to become the most successful.

I mean, the disclaimer is that I'm not sure what the job hiring standards are in South Korea, so maybe that's just something I'm interpreting through a Western lens.

Now when we look at AI, this is technology that is designed to be neutral and commodifiable, so it makes sense why students would find success with it in a system that's all about maintaining the status quo.

I've seen teachers on Twitter and TikTok talking about inventing homework that cannot be used in conjunction with AI, but it seems like it's an individual solution they have to develop for their own classrooms and not one that the curriculum is addressing.

And even then, this doesn't solve the problem of students feeling overwhelmed.

However, I will say that it's unfortunate all around because even though studies show that students rely on AI when they're feeling anxious about coursework, overusing it actually leads to more anxiety in the long run because AI diminishes interpersonal skills and emotional intelligence which can lead to social isolation.

A 2024 Center for Democracy and Technology Report found that one of the negative consequences of AI for students is that it is hurting their ability to develop meaningful relationships with teachers as well as peer-to-peer connections.

Honestly, even though I suffered under this standardized education, one thing that I'm really grateful for were my peers.

I had really smart friends and we would regularly edit each other's papers at 1:00 AM the night before something was due.

I also had this one friend, Gabby, who wasn't even taking AP biology the same year as me, and she was just so good at science.

She later went to Caltech, but she would come over to my house, read the chapter in the textbook, and teach me what she learned.

She wasn't even in this class.

Yes, in this day and age, I could have gotten the same thing from an AI bot, but it's really beautiful to one, receive that kind of love and assistance from a real friend and two, feel inspired by someone's intelligence.

So I will say that one thing I was able to take away from my education was camaraderie, something that kids are no longer incentivized to develop.

(upbeat music) Another thing I don't like about the AI conversation is the holier-than-thou shaming tactic that people resort to, because in actuality, it is natural for the human brain to want to use AI.

Neuroscientist Tim McGrath has written about how AI tools exploit our brain's desire to conserve energy and its tendency to take shortcuts when available.

He tells "The Atlantic", "It takes a lot of energy to do certain kinds of thought processes, meanwhile, a bot is sitting there offering to take over cognitive work for you.

Therefore, a compulsion to use it isn't a product of laziness, but because our brains are looking to be as efficient as possible.

The sinister part is that chat bots are engineered to further take advantage of this human tendency by producing compelling answers to any query."

You can ask ChatGPT literally any question, even ones you know it can't answer, and it will respond with something.

However, an MIT study found that using AI, specifically an LLM, in essay writing can lead to decreased brain activity, and it can atrophy skills people used to have, through a process called cognitive offloading.

Cognitive offloading, as defined by Risko and Gilbert, is the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand.

A lot of words, but an example is using a smartphone to remind yourself of an upcoming appointment.

The major downside of cognitive offloading is that while it can increase immediate task performance (you can get to your coffee shop in an efficient 10 minutes or less using Google Maps), it can decrease subsequent memory performance for the offloaded information.

You won't remember how to get there on your own without maps.

That was something I actually wish I had tried to do more when I was living in New York City.

I lived there for five years, and my spatial awareness was notoriously bad, even with the grid system, because I was always looking down at my phone to navigate.

Meanwhile, growing up in the suburbs, I actually knew multiple ways to get to my grandma's house and I also used to take offroad shortcuts through my neighborhood woods to get to specific landmarks with my friends.

These are things that a maps app would never recommend you do, but it definitely leads to you feeling more tethered to your environment and also less cautious about wasting time.

Nowadays, most of the time when I'm driving, I'm also trying to beat the ETA on my GPS for some odd reason, even if I'm not running late.

I also had a really good memory as a kid; I memorized all my friends' phone numbers because I would dial them up on my home phone.

Now I just scroll to a contact name if I wanna text or message them.

I also used to remember so many people's birthdays, but now I straight up forget unless I see someone posting about it on Instagram or unless I'm invited to a birthday party.

While these feel negligible in the grand scheme of things, I think there's a lot of potential for it to get outta hand.

For example, in Lila Shroff's "Atlantic" piece "The People Outsourcing Their Thinking to AI," she describes one interviewee, Tim Metz, who consulted with Claude to pick fruit at the grocery store.

He would snap photos of fruit and then ask the chat bot if they were ripe before purchasing.

Another interviewee, James Bedford, admitted to the impulse to ask AI what to do when helping a woman retrieve an AirPod that had fallen in between seats on a train.

Like it just gets to a point, you know, just feel the fruit, just reach down and grab the AirPod.

And even if your fruit is not that good (and trust me, I've had many bad fruits because I don't know the difference between a ripe one and a not-ripe one), it's just part of the experience of living.

You can never appreciate a good fruit if you've never had a bad fruit.

Beyond that, what I also think is incredibly dangerous is how over-reliance on LLMs diminishes critical thinking.

Studies have shown that students who used LLMs to complete writing and research tasks demonstrated poorer reasoning, weaker argumentation skills, and analyzed material in a more biased and superficial way.

I'm concerned that the education system, instead of countering this and enforcing stricter standards, will just adjust to reflect a new educational standard.

Professor Naomi S. Baron refers to her work with literacy researcher Anne Mangen; they found that faculty were already reducing the amount of reading they assign, often in response to students refusing to do it.

While most people will blame social media for decreasing literacy rates, YouGov reported that only 54% of Americans had read at least one book in 2023.

I think AI is exacerbating this downslide.

There's a chat bot called Booksai, which not only provides summaries and analyses, but allows you to actually chat with a book, which just feels incredibly strange.

I'm no stranger to cram-reading SparkNotes the night before a quiz in English class, but SparkNotes could only get you so far.

At some point, you sort of had to read the book.

As Baron explains, referring to CliffsNotes, a platform similar to SparkNotes: "If you're a student asked to compare Mark Twain's 'The Adventures of Huckleberry Finn' with J.D. Salinger's 'The Catcher in the Rye' as coming-of-age novels, CliffsNotes only gets you so far.

Sure you could read summaries of each book, but you still must do the comparison yourself.

With general large language models or specialized tools such as Google NotebookLM, AI handles both the reading and the comparing, even generating smart questions to pose in class.

The downside is that you lose out on a critical benefit of reading a coming-of-age novel: the personal growth that comes from vicariously experiencing the protagonist's struggles."

I don't wanna harp too much on the importance of actually reading and writing because I think everyone is aware of those benefits already, regardless of whether or not they feel they have time or desire to actually do that.

I read Adam Kirsch's essay for "The Atlantic," "Reading Is a Vice," the other day, and he argues that telling people to read because it's a social good doesn't actually convince anyone to do it.

Maybe some people, but it's not a compelling enough argument.

He explains, "Telling someone to love literature because reading is good for society is like telling someone to believe in God because religion is good for society.

It's a utilitarian argument for what should be a personal passion.

It would be better to describe reading not as a public duty, but as a private pleasure, sometimes even a vice."

He goes on to argue that when reading was framed as transgressive, young people couldn't get enough of books, and now that reading is framed as a moral duty, people aren't picking up anything.

I don't know if that's accurate, but I think he is correct in saying that no one, at least in America, is willing to shoulder the burden of democracy all by themselves.

I've noticed there are two camps of people who are fighting against LLMs. One camp is about being morally superior.

They call people stupid for partaking in technology that is literally marketed towards us as making our lives easier and decreasing our stress.

And they're honestly just elitists, uninterested in the actual betterment of society, just interested in positioning themselves above everyone else.

And the other camp is about looking for realistic plans of action that feel accessible.

They look for ways to convince and mobilize people without being condescending because they know just sticking up your nose at someone is not going to lead to any change.

- People aren't just using AI because it's cool and convenient and they're lazy.

They're using it because they have to figure out a way to keep up with all of the expectations that are being demanded of them.

- You might find it ironic that the reason I discovered Kirsch's article was because someone on Substack, a self-proclaimed reader, had posted a note about it (which is like a tweet in Substack terms). This person had screenshotted a single paragraph that Kirsch wrote midway through the essay:

"Being a reader means cultivating a relationship with the world that by most standards can seem pointless and counterproductive.

Reading is not profitable, it doesn't teach you any transferable skills or offer any networking opportunities.

On the contrary, it is an antisocial activity in the most concrete sense.

To do it, you have to be alone or else pretend you're alone by tuning out other people.

Reading teaches you to be more interested in what's going on inside your head than in the real world."

And the Substack user's summation of the article was, "Not only do I personally disagree with this, but it's patently false.

LOL.

Like, I'm sorry, you ran out of all your hot takes and now you're just gonna say reading is dumb and useless."

I mean, it's just incredible because, like I said, the article actually argues in favor of reading, albeit in a tongue-in-cheek way.

The essay is a little overdramatized, but that paragraph was completely taken outta context, and many of the comments were trying to dunk on Kirsch as if he himself, a writer for "The Atlantic," a literary magazine, hated reading.

And it's so ironic because this is happening on Substack, a platform where people apparently love reading and are constantly being self-aggrandizing about it.

It's just like that meme, "you are not immune to propaganda." Like, you are not immune to taking available shortcuts, because that's what our brains are programmed to do.

And I can admit there's many times where I'll look at a screenshot taken outta context and get rage baited by the poster.

Why?

Because I just don't wanna read the full article right now, I'm too tired, which is totally fair.

But you know, if I'm feeling too tired to read something fully, I have to recognize that I don't have the authority to critique it, nor am I actually getting the full argument the writer is trying to make.

(upbeat music) I wanna revisit the question I posed at the beginning.

Does it matter that AI writing is bad?

It matters in the sense that we are modeling our language off these LLMs. As Sam Kriss reports for "The New York Times," "A recent study from the Max Planck Institute for Human Development analyzed more than 360,000 YouTube videos consisting of extemporaneous talks by flesh-and-blood academics and found that AI language is increasingly coming out of human mouths."

Language is really powerful, and yes, it's dystopian that technology companies are able to shape the way we speak to each other.

I don't love that, I'm not in support of that, but I don't think it's the right framework to get people to stop relying on it.

I think if we harp on writing quality and say that we shouldn't use AI to write because it produces bad writing, it might have the opposite effect, where people won't wanna write at all for fear of being judged by their peers.

Gen Z especially is much more fearful and anxious about the way that they are perceived.

And it's a shame because in my opinion, bad human writing is way better than perfect AI writing.

As I've said before, I'm inherently interested in points of view.

I'm interested in lived experiences or ideas shaped by lived experiences.

I'm interested in illogical, jumbled thoughts that have no relation to each other.

I mean, I've worked with children before and reading a story that a 7-year-old has written, which is arguably not good, is still so interesting and valuable and most importantly, tells you so much about who they are while also bringing awareness to your own self at that age.

And when it comes to why you should write, that's really what I think we should be reframing: I think we should be arguing that everyone should write and everyone should read for the benefits it brings them, not out of some civic duty.

I totally agree with Kirsch on that point.

So when it comes to why you should write, I'm just gonna quote George Saunders. He writes, "I write, 'Jane came into the room and sat down on the blue couch.'

Read that, wince, cross out 'came into the room,' 'down,' and 'blue.'

Why does she have to come into the room?

Can someone sit up on a couch?

Why do we care if it's blue?

And the sentence becomes, 'Jane sat on the couch,' and suddenly it's better, Hemingwayesque even.

Although, why is it meaningful for Jane to sit on a couch?

Do we really need that?

And soon we have arrived simply at 'Jane,' which at least doesn't suck and has the virtue of brevity.

But why did I make those changes?

On what basis?

On the basis that if it's better this new way for me, over here, now, it will be better for you, later, over there, when you read it.

This is a hopeful notion, because it implies that our minds are built on common architecture, that whatever is present in me might also be present in you."

There is a powerful dialogue between a writer and a reader.

The writer is not only trying to communicate to the reader, but they're trying to experience something with the reader when they are writing for themselves.

When an AI is trying to communicate with me, there is no experience.

It's just static.

Okay. That's all I have for today.

Thank you so much for listening to me talk as always.

My name is Mina and I hope you have a lovely rest of your day.

Okay. Bye.
