
Student Experience in the Age of AI: Implications for Learning, Integrity, and Well-Being

By UC Berkeley Center for Studies in Higher Education

Summary

Key Takeaways

  • AI designed for engagement, not education: AI tools were not initially created with education in mind; instead, they were developed to eliminate friction, which is counterproductive to the challenges and struggles inherent in the learning process. [00:14], [00:28]
  • Grades as currency: a pre-AI problem: The emphasis on grades as currency, leading students to focus on outcomes rather than learning, predates AI and has contributed to a broken "moral obligation supply chain" in education. [03:37], [03:54]
  • Contract cheating: a billion-dollar industry: The contract cheating industry, estimated at over a billion dollars, existed before AI and has evolved, with some services now advertising human-written content to evade AI detection. [14:26], [17:48]
  • AI amplifies existing challenges, doesn't create them: AI did not create the challenges in education, but it amplifies existing issues and shines a mirror on them, making it more urgent to address the breakdown in the educational supply chain. [01:18], [07:11]
  • Students want ethical guidance and support: Students desire clear communication about AI, explicit instruction on ethical use, and help resisting the temptation to misuse these tools, indicating a need for institutional support rather than just trust. [41:49], [42:00]
  • Durable human skills are key differentiators: In an AI-driven world, the focus should shift from content-based disciplines to developing durable human skills like critical thinking and problem-solving, which will set graduates apart from machines and add value to society. [47:07], [47:22]

Topics Covered

  • Grades as currency: What are students truly learning?
  • Is AI turning education into a diploma mill?
  • AI makes faking engagement easier than ever.
  • Why expecting students to resist AI is unfair.
  • Prioritize human interaction and experiential learning over content.

Full Transcript

learning integrity and well-being. And I

think it's really important for me to

start there because it does explain the

direction I'll take on this talk which

could have gone many ways. I'm

pessimistic because of the nature of the companies and the products they created, and I'm going to get into this a bit

more later, but they were not created

with education in mind and in fact were

created to do some of the very to

eliminate some of the very things that

are important to learning like friction,

the challenge of figuring something out,

the challenge of staring at a blank

page, the challenge of like really

wrestling with things. And coupling my thinking with my writing, or my thinking with generating images, has now been decoupled. Right? So I'm

pessimistic about our ability to

successfully shoehorn this 21st-century technology into an education which, if we're honest with ourselves, is built on a 20th-century, sometimes 19th-century, platform.

I'm hopeful because there's some

promising experimentation going on both

on the part of students and on

researchers and faculty that show

intentional and thoughtful integration

of AI that could be helpful for tackling

some of the challenges that we have that

AI did not create for us but shines a

mirror on. And I'm hoping that we all

have the necessary moral agency to not simply acquiesce to what the Kool-Aid pourers would like us to believe: that we must do this, we must integrate it into everything,

but that we pause and we take some

thoughtful deliberative

time to ensure that learning and the

engaging student experience stay front

and center in any consideration we make

about integrating AI into the academy.

I need you to bear with me for a second

because before I get into that, before I

get into the age of AI, I want to ground

us in a pre-AI world.

It's 19... sorry, not 19-anything, it's 2008.

And I'm teaching our after education

class to our students. So after students

violate academic integrity at UC San

Diego, we try and leverage the cheating

moment as a teachable moment, teaching

them how to fail forward from ethical

mistakes. So, I'm teaching this class

and a student comes up to me afterwards.

She says, "I'd like to tell you why I'm

here." I'm like, "Okay." And she said,

"So, I wrote my paper and then I sent it

home to my mom like I always do." And my

mom sent it back to me. It was really

different. And I said to my mom, "You didn't go on the internet for any of this stuff, did you?" Because my mom loves to go on the internet. And she said, "No, but Turnitin said it was 40% plagiarized."

And by this time, she has a tear, you

know, coming down her cheek. And I said,

"Okay, so what's the lesson learned?"

And she said, "Check the work my mom

does."

And she wasn't being sassy with me. It

turns out that all her life her mother

demanded that she share her papers with

or her assignments with her before she

submitted them to make sure they were

perfect.

And I said to her, "You know what? That's not actually the lesson learned. We don't care if your mother can write." And clearly she can't, because she plagiarized. I said, "We

care about what you're thinking about

the readings. What new thoughts do you

have? Are you able to communicate them?

What are you wrestling with? How have

you grown from the last paper you've

written? What are you learning? We care

about you." And it was like that, you

know, proverbial light bulb that went

off over her head because it had been so

long since somebody had talked to her

about learning.

She was raised in an environment, not

just with her family, but with our

educational system, that grades are currency. That grades are what matters, and grades are what they should focus on.

Now the student's not wrong.

Grades are currency. They were 17 years ago when I talked to her, and they remain so, even more, now. I don't know what the average GPA of your entering freshmen here at Berkeley is, but at San Diego I think it's 4.2. So the average freshman at UC San Diego is better than perfect.

When I talk to students, they say, "Well, I've never turned in anything in high school that I got less than 100% on." So, when they get their first A minus or their first B+, let alone a C or a D or an F, that's a real hit to their identity. The one thing they're good at... for many of them (we're not a big athlete school), the one thing they're good at is school.

The thing is this, and this is what I

tell students, grades are only valued

because they're supposed to represent

something more significant, more

profound. Your learning, how you've

professionally and personally developed,

right? Your skills and abilities,

they're proxies. They're meant to be

proxies for these things. As are our

certificates, our degrees, right? For decades, for a hundred years, employers and society have been using our

degrees as proxies to say this person is

qualified at the bachelor's level,

masters level, PhD level to do X.

Now, to do X might be math or CSE, but

also there was all these other things

wrapped up in there, right? They're

going to be better communicators.

They're going to be better problem

solvers. They're going to be better

critical thinkers, so on and so forth.

So, we have a social contract with

society. We promise not just to

facilitate learning, but to certify it.

We promise that our graduates are going

to go out there with knowledge and

skills and abilities that they didn't

have before. That they're going to be

positive contributors to society. And I

would say that this moral obligation

supply chain to get us to that social

contract has been broken for a while,

much before AI.

So to get to that social contract, we

have to do a lot of things, not all of

which are on this slide, but I want to

narrow it down to a few things here

because I focus on academics, right? I

focus on learning.

This moral obligation supply chain starts with instructors deciding what the learning outcomes for this course are, and how to design fair and honest pedagogy and assessments that will measure the achievement, the mastery, of those learning outcomes.

Students then have to fairly and honestly demonstrate their knowledge on those assessments so that faculty can fairly and honestly do the validation of learning. Right? If anything goes

wrong in the supply chain

then the school cannot certify

learning knowledge and abilities. If

faculty aren't paying attention to

learning outcomes or they don't design

fair and honest pedagogy and assessments

or valid assessments. If students cheat

on those assessments, if instructors

well here, let's put it this way. If

students use AI to faculty use AI to

create those assessments and students

use AI to complete those assessments and

then instructors you use AI to grade

those assessments, we're a diploma mill,

right?

This was breaking down before AI came

along and AI is making it a much more

urgent problem for us to address.

The problem is that the way we measure

things, the student might seem like

they're successful. They've got the GPA.

They're completing their degree in time.

Oh, maybe they've even reduced time to

degree, which is what we love, right? Uh

maybe they're graduating and they're

even getting a job right after they

graduate.

So, it seems like they're being

successful, but really it's broken down.

The truth is what we do here depends on

ethical engagement. When faculty and

students, again, I'm just narrowing in

on them. I'm not blaming them for all of

this. When they fail to engage in the

hard work of teaching, learning, and

assessing with integrity,

we lose our moral authority in society.

We lose our authority to certify knowledge and abilities. And we're already seeing this, right? A lot of employers now... there was a study, I think done by either LinkedIn or Indeed, that looked at the number of job ads that no longer require a bachelor's degree. All of the colleges closing across the nation. People are losing trust and faith that their investment with us is worth it.

And since our focus in this room is on the student experience, I'm going to zoom in on the students for a while. So why wouldn't students fully engage? We were talking right before my talk: you're at Berkeley, it's a physical, in-person campus. You didn't sign up for Western Governors University or University of Phoenix. A couple of reasons. One, you want the Berkeley name, but two, don't you want the Berkeley experience? So, why would students choose to pay all this money, move to Berkeley, and not engage?

Well, they might disengage because of personal forces, like moral disengagement: I just don't see the value in doing honest work.

I don't even recognize that what I'm

doing with AI is unethical. Perhaps I

don't realize that giving my friend my

paper so they can learn from a model

paper might be a violation of academic

integrity. So I've disengaged from the

ethics of it all. Definitely that extrinsic motivation for grades, especially in the UC system, especially our students.

And this again is not their fault.

Grades are currency, right?

And then low self-efficacy. I don't

think I can do it or I don't think I can

do it to the level I want to or to the

level my parents want me to, but I just

don't believe that I can. And my animation is going to come up in the wrong order here, so I'm going to be giving away the secret: students cheat

because they're human, which means that

they're also influenced by the

situational factors around them. The course context: we were talking before about how many remote or online classes we have masquerading as in-person classes. It's a lecture hall of 400 kids, so there's no engagement in the lecture. The lectures are podcasted, so I really don't have to go, and all of the assessments are online and unsupervised. That's an online course even if it's on the books as an in-person course. Right?

So course context matters. Are there more opportunities to cheat? Are there more opportunities to disengage? It's coming at me more than I'm looking for it. Then peer norms: do I think everybody else is doing it?

I'm sure if I asked you all in the room, how many of you are more

likely to break the speed limit when

others around you on the highway are

breaking the speed limit? We'd all put

up our hands. Well, 99% of us, right? We

do change our behavior based on what we

think is happening. And this is an even stronger influence for students, because I think there's still, especially in the STEM disciplines, grading on a curve. So if I'm not cheating and somebody else is... but also just competition for jobs or grad schools, right? And finally, instructor influence: if I don't think the instructor cares.

So I often have instructors say to me, "It's not my job to police cheating or prevent cheating." And I say to them, "The students think it is."

So if you have not put any mechanisms in

place to try and reduce those

opportunities, students think you don't

care, and that might influence them more to also disengage.

Now let's be clear, not all

disengagement looks like cheating. It

can vary, right? Some people disengage

and flunk out.

They just give up.

Some disengage and just scrape by with

that 2.1 or whatever it is they need to

graduate. And so they just they barely

graduate, but they've, you know, maybe

they engaged in their major courses, but

disengaged from their GE courses or

whatever the case may be.

But some want to disengage without

flunking out. So they fake it, to appear to faculty, to their institution, to researchers like yourselves, to be

engaged because they're progressing in

their major, they have the GPA, they're

progressing time to degree, all of that

kind of thing, but they're really not

engaged at all. All three of these are

problems, right, for us as a university.

Um, they tell us that what we're doing

might not be working well for everybody.

But my expertise is on the faking part.

So, I'm going to zoom in on that one.

Students will fake engagement through

what we call cognitive offloading.

Right? This doesn't have to mean

cheating, but it could mean not

thoroughly engaging in the learning

opportunities that are available to

them. So, I might use Google Translate

instead of learning the language, right?

Do I really need to? I've got the

AirPods now that will do direct live

translation.

I don't really need to learn French. So,

in my French class, I'm going to write

my paper in English and give it to

Google Translate to to put into French

for me. Or maybe even just a paragraph

of that. Or it might just be something

as simple as I'm not going to class.

I'll just get notes from my friend who goes. I might even have them do that one little participation thing, where I scan the QR code to answer the quiz at the end of class to prove I was there. I might even have them do that for me.

Copying from the internet or friends.

Do you remember CliffsNotes and SparkNotes? Good old summaries. While they were done by humans with the intent of accurately conveying what the book was about, they might not have been perfect. Now students just use ChatGPT, which is not a great idea since it bullshits, right? It makes stuff up. And I'm allowed to say that, by the way, because that's an official scientific term in the AI world.

So, reading summaries instead of books. Tell me what, you know, Moby Dick is about.

Uh getting other humans to do the work

for them which is known as contract

cheating. How many of you have heard of

the phrase contract cheating?

A few. Okay. It's a billion-plus-dollar industry. I'll show you more in a little bit.

So, point is students are human. We have

always offloaded. Those of us that are

in the room, if we think back, we can

think of times where we've offloaded.

Sometimes it was not cheating. Like we

use our cell phones so we don't have to

memorize phone numbers anymore. That's

offloading, right? And sometimes it is.

Sometimes it's lying, right? It becomes cheating when I'm presenting myself dishonestly. When I'm

presenting as if I have knowledge and

abilities or skills that I do not have,

that's when it becomes dishonest. But

we've all done it. We've all offloaded.

We've all engaged in this kind of fake

engagement. But it used to take more

effort, right? In the 20th century

before the internet, cheating was really

hard.

Uh I think I might have cheated in my

C++ programming class. In hindsight,

I do remember I found it really

difficult because I'm not a detail-oriented person, and if you forgot one

little thing, right, the whole program

didn't work and you couldn't figure out

why for the life of you. And so, I do

remember having this guy help me a lot.

I don't know. I don't remember if he

actually did the assignments for me or

taught me. So, I might have cheated, but

I had to find the guy. I had to ask him

for help. It's a little bit more

awkward. It takes a little bit more

effort. Or you had to handwrite notes on your arm. Or remember, when you first started teaching, being told to make students turn their hats around because they wrote on the brim? There were videos about kids who photoshopped the labels on Coke bottles (that's after the internet). All sorts of fun stuff.

My point is, pre-millennials were just as tempted to fake engagement, but we had fewer opportunities. It was just a lot harder. Then the

internet in the late 20th to early 21st

century came along and opportunities for

students to satisfy that itch, that

temptation became much more plentiful. It was easier, right? I joke with the kids that if I wanted to plagiarize, I had to (I'm from Canada) walk uphill in the snow both ways to the library and, you know, have the textbook there and physically write from the book to plagiarize. And then it was just Ctrl+C, Ctrl+V, super easy. Think

about yourself being 19 years old.

Uh maybe some of you were when the

internet came along.

But then again, faking it was easier, but so was detecting it. Turnitin came along with their similarity detection tool, and the world was saved and nothing bad has ever happened since. But because Turnitin came along and started finding plagiarism, that was what gave birth to the contract cheating industry.

Now, if I can just get this video to go. So, has anybody ever done this? Googled "write my paper for me"? Now, obviously I just did this the other day, so it's going to reference AI, but this existed before: the term "contract cheating" was coined in 2006.

We estimate that it's a billion-plus-dollar industry, and if you think AI is going to put them out of business, they are advertising themselves as human-written, and therefore they won't get detected by AI detectors. But maybe I just don't even want to take my class at all.

These advertisements are all over the

internet.

It takes me a little bit of conscious

effort to cheat, right, by going out and

googling to find one. Except

our international students tell us they

are bombarded with invitations from

these companies.

In fact, these companies will send

people into Discord groups that students

have set up for their class. Pretend

they're a student in the class. They kind of just say, "Hey, is anybody else struggling with this paper coming up? Do you want to get together and study?" And then they just seduce them into that world. You mean I can pay someone to do this for me? They're helping me, right? "Handle all your online exams in a convenient manner."

Don't lose your score. My colleagues in

Australia take this much more seriously than we do in America, and they are able to identify enrolled persons, people who are enrolled in their school but aren't doing a single assignment.

And how do they do that? Well, there are IP addresses logging in to that student's account from Kenya, Ukraine, and India

all on the same day. They call it

impossible travel.
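The "impossible travel" idea can be sketched as a simple rule over geolocated login records: if two consecutive logins to the same account are farther apart than any plausible travel speed allows for the elapsed time, flag the account. A minimal illustration follows; the `Login` record, the 900 km/h threshold, and the function names are my own, not from the talk.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Login:
    account: str
    lat: float
    lon: float
    hours: float  # hours since the start of the day


def distance_km(a: Login, b: Login) -> float:
    """Haversine great-circle distance between two login locations."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6371 km


def impossible_travel(logins, max_speed_kmh=900.0):
    """Flag consecutive same-account logins that imply travel faster than a jet."""
    flagged = []
    ordered = sorted(logins, key=lambda l: (l.account, l.hours))
    for a, b in zip(ordered, ordered[1:]):
        if a.account != b.account:
            continue
        elapsed = max(b.hours - a.hours, 1e-6)  # guard against zero elapsed time
        if distance_km(a, b) / elapsed > max_speed_kmh:
            flagged.append((a, b))
    return flagged


# The speaker's example: one student account in Kenya, Ukraine, and India in a day.
day = [
    Login("student42", -1.29, 36.82, 0.0),   # Nairobi
    Login("student42", 50.45, 30.52, 2.0),   # Kyiv, two hours later
    Login("student42", 28.61, 77.21, 4.0),   # Delhi, two hours after that
]
print(len(impossible_travel(day)))  # both hops are flagged -> prints 2
```

In practice this kind of check runs over authentication logs with IP geolocation; the threshold only needs to be faster than a commercial flight to catch logins from three continents in one day.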

So that all existed before ChatGPT came on the scene in November 2022.

So what is this student experience in

the age of AI? So now all of a sudden

the opportunities for faking it are

built into everything and they're

everywhere all at once.

You don't even have to Google for it

anymore.

We've got extensions that are built into Canvas, and so I can just have it answer all my questions for me; it's moving the mouse around there.

Or even when I try and Google something,

right, to do some research, the AI

overview is the first thing that pops

up.

Remember when we used to be worried

about Wikipedia?

That was quaint. Now the AI Overview summarizes Wikipedia. We can't even read

Wikipedia pages anymore. Or if I've

installed the Grammarly extension. Have

any of you ever tried to install the

Grammarly extension?

It is invasive.

I installed it to see what my students are experiencing. I had to turn it off

because it kept telling me, "You're a

bad writer. You're a bad writer." Like,

"Fix this. Fix this." And I'm like, "I

am not a bad writer." It's very

invasive, I would say. Right.

So, it's just it's just there in your

face all the time. Don't do the work

yourself. Let us do it for you.

Speaking of, how many of you have

learned about the new Grammarly service?

So, they'll now predict students' grades for them. So, one, it'll write their paper

for them. So, for those that think

Grammarly is just a grammar and spelling

checker, it is not. It is an AI company.

Either it'll write the paper for the student, or it'll fix the student's writing. Or, if I give it my paper, plus the instructions for the paper, plus the grading rubric and the name of my professor, it'll go online. It'll look up the professor on ratemyprofessor.com or wherever else, find out everything it can about Trisha, and then it'll predict what our instructors might say.

>> Skip the marketing to us.

Then it's going to say, "Professor Trisha will probably give you an A minus, and here's how we'll fix it so that you get an A."

The big problem with this is not just that it further entrenches students in the extrinsic motivation of getting the grade, of producing a perfect product that will get a grade. But it further reinforces that the product's all that matters, and it

tells them that there's a certain way of

writing which is the best way of writing

and I have read many of the Grammarly-produced papers. Students say, "Well, I just used Grammarly to fix my writing," and I said, "Well, it broke your writing, because now you sound like a robot."

This poor student who's actually like a

data science student specializing in

machine learning was reported to my office for using AI illicitly, and she's like, "I didn't. I just used Grammarly."

Turns out her dad has a law firm. Turns

out she does a lot of her assignments at

the law firm. Turns out the law firm has

the pro version of Grammarly. So it turns out she went through, and every time Grammarly said, "Hey, here's a suggestion," she just went, "Yes, yes, yes, yes, yes, yes." And she said, "I feel so stupid.

My major is literally machine learning, and I had no idea that's what Grammarly was doing. I didn't even think about it, because my dad uses it; everybody in the law firm uses it." And she didn't really think about it totally changing her voice.

And students have been targeted as a core consumer. And I would argue that the entire educational system has been intentionally entangled. You have to do it. The students are

using it anyways. You might as well

integrate it in. If you don't integrate

it in, they're going to graduate without

any AI skills and then they're going to

fail as employees.

But they kept releasing things free to students. I don't know if you remember, last March, right around final exam time for semester schools, ChatGPT released its pro version to students for free, just for that certain period of time. Then Google Gemini said, we'll one-up you, we'll give it to students for free for a year. And then Perplexity Comet came out in September and was released for free to anybody with an .edu address.

Have any of you used Perplexity Comet

yet?

Okay. I'll get to my hopeful side eventually.

The worst part about this is that we have no idea if these tools are hindering or amplifying student learning. Our students, including, if any of you have kids, those in the K-12 system, are in a massive pilot project. They are the

proverbial rats in the cage and we're

seeing if the experiment is going to

work on them. Now, we've done this

before to students, but it's getting bad, because Perplexity is actually putting out ads telling students to use their tools to cheat. So, I appreciate their honesty and transparency: it's the first AI company that has admitted that they're there to help students disengage from the hard work of learning.

And again, this is just one example from

LinkedIn.

It's not that AI will replace people. AI

will replace people who don't know how

to use AI.

So this is Perplexity Comet.

This just came out about three weeks ago

and freaked people out, because we've had agents before; these AI companies have told us, "Here's an agent," and they never quite live up to the hype. Well, Perplexity Comet did. So I

had it log in and complete my ARB quiz

for me. Didn't tell it which course. I

have many courses in my Canvas account. Didn't tell it which quiz. So, it found... well,

first, it bypassed Duo Security, our two-factor authentication. It handled it for me. I told an IT colleague this, and he got it to log into... well, this is being recorded. Well, it's the truth. He got it to log into UC Online and complete his FERPA certificate for him. It bypassed Duo Security.

So, it went in and it found the right

course. It then found the only quiz that

was published. And look, can you see that? Great. It's projecting an image of what it's seeing. And it's projecting this whole inner monologue, which is using, you know, energy, right, as it's doing this, just to tell me, "I'm clicking." It's going to say "reasoning" soon.

"I'm reasoning." Then it'll say things like, "Great, I've successfully answered the first five questions. Now I need to complete this." So it goes on and on like this. At one point it realizes it's in the instructor mode, and so it said, "Oops, I've got to switch to the student mode," and switched to the student mode to complete the test. And then, oops.

And then I just want to show you the end

here

where it gets to: the quiz has been completed, successfully submitted, I got these right, here's the points I got. At

this point, I'm still thinking this

isn't true, right? I love how it says, "Your quiz was submitted successfully. You scored 6.67 out of seven. You improved from your earlier preview." And I still thought at this point, it's lying. There's no way it's doing all of this. And yet, sure enough, I went

into SpeedGrader and there was the

submitted test for the test student,

6.67 out of seven.

That was about three weeks ago. And just

yesterday or two days ago, OpenAI

released its Atlas, its agentic browser.

So any online assessment in Canvas

that's unsupervised is not a valid

assessment of learning.

>> Oh, and it's being put in wearables.

>> I was thinking about this. The Meta Ray-Ban Displays are going to make

cheating in school way too easy because

first of all, they look pretty normal

and you could get them with a

prescription. And then once you have

them on in class, it's too easy because

the AI assistant can see what you're

seeing and it can hear you even if

you're whispering.

>> Hey Meta, what's the answer to the

second question? And it would work. Or

if you just want to look it up the old-fashioned way, nobody would know. You

can't see the screens from the outside

and you can control the screens with

your hand in your pocket or your hand

behind your back. If teachers didn't

know about them and if they weren't

$800, they would be the ultimate

cheating machine.

>> I listened to an interview with my friend Zuck on a podcast, who was talking about his amazing new glasses,

and at one point I thought, "Oh my gosh,

he's going to say something ethical

because he he mentioned the word

privacy." I was like, "Great. I'm

worried about these glasses being around

campus and people just recording me 24/7,

right?" and listening to my

conversations. And so I was like, "Okay,

he's talking about privacy." But


>> but for him it's a privacy issue if the

person that you're looking at can

basically see what you're seeing.

>> So it's a nice clear crisp display for

the person wearing it, but nobody from

the outside can see that you're seeing

anything other than them. And his band, his neural band, he was saying... So now, you know, it's

kind of not socially acceptable if I

pull my phone out when I'm talking to

someone to look something up. So now I

can just pretend that I'm still

listening to them. And he's got his band on, and it detects micro muscle movements. So I can be doing this the whole time, doing something else, sending emails to people, and nobody will know, because it's not socially acceptable to let them know that you're looking at something at all. Obviously that makes it a very good

cheating device, but I think we should

also be worried about things like

privacy. Um,

anyways,

These glasses are fairly obvious because they're Ray-Bans and they look like sunglasses, but pretty soon they will look like your glasses. And there is a company, I think it's called Halo, that's working on this. On the Meta glasses, you can see a tiny camera, and an indicator light comes on when it's recording. But there's at least one company I've heard of that was working on making that totally obscured from the viewer. And when they were asked about privacy rights in two-party recording states like California, they said that's up to the user, to make sure they have permission.

So how are students responding? Well,

we're going to hear from two later, but

before then, um well, they're responding

by using it. They're the biggest age

group to use it and it appears that

they're mainly using it from September

to April

and not over breaks.

This was a survey from just this July. The majority of them have used it in their coursework; I would say it's probably 100%, or close to it. Some students do have an ethical objection to these tools and are refusing to use them. Now, using them doesn't mean they were cheating with them, right? They're using them for all sorts of reasons, and sometimes it may even have been allowed. But I've talked with about 250 students who have been reported for misusing GenAI, and this is what they've told me.

Instead of Google or Google Scholar, to do research. Okay; not smart, because it makes stuff up. And when I've asked them why they don't use a tool purposely designed for the very task they're doing, like Google Scholar, some look at me and say, "What's Google Scholar?"

To check whether my assignment addresses the prompt: "I've written the paper, and then I give the prompt and the assignment, or maybe the grading rubric, to ChatGPT, and it tells me how I did." Or Grammarly.

To create exam study notes: again, not unethical, but it makes stuff up, so unless they're using it in addition to learning the actual material, they might be learning the wrong stuff, right?

Now we start to get into a different category. To brainstorm a response to an assignment: that might be okay, unless brainstorming was part of the challenge, part of the learning objectives. I had a meeting with faculty, and one said, "I'm totally fine with them doing that," and another said, "No way; that's the heart of what we're trying to get them to do." We talked about summarizing the readings instead of reading them. To make my writing sound better.

I do not just hear this from students for whom English is not their most comfortable language; I hear it from every single student. Why? "It's UC San Diego. I've got to be better than perfect." But let's be honest with ourselves. I'll be honest with you: when I was in undergrad, I would write my paper, then pull out that big Roget's thesaurus, read through my paper, think, "that word sounds dumb," look up a smarter word, and drop it in. In third year, a psych professor told me to knock it off: "You sound ridiculous; you're using big words where big words are not needed." So every student wants to make their writing sound better, and they think Grammarly sounds better; they think ChatGPT sounds better. There's something about a machine doing it, so it must be better, it must be right.

There's something in our psyche about this. Then: to co-write my assignment for me; to do my assignment for me. It's often hard to tell, but one case that was easy: a student was supposed to do a research project on the behavior of birds for a behavioral science class. Go out, observe some birds, take some data, write it up. I'm thinking, this is a cool class, right? It's experiential. He handed in his paper, and he had forgotten to take out the line that said, "But really, you should collect your own data and do the research yourself if you really want to understand this." So that one was pretty easy. And, of course: to answer exam questions.

So this brings us back. Remember my 2008 story about the mom who did the writing for the student, or fixed the student's writing? Let's head to 2024, to the life of a sophomore: 19 years old, majoring in public health, let's just call it, at a top research university in California.

She's enrolled in four courses that quarter, all in-person courses, and yet for at least one of those courses, all of her assessments of learning were remote, asynchronous, and computer-based. She takes her final exam at home. She finds the exam difficult. She gets through most of it, but she's struggling. She's tired. She's frustrated. She studied so hard, and she doesn't understand why she's struggling so much. And then, just a mouse click away, there's AI, and she can't resist the temptation. She clicks on it, gets an answer, copies it into the exam, finishes the exam on time, relieved, goes on break. Then she gets a notice: we think you've cheated with ChatGPT on your final exam.

When she came in to meet with me, she immediately took responsibility: "Yep, I did it." She's embarrassed. She's ashamed. She doesn't want to do it again. She also said something I found pretty aware for a 19-year-old: "I'm taking four classes in my major this quarter. All four are in-person classes, and all four have remote asynchronous computer-based exams as their only assessments. Can I please take my tests in a testing center with you? Because I don't trust myself to resist the opportunity to cheat."

When I started in 2006 at UC San Diego, I thought we should be able to graduate people who choose to make the right ethical decision even when they're tempted not to. And I still think we should work towards that. But I have also come to the awareness that we should not expect our students to be superhuman. I have a lot of faculty who say to me, "I just want to trust my students." And I'm like, "That is so unfair."

Picture this. You're on a health kick. How many of you have a sweet tooth? Okay, so you're on a health kick and you want to cut down on your sugar. Do you fill your kitchen with donuts and force yourself to stare at them, because you trust yourself not to eat them? No. You clear your kitchen of sugar, right? And yet we say to students, "I trust you. So please go home and take this online asynchronous assessment, and don't use ChatGPT, because I trust you." And then we're disappointed when they don't live up to that trust. We cannot expect our students to be superhuman. It's unfair. All humans need help sticking with their values. I'm not saying all our students want to cheat; I'm saying all of our students are human, they'll be tempted, and there are too many opportunities. If I were 19 today, honestly, I don't know that I could resist them.

So we should not expect them to go home and sit and stare at the donuts and not take one. We should help them stay true to their values and reach their goals in an honest and ethical way.

So that's what I think students are experiencing. But what are the implications of AI for learning and engagement? Well, as I mentioned earlier, unfortunately these tools weren't designed for learning; they were designed for engagement. If you work with them often, you'll see it: you ask for one thing, and it comes back with, "Would you also like me to do this? Would you like me to do that? Can I do this for you?" And it's like, "No, I just wanted you to do this one thing. Go away."

You know, if I need an ego boost, I just go talk to ChatGPT and it tells me how smart I am. Brilliant ideas. So it's creating a dependency, and it's not only reducing the friction of learning and doing, but reducing the friction it takes to have relationships in real life. It's a lot easier to just go hang out with a chatbot than with real people.

Addiction. The new California law requires chatbots to tell kids to take breaks. Guess how long a kid has to be on one before it has to suggest a break? Three hours. Three hours is the law; then OpenAI, or whoever, has to say, "Hey, you should take a break," or it shuts off, or something; I'm not quite sure. Ease, obviously: these tools exist to reduce, to eliminate, friction. And automation: their job is to automate cognition.

And our students are worried. These are some quotes from a massive study. They're skeptical about whether the tools are actually helping them learn. They're worried about privacy issues. This one doesn't say it, but I know they're also worried about the ethical issues: the energy use, the theft of intellectual property, the exploitation of the human labor used to train these models. And students still prefer human interaction, and they're worried that they'll get so sucked into these chatbots that they'll become isolated. They're also worried that it'll have a negative impact on their critical thinking.

As we wrestle with whether we should integrate AI into higher education, I think we have to consider not just the implications for learning, but also students' physical and mental health. This is the last downer before I get to what we can do, I promise. I just think it's so important to talk about this, because there's so much hype out there about why we should all be doing it that I really do feel we need this countercultural narrative.

Students, people, are turning to AI chatbots for friendship and emotional support. And so if we're going to encourage their use, or if we're going to purchase subscriptions and give them away to our employees and our students, then I think we have to worry about this, because we've now provided the tool, and we have no idea yet; we're only starting to learn about the full impact of these tools on students' cognition and mental and physical development, let alone the student experience.

And it could have dire ends. This student was taking all of his schooling online because he had a health issue. He started using ChatGPT for schoolwork, and within, I think, four months, he took his own life, at the advice and encouragement of ChatGPT. Adam said, "I want to leave..." I don't think I can read this. It's not good.

That's not to say that chatbots are always bad; some users report great psychological benefits from using them. But I think we have to think more broadly about this. So, what should we do? Sorry. We have to research the impact of this. We cannot drink the Kool-Aid that says it's wonderful, it's beautiful, we have no choice, we've got to do it. We have to research it, and we have to listen to our students. So, thank you.

I've never cried before in a talk. Okay. Listen to our students. This is what at least a thousand students who talked to me or filled out a survey say they want. They want clear communication about these tools. They want to be taught how to use them ethically. They actually want some help resisting the opportunities. Design assessments that are harder to complete dishonestly. Limit tech use in some classrooms; a lot of K-12 schools are going back to no cell phones in class, and I've heard that the people who invent these tools send their kids to schools where there are no devices at all.

So we need to listen to our students. We need to lean into offering engaging human-to-human interactions. Students do not need us for content. They do not need us for knowledge delivery. They need us for structure. They need us for opportunities. They need us for mentoring and coaching. They need us for experiential learning. I left the University of Guelph in Ontario, Canada in 2000, from a phenomenal cooperative education program; co-op programs were everywhere, and they're still everywhere on that side of the continent. They don't exist on the West Coast. We have students graduating without any work experience whatsoever.

And we know that AI is going to take those entry-level jobs. So what are our students going to do when they graduate? It's not about teaching them AI so they can get those jobs; it's about getting them experience in real life, like we do with nurses and doctors: we get them into the field while they're in school. More flipped, engaged classrooms, where people are doing the work of learning in class, with each other and with the professor. Not coming to class and listening to somebody blab at you for an hour and then going home to do the hard work of learning by yourself, but doing it together with other people. So learn it from AI, I don't care; learn it from YouTube; learn it from the textbooks I supply to you. But you're going to come to class, and you're going to apply that knowledge to real problems.

We really do need to decouple online asynchronous assessment from teaching modality. We keep talking about online learning versus in-person learning. There is no such thing: people learn online 24/7, all the time. There is a difference, though, between online asynchronous assessments and supervised assessments, and we need to take seriously our responsibility for certifying learning, not just facilitating it. This slide is from Western Sydney University. It's an "inspire and assure" model, adapted from the University of Sydney's two-lane approach.

Really, they're saying there are activities that inspire students to learn, and there are activities that assure us that students are learning; and sometimes, in that top-right quadrant, an activity does both. I'll walk through it quickly, because my animations are all messed up again. The unsupervised activities are on the left-hand side. In the top-left quadrant, the work is unsupervised, but it motivates students to do the work of learning because it's authentic: I've given the students some choice and control, so they're really going to get into it. On the bottom left is all the work that neither motivates them nor allows us to assure they're learning. Then we've got the bottom-right quadrant, which assures they're learning. Not very inspiring, but that's okay; they need to know how to do basic math or whatever. And up above, in the top right, is where our flipped, active, engaged classrooms live: they motivate the students to do the work, and we know the students are the ones doing it.

You have, or you will have, a computer-based assessment center here; this is the assessment center task force that Igor mentioned, which I'm chairing for the University of California system. These centers are going to be helpful, not for every class, not for every assessment, but I do think there is a place for an assessment center where students go and know exactly what to expect. It supports frequent, mastery-based assessment, which we know is better for learning. And because students come to the center for testing, it frees up faculty and TA time: maybe they're doing oral assessments with students, maybe they're running review sessions, maybe they're coaching or tutoring the students who are struggling. And it assures academic integrity. But it doesn't have to look like a computer-based testing center.

This was an interesting example. I forget whether it was in IE or CE, but an instructor ran a multi-day, in-class essay using computers with a locked-down browser. He didn't want to give up the struggle, the thinking, that goes into writing an essay, but he knew that students could offload it and fake it with ChatGPT or Perplexity or whatever. So he folded it into class: the students write the essay in class, still getting much of the same experience they were getting before, but he can be more sure they're actually getting it. So we can be creative, we can be imaginative, about what secure assessments look like for our context, for the size of our classes, for every situational factor involved.

I think we need to unhide the durable curriculum. No more talk about soft skills. No more talk about the hidden curriculum; if it's important, it shouldn't be hidden. We need to really focus on durable human skills, because that's what's going to set our students, our graduates, apart from the machine, and that's what's going to have them bring added value to the workforce and to society. We need to revisit what we teach and why. I don't know; do we have to be centered around content-based disciplines? I'm going to get nasty letters for that one. Or should we be centered around these durable human skills?

And we must intentionally integrate and assess them. At the University of California, San Diego, we do have a co-curricular transcript, where students can log their experiences and we certify that the student did them. That's great. But we have to get away from this idea that we teach teamwork because we throw students into groups in a couple of classes without actually teaching them how to work in a team, or that we teach communication skills because they handed in a research paper, an essay, that we have no idea whether they actually wrote.

Another way to do secure assessments: a group of engineering faculty at UC San Diego ran a big oral assessment project, with as many as 250 students in a class. Not only did that provide academic integrity, but it engaged the students more, because they knew they had to come face a TA or an instructor and tell them what they knew, just like they're going to have to do in the world of work. You're not going to get away with telling your boss, "I'm sorry, I'm not available for a synchronous meeting to tell you about this project; I'll send you an email." And yet there are universities and colleges that prohibit faculty teaching online asynchronous classes from requiring a synchronous meeting with their students.

And I think we need to ethically and intentionally integrate AI tools into the curriculum. I think we can do it to assist the learning process with 24/7 support; we're doing that with TritonGPT. I believe Berkeley hopped onto this and got a BearGPT or something; have you heard about it? Okay. So they've created a regular GPT, but they've also created, and I'm piloting this, a way I can train it to be a teaching assistant, basically, for my class: to tutor my students, to help my students, without helping them cheat. We need to teach responsible use, so we've got a Canvas module at UC San Diego on AI literacy.
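To make that "help without helping them cheat" idea concrete, here is a minimal sketch of one way a course bot could be constrained. TritonGPT's actual implementation is not described in the talk; the prompt wording, the marker list, and the `route_request` helper below are all illustrative assumptions, not the real system.

```python
# A minimal sketch of a "tutor, don't cheat" guardrail for a course TA bot.
# Everything here (prompt text, marker phrases, function name) is hypothetical.

TUTOR_SYSTEM_PROMPT = (
    "You are a teaching assistant for this course. Explain concepts, ask "
    "guiding questions, and point to course materials. Never write the "
    "student's assignment, essay, or exam answer for them."
)

# Phrases suggesting the student is asking the bot to do the work itself.
DO_MY_WORK_MARKERS = (
    "write my essay", "do my assignment", "answer this exam",
    "solve this for me", "write the code for me",
)

def route_request(student_message: str) -> str:
    """Decide whether to tutor or to redirect an out-of-bounds request."""
    lowered = student_message.lower()
    if any(marker in lowered for marker in DO_MY_WORK_MARKERS):
        return "redirect"   # nudge the student back toward doing the work
    return "tutor"          # safe to answer in tutoring mode
```

In practice a real deployment would rely on the system prompt and model-side policy rather than a keyword list; the point of the sketch is only that the "tutor" constraint is something a course can design for deliberately.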

And then, okay, this can't be left to individual faculty. At a program or department level, we have to figure out what foundational knowledge we need to teach. What do people need to know in order to use these tools effectively and responsibly? For example: I have to be able to evaluate the output of these tools and know whether it's true or not. Then, after we've taught that foundational knowledge and assessed it, how do we scaffold in the cognitive offloading? I would suggest it's not with a general-purpose tool. In data science, it would be with a very specific, narrow tool that data scientists are using out in the field: we integrate that tool into the curriculum and scaffold student learning onto it. Nobody's using general-purpose tools for real tasks; they're using them for email and stuff. They've got specific AI tools, and I'm pointing at these students because data science is their major, specific AI tools that they need to learn how to use.

And this is my last point. We can't keep asking faculty to rebuild the plane while they're flying it. They can't teach the class and redesign it at the same time, and they shouldn't be expected to redesign it for free over the summer, on weekends, and at night. We have to figure out a way to provide faculty with time, training, and support, and to actually demonstrate that we care about teaching, quality education, and the facilitating and validating of learning, maybe even more than we care about publications.

>> So, thank you.

[Applause]

You want to come up? There is a podcast that goes with my book, so I'm shamelessly plugging that. And these are my AI disclosures, about how much energy I might have used to create images for this presentation.

>> Yeah. So now I think you're each going to give your responses and share your experiences, and then we'll open it up for questions.

Hello. Hi, everybody. My name is Lynn. I'm currently a senior here at UC Berkeley, an undergraduate studying data science and cognitive science. Thank you so much for sharing your talk, Trisha, because a lot of it really resonated with me. To begin with, when you said grades are currency, that really brought me to this idea of why students are starting to use AI in the first place. For some context, I come from Las Vegas, Nevada. Our education system there is not super great, but it left me really wanting to come to California and pursue higher education. Grades were really important to me back then, and I invested so much time into learning everything I could so that I could come to UC Berkeley. Now that I'm here, I remember entering for the first time and being surrounded by so many ambitious students. At first it was very intimidating, and I really thought to myself that maybe I wasn't as knowledgeable as I thought, maybe I wasn't as skilled as I thought I was.

And so it led to a lot of points of struggle, where you constantly compare yourself to other students. I still remember my freshman year, back in 2022: there was no ChatGPT yet. I was trying really hard to understand my CS courses and my data science courses, because this was all super new to me, and I hadn't used AI yet. I remember when my friend introduced me to it for the first time, saying: you can give this AI a prompt and it'll give everything to you; you can ask it to code you this program and it'll do it. I was so shocked by this, and I thought it was an amazing thing. I was somebody who was at office hours for, like, six hours a day trying to get help, getting my 15 minutes of time with a teaching assistant and not actually getting that much out of it, because it's hard to debug your code and really get that kind of help in only 15 minutes; they can't understand your entire program in that time.

So I began using AI as that tool to help me out, and it ended up being a much faster experience. I would ask AI, "Why is my code not working?" And I agree that it does become a sort of codependency, because you start to think: why would I go 20 minutes across campus to attend office hours for only a couple of minutes of help, maybe, when I can just ask AI to help me debug my code or understand a concept? And I think that's the mindset of a lot of students here at Berkeley now. I've been noticing that a lot of my peers are attending lectures less and attending office hours less, just because they can ask ChatGPT to summarize the lecture notes, explain a concept, or debug their code.

That goes very much in line with what Trisha was mentioning earlier about courses that relate to your major versus those that don't, your general requirements. I spoke to a lot of my peers, and a lot of them use Gemini and other AI tools to write essays and complete those tasks, because those aren't the skills they feel are going to be relevant for their careers. As data science majors, yes, we're going to learn a lot about machine learning, and we're going to need to know Python, but they might not feel the need to learn history, or to write an essay or a paper. So there's a lack of motivation, a lack of self-efficacy: they don't feel motivated to really learn it themselves. That's another reason I feel a lot of students gravitate toward using AI for those tasks.

And I really agree with the point about having to find ways for students to get more engaged in the classroom. I'm currently on course staff for a course here on campus called Data 144. We had been completely virtual for the past five years, and this is the first semester where we finally have proctored exams. We require students taking the exam online to have their cameras on, but we also offer an in-person option. I have friends taking this course, and they've told me, "I want to take the in-person option because I don't want to be tempted: I don't want my computer next to me, knowing I have the option to go to ChatGPT and answer the questions on bCourses." I didn't even know there was an extension that answers the questions for you; I feel like that's kind of crazy. But I really appreciate that from some of my peers. They were like, "Yeah, I know I probably won't be able to resist that temptation either." So I think that's really interesting.

And talking to other friends on data science course staff here at Berkeley, plenty of courses have started implementing in-person exams and in-person quizzes: you sign up for a time slot, attend in person, and complete your quiz in front of a proctor or supervisor. That's proof that you did learn. I feel like that's really important, because if a class is completely online, there's no evidence that you actually know what you're talking about. But if you're in an in-person quiz and you have to write an essay right there, or answer problems, that is the true indicator of how much you know.

For me, I like to use AI more as a tutor, asking questions repeatedly: "You're explaining this, but I don't quite understand this section; can you explain a little further?" In that sense, I feel AI can be a very useful and meaningful tool for students to learn with. But at the end of the day, I do think the classrooms implementing these in-person assessments have been very meaningful and useful for student learning. And I think this is just the start; things are going to get a lot harder to manage, but everyone is adapting and still learning about the best way to integrate AI. So that's something very interesting to think about. Thank you.

[Applause]

Hi, is this on? Okay. Hi, my name is Smarti, and I'm a second-year studying data science and political science. I actually completed high school, middle school, and most of my schooling in India, and I wanted to talk a little bit about how the education system there differs from here. ChatGPT came out, I think, when I was in 10th grade, and in India everything I was graded on was completely in person: in-person exams, in-person assignments. Nothing was virtual, so I don't think I ever used ChatGPT for anything in school.

But my first introduction to ChatGPT was an assignment my brother had due. He was in middle school at the time, and at my school specifically, middle school assignments depended more on your particular school, while high school assignments were more standardized. My brother had a creative assignment where he had to write a poem and submit it, and he showed me that he just put the prompt into ChatGPT and it generated a poem for him. I was surprised, because I really like creative writing, so when he told me about the assignment I was excited; I was like, "Oh, you should write it like this, or do it like that." Then he showed me the result, and I found myself actually liking the poem it generated. And I started thinking to myself: is this really a machine? Did it really generate this from scratch, or is it drawing from an existing poem?

So I think that was my first introduction to ChatGPT. But I had no use for it at school, because everything I was graded on was an in-person physical exam, so I never found myself needing it, and I don't think I really explored ChatGPT or any other AI tool while I was in high school. My first introduction to it in an educational context was a class I took here at Berkeley my freshman fall: the introductory CS class, CS61A. I'm actually on course staff for it now. As part of this class, there is a built-in AI bot, created by the professor of the class, that helps you debug your code. It tells you in natural language, "maybe consider changing this," but it doesn't actually give you any code that solves the problem. I was a little surprised that they had integrated this AI tool into the course itself and that we were allowed to use it, but they had also strictly prohibited the use of any other AI tool.
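As a rough illustration of that "natural-language hints, no code" behavior, here is one way such a constraint could be enforced in post-processing. To be clear, this is not the actual CS61A bot, which may enforce its policy entirely differently (for example, through its prompt or training); the function below is only a sketch of the idea.

```python
import re

# Hypothetical post-filter for a hints-only helper bot: keep the model's
# natural-language advice but strip any code it emits, so students get
# guidance rather than a ready-made solution.

CODE_FENCE = re.compile(r"`{3}.*?`{3}", re.DOTALL)  # fenced code blocks
INLINE_CODE = re.compile(r"`[^`\n]+`")              # inline `snippets`

def hints_only(model_reply: str) -> str:
    """Strip code from a model reply so only natural-language hints remain."""
    text = CODE_FENCE.sub("[code omitted - try writing it yourself]", model_reply)
    text = INLINE_CODE.sub("[snippet omitted]", text)
    # Collapse any leftover runs of blank lines.
    return re.sub(r"\n{3,}", "\n\n", text).strip()
```

A prompt-level instruction plus a filter like this is belt-and-suspenders: even if the model slips and writes code, the student never sees it.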

So I found myself using that bot a lot, just asking, "Does this code work?" I used to change aspects of my code and rerun it to see what the bot would say. But the bot was only implemented for certain assignments, and there were other assignments where it wasn't implemented at all, and I knew a lot of my peers would use other AI tools for those. I still found it helpful that the course allowed us to use AI for some aspects while still having physical exams, physical final exams, to test our knowledge. I think that was my introduction to how AI can be integrated into classes to assist with some parts of the learning experience without taking away from learning completely.

Another example I wanted to mention was a more creative class I took at Berkeley. It was hosted by the German department, but it was completely in English, and it was about literary AI. The idea of the course was to introduce us to more philosophical questions surrounding AI in the literary space: can AI be an author? If an AI authors a piece of work, does the AI claim ownership? Is that an original piece of work? We also had a segment of the class where we had to generate our own digital literature, which means using tools, whether AI or a Python program, to generate literature. The class was open to anyone from English majors to EECS majors, so there were a lot of people with no coding experience. For our final project we had to generate a piece of digital literature, and the professor told us: you can use AI to actually write the code for what you want to do, because I know a lot of you don't have coding experience, and you may have creative ideas without the technical skills to implement them. So I'm okay with you using AI to implement your ideas, but I want your ideas to be your own, and I want you to explain to me why you chose this particular idea. I think that was also an effective way to show that AI can be used productively without taking away from the learning process.

in my like personal day-to-day

coursework, I definitely do use AI

tools, but I think as Lynn mentioned, I

mainly use it like a personal tutor and

to like help me understand code. I think

it's really effective if I like paste a

code segment into AI and be like, I

don't understand why they like solved it

like this. Can you explain to me um if

there are any other approaches I could

take to solve this? And I think it is

useful for that. But um I do not use it

for any sort of like writing or creative

work because um personally I really

enjoy the creative process but I also

initially used AI for feedback

and like Trisha mentioned it wouldn't

just stop at feedback. It would be like

how about you rewrite it like this and

then it would give me a paragraph and I

would find myself drawn to like maybe

using one or two sentences from that and

then um like I realized that this was

sort of like taking away from my own

voice. So I stopped using AI for like

writing assessments altogether. And I

think in general what I've noticed is

that there are a lot of places that AI

can be implemented. But I think there

are still some disciplines where it

would be better to like restrict the use

of AI or like limit it to really minimal

things to sort of give us the

opportunity to go through our own

difficult process of trying to come up

with an idea, um thinking of how we can

implement that idea, like searching for

tools or searching for resources where

we can learn more about implementing the

idea. Even with writing, I think a

lot of people do prefer to use AI for

brainstorming just because they want to

see um what a machine could give them

but I personally still really like

talking to my friends about my ideas and

bouncing ideas off of each other and

even if my friends like don't have as

much knowledge on the like topic as a

machine would have I think there's still

something um personalized about talking

to like another person about your ideas

and seeing what different perspectives

they could offer that a chatbot

that is like designed to validate you

couldn't offer. And so I think in

general um I personally think AI can be

effective in disciplines where it's more

about like understanding why something

is like a certain way and when there's

usually like a right answer and you need

to understand like why the answer is

correct and then that sort of like

builds intuition for how you can arrive

at the right answers yourself. But

I think um in disciplines where

there's a lot more subjectivity and we

don't want uh like a single right answer

but there's like multiple approaches

that people can take. I think there is a

lot of value in not using AI and still

encouraging students to sort of like

figure out their own process for how

things work and figure out their own

answers. And so in the future I hope we

can sort of see these different

approaches for different disciplines.

[Applause]

Thank you so much to everyone on the

panel. We just had light bulbs. If you

could just show the image above our

heads. We had like all these light bulbs

going off. My question is like I

want students thinking in this way like

there is definitely a moral obligation

in terms of what how you're sharing your

stories and how you utilize AI. So has

any university contemplated changing what

their foundational courses are?

>> Um because we always say a standard

entry level English course may be needed

to apply to this program but how are we

integrating this knowledge as a

foundational course

>> the AI knowledge as a foundational

course?

>> Um I think yes, there are some. Is this

on?

There are some schools that are doing

that. Um,

there are lots of schools doing it.

Yeah, I think a first year experience

course should just all be you know

remember we used to do that, like how

does college work, and helping students

figure things out and AI should be a big

part of that. What you heard here

were two very self-regulated learners.

>> Yeah.

>> Two people who want to learn um and who

are dedicated to learning and using it

in the right way. So they're both

self-regulated learners as well as like

moral actors, right? They kind of say,

"Oh, I got tempted and I realized I

should pull back from that." Um, that's

uh, great to hear and isn't always as

likely between midnight and 3:00 a.m. um,

when people are tired and they're

stressed and they're losing their minds

a little bit, right? It's much harder

to make good decisions at that point.

But I think you're absolutely right. It

depends on the learning

objectives, and we have to teach them how

to use it properly because they're

begging us for that because they're just

winging it at this point and some smart

ones can figure it out, but it's hard

to realize. Yeah. How do you resist that

pull that it has?

>> Yeah. Um just just to add on to that, I

know that we talked, Trisha, before um

your presentation about how we're here

at UC Berkeley, which is a top

university. And so just knowing that you

want to take advantage of all the

coursework and just being able to engage

with other people, I feel like does

motivate people. And from what I've

been hearing from other students too, a

lot of AI use and all that temptation

like I was kind of mentioning before

stems from courses where they feel like

they're not going to need

to know that knowledge for their career.

Say somebody wants to go into software

engineering. They might not need to know

how to write this perfect five paragraph

essay. And so I feel like a lot of it

comes from, I don't know, just knowing

that they want to learn the

concepts themselves: something that they

genuinely need to know in the industry

and something they're genuinely

interested in too. And I just wanted

to also add I'm currently taking a class

right now called Data 104. It's data

ethics and it's actually one of my

favorite classes I've ever taken here in

my four years. And just being able to

discuss with other students that are

going into data science, learning about

how AI is just really blowing up right

now and how to handle AI, how to use it

when they're in the industry, how to use

it ethically. I feel like that kind of

course is really meaningful and it might

not really be an introductory course,

which I feel like there should be, but I

feel like it's still a

requirement for the data science major

and I think that's a really meaningful

course to have.

>> Yeah, it's um so a couple of things.

One, maybe we shouldn't be requiring a

five paragraph essay anymore. Maybe it

is a 20th century thing. It was a

vehicle through which we were trying to

teach those durable human skills like

critical thinking and problem solving

and persistence through difficult tasks

and tenacity and all of these things,

right? And and it's not really about the

five paragraph essay, but maybe there's

more 21st century ways that we can help

students develop those same skills that

aren't as easy or as desirable to fake.

We do this exercise with our students in

the after education program where we

give them a very typical like writing

assignment in a gen ed course like first

do this then do this and we have them go

through with the big wheel of durable

human skills and identify what skills

they might develop if they actually

engage fully in each part of that

exercise to help them see it's not

really about the five paragraph essay.

It's about the skills you develop in

doing the five paragraph essay. We need

to do a better job communicating that to

students.

I had a friend um not at this university

but a different one who mentioned to me

that he was taking a statistics class

like this semester that was really

exciting for him because he couldn't get

like correct answers from ChatGPT

um he had told me that I know that a

class is hard and going to be

challenging for me when ChatGPT doesn't

immediately give me the answer and I

thought that was really um like first of

all not the way I would approach it but

I thought it was interesting because um

first of all he was kind of revealing

that he was like using ChatGPT for

everything but second it kind of

framed the um like fault to be on

instructors for designing a course that

would like be really easy to use

ChatGPT on rather than like his own fault

for I guess using ChatGPT on it. And uh I

think that kind of got me thinking about

like how you can redesign courses to

resist ChatGPT. And when I was like searching

for professors for like a class I want

to take, I saw this comment about a

professor uh on Rate My Professors um

like a student mentioned that this

professor like puts their questions into

ChatGPT multiple times and then like edits

them until ChatGPT can't answer them. And I

kind of found that interesting because

you are like making the class harder

obviously but you are I guess involving

students more to actually go search for

answers. And I just was thinking about

whether that can be something that's

sort of used to improve classes and make

them less susceptible to like AI

involvement.

Thank you so much for this excellent

keynote and panelists. I was taking such

rapid notes. I have been trying to

figure out how to formulate this

question. Something that I get asked

all the time (I am the director of our

center for teaching and learning here)

is: where is the AI policy? And we get

this question from students and from

faculty and you know our center

came up with AI guidance that I think

boils down to each faculty member has to

understand what options there are but

has agency and ownership over how they

make those decisions. I think that's

right. I also think it's unsatisfying um

especially as I know folks are looking

and Trisha you said this and both of you

have said this as well that everyone's

looking for guidance and support and how

to center learning. So I guess here's my

question.

Is there a use for AI policy? And if so,

how could it most satisfyingly give

faculty the agency that I think they

need to own their courses successfully

while also creating the ethical

frameworks that I think we're all

desiring to continue making learning a

valid and meaningful experience?

>> So, it's a great question and I do think

that's the right policy uh because

whether AI is appropriate or not depends

on the learning outcomes for the course,

right? And it depends on whether the

students need to have that knowledge

first before they could effectively

use AI. So we do have to leave it up

to the professors. Uh as for the

departments, I do believe that

departments should be talking about this

and they should be figuring out in our

coursework, in our major, where should

AI appear, where should it not, and be

intentional about it, not each

individual instructor trying to make

this up as they go along. So I think

that's the first thing. Um uh again that

two-lane approach would say uh

there should be a bit more satisfying of

an answer which is if it's an

unsupervised assessment you can't ban

AI.

So you don't have a choice, professor.

These are assessments for

learning. If you're not

supervising them, you're going to have

to assume students are using them. But

but students have to disclose and part

of that is increasing their AI literacy,

their metacognition. They're thinking

about how they're thinking. What tool

did I use? Did I find it helpful or

did I find it hindered my learning?

Would I use it the same way again? Would

I use a different tool? Because I found

ChatGPT is way more sycophantic than Claude,

for example. And I would use Claude. I've

used Claude to help me do fiction

writing because it's not as intrusive as

ChatGPT is, right? So learning how to do

all that and disclosing that and that

also means being prepared to share my

prompts and all of my chat history with

my professor if they ask so that we can

talk about oh I see how you

started to use it but see here this is

where you gave up and you let ChatGPT

take over for you and so we can treat it

as a learning moment not as a cheating

moment and then for those cases those

learning objectives where a student has to

master it without AI, that's when

we do the secure supervised assessments

as you were mentioning. So,

I think our policy does need to say it's

up to learning outcomes. I think our

policy does need to say students, just

like before when you couldn't ask Joe to

do your exam for you, you cannot ask

like AI to do it for you. I don't care

what your professor says. No, you can't

have AI do your quiz for you. And right,

because we have to pay

attention to what are the learning

outcomes that need to be

mastered, and are they amplified

with AI or are they hindered by AI.

>> sorry

>> yeah thank you so much for such an

engaging discussion

[Applause]

Uh we have a 10-minute break and we

reconvene at 4 for our concluding

panel of the day.

>> If you don't want a break and you want

to come talk to us, we're going to sit

here for you.

>> That's less time for questions. Sorry, I

talked too long.

>> I stopped doing

Yeah. And I'm sorry for the tears.

That's never happened before.
