AI in Education: Panel Discussion: How Does AI Affect How We Learn?
By MIT Schwarzman College of Computing
Summary
Topics Covered
- The "Better Eric" Problem: AI Forces Us to Compete with Superhuman Versions of Ourselves
- AI for Education: From Side Effects to Intentional Design
- Autodidacts Beware: Chatbots Can Foster Delusion
Full Transcript
All right, everybody.
Welcome to the afternoon session.
So this is going to be a panel.
The topic of the panel is, how does AI affect how we learn?
We have a distinguished group of panelists.
And maybe I'll just start off by asking each of them to say just a couple of minutes about themselves, their role at MIT, and how they think about and interact with AI in their research, teaching, et cetera.
So we have our five panelists.
They are not in the order listed up here.
But we've got the two Erics, Jacob, Yael, and Melissa.
So we'll start with Eric K here, just because he's the first one in line.
So go ahead, Eric.
Great.
Thank you.
And thanks to all of you for coming back after lunch.
I'm Eric Klopfer.
I'm a Professor in Comparative Media Studies and Writing.
I run our Teacher Education Program there, which is a program that prepares MIT undergrads who want to be math and science teachers at the K-12 level.
I run a group called the Education Arcade.
It focuses on educational technology.
And I'm one of the co-directors for MIT's RAISE initiative, which stands for Responsible AI for Social Empowerment and Education.
I co-direct it along with Cynthia Breazeal, who's the PI, and Hal Abelson.
So a lot of my work lately focuses on both AI literacy-- so how do you teach kids about how to use AI effectively, ethically, responsibly, and for social impact around the world-- and then somewhat also thinking about how we use AI for education.
So there's the AI literacy, and there's AI-enabled education.
And so we do some of both of those kinds of projects working around the world, primarily on K-12, but a little bit now more recently with community colleges as well.
Great.
Go ahead, Eric.
Great.
Hi everyone.
I'm Eric So.
I'm a tenured professor at the Sloan School.
I teach a variety of courses, but one that's directly relevant is I teach a class on the business of AI and robotics, where we try to empower students to work with these tools.
I also help with a number of initiatives around Sloan, sort of propagating AI throughout teaching.
So I run some seminars for faculty and lecturers to come learn about these tools.
And then I'm also currently writing a book that's very much on this theme.
It's currently called What AI Does To Us, and it's really all about all the ways that AI is changing the way that we think and we learn and we perceive the world.
And so a lot of my research focuses on that.
And so, yeah, I'll turn it over to Jacob.
Hi, I'm Jacob Andreas.
I'm an associate professor in EECS and CSAIL and work on machine learning and language processing, and a lot to do with language models these days.
So thinking, I think, both from an educational perspective, what's the right way to teach about all of these technologies, given how quickly they're changing?
What's the right way to use these technologies for education more generally and not just AI education?
And then I guess on the more fundamental ML side, thinking about ways of actually changing the way we train and build modern AI systems so that potentially they're more useful for educational kinds of applications.
Yael.
Hello.
I am Yael Vinker.
I am a postdoc here at CSAIL.
I work with Antonio Torralba, and I'm working on enabling visual communication abilities in AI.
So this means: how can we go beyond generating pixel images to models that can use visual elements also to think, communicate, and solve problems?
For example, humans use the whiteboard also in education a lot to explain and learn, but AI today cannot do that.
So this is my connection to education, and I'm happy to be here.
Hi everyone.
My name is Melissa Webster, a Senior Lecturer in Managerial Communication at Sloan.
I teach two communication-intensive classes in the major.
One is Management Communication for Undergraduates.
The other is Communicating with Data for Undergraduates.
And I began incorporating GenAI within months of ChatGPT 3.5 coming out.
So that's been consistent in my classes and an ongoing process of incorporating it and adapting.
I also have undergraduate research assistants, where we look at how professionals are using GenAI and how it's changing their work, how companies are implementing it.
And within Sloan's Executive Education, I teach foundational GenAI skills for managers and executives.
Awesome, thanks.
So I think the way I'd like to run this is, I'll ask a couple of warm-up questions, and then you guys can start thinking about what you'd like to hear from our panelists.
And then we'll open it up to the audience.
So maybe I'll start off just with the topic of the panel, which is, How does AI affect how we learn?
and ask each of you to reflect on that.
I think, as we saw from the morning sessions, this is a very complicated topic because, on one hand, probably we've all had the experience of interacting with an LLM.
And it's an incredible personal tutor, and you can ask it questions about things you don't know.
And you don't have to be embarrassed because you forgot linear algebra or something.
And it's quite a remarkable experience, I think.
But on the other hand, it's a shortcut to learning in a lot of ways.
As we've heard, we talked about writing and coding this morning.
And it has the danger that-- I thought Shen put it really nicely that you think you understand something because you asked the LLM to generate the answer for you, and it made sense, but that doesn't mean that you've learned how to actually apply or synthesize that skill.
And a lot of what we struggle with teaching people, teaching people to learn, is how to put those skills into practice.
And there's just this concern about whether we're going to-- how do we actually teach people in this era?
So I'm happy, whoever wants to go first.
I don't know.
Somebody.
They're all very modest.
I mean, maybe we'll ask Eric.
Eric's writing the book about it.
So let's let Eric go first.
Terrific.
So I think this question is just so big and broad.
So it kind of depends a little bit on your particular orientation.
I can say, though, that it's very timely that this topic is being raised today.
Some of you may have seen, for example, that Anthropic released a study this morning, which was quite informative on this.
Maybe this is what you were referring to.
They brought in some of their junior software engineers, put them under normal time constraints, and had them work with coding tools.
And what they found was that the programmers solved these problems a little bit faster, but when they immediately had to stop and take a quiz about what they had just done, without using AI, they did significantly worse, by roughly the equivalent of two letter grades.
And in particular, some of the gaps in their learning were quite critical, in the sense that, for example, they really struggled with troubleshooting critical errors and bugs.
And so really what we're seeing is a failure to develop these skills: people can maybe get it to do things, but they don't really know how to solve problems because they don't understand what's happening underneath.
And that to me is a real critical issue that I think we need to address as academics.
And I'm sure a lot of the narratives that you've talked about today are along these lines.
I would also just add one element that I've been talking a lot about recently, which is how the competitive arms race among students, for the very best grades, for the top jobs, for the funding for their startup, creates an environment where it becomes very difficult to escape using AI.
And what I mean by that is, if you're competing with the person next to you for the highest grade or for more output, oftentimes that means you're being directly compared against the person next to you.
These are called rank effects in economics.
They're highly pervasive.
But when you enter into an environment where the person next to you may be using AI but you can't tell, it becomes somewhat reflexive to use these tools.
And even the idea that you may be using it puts a lot more pressure on me to use it.
And so some of my research, for example, shows that when you look at grading systems-- so for example, a curve-- we know that if you use a curve, and not everyone does, it implicitly puts a cap on the number of students that can get the top grade.
Well, what that does is it creates an environment where I'm forced to compete with the other Eric, the better Eric.
And to compete with the better Eric, well, I better use these AI tools which allow me to approximate superhuman intelligence.
And so the problem becomes one where I don't have time to even really think through the problem, simply because I'm trying to create more output.
So that's one of the many things that I'm focused on in my research and in my book, but I think it's very salient as we're thinking about how to design educational systems. A lot of people focus on what to teach or what assignments to give students.
But actually, the way that we put students within this environment, the way we evaluate them, is also part of the calculus that we have to consider as we're designing courses.
So I'll start with that.
Awesome.
That's great.
Thank you.
Whoever wants to go next.
Can I build on that?
Because that was great.
Maybe you're the better Eric.
And this echoes back to what Daniel concluded with this morning about helping students understand that they're the product, that that's the thing that they want to improve.
When you come out of here, it's not about the grade.
It's not about the number of programs you've written.
It's about you.
How are you coming out of this system?
And I think that's a really hard thing for students to understand, and I don't think we can just expect students to understand that on their own.
There's a lot of research out there on the impact of LLMs on learning, and when you look at hundreds and hundreds of papers, the answer seems to be that sometimes it has some positive effect on some people some of the time, except when it doesn't.
And sometimes it has the exact opposite effect.
And people are looking at all sorts of mitigating factors as to why this might be the case.
But I think that that idea of understanding your agency, understanding how it affects you, I think that's ultimately what's determining how this is affecting people.
And sometimes we get students here at MIT who have those abilities, have those skills, have that metacognition, understanding how they learn, have that motivation to want to make sure that they're really understanding it really well, and sometimes we don't.
But I think it's something that can be taught.
And I think we need to be thinking about how we teach our students those skills.
Maybe they'll be taught that in K-12, but I don't think we can rely on that.
I think it's something that we're going to have to think about: how we really teach that deeply here.
And I think that's the most important thing.
The other thing that I think Daniel said also this morning was, there's a century or more of learning theory and learning sciences that can help inform a lot of this.
Sometimes we've ignored it because we haven't had to pay attention to it yet.
And I think, in many cases now, I think AI is just the forcing function that's forcing us to reckon with a lot of pedagogies and a lot of processes that we probably should have been paying more attention to for a longer period of time.
And now we're realizing that it's taking a toll that we haven't taught our students those things.
And now it's the time that we really can't ignore those any further.
Great.
Maybe Melissa, do you want to say something a little bit about-- I'd love to hear your perspective on-- you've been using these in classes in Sloan.
I'd love to hear your perspective on what you've learned about how people learn.
Yeah.
Well, of course, one thing we have to think about as educators is what they should be learning.
So not just how but what they should be learning, which I have been thinking about a lot because of, for instance, the writing aspect in my classes: using GenAI as a writing assistant in the workplace is a very reasonable thing to do and makes for what I would evaluate as good quality output in many regards.
So now the students actually need to be better at evaluating writing.
So they've all now gotten an assistant.
So actually, I found this with my research assistants as well: when the Deep Research function came out a year ago, I started using it and said, oh, this is the sort of thing my research assistants would do for me.
And so now I'm trying to teach research assistants how to manage research assistants.
So the "what" has changed in addition to the "how."
I think the one anecdote I'm getting from students is that they're less likely to do things in groups.
If it's not assigned as a group, they're less likely to work with each other to do the work.
So they're finding that they're more isolated than previously.
So my UROP students say, the p-sets that I used to do with students, other fellow classmates, I'm much less likely to do that now because I can work with an LLM to do it.
So I think that social aspect of how we learn is-- I think Daniel maybe touched on it earlier today.
That is something that, because I care a lot about our interpersonal skills, I'm thinking about that aspect as well.
So a few different angles there.
Great.
Thank you.
Jacob?
Yeah, I mean, I guess just to pick up also on something that Eric was saying, I think even if you have a student who understands that, in some sense, the ultimate goal here is for them to understand this material as well as possible, I think we're not in a state where it's easy for them to use one of these tools to do better, in the sense that it's very easy to say, here's my p-set problem,
plug it into a model and ask what the answer is, and then write down the answer.
I think it's pretty hard, if you're at some early stage of knowledge or understanding, to take that p-set problem and go to any kind of language model or whatever that you can get your hands on right now and say, construct for me a set of exercises or a tutorial dialogue or something that causes me to understand the complete conceptual system that this problem is ultimately designed
to force me to recognize-- sorry, reckon with or evaluate.
So I think there's a lot of room for us to, I guess, as educators, to be on the other side of that and say, OK, well, if we know, when people take these problem sets home, they're just going to start using chatbots to help them with them, what other kind of scaffolding can we provide, either to make those sorts of interactions
useful when you're doing this with your partners or when you're doing this alone?
Especially for people who are really trying to get better but maybe lack the context for that.
So I think there's a lot more we can be doing as the people who understand what it is that we're trying to teach to make people more effective users of these tools.
I'd love to come back to that, but maybe let's-- I want to hear from Yael.
I think what you said at the beginning, this how so much of learning has a visual component to it.
And I wonder if you could reflect a little bit on what you've learned about how people learn and how the visual component is important?
And then if you could maybe give a plug for your research about how you're teaching AIs to participate in that, I'm sure people would love to hear.
Yeah.
I think that AI has become part of education kind of as a side effect.
It was not really designed that way.
We have LLMs. They have this huge store of knowledge.
And apparently, they can teach us new things.
But it's not a tool that was initially designed for education.
And when I think about education, specifically, I'm a visual person, so I like to summarize and draw and write everything.
And I think it's not a coincidence that we have a whiteboard or chalkboard in every classroom, because this is how humanity has converged to teach and learn.
So I think the visual aspect is core to how we as humans learn and educate.
And this part is broadly missing in today's AI for education tools, maybe because they were not initially designed for that task.
And the gap is becoming bigger and bigger because the majority of research is focused on language.
And this is where the data and the technology are.
But when we think about the visual aspect of that, the visual communication side, we have a lot of gaps to fill in.
We don't have the data.
And it's a big open research question.
Cool.
It's really fascinating.
Maybe just come back to this thing that sort of struck me, that Jacob said, this question of, how do we develop the scaffolding to help people to-- I don't know what the right word is-- make it so that people actually do learn?
What is it?
In your experience teaching or working with students, are there things that you've found have worked well, or ways that you've changed your practice of teaching to make it so that students are actually learning the things that you want them to learn?
I think we often hear people talking about, you start with the learning objectives.
What do you want people to know?
And then you work backwards to how you're going to make that happen.
But I'm wondering, when we have the thing that we've done for, I guess, hundreds of years in education, where we stand up and lecture and we give assignments, how much of that is going to have to change?
So I'm wondering if you can reflect a little bit on things that have changed in your own practice, either learning things or teaching things.
I'll quickly say that I've leaned more into the flipped classroom.
So the classroom is more for the synthesizing and practicing and applying of things than the taking in of information.
So take in the information before you're coming to class.
And then we're going to be practicing with it during class time.
That's, I think, the thing I've used the most.
Anybody else?
I mean, I guess two thoughts.
So with regard how to do this in the classroom, I really am not sure.
And it's certainly not something that we've really done in any meaningful way in at least any of the NLP or ML classes that I teach right now.
I think part of the challenge there is that the technology itself is changing so rapidly that even what the best practices are, or what your mental model of what you can do with a chatbot or one of these multimodal systems or whatever, is going to be out of date by the time you get to the end of the semester, where you made a whole plan to roll these things out.
I think it is also the case that just with respect to the kinds of things that you can do right now that are of the form, tutor me, basically, or generate a whole sort of plan for learning some body of material, I think we're a long way from having models that are actually good enough to do this.
If you take even a GPT-5-type thing off the shelf right now and get it to role play as a student, or ask it questions about what it thinks you know and don't understand as a result of some interaction, it's quite bad.
And I think there's a lot that needs to change, actually.
We can talk more about reasons having to do with the training process, why that might be the case.
But I think there is a lot of fundamental technical work to be done before we actually really have things that are useful as tutors.
At the same time, I think when we think about designing-- yeah, maybe thinking about success cases, like certainly I, and I think even a lot of the grad students and postdocs in my group, have gotten a lot out of being able to say, hey, here's some new little bit of math that I need to acquire for this project or piece of the literature that I didn't know.
Talk me through it.
And I think, if you come in with a sufficiently sophisticated understanding of where you are and what it is that you're trying to achieve, then already you can do a lot by just asking questions or saying, ask me questions.
And I think the challenge right now is figuring out how to get, say, a first- or second-year undergrad to a place where they can negotiate those kinds of interactions effectively, even before they have all of the specific content.
I'll say a couple of things.
So one is, I think a lot of faculty and teachers are using AI not necessarily directly with their students, but also in their own preparation of their teaching materials.
And I found that to be really helpful for me.
We saw Shen Shen talking earlier about getting feedback on slides.
I've done stuff where I'm like, OK, I want to do a role play activity in my class tomorrow, and I need to write 10 little dossiers for the characters that I want people to roleplay, but I only have a start on those things.
And so I can come up with a start for each of the 10 people.
And then, OK, now AI can fill them out, make them parallel.
I can edit them.
It was really great.
Something that might have taken me a few hours, and that I probably would have given up on for the next day, I was able to do with the help of AI.
So I found I've been able to do some more ambitious curricular things on short timelines in more responsive ways because I've had that aid.
But again, I think one needs to think about where and how they use that.
Even as a faculty member, there's some research out there that shows, if you just go to it to help with curriculum design, it's going to take things down to the lowest common denominator of the kinds of activities that you're going to do, and you really need to be able to be thoughtful about the kinds of things you're co-designing with AI.
The other thing I'll talk about-- I'll come back to the example I gave in my introductory remarks where we had very explicit instructions for students in my class.
It's a CI-H class.
You learned about CI-H classes earlier.
Since it's a CI-H class, there's a lot of writing in there.
We had very explicit instructions about when they could use AI and when they couldn't use AI, and how they needed to disclose it.
And they were pretty good about that, until the end of the semester, when it didn't work anymore.
And a lot of them wound up using AI and not disclosing it.
And the markers were clear.
And I think my lesson there is, I need to be-- giving them that instruction at the beginning of the semester wasn't sufficient.
They need to learn about it just in time.
So we needed to have an intervention.
Three weeks from the end of the semester, when everything is happening, we need to have an intervention at that time, or maybe a week or two before that, so they're ready for that time when it happens.
And so how am I going to manage that situation when it happens?
And I think we need to be thinking about ways that we not just give students guidelines and instructions, but really take them through that the first several times where they can really live it out.
So maybe I'll ask a related question, which is the flip side of this.
How has this changed how you think about evaluating your students?
Because again, we've existed in this world of-- and we've talked about this a little bit this morning already.
But the typical way that we evaluate students is through assignments and exams. And some of that maybe still works in class.
Exams work.
But again, to Shen Shen's talk this morning, we have usually supplemented exams with very regular assignments that give us some indication that the students are understanding.
And a significant portion of the grade is based on those.
And then, in the more advanced classes, we have these big, kind of open, more open projects that require some more ambitious synthesis of the material.
And yet those classes are more advanced, but they're not so advanced that the AIs can't do a reasonable facsimile of that synthesis for the students.
So how do we think about evaluating?
How have you changed how you think about evaluating or understanding that people are actually learning in this world?
I stumped the panel.
I think it's a very tricky subject.
I will say that-- I mean, this relates to what I was mentioning earlier.
I think a lot of the grading systems or just the means of evaluation in society are collapsing with AI.
The traditional means of doing that, I think, have become separated from the thing that we want to measure, which is underlying quality.
And so part of that means some very rudimentary things that you often hear people talk about, including a return to paper exams and those sorts of things, which forces the person to prove that they know something even without AI.
But what I've also highlighted is, that's becoming increasingly dicey.
And what I mean by that is I've shown, for example, how ambient AI has become.
It's in my glasses.
It's in my lapel pin.
It's in my watch.
And so students can easily have the AI system whisper the right answer to them.
And so really, unless you have a clean testing environment for which you can't access AI, I think it becomes incredibly tricky.
I think part of the steps that people have been taking are entirely reasonable, which concern the things where you cannot distinguish between a human and an AI.
For example, some basic homework assignments.
It makes no sense, in my opinion, to put a lot of grading weight on that, simply because it's no longer a measure of effort or aptitude.
It could just be a measure of how much you relied on AI.
I can say that more emphasis is placed on, I think, certainly in-class activities, but also their ability to articulate and explain what they're doing.
So end-of-semester projects, where they have to pitch and walk through them, I think, is a little bit of a forcing function to get people to show what they learn and what they don't.
I think one of the other things that I've been doing quite a bit with students is-- Jacob mentioned that the frontier is shifting.
And so part of my role, I think, as an educator in the classes I teach is to have students really grapple with these tools and figure out where this frontier sits.
What is it capable of and what is it not?
And so part of what we're doing, I think, is recognizing that it's useful to be able to use these tools, of course, with the caveats that we highlighted earlier, which I won't sort of rehash here.
But part of what that means is, working with these systems and taking account of what works and what fails, where does the boundary sit?
And force them to reflect upon the cases where AI doesn't do what we hope it would do.
And how do people know that?
Well, they can't really tell unless they actually grapple with the material themselves.
So I think that's part of the equation.
I could tell you many, many things that we are doing.
But anyways, I'll pause there.
I guess one thought is that a lot of the challenges with evaluation are really challenges of scale in the sense that the reason exams are convenient, the reasons take-home assignments are convenient is partly just because-- oh, sorry.
Is that better?
So these are challenges of scale in the sense that a large part of the reason exams are convenient, assignments are convenient, is that you issue one thing to everybody.
They all fill it out.
It comes back, and a TA or an instructor, or increasingly, some sort of automated process, can then go through and assign marks to all of those things and give them back to people.
And maybe what we're sort of facing up against is the fact that the feedback-generating mechanism for a lot of these classes was always maybe suffering from some of the same kinds of issues that we're seeing with AI right now.
And if we lived in a world where we could do one of these Oxbridge-style tutorial systems, and every student had lots of one-on-one face time with the instructors in their classes and was asked personalized questions about the specific assignments that we set for them, I think a lot of this would go away, even just because of the social pressure of losing face in your tutorial with your tutor, and not necessarily because of getting a poor score on an exam.
And I think the challenge is just that, in many of the classes here, we have 300 students, 400 students, we have three instructors.
And it's not really possible to provide that kind of individualized interaction.
So yeah, somehow, I don't know that this is a satisfying solution, but if there were just more of us to talk to people, then I think we could actually make a lot of progress on this.
If there were more Jacobs in the world, it would definitely be a better place.
That's the plea for why we should hire more academics.
[LAUGHTER] I'll build on some of the things that were said already.
There's a phrase that I've heard.
I attribute it to Mitch Resnick, although it may be Einstein that actually said it.
It's that we need to measure what we value rather than valuing what we can measure.
And so this relates to your comment around thinking about, what are the outcomes and outputs that we really want our students to have?
And we need to think about how we assess those things.
And it may be that it's a challenge.
It may be that we really run into a challenge when we're doing that.
But it may be that when we really think about what we really want to be able to do and know, that we actually find that there's better ways of finding those matches.
And I think I rely-- I've always relied on, and I think I'm relying more now on, projects and presentations, really, where the students are doing things in an interactive way, where there are people there and they have to defend their ideas in real time.
And I think, as Shen Shen mentioned this morning, there's also something about processes.
It's not just about the end result of the thing, but I need to have those benchmarks along the way and have feedback that I can give the person and make sure that they're able to have reflections on those as well.
So they're tracking their own progress.
And so I think, before, we could rely on the fact that when we got a product at the end, that there was some process that was behind it, but now we really need to be able to track that as well.
So I want to ask another question to Yael specifically, and you feel free to say you don't feel qualified to answer this question.
But I think this morning we heard people talking about writing and coding and learning languages.
You've been thinking-- I think you think a lot about art.
I'm wondering if you can reflect at all about how AI is affecting what it means to create art, or create visual arts, and if that impacts your research in any way?
But I know there's a lot of pop press about the end of creativity or whatever.
So I'd love to hear your thoughts about that.
Sure.
Well, it's my own perspective, so it's not the ground truth, but I see the recent developments as just another tool.
So humans have been inventing tools throughout history.
And when we invent tools, they just change the way we work.
But I don't think they are meant to replace us.
So I see this as a similar development to the invention of the camera, for example: that changed art completely, but it just opened new frontiers of art.
So I just think it will be another tool.
However, I do see that the interesting future use cases that I envision would be more around how these models can serve as partners for creation.
And I think that, in the visual domain, this is quite challenging because AI is advancing really fast with language.
But in visual arts, most of the processes are visual.
And because of that, it's harder to communicate with the machine today through the common practices we're used to.
And what's happening today is that people change their practices to work through the textual bottleneck, which is highly unnatural for visual artists.
So this also poses an interesting research direction: how to use these models more as partners for creation and exploration, the way we already do in the linguistic and textual domain.
And it will be interesting to do that in the visual space as well.
Very interesting.
Maybe we'll open it up to somebody in the audience who wants to ask a question or two.
I'm sure you guys have things to ask.
Thank you all.
Can you hear me OK?
Yep.
Great.
Thank you, Sam, and thanks to the panel.
My name is Isaac Treves.
I'm an MIT alum.
And I have a background in cognitive science.
I've also been working on a new project with Pawan Sinha, who's a professor of neuroscience, on the effect of AI on the brain.
And you've all touched on it, but I'm curious-- maybe I can give you an image that can-- an analogy that you can explore that we found useful.
And it's the bicycle.
So if you think about the bicycle as an invention, in cognitive science, it's been an interesting example that people use to illustrate how limited people's understanding is of everyday objects around them.
So there's this famous effect called the Illusion of Explanatory Depth.
Eric alluded to it a little bit with this Anthropic study.
You ask people, draw a bicycle and draw its components and tell me how it works.
And you ask them beforehand, how much do you really understand how a bicycle works?
And when they draw this image, it's pretty comical.
You see the bicycle's mechanics don't make any sense.
These are typically not MIT students.
And that helps illustrate this concept that AI is supercharging: you feel like you understand something, you feel like you've generated code that makes sense, but then you pressure test it, and it kind of falls apart.
Now, if we think about creativity and bicycles for creativity, there's this famous image that Steve Jobs gave.
When he was thinking about the computer, he liked to call it a "bicycle for the mind."
And Sendhil Mullainathan-- I'm butchering his last name-- in EECS now, likes this as well as an analogy, so I'm borrowing it from him.
Then, in a sense, if you use technology to activate the mind and work in concert with the human body, like the bicycle does, you have this opportunity to really amplify human creativity and abilities.
And just to give an example of what he uses in this domain-- so it's not anti-AI.
It's just thinking of a way to use AI in concert with a human being.
He talks about AlphaFold.
And AlphaFold's power for protein folding is something no human being can really match, but it's powered scientific discovery.
So I'm just curious about the panel's reactions.
Are there ways to educate students about this notion of explanatory depth that doesn't often come with using a tool like AI?
And can we instead use it to drive creativity?
So really great questions.
I'll just kick this off because it's been on my mind as of late.
So this very well-formulated question was about this analogy that Steve Jobs used in 1980, about this idea that the computer is a bicycle for the mind.
And so I understand that the parallels are it's an amplification of intention.
Computers allow you to do more, to reach further.
But I've come to think about AI slightly differently, which is not really as a bicycle for the mind, but as a motorcycle for the mind.
And what I mean by that is I think a bicycle amplifies the power that you bring.
I think, with a motorcycle, it's capable of propelling itself.
And in those cases, the capabilities multiply, but so do the risks.
And so part of the way that I've often talked to students is to make them aware of what the cognitive science shows us about this illusion of explanatory depth, about creating incentive systems that force people to reckon with that, but also to really bring forth some of what the research is showing about how people cannot really tell
the difference between what they're doing with AI and what they actually understand.
So I actually wanted to call my book The Motorcycle for the Mind, but it was quickly shut down by the editor as being too esoteric.
But yeah, I would love to chat with you more about this.
Eric mentioned metacognition earlier, and one of the incremental things I'm doing in my spring course is actually having a class about metacognition, our metacognition around AI as a tool that we're using for communication.
And don't tell the students, I haven't yet figured out what I'm going to do in that session.
But I think it's a really important topic.
So I will be venturing down that road.
And when I talk with managers and executives who very often ask about critical thinking, for instance, and how AI is affecting our thinking, and they seem particularly concerned for their employees and for their children-- I think they should be concerned for themselves as well--
that this is a conversation that now is necessary everywhere, including in your company, because, how is it affecting the work you do and the work you'll do in the future?
And what counts as good now?
And now that you can generate, say, 30 slide decks in a half day, what do you need slide decks for, or what do you need meeting summaries for?
So these questions, I think, are-- yeah.
So lots of larger questions that necessitate explicit conversations that we haven't really had before.
I guess picking up on the bicycle/motorcycle thing, what feels like the salient difference between, say, the ability to write computer code or to interact with any kind of application and what we can do now is that you can write a program, but only if you know what you want the program to do.
And in the course of writing it, you are understanding and are forced to articulate very explicitly what pieces of the problem-solving process are being offloaded and what pieces of the problem-solving process you are figuring out for yourself.
And I think the thing that feels different or the thing that feels challenging about the AI technologies that we have right now is that you can get something out without quite knowing what it was that you wanted in the first place.
You can ask a sort of ill-posed or underspecified question.
And again, this is a sort of design decision or a product choice.
Most of the systems that we have access to right now, if you ask a bad question or give a bad instruction, they'll give you something out in response.
They won't force you to clarify what it is that you actually were trying to get in the first place.
And often, precisely because it was underspecified, you won't be able to recognize that the thing that comes out wasn't actually the thing that you were looking for to begin with.
So again, I think there is room, just on the technical side of things, to improve this.
But that feels like a big challenge, is that it's possible to get an answer to a question that is not actually the question that you meant to ask, and not realize that that was the answer that you got.
So I think I would opt for the AI as the e-bike for the mind, which is somewhere in between.
[LAUGHTER] The Vespa for the mind.
And a class 1 e-bike, I think, which is the kind where you have to pedal, so it requires you to be involved in the process.
I think that's right.
Because I think-- I have a colleague who I was talking to yesterday down the street, who taught a course on vibe coding to students who are not programmers, not computer scientists.
It was about designing for education.
And these were people who had never thought they could ever create an app of any kind.
And they all sort of created things.
I think it was an app a week or something like that for six weeks.
And those are things that were disposable.
They're not meant to be sold or used by anybody else, but they were sort of an expression of something they were interested in, an expression of an idea.
It was ultimately something to communicate something to somebody else.
And I think that AI can open up opportunities like that to communicate and express ourselves in new ways that we didn't have access to before.
Sometimes I do stuff with graphic design that I'm not very capable of doing myself, but I'm able to interact with AI in a way that makes me more capable of doing those things.
And I think that with the right kind of combination, it can really amplify it in a way that the AI is doing maybe more than I can do myself, but I still feel very much involved in the process of that creation.
So if I could just pick up on something Jacob mentioned, which was related to this question.
So when I talk to students about the skills that they need for the next generation, I often draw this distinction between what I loosely call agent orchestration skills and enterprise orchestration skills.
And what I mean by that is, for a lot of people, working with AI means, how do I communicate with the agent?
How do I give it the right context?
How do I design the right systems to get it to do the thing I want to?
Those are critical skills, but that's not the central focus to me.
The latter category is what I refer to as enterprise orchestration skills.
And it's very much related to what Jacob mentioned, which is, we need the people who really understand what are the right questions to be asking.
And that is a really critical skill that I don't think enough people are taking seriously.
Too many of our students grew up in an environment in which they're used to receiving a homework assignment and finishing the homework assignment and then moving on with their life.
But actually, I think, for enterprise orchestration skills, you need to understand, what are the critical tasks that someone needs to solve?
What are the problems that are really worth addressing?
And then I think you need-- on top of that, you need something that I've often loosely referred to as calibration.
You need the ability to understand whether the approach that you're taking is actually solving the problem that you think it's meant to solve.
And for that, I think, when you force students into exercises where they're given some license to choose the right problem but also given instructions to really think critically about whether they've appropriately solved that, whether that was the right problem to ask, I think that brings about a different mode of learning.
And so at least what I teach in my class is this dual view.
We teach them how to use the tools.
But I think a lot of that learning, about what's going to be really important in the future, when you can spin up a thousand different agents to do work for you, is how you figure out where to aim the army, at the right questions.
And how do you know when it's done correctly and actually solved the problem you have in mind?
So that may be a controversial view, but that's where I sit today.
Go ahead.
Hi.
Thank you so much.
This was really, really, really great.
My name is Per Urlaub.
I'm in Global Languages.
I would like to share a very, very simple and profound insight that's not my insight, but I think that could serve as a lens to look at a lot of issues that you discussed in the panel
and that is incredibly helpful for me in order to think about what my students are doing and what my students should be doing and what we should do in education, what I do as a writer, as a scholar, what I'm doing as a human, in this case.
And this is an insight from Ethan Mollick, who's a business school professor at Wharton, UPenn.
And I recommend you follow him or whatever.
I think he's one of my heroes thinking about AI in education.
And he said this one very, very, very-- it sounds initially relatively flat, but it's super, super powerful.
He said, AI is really most powerful in the hands of people who are already expert in the domain in which they use the AI.
So in many ways, let's just say, for the sake of an example, I'm a great teacher of European Studies.
Ask my students.
I shouldn't say that about myself.
By using AI just to refine my class preparations and so on, I can be a phenomenal teacher.
Yeah?
I could not teach computer science because I don't know what you are doing.
I can't say, oh, I'm signing up for teaching computer science introduction class.
The night before class, I'll just play around on ChatGPT.
Can you give me lecture notes for tomorrow morning?
Yeah?
I mean, this is kind of exaggerating to make this example-- to show how simple that lens is.
But that lens is really, really, really powerful in my practice in order to think what we do in education.
It also provides us with a limit on what AI can do in the educational space, because it suggests that we first have to teach-- I don't know if we have to do that sequentially, necessarily, but we have to first teach students to think about these central matters in our fields
in a really smart way before we can say, hey, and now you can use AI in order to delegate some of the tedious stuff and do amazing stuff with your intellectual approach, how you see the field.
So again, thinking about AI being really powerful in the hands of people who are already experts and being not powerful at all in the hands of people who are amateurs in the domain.
I think it's a very, very, very useful way to think about it.
So I just wanted to share that.
I don't know if you want to riff on that.
It's not really a question.
I apologize for that.
I can't remember if Ethan was referring to this or not, but there's an Anthropic study that basically said the same thing.
They looked at a lot of prompts and the things that came out of the prompts.
And they basically found that people who asked more sophisticated questions got deeper answers so that the more you think about what you want to get, the more advanced the thing that comes out is.
And so I think it speaks to that point, that there is very much a relationship between the two.
Now, could that be technically manipulated so that people who ask simpler questions got more sophisticated answers?
Maybe.
But I think that matching is probably important for people's understanding.
I think part of this is really a question about, what is the function of expertise?
And certainly, right now, one of the functions of expertise is, like Eric was talking about, just the ability to just double-check outputs for correctness and calibration and things like that, and that these systems make mistakes sometimes, and you need some way of recognizing that.
And you can really only do that if you're an expert.
I think that is going to continue to be true for a long time, but maybe less and less, or at least less in the ways that we expect right now.
And I think even, to the teaching a computer science class example, I bet if you go to ChatGPT and you say, give me a one-hour introductory computer science lecture.
You just read the entire script from start to finish.
When a student raises their hand in class and asks you a question, you type the question into ChatGPT and you spit out the response.
I'm sure it wouldn't be as good as the introductory computer science classes at MIT, but I don't think it would be a necessarily terrible, terrible class.
But I think the other thing that expertise gives you is taste.
And I think this actually comes back to what Yael was talking about in the context of all of the visual generation stuff, that especially once we start to think about not, I'm going to give you some fixed assignment that requires you to implement some sort of fixed input output mapping, but I want you to choose a project that is good and interesting, you're going to have some idiosyncratic
notion of what it means to be good and what it means to be interesting that you can really only acquire with deep understanding of the domain.
Even then, your interests there may be very different from everybody else's, and other people may not like your code or your art or whatever.
And that feels like-- at least it is the piece of this that I worry most about right now: that it becomes very easy to produce output that is maybe high quality without ever acquiring one's own notion of what is good.
I mean, even just thinking about image generation as paradigmatic of this.
You can create beautiful images using the image generation tools that we have right now.
They all look the same.
They mostly look like schlock.
And the question is, how do you get someone to invent new kinds of art if they don't understand something deep about the kinds of art that have been done before?
And so I think a lot of the education that we have right now is targeted towards expertise in the service of correctness.
But somehow, along the way, people also acquire expertise in the service of taste.
And maybe there's some different way of doing things that treats that as actually like the primary object.
Right.
The whole enterprise of research is asking interesting questions.
And if we're outsourcing, what questions should I be answering?
If the way we figure out what questions we should ask is to ask the LLM what questions we should ask, that doesn't seem good for the future of humanity.
We have final projects in the NLP class, and you can tell who got their final-- all of them even, I think, are competently executed.
You can go up to the poster, and people can talk about what they did with some degree of accuracy.
But you can tell who got their project ideas from a language model and who got their project ideas from thinking about the things that they cared about.
For sure.
For sure.
Yael, do you want to add something to that?
I mean, you've been thinking about creativity.
I'd be curious to hear what you think about that.
Yeah, I think it also relates to the previous question.
So, yeah, I wanted to say that, basically, if we want to design models-- like we are on the technology developer side.
If we want to design models that will make us humans better thinkers or more creative, I think we also need to gain a better understanding of what that even means and of the human practices involved.
And this is also a very open research question in other fields.
And I can say, personally, in my work, because I come from this more design background, I have my own knowledge of the design process, and this gives me insights on how to build models that could help designers to actually think and create themselves, and not replace them.
But this is a very narrow perspective based on my own personal experience.
But we don't really have strong literature supporting and explaining how humans create, or what kind of tools will help them be more creative and think outside of the box themselves.
So this is why it's really hard, I guess, to develop the technology when we don't know what it should even do.
We can't even say what it is.
Yeah.
Students always-- my PhD students ask me this.
How do you come up with your ideas?
And it's, like, the hardest question to answer.
How do I know that that's a good research problem?
And it's like, well, you know it when you see it.
It's very hard to tell people-- explaining taste to people is very hard.
Yeah.
Go to [INAUDIBLE].
There's some folks on that.
Yeah.
I'm sorry.
You guys should go ahead.
Thanks.
I'm trying to pull a synthesis from what a number of you have said.
One thing I keep thinking about is this question of the balance between empowerment and limitation, whether it's the metaphor of a motorcycle or an e-bike or a bike, and that that shape is different for the different areas in which someone might use LLMs and AI.
And Yael spoke about AI as a co-creator and the need for more fluid interfaces and interfaces to allow us to create in new and different ways.
Eric mentioned Mitch Resnick.
And I'm guessing the vibe coding class you're talking about was Karen Brennan's.
So I was a teaching assistant for that in the fall with Professor Brennan at Harvard.
And it really was exhilarating to see these educators, most of whom had never coded anything and really never expected to, not just doing readings and then talking about these reading topics about AI in education, but themselves creating things.
And then having created those things and run into limitations, they understood on a more personal, experiential level what was hard to do, and how you can make something dazzling but then want to refine it with AIs or LLMs, and that can be so frustrating.
And those conversations then, about the theory and the writings we were responding to, were so rich.
So that's just a context I wanted to flag, this possibility of creating in new areas that we would not have been able to be creators in.
So I'll add something there.
I think, particularly in education-- and we've talked to a lot of folks in terms of how they think about integrating it into education-- they basically want to teach their students prompt engineering for whatever class they're doing.
And I think it's a much tighter integration, where the creativity happens, than just talking with a chatbot.
And, relating to some of the work that Yael was talking about, I think we need more IDE-like tools for different kinds of creative domains.
Things that integrate the kinds of tools you're already using in those domains, whether it be molecular visualization or whiteboards or equations.
Whatever it is, I think there's ways we interact in different domains that are not text.
And we need to think about ways that we have those kinds of tools available to people in more different domains.
And I think that would allow for that kind of creative expression and much more attached to the domain.
Good point.
That's a very cool idea.
I'm also really intrigued by the idea of teaching creativity or teaching taste or teaching how to evaluate.
So very often, we're teaching students how to do a thing, and we ourselves are the evaluators.
And I realized early on-- I was like, oh, wait, I actually need to teach students to do what I do, to be able to look at communication and assess it, is it good communication or not?
And so I'm really, really intrigued by this.
And sometimes I think we want kind of a mystery about creation or creativity, but there are things you can teach about it.
So for instance, one element of creativity is to generate many, many, many, many ideas and have a practice of generating a lot of ideas, because that's generally how you get a good idea, is having had lots of bad ideas, and other processes that engender creativity, or how to come up with good questions.
I do an exercise for managers and executives using GenAI and push them in a way that makes them come up with an unusual question, and they're all really impressed with how creative the LLM is with the response.
And I said, well, you were the one who came up with the question.
So that kind of process, to me, is really interesting, how we are going to teach those aspects going forward.
I think this gentleman here has been waiting, and then we'll come back to Zoey over here.
Is that good?
So go ahead.
Yeah, thanks.
I'm Curt Newton, Director of MIT OpenCourseWare.
My question comes from the world where all of this learning is not happening just within the confines of this institution, but broadly out in the world.
We've made tremendous progress in opening knowledge up to the whole world.
And I was thrilled.
The previous response-- Jacob, in particular, you called out that we're heading towards a world where a person could ask one of these tools, make a course for me out of this thing that I want to learn.
And I see it already in the way that Google Search now responds and leans in this direction.
The old world used to be a lot rougher for autodidacts trying to pull this stuff together.
I'm wondering if we're at a moment where we need to be making really intentional investments to make these tools function better as educators for the broader lifelong learning context.
And maybe there's something that MIT can be doing, for instance, in the way that we've been showing leadership in all of these other spaces for decades, to really contribute here.
And I invite any observations, perspectives on that.
I think I can comment, but I don't know how to tell MIT what to do.
But I guess I would say that, the way I see it, our role in academia is to do exactly that.
And these processes will be slower.
And that's the key, because they might be better.
Because things are progressing really fast.
And all of these day-to-day developments raise the question of how it will change education.
And I think our role is to make sure that it will change it in a good way, not to block the way we think, and not to start using tools that are highly nonintuitive for learning.
And I think that maybe our role as researchers is also to develop the underlying technology that will enable that, which also relates to what I said, that currently, the way we use these tools for education is kind of an artifact.
It's a side effect of a tool that was designed for a completely different task.
And something that I personally am interested in is, how can we design technology that will be more useful for education, and then integrate it with LLMs to spread all the knowledge that we now have access to?
So yeah, I would say that supporting internally the development of the actual technology and AI models that can also work with representations that may go beyond just text.
And this includes what I'm interested in, which is visual communication and the, yeah, I guess, other technologies, like gestures and more personal relationships with teachers and so on.
I think the key, in my mind, is not to use these technologies to make that kind of stuff better or more personalized, but to make it more accessible and more attractive to more people.
Because I think the people who are already the autodidacts out there have found this and are using it in different kinds of ways, but it's widening a divide.
And I think the key is to figure out how to narrow that divide and not by bringing the top down.
We need to figure out ways to use these kinds of tools to make more people want to learn these things, to make learning more accessible, and to make it easier to get started.
We know that once people get knocked out of a course or something like that, they never come back.
So we need to find ways to keep people involved.
And I don't think I have the answers right now to do that.
But to me, that's the goal.
I guess, specifically on the point of autodidacticism-- [LAUGHTER] I mean, I think it maybe is actually a little bit complicated, right?
In the sense that I, and I think many of us on the faculty now, get a couple of emails a week from people who have discovered some new sort of deep understanding that unifies the laws of physics and moral philosophy.
No, really.
And I mean, it's very scary, and it's very serious.
And it is, in some ways, a result of people setting out on interactions with chatbots with the goal of understanding, and ending with them thinking they understand something when they have actually been steered in some totally different direction.
And I think, if you think about one-on-one tutoring with a really good tutor as the thing to aspire to, what makes that work is that it is expert, in the sense that the tutor understands the field of study better than the person they're trying to teach.
It is directed, in the sense that the teacher's goal is not to make the student feel as good as possible, or to give them the subjective experience of understanding to the greatest degree possible, but to get them to actually understand what's going on.
And it's personalized, in the sense that the teacher has a really good mental model of the student and is able to provide what they need right now to maximally support their development.
When you think about the kinds of things that we can do in the classroom, it's very expert, it's very directed, it's a little bit personalized in the best case, because we have some understanding from interactions, at least with those students who still come to lectures and ask questions, of what's going on in their heads.
But it could be more personalized.
I think, when we think about things like OCW, it's directed and it's expert, but we are totally disconnected from any understanding of what's going on in our students' heads.
And I think there's a lot of room to bridge that gap and, as Yael was saying, to come up with technologies that better support that kind of personalization.
I think it is not what you get from an off-the-shelf language model or chatbot right now, both for fundamental reasons-- we haven't really figured out how to train these things for good multi-turn interactions with people-- and because the people who provide these models are not obviously incentivized to build things that produce actual understanding, as opposed to the subjective experience of understanding.
So I think there's a lot to do there, both in terms of changing the fundamental properties of the technologies, and in figuring out how to provide something closer to the kind of experience we can offer through these online education platforms: a little bit personalized, without letting that personalization lead down the path that results in new theories of physics.
All right.
We got a line of people who want to ask questions.
So maybe we should go to-- we've got five minutes left.
We should go to rapid-fire mode or something.
But go ahead.
We'll try to take a couple questions.
Rapid fire.
My question is, how do we learn how to teach?
Because it's changing so quickly, and everybody has their own amazing methods.
And I'm always asking, as a system, how do we operationalize this?
So as an institution, do we have weekly or monthly discussions where the best TAs, the best LAs, the best students, and the best teachers who are implementing AI teach each other what they're using?
And whether we literally could have a working group happening here, somewhere, or what would be your ideas?
Have a Google Doc where we all just list what we do?
So I wanted to put that out for the panel.
How can we help each other be better teachers, so we can help the students be creative, actually know how to think, and have personal agency?
That's the quick question.
Well, since we're going rapid fire, I'll just say, I think that's a great idea.
I can say that at Sloan, one of the initiatives that I was tasked with in the Dean's Office is really trying to educate different teaching staff and faculty about how to use these tools.
And so part of that meant weekly brown-bag seminars, where we take turns: someone presents their particular use case, what worked and what didn't, and also demonstrates how to use these tools.
This was within Sloan.
I think there are--
All of MIT, is what I'm trying to figure out, because we all need to do it.
Yeah.
So I think--
I'm throwing it out there. I'll support it if you do it.
I wish I had the bandwidth to help with all of that.
But yeah, I mean, certainly, I think there are initiatives within individual departments to do these sorts of things.
I think organizing it is difficult.
But many, many people come to me and mention that they wish there were something like this as well.
For people who are willing to organize it, I suspect you'll get a lot of takers.
Two or three quick things.
Per mentioned this earlier as a model within Global Languages: communities of practice.
It may be that we don't have a global community of practice across all of MIT, but think about ways that we connect and have communities of practice.
Second thing is, the next speaker is from the Teaching and Learning Lab.
Sorry.
And so I think that can play a key role here.
And then the third thing is, we will have a form at the end that people can put things on.
So if there are suggestions like this, things we need at MIT, make sure you document them there.
I want to ask more about that.
OK.
I will say briefly also, there is this new Institute Committee on Generative AI in the Classroom, or whatever it's called.
And-- Who's running that?
Who is the chair?
Part of the reason we're here-- yeah, ChatGPT.
Part of the reason that Eric and I are organizing this is because we were put in charge of this committee.
So we're running this Institute Committee, and this was our-- we thought it would be fun to do this as kind of a kickoff event.
So-- You were right.
Yeah.
So, Eric, I know that Sloan takes a lot of initiative in AI, and you especially are leading a lot of the effort, including Vivek and Rama's class.
But my question is, how will Sloan, as a business school, reshape the MBA's AI branding?
And has it increased collaboration with what is arguably the world's best EECS department and CSAIL?
I want to see more of that collaboration.
We have so many computer science professors here.
We want to actually speak to and leverage the future business leaders in a more general, public, accessible way.
Yeah, it's a great point.
I mean, we really want to try to bridge across the Institute.
I can tell you that we have some pilot programs in place.
So, for example, I'm the faculty co-director of a program called the AI Academy, where we bring a number of great faculty from Sloan and from Schwarzman together, including Jacob, to come teach.
And so we teach the business side, the technical side of things.
And I think, in a lot of those interactions, you get the synergies that people are looking for.
So I can say that we are working towards that model.
There's even some sort of unofficial push towards a joint degree program along these lines.
Again, making no promises.
But there's a lot of demand for those things, and we're working on it.
So part of what I can also say is that within Sloan, there are a number of faculty who are Sloan faculty but are really computer scientists.
And so we have courses by a number of wonderful people.
You mentioned Vivek, but there are joint placements between Sloan faculty and EECS, people who I think could really teach a lot of these classes.
I would be remiss not to note that we're here in the Schwarzman College of Computing building, and to thank the College for helping us organize this event and giving us the space.
And so as a part of the Schwarzman College, actually, there have been 25 new shared faculty who've been hired between EECS and the other departments, including several people in Sloan and economics and other related fields.
And these are really very cross-disciplinary people, people who are very deep in their domain but also have expertise in computing.
And we've created a whole new series of courses, these Common Ground courses, which are meant to integrate people from their disciplines into computing.
So there really is-- we're trying to build up this much more cross-disciplinary culture here.
And the things that Eric mentioned, yeah, we're definitely talking about joint degrees and other things like that.
So we hope to be doing even more.
So maybe we've got, I think, time for one last question.
I don't know if there's any time to answer this question, but getting back to where you started us, Eric.
So it seems to me that the largest difference I see between students using AI well and students using AI poorly comes down to fear, stress, and competition, which I think we all agreed is driven by grading.
Why is grading such a tyranny?
What would be a good assessment system?
And why are we allowing it to govern us?
Can we break it?
And if we break it, how should we rebuild it?
Wow.
OK.
All right.
So that's a lot there.
30 seconds.
Yes.
I think you're sort of asking the right questions.
I think this is a much longer conversation that I have to have with you.
I would add on top of that, you mentioned pressure and fear and those sorts of things.
I also think it's worth reflecting a little bit on what we know from cognitive science and behavioral economics: we are evolutionarily wired to conserve energy.
Our brains are this amazing anomaly within biology-- about 2% of our body weight, yet consuming over 20% of our energy.
And so what that means is that people are biologically programmed, for evolutionary reasons, to conserve energy.
And so it's not-- I shouldn't say a lot of people, but there are some people who look at young people and say, oh, gosh, you should think for yourself, and if you just say that to them, they'll figure it out.
But I don't think that really reflects the modern reality of the environment that students are in: certainly the competition, but also the forces under which they were trained and under which we have developed as humans.
And so I am very much in favor of rethinking and redesigning how we handle education.
Certainly, moving away from forced curves, I think, addresses part of the issue.
But the reality is that we live in a society with at least the following two elements: a highly competitive environment, as we've talked about earlier, and highly convex payoffs.
In virtually every economic setting, across the spectrum, the marginal benefit of going from OK to good is far, far smaller than the benefit of going from good to great.
And so what that means is that when you push people into an environment where you not only have to compete, but where making it to the top brings huge rewards, grading is only part of the issue.
It reflects this broader societal convexity and its incentives.
And so I wish we could solve it with one dial, but I think it's part of a broader discussion we have to have.
And I'd be happy to have it when I have more than 30 seconds.
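(As a toy worked example of that convexity point-- the payoff function and the quality scores here are invented for illustration, not numbers the speaker gave: score outcomes x = 1 for OK, x = 2 for good, and x = 3 for great, and suppose payoffs follow a convex f(x) = x^2. Then f(2) - f(1) = 4 - 1 = 3, while f(3) - f(2) = 9 - 4 = 5, so the good-to-great step pays more than the OK-to-good step, and the increments keep widening toward the top.)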
All right.
I think that was a really great way to wrap up.
Thank you very much, everybody.
Thanks to the panelists.
[APPLAUSE]