AI & The End of Education
By Johnathan Bi
Summary
Key takeaways
- **AI Can't Replicate Human Life Backing Ideas**: Socrates makes Alcibiades feel shame because his life backs up his ideas: he endures hunger and cold, and is at once a gentle teacher and a great warrior. Even if AI matches the semantic content perfectly, the human dimension behind the words is missing. [02:21], [02:41]
- **Ex-Slaves' Poems Defy AI Replication**: A poem by someone 30 years out of slavery responding to Charge of the Light Brigade or Gray's Elegy captures feelings from the lived experience of reading great books that AI cannot produce. AI can generate a Tennysonian poem but not convey that human engagement over time. [04:29], [05:04]
- **AI Ends Arm's-Length Academic Philosophy**: If AI advances sufficiently, it spells the end of academic philosophy as the arm's-length solving of intellectual puzzles that most people don't apply to their lives. We might return to philosophy as a way of life. [06:58], [07:15]
- **Human Care Decides What Matters**: There are infinite aspects to the universe, but we as humans give a damn and decide what matters through care and concern, which AI does not. Much knowledge is primordially practical and inarticulable, gained through experience in the world. [08:07], [08:47]
- **Alpha School: AI Frees Time for Experience**: At Alpha School, kids use AI two hours a day for personalized math and language mastery, crunching a six-hour day into two with great results, then spend the rest of the day on experiential learning like biking five miles or public speaking. Guides help with motivation while AI coaches performance. [28:12], [29:08]
- **AI Risks Creating a Passive Two-Class Society**: The same technology that enables self-directed six-year-olds can breed passivity, dependence, and doomscrolling, potentially creating two classes: one autonomous and thriving, the other enfeebled by passive use. [30:30], [31:12]
Topics Covered
- AI Lacks Human Life's Authority
- Responses to Books Reveal Human Experience
- AI Ends Detached Academic Philosophy
- AI Enables Eudaimonic Self-Development
- Reject AI Determinism, Build the Desired Future
Full Transcript
Can AI write a great book?
>> AI is not going to be able to write with the deliberateness of allusion of a person who has been deeply influenced.
>> AI spells the end of academic philosophy as it stands right now.
>> There's a human being that's behind each of the great books. There's always a human connection.
>> There are books I can think of where there is not a human connection. For Muhammad, for example, those aren't his words, right? Those are the words of Allah.
>> I sit with Shakespeare and he winces not. I sit with Claude and he winces not.
>> That kind of text will never be done by AI. An AI can't say how you feel from the experience you have had of reading great books.
>> A counterpoint to technological determinism: you can build the kind of future that you want.
>> I'd like to start this off with a question to Jonathan. Can AI, which as most of us know it so far is a chat-based technology, truly engage in Socratic inquiry, or does it only simulate the form while missing the essence?
>> Yeah, well, I guess it depends on what we think the essence of Socratic questioning is. One candidate for the essence is just the semantic words themselves, right? And a version of the question you can ask is: is there anything the wisest human tutor, or even Socrates himself, would or could say to a potential student that an AI couldn't plausibly say, now or in the future? Another way to ask this question is: can AI write a great book? If an AI were to witness the fall of Rome, would it have the capacity to write The City of God or something like that? My stance is that it clearly can't do it now, but I don't see a reason why it couldn't. So on the level of the purely semantic content of the questions asked by the Socratic tutor of the Socratic pupil, I don't think there's anything necessarily going to be different. But then the question is: if there's nothing different in the pure semantic language, does the fact that it comes from a human make it any more meaningful in the Socratic method? And I think the answer is definitely yes.
I just recently gave a lecture on Plato's Symposium. Alcibiades, who is this aristocratic playboy general, says that when he listens to the speeches of Socrates, Socrates is the only man in Athens able to make him feel shame. And why is that? It's because Socrates has a life that backs up the ideas. We're told how he endures hunger and cold, and how he's at once, you know, a very gentle teacher but also a great warrior on the battlefield. So the fact that a human is saying these words adds, I think, a new dimension that would be missing even if AI got the semantic content perfect. I'll just say one last point, which is that the fact that there is no human saying it is, in the case I gave, a negative. But I could also see it as being a positive for Socratic questioning, namely that the person being questioned is not trying to impress or win the recognition of such an interlocutor. One last example: here at Cosmos we're looking at different kinds of AI therapy apps, and one of the key reasons cited for why AI therapy is already better than a lot of human therapy is that the patient is not trying to impress the system or seek recognition from it. So my full answer would be: I don't see a reason why AI can't generate the semantic content that a human would, and the fact that it's not a human has both positives and negatives that we're going to have to navigate.
>> I want to say something about my work on the Black Periodical Literature Project. I say that because it is a record, in many ways, of emancipated slaves in America becoming educated, going to college, reading great books at places like Fisk, having reading groups in small towns, and writing, and the traces of that writing, the traces of the great books, manifest themselves in fiction.
But if you sit there with a student and say, "Okay, look at this poem written by somebody 30 years out of slavery who has reacted to Charge of the Light Brigade, or has responded to the lines in Gray's Elegy about mute inglorious Miltons, about Americans who have not been allowed to speak and have poetry." There is a way in which that kind of understanding of the production of that text will never be done by AI. An AI can produce a Tennysonian poem, but it can't say how you feel from the experience you have had of reading great books. And so there are multiple levels of how we think about what great literature is, and what it means in the classroom to understand the engagements of people with this work over time. That, I think, is what I want to do in the classroom after AI delivers the facts.
>> Thank you very much. I hear something close to a disagreement here, which we might follow up on in a moment. So Hollis is saying that there's a certain kind of text that AI is not going to produce: it sounds like you're saying not the great books themselves, but the responses to the great books, showing how people are taking them up. And Jonathan, you were saying that you do think an AI will eventually produce a great book. It's not exactly the same claim, but these are approximate claims.
>> Yeah. So I think there's a gradient of how much the person matters. Do I really need to understand who Kant was to read the Critique of Pure Reason? Not as much as I need to understand who Plato was to read the dialogues, and even more so for an autobiography, of course, like the Confessions, where Augustine's life becomes even more important. And I guess my pushback would be: does it really matter that it's Shelley, for example? We read Frankenstein. Does it matter that that literature was created by a human, if there's something a lot better created not by a human? Again, there's something lost there; I want to affirm that there's something lost there. But maybe the quality differences are so big that we just don't end up caring.
Another way to frame this is that I think if AI gets to a sufficiently advanced place, it spells the end of academic philosophy as it stands right now, which I take to be this arm's-length solving of intellectual puzzles that most people don't really apply to their lives, whereas we might go back to an emphasis on philosophy as a way of life, because that's what becomes important. So I'm certainly not saying it can write every genre, and I'm not saying that even in the case that it can, nothing is lost. But yeah, thank you.
>> I want to bring in Brendan here. I'm going to ask you sort of the same question from the opposite side. We're talking about whether there's something that a text written by a human has in it that an AI couldn't replicate. Let me ask about that from the side of the learning: from a philosophical perspective, is there something irreducibly human about learning that AI can't replicate or replace?
>> Well, there are infinite aspects in the universe, and we have to decide what matters. We as humans give a damn; AI doesn't give a damn. We decide what matters and, through the lens of Heidegger I think it is, we view the world with care and concern. So I think that is one really significant limitation. Another that I think is significant, but I'm mixed on whether it can be resolved, is that much of the knowledge that guides our action is of a primordially practical character. When we think of learning, we think about propositional knowledge, or explicit semantic knowledge, things like what you might learn in science or physics. But a lot of knowledge isn't like that. A lot of knowledge is inarticulable knowledge, and for that we seem to need some experience in the world. And it's not clear to me what kinds of experience we would need, and what is preconditional to learning from those experiences. But it definitely is the case that our explicit semantic knowledge, what we consciously picture with our intellect, is in some sense cradled by this tacit dimension of knowledge, and we gain it by trying things out and experiencing the world, enjoying it and getting hurt by it, and that sort of thing. And doing it freely, with a kind of volition. So can we emulate that with robots? Can they go explore the world and get hurt? Maybe; I don't know. I think that's where I would question the limitation.
>> I think about this question a lot in my own work, and one of the things I am monitoring for in AI's progress: I remember it's at least ten years ago that you saw the first AI programs that could produce a new work in the style of Mozart that could fool some classical music scholars, right? And that was impressive and frightening. Then the question becomes: are they going to create a new Mozart, or a new whoever the answer to Mozart is, without that already being there in their training data? And I think that speaks to the points that all of you are making. If the AI is drawing on the well of human experience that has been put into words, what about the well that hasn't been put into words yet? That is a lot of what great literature does: it draws attention to things that are already there in human experience, but that nobody has quite been able to pay attention to yet. I don't know that it could or it couldn't. I would be a little bit surprised if it could, but that seems like one way to put the open question.
>> I just wanted to respond to this point, because it seems like part of the question has to do with what the status of text is. And I actually thought you were going to talk about Plato's Phaedrus, but I will, since Socrates in the Phaedrus describes a deep problem with writing: when the word is written down, it can be passed along and separated from its speaker, and it can't explain itself anymore. And the thing that seems to really matter is what Socrates and Phaedrus describe as the living speech that is written in the soul of the person who knows. Now, that sounds sentimental or something like that, but I think what it is in fact describing (and it's not just true for writing; it's also true for spoken speech) is this: people hear words and phrases and slogans, and they repeat them. That's also a big feature of Plato's dialogues: lots of people saying things that they don't truly understand. They've heard it from somewhere. So the word, the sentence, travels as a unit, and it doesn't get examined. And then the conversation is the place where you discover what those words might mean, and you either make the connection, becoming the speaker who wants to say the words, or you reject it. This is part of what living conversation is. And I think one of the things that that Phaedrus passage points to is that this is really the whole basis behind this practice of conversation, where we examine certain pieces of speech.
What it suggests to me is that there's something that's not text, and it's not even just implicit knowledge. It's the spontaneous human mind that thinks, that reflects, that perceives, that experiences, that puts things together and that doesn't put things together. And I think it's that mind that I encounter when, say, I'm beating my head against Kant in the Critique of Pure Reason and I'm thinking: Kant, what are you doing? What was Kant's mind doing, not just his mind but also his will and his heart and his desires? What was he trying to do when he did that? That's part of my exercise as a reader and part of my exercise as a conversationalist. What about you, Jonathan? What about you, Brendan? What about you, Hollis? What are you after? What do you see? What do you understand that's behind your words? So I think looking behind words is crucial to philosophy and crucial to education, and I can't see how a machine could do it.
>> Yeah.
>> So if I can just respond to that: I'm glad you brought in the Phaedrus. One of my favorite metaphors, I think, is that an idea is like a seed, and Socrates asks his interlocutor: where would you want to plant the seed? Would it be in the dead pages of a book, or in the living, fertile soil of a man's soul? Right: ensouled speech. So I don't disagree with you on what the end goal is; we want to get ensouled speech into people.
>> Right, right. That's the goal: we want them to self-guide.
>> Yeah. And then the question becomes: can an AI help you get ensouled speech as well as a text can? One reason to think that it can, even better, is that one of the critiques Socrates makes of writing in the Phaedrus is that it gives everyone the same speech. A book does not know how to hide or withhold or tailor specific ideas to people. An LLM can do that; this is what we're already seeing in therapy bots. And the last thing I'll say is that I totally agree with you: it's important for me, when I read a text, to think that there is, I won't say a mind here, but a structure. If I read Kant, I know Kant is a serious thinker, so if there's something I don't understand, I need to work to understand what he's getting at. But I don't see why AI can't eventually earn enough of your trust that you see there might be a structure beneath it. I'll give you an example: playing video games against AI, or just computer NPCs, where it's trying to build a strategy around what you're doing. You think: what is it trying to do? What is the underlying structure here? So I don't think we need a mind behind the text to get ensouled speech, but we do need a structure to wrestle with. Yeah.
>> So can I say that I think I'm halfway between you two on this? There's the primary text, there's the secondary text, there's the critical community, which is part of what one engages when one is a scholar of these things. You don't just read Kant; you read who has read Kant, who has been influenced by it, what those genealogies are, not just what Plato is. And that's why I started by talking about this archive that I've been spending my scholarship on: because it's not just Plato, it's not just Kant, it is the way that it has influenced things. When we get into discussions about a particular book, like we did yesterday with Shelley, you know, AI is not going to be able to write with the deliberateness of allusion of a person who has been deeply influenced by, and hung out with, and had weekends with, and hated, and somebody stole somebody's spouse, right? You have those living experiences, and then when you drop that person's poetry at a moment in your text, it's going to mean something different, because we embed allusions; we embed what we have read. All of our language includes everything that we have read already. So, to be very specific: sure, the words can be delivered to readers through LLMs, and that's awesome, but what we as humans and scholars and thinkers need to do is understand those genealogies of influence.
>> How do you square this with W. E. B. Du Bois? "I sit with Shakespeare and he winces not." I mean, I sit with Claude and he winces not.
>> Well, I mean, you know, Du Bois was saying: I'm reading this book and he doesn't care about my race, right? He doesn't care. Which is something maybe Plato should have thought about: when the text is sitting there, it's not going to have a bias against its reader. And I think about the ways that we are going to have young people approach Claude, and Claude doesn't care if you're a woman, if you're poor, if you're this or you're that; it is going to answer your questions. I'm all for it, right? I am all for what LLMs can do to deliver. But those refinements...
>> But can I back up Brendan here? Because I think that it's already true of the technology of the book. I mean, I don't normally like to talk like that, but since I'm with technologists, I'll be a technologist. It's one of the great ironies that Plato and Aristotle, who thought that very, very few people were capable of philosophy, have books that are widely printed; they're in paperbacks, they're cheap, anyone can open one up and, with Du Bois or with any of us, engage with this mind, which in experience, despite being the mind of a dead person, feels very much alive. And if you read these books every year, as I do, you know, you go back to them again and again and you notice things that were never there before, things that are again part of this spontaneous act of thinking and imagination that creates these books. There's a human being behind each of the great books. Sometimes a community: if we think about the Iliad or the Hebrew Bible, books which were written with and for audiences, over a community, over a period of time. But it's always a human connection, and I don't know how I would know how to read without thinking that there's a human being at the other end who is communicating something that they've perceived, they've experienced, they've thought, they've put together, the judgment that they've formed.
>> Well, two responses here. The first refers to Zena: there are books I can think of where there is not a human connection, and those are religious texts. For Muhammad, for example, those aren't his words, right? Those are the words of Allah. And this is the point I was trying to make: what we need behind a text is confidence that there's a structure behind it, not necessarily a human structure, and in religious texts we're already very happy to do this. Now, if we have an epistemic trust in the products of AI, I don't see why not having a human connection, but having a trust in the structure that created it, is any different. And then, to respond to Hollis's point: I'm really glad you brought up Frankenstein, because I think there are two separate claims that we need to discuss separately. The first claim is whether an AI can write a book such as Frankenstein with that deliberateness, with the idea that if you poke into a seemingly arbitrary detail, it's not arbitrary; it's connected to a structure. That's the first question, and my answer there is that it clearly can't do it now, but I don't see a theoretical reason why it couldn't, and I think the burden of proof is on the person who says there's no way, that there's a theoretical limit. Then I think there's an even more interesting question: if it can write such books, does it matter that it's not a human who wrote it? And again, from genre to genre, I think it changes. In the case of Frankenstein, I don't think my enjoyment of reading the book would be any less if I didn't know who the author was, in a lot of cases. So that's what I'm trying to say.
>> Well, also, the monster read Milton, and we rejected him. This is an important point about the distinction, I think, playing out at another level. In other words, the monster had all of the eloquence, in his own writing and in his own ability to engage the great books, yet he was completely outcast by society for reasons that were aesthetic (the watery eyes of the monster were called to mind, for example) and ultimately kind of societal. And then he did a lot of violence, and then it was more justified.
>> I just want to note here, as moderator, that as we're discussing the possibility of an AI that could write Frankenstein without ever having read it or having it in its training data, we're having booming peals of thunder and lightning in the background.
>> Well, can I respond? Clearly, this is of textual significance, right?
>> A non-human text, maybe.
>> So I think you're right to distinguish whether someday an AI could be able to do this from whether it would matter. I'm going to pull back a little bit, because we're supposed to be talking about education a little more broadly, and again I'm coming from the public education sector. Right now, the great crisis in public education, if you look at what states are wanting to do, or the way states and lawmakers are using AI, is that it's for things like workforce alignment. The idea is that all of these AIs are looking around whatever state you're in and saying: these are the job openings, these are the trainings that you're going to need. And I have seen the algorithms these AIs use in creating curriculum for state universities: no philosophers. And this is a real problem with the AI algorithm, because philosophers and readers of great texts can do everything, but aren't aligned to a particular job. So I guess I would push back: sure, we can have AIs write novels as great, but is that our societal problem right now? Can we actually have AI do better things for higher education, which is to talk about the importance of reading Plato, whether with AI or without? How does AI support human flourishing? How does it get our state legislators to think more broadly about the conversation that we're having here?
>> Zena, go ahead, and then I want to ask Brendan a question.
>> I just wanted to formulate my version of that question, which is this: we ask a lot of questions about what AI could do or can't do, but the question I'm always interested in is what we want from AI. What is the problem, or what are the problems, in education, apart from those pragmatic ones, that it can solve for us? That is, what are we looking for? I'd be interested especially in those of you who are actually working on it; this is my chance to talk to people who are actually working on it. What's the aspiration? What problems can it solve?
>> Brendan, I think this is a good question for you, and I'll add a little bit more onto it. We've been talking a lot about what we stand to lose through AI; I think it's worth focusing on what we stand to gain. I will just say, personally, in my own work, when I'm writing about AI I'm usually writing in a pessimistic mode, because it looks like potentially radical change, and I'm more of a conservative, small c, at least. When I'm using it for my own purposes, to get tasks done on my list, I feel great about it, and I haven't entirely reconciled these two things yet. I wonder how you feel about this, Brendan, in the educational system. Is there a way that you can create an AI that could solve some of the urgent problems facing higher education or primary school education right now? Another way to put this: a lot of the worry is that education is continuing down a path where efficiency is the goal. Is there a way that you could imagine an AI being created that could expand an educational system based around a different concept, like eudaimonia?
>> Yeah. So one of the charges that gets levied against education, particularly K-12 public school education, is that it's the Prussian model: educating for the industrial revolutions, educating factory workers, and loyal citizens. One of my favorite thinkers on this subject, and a person who's part of the eudaimonic tradition, is Wilhelm von Humboldt, a German philosopher. He is writing and thinking in an age not unlike ours, insofar as it's an incipient age of automation. He embraces a lot of what Aristotle has to say about what it means to live a flourishing life, and he adds to it a kind of romantic individualism. So it has a much more individual, plural character: he cares a lot more about developing individuality and uniqueness in humans, which isn't absent in Aristotle, but it's different. And he has a minimal state in mind as well; he thinks a lot less of the kind of paternalistic state that Aristotle might envision for cultivating this. Okay, why do I bring this up? Well, the Prussian model: they're both Prussian, right? Humboldt is Prussian. But the Prussian model is a different thing, and it happens under Bismarck decades later. Humboldt is, I think, more instructive as a kind of counter-case, a world that could have been if we had gone that way. And why do I say that? Because his aim was to get individuals to a cultivated self-development, where they're trying things out, encountering the world, learning, and developing a sense of their harmonious, consistent self. And the way that I've tried to do this in my own life, where my mind goes, is that I have a four-year-old and a six-year-old, and they go to a school called Alpha School in Austin.
They have a time delimited period where AI is where they use AI two hours a day to try to master things like math and
language and they take what would have been a six, seven hour day, crunch it down into two hours. They get shockingly good results at that. They learn at a
very high rate. AI handles the kind of content delivery in a highly personalized way. Their teacher who they
personalized way. Their teacher who they call a guide in the monosur style is alongside them and helping them unblock and think about motivations. AI is
analyzing their performance coaching them giving them insights about where they didn't read this or they skipped over this and you know this is the kind of question in a very granular way that
they need more of in as part of their curriculum. So, that's one component,
curriculum. So, that's one component, right? Is a two-hour component. Then for
right? Is a two-hour component. Then for
the remainder of the day, they're out in the world. They're trying things.
the world. They're trying things.
They're doing really interesting tests. My daughter is a kindergartener. She has to ride a bike for five miles without stopping. She has to speak in public in front of a hundred people. She has to do things that are pretty hard for kindergarteners to do. And it's all experiential, again a guide-based component. You learn a lot about yourself that way. And then the third component is that we do tutoring with her on some of the rudiments of philosophy, you might say, or on metacognition: thinking about thinking, what it feels like to think and to be puzzled. Questions like, what is the difference between a bird and a plane? Or, what about when daddy says one thing and the AI says another thing? Who's right? The answer is the AI, according to her. But what if one AI says this and another AI says that? Then she's puzzled and has to think about it, and it raises these questions. It sparks a little bit of curiosity and gives her a little bit of a foothold into these questions. So I
would say that those ideas are consistent with the Humboldtian idea, which he called Bildung, of individual self-development, and they're a way to bring AI in that cultivates and strengthens self-direction. I see her now going off and she'll say, you know, how do I garden? And she'll ask AI that question. And I really delight in her asking those kinds of questions and being active about it. It strikes me that it may be one of the coolest times to be alive as a six-year-old, for some.
What I'll leave you with on this question is that there is a darker side to this, which is that the same technology that can make it a wonderful time to be a six-year-old can also breed passivity, dependence, doom scrolling, that sort of thing. Sometimes this is the fault of the parents. Sometimes the parents are working two jobs and the technology functions as a kind of substitute, and sometimes it is simply a lack of access. And yet I think the enfeeblement that might result from an early-childhood trajectory marked by this kind of passive use could create almost two classes of people. I say that with an awareness of how dark that sounds. It should not be so, but there needs to be significant work on trying to scale the models that inculcate the kind of self-direction, the autonomy, and frankly the metacognition that are going to be necessary to thrive in the AI age.
>> I just wanted to ask, because this is such a treat for me to be out of my Luddite bubble, to talk to people who use the technology and understand it. So here you have a device, you know, your phone or your social media accounts, which provides a mode of social interaction that you could do in person, but you can also do on this device, and the device is easier to access. It overcomes
all kinds of social anxiety. It's much easier to do, but it's also worse. There's an epidemic of loneliness. There's an epidemic of isolation. And there are people who, when they need to make human-to-human interactions, can't. So listening and thinking about this idea that AI could be used in the way that Brendan, and I think also Jonathan, are describing, where AI is used in a way that's inquisitive, open-ended, and critical, not the way it's normally used in college classrooms, but the way that it ought to be used or could be used: I worry that, in the same way social media short-circuits the loneliness which eventually drives you to go out and talk to a living human being, the AI, because it's so easy just to have something that seems like a conversation about the Phaedrus with your computer, is keeping you from asking a friend to read the Phaedrus, or reaching out to a professor about a similar topic. And since intellectual community is sort of my purpose in life, I worry that we're going to see something which, even if all the best-case scenarios turn out, is going to increase social isolation and reduce the extent to which learning is a matter of human connection and human community.
>> But I will just say, you founded the Catherine Project.
>> Yeah.
>> It is using very advanced technology,
>> the internet. That's true.
>> And it's bringing people together.
>> And I only say that as an example, a counterpoint to technological determinism, meaning you can build the kind of future that you want. I think this concern about atomization is longstanding, latent, articulated in the 1830s by Tocqueville, and there seems no doubt that AI could exacerbate it. It could create a hall-of-mirrors culture in which we have self-curated realities, in which we have all we need, or so we think, and we withdraw into ourselves. And that shatters something really important about human life and self-governance. There's a very dark possibility there. But I would also say that it seems to open up an incredible opportunity to bring people together in important ways.
So my answer, on this and other issues, is to go build that, like you've done.
>> Well, what does it look like concretely? Well, the Catherine Project,
>> well, but in the Catherine Project the technology is a mere means. You end up with just reading books and face-to-face conversation. And it is a little worse than face-to-face community. It's worth it because it's sometimes the only thing, if you're a full-time caregiver, if you're deployed military, if you're elderly and immobile, etc. But otherwise there's nothing. So I just want a little help thinking through how AI, correctly used, could be a vehicle for building face-to-face community in real life.
>> So, one idea could be, one of our friends is building this, where people who have no exposure to great books, who don't know where to start, just ask very basic self-help, therapy questions, like, you know, I'm having issues with my girlfriend. And then through that they get introduced to certain books as a way through that. And to build an intellectual community, you can say, well, everyone who gets directed to Plato's Symposium, let's say, then gets to read that together. And so there's a very simple way to use it as a sorting mechanism in order to build intellectual community. But, you know, maybe you're not so far away from your Luddite circles, because I actually agree with your social media criticism. I don't use social media almost at all. There's an interesting question about why I post. It has to do with Rousseau in the First Discourse, about why he hates the printing press and still writes books, but I won't get into that. But I think, to echo Brendan's point, the only way to combat that, and this is actually I think why Rousseau wrote books despite not liking the printing press, caveats there, is to write better books, or to warn people about the printing press; it is to go on social media and tell people about this; it is to build better AI systems that lead people to community. To put it very simplistically, if all the good people retreat from society and refuse to engage with technology, then the only actors building it are going to be people who are looking to exploit.
>> One thing I want to mention is a
project that Cosmos is backing, built by Gavin Leech, who's here, a brilliant computer scientist and philosopher. It looks at written works using the really rich representations that LLMs allow you to obtain through embeddings. There are multiple uses of this, but one of the speculative uses, I think, is that if you write on Substack, if you generate a lot of written content, then it is possible, I think never more possible than before, to find your fellows.
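[Editor's note: a minimal sketch of the embedding-based "find your fellows" matching described above. The writer names, the tiny three-dimensional vectors, and the cosine-similarity ranking are all illustrative assumptions, not the actual project's implementation; real LLM embeddings have hundreds or thousands of dimensions.]

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_fellows(my_vec, corpus, top_k=2):
    """Rank other writers by how similar their embedding is to mine."""
    ranked = sorted(corpus, key=lambda name: cosine_similarity(my_vec, corpus[name]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical toy "embeddings" standing in for real LLM vectors.
corpus = {
    "writer_a": [0.9, 0.1, 0.0],
    "writer_b": [0.1, 0.9, 0.0],
    "writer_c": [0.8, 0.2, 0.1],
}
my_essays = [0.85, 0.15, 0.05]
print(find_fellows(my_essays, corpus))  # nearest intellectual neighbours first
```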
>> I think that's right. You said sorting, and I think that this is something that AI can do. I mean, think about how we sort ourselves, how people choose colleges, which is badly, right? Or how you choose where you live: your parents chose your town, your parents chose your school. I think that there's a way of finding community in different ways. I think about the Sorting Hat in Harry Potter, right? Some sort of weird magic puts you into the house that you should be in. There's a way that radical individuality... I was talking to a fellow who's putting the entire Library of Alexandria online, is what he says, and you enter this portal by asking questions, answering questions, and it comes up with your philosophical genealogy, so that you know where to read, and you know, like, okay, everybody wants to be a Nietzschean, but maybe you're not, right? Or this or that or the other. But it can then sort you a lot better than a first-year writing class
>> in a big public can.
>> Those were the friends that I was referring to. Lightning is the project name. Yeah.
>> And again, thank you.
What I was going to say is there's another great example, actually, from Alpha School, where for a lot of the, I think this is maybe post-kindergarten, youth, you tell me, they have what is called a child's knowledge graph and an interest graph: what do they need to learn, and what are they interested in? So, if I need to learn second-grade reading,
>> and I really like, say, MLB, right?
>> Right, like if you can form sentences at that level about the current MLB season, that would be great. That's possible with an aristocratic tutor, right, who's dedicated to you, but those kinds of activities are already available for AI to do. So there is a way to bridge that. And even for me, I'm trying to learn Ancient Greek right now. The AI is not fully there, but I'm also going to visit, allegedly, where the sirens lived, the islands where the sirens lived, so I would love to read the passages in the Odyssey where he encounters the sirens. But my Ancient Greek is not good enough for Homer yet, so I would love a more reduced but authentic version from AI to read there. And so that would be another example. Are people all going to use this instead of cheating with essays? Probably not. But
again, it's up to us to build and design that system.
>> I'd like to jump in here. We have just a couple of minutes left on the panel, so a final question, perhaps. I think this might be a question for Brendan: doesn't a lot of this, the whole question that we're talking about on stage here, come down to what we think of the people who are going to be building the technology and what we think their ultimate aim is? I will speak from my own experience. I was a computer science major in college. The reason I'm now sitting on this stage instead of building products is because of conversations like this that I had with my fellow computer science majors. I took a class on singularity and transhumanism, which is the idea that technology is going to take off at an exponential pace and we're going to stop being human. I was the only one in my class who was opposed to this idea, and the only one who even had a sort of set of resources for questioning it at all. I was listening on my drive out here to a podcast that was interviewing Peter Thiel, and they were talking about AI and the fairly prevalent view that AI is either going to be, or is going to build, a successor species to human beings. And Douthat asks Thiel: do you think that we should preserve the human species, or should there be a successor species? And Thiel hesitates, it had to be a solid 25 seconds of thinking about it, before offering, and Ross asked him a couple of times: should we preserve the human species?
And there was a very reluctant yes. You can imagine a couple of different paths for AI, right? There's one path where it's letting us do a lot more, and that's great. And there's another path where it's relieving us, making us do much less, and where that's a welcome blessing: a world freed from all of the flaws that come along with people doing stuff, and all of the terrible things that come along with human moral agency being wrapped up in things. You find this in driverless cars and in the justice system. Every AI application that you can think of has enthusiasts behind it for whom it's pretty obvious that the main exciting thing is that we're not going to have to have people do this anymore, because we don't want to trust people with this. Do you think that that is a majority or a minority view? Am I
describing this correctly? I mean, I think this kind of maps onto the reductionists that you've talked about elsewhere. Is this a valid concern? Do you think that the view of AI as expanding human agency is what most of the people who are building AI want for it?
>> I do not think it is the majority view. I think that we have found resonance, however, in this model, and that's been very encouraging to me. But I think that wrapped up in the question is a kind of eschatological narrative that both sides have embraced, a kind of end-times narrative where in the one case we die, and in the other case we're superseded, and so we die in a way as well. And there aren't a lot of resources in either of the prevailing views, prevailing as measured by mind share, prevailing as measured by dollars, philanthropic dollars for example, 1.5 billion in the existential-pessimist position; there aren't resources to think about the human good. And I would say that places like St. John's have to continue to be in the game in their own way. I'm not advocating for a retooling, but they have to recognize that this is their moment. You know, if the world creates moments when philosophy really matters, this moment is surely one of them. And never before has it been so important to have a kind of unity of thinking about the human questions, the perennial questions, with the builders.