Scott Hanselman - Human Hearts, Silicon Minds
By Global AI Community
Summary
Topics Covered
- Tech Speeds Faster Than Ethics
- Quantifying Productivity Erodes Humanity
- AI Enables Four-Day Workweeks
- AI as Socratic Mirror for Learning
- Beware AI Grifters Lacking Expertise
Full Transcript
Welcome to another episode of Human Hearts, Silicon Minds. Today we'll be talking with Scott Hanselman from Microsoft.
Scott, if you needed to write one single social post to introduce yourself, what would it say?

That's a good question. I would say I've been teaching computers for 30 years.

In those 30 years you've seen a lot of changes happening. How do you feel about the current interaction between humans and AI?
I started with 64K of memory, and now we have pocket supercomputers connected to, you know, all of human consciousness. Now we've got both very large language models and small language models that can run in airplane mode on a device like this, that I can have whole conversations with. I feel like we have moved technology faster than we've moved computer ethics, and AI combined with social media and the algorithm are pushing the psychology of our brains faster than is healthy. So I think about AI in the context of social media, which is using AI to give us an infinite feed of nonsense, and how it's changing our kids' brains in schools as we learn whether or not we should ban cell phones in school. I think it's changing people's brains, and we haven't figured out if that's a good thing or not. It's certainly a thing, but it is concerning, and I worry that the speed of the technology is moving faster than our ability to healthily absorb it.
That brings us also a bit to how AI makes us more efficient. How much more efficient can we become before it starts to affect our mental health?
There's a book by Dr. Ifeoma Ajunwa called The Quantified Worker, and she talks about how, when you start measuring output and thinking about productivity, it changes the relationship and the power dynamics. So imagine 150 years ago, when we started building assembly lines: you would have people doing their widgets and moving their things along the assembly line, and then there was an overseer with a clipboard measuring, and they go, "Oh yeah, John is more productive than Jeff, and Anna's even more productive." And then we start quantifying workers by productivity. And then, 100 years later, we start using terms like "10x developer," and we build a mythos around productivity as the most important thing, which we then build into hustle culture, which we then algorithmize and put into an infinite series of Instagram reels. That can be super challenging. I feel like that's a problem for young people's brains if they're told your value is only in the number of widgets you can move per hour. So I recommend that we read that kind of research, actual PhDs with thoughtful analysis of what it does to your brain when you start quantifying productivity. How much more productive can we be? But also, do we need to?
Like, why am I supposed to... they promised me it was going to be a 40-hour week. It's not a 40-hour week, depending on where you live in Europe. If AI is so productive, why aren't we promoting people working 20-hour weeks? Like, isn't that the Star Trek: The Next Generation promise of AI? But what we're seeing is, no, I can work a 50-hour week and get 100 hours of work done if I just use AI. I think it'd be more interesting to ask ourselves why we are lionizing work so much. I'd love to see a movement where AI allows me to work a four-day work week.

I guess the biggest problem there is capitalism, that people want to capitalize on it.

Capitalism, yeah. Unchecked capitalism is a problem. So when people say AI is going to take your job: AI may be the tool, but it won't be the thing that takes your job. But also, there are seven days in a week, and there are five days we're expected to work for no other reason than someone decided. If productivity is the goal, then let's be productive in three days or four days. Additionally, if I'm 10x more efficient, maybe I can just work one day. But again, the point then becomes not necessarily capitalism but moving the stock price. So I am interested in any tool, AI or not AI: did it make someone's life better? Then that was a good thing. If it made someone's life worse, then why did we do that?

Are there things in particular in your personal life where you say, this is
really an AI tool that helps my own life?

Yeah. An example was that my son was working on his advanced biology course, and I haven't thought about biology in 20 or 30 years. So I needed to page back in deep, deep memories so that I could remember biology. So I got ChatGPT, I put it into voice mode, and I gave it a huge prompt, a big long bunch of context. I told it about myself, about my son, about where we lived, about what I did and what I knew and what I didn't know. And I said, "I need you to quiz me and get me up to speed over the next two hours on biology." Just like in The Matrix, when Trinity gets the instructions and schematics for the helicopter burned into her brain and says, "I need to know how to fly a helicopter." Okay, now I can fly a helicopter. I spent two hours getting up to speed on high school biology, not by using the AI to cheat on any tests, but rather using it to fill in the gaps in my understanding of the topic. That had a lot of value. And using the Socratic method to interview the AI and be interviewed by the AI ultimately means I wasn't really talking to the AI; I was talking to myself in the mirror. That has a lot of value, and I can see value in that. Using an AI to generate a five-paragraph essay so I can get a B minus in a class I didn't want to take anyway has less value.
Is that also linked to educating people in how to use these kinds of tools?

Yes.

Where are we? How can we get people more educated about these things?
That is a huge problem. And we've got folks at Microsoft Education thinking about that. But I think it's moving so fast that it's freaking teachers out. Teachers are not understanding how best to teach this stuff. They're not thinking about it within an ethical context. They're not thinking about it within a psychological context. They are anthropomorphizing the AI and referring to it as a human. They're using terms like "hallucination," which implies that the AI has mental health and a psyche. It has none of those things. And while it's fun and exciting to say it will change the world, no one's talking about changing the world for the better. We need that to start with teachers and educators and school districts, which need to get up to speed on this stuff.
And I'm concerned that the wrong people are talking to those school districts, the wrong grifters. Are you familiar with the term grifter?

No.

A grifter is a roaming con artist who will tell you they're going to re-tar your driveway: "Your driveway looks very old. I'll re-tar it for you." And then you come out and it's black and fresh and shiny, and you go, oh, it's wonderful. You give them money and they run away, and you realize that they just painted it black. They didn't re-tar it at all. But by then it's too late and the grifter is gone. So a grift is a long con; a grifter is a confidence man. AI grifters often, if you look at their LinkedIn profiles, will say "thought leader" and "AI expert" and whatnot, and you'll find out that they have no experience at all in AI and no background in data science of any kind, but they're running around tarring everyone's driveways and then running away. So I think we need to listen to scientists, people with a background in machine learning, people who are doing work in this space both technically and sociologically, and listen to them.
Any extra advice that you could give to anyone? Where do we need to look to be prepared for the future?

It's funny, because we have a situation where AI is trying to teach everyone everything very, very quickly, but it's an echo chamber. I would encourage people to make sure that they're reading long-form original works on the thinking around AI. Just one example, and there are many: Maggie Appleton is putting a huge amount of work into the question of what the user interface for AI is. What is AI's UI? It's probably not just a text box and a button. There's interesting research being done in that space. So go outside the AI bubble. Make sure that if you're learning AI from someone, they seem reasonable, they seem honest, and they have an actual background in computer science, and they're not just a rando who calls themselves a prompt engineer. So be careful of the con men. Watch out for the con people.
Yeah, Scott, thank you very much for this interview. Looking forward to the other things you'll be doing within Microsoft in the future. It was great having you here.

Thank you for the opportunity.