
AI Generation: Rethinking Primary and Secondary Education | AI House Davos 2026

By AI House Davos

Summary

Key takeaways

  • **Japan's Slow Curriculum Updates**: National curricula in Japan are updated every 10 years, with the next revision for 2030 adding AI to informatics subjects covering ethics, media literacy, digital citizenship, programming, and AI to foster 'advanced essential workers' in fields like medicine and agriculture. [04:41], [06:54]
  • **GenAI as Author Requires Criticality**: Generative AI acts as an author producing outputs from prompts, so children must treat it like any authored work by questioning, critiquing, and improving it rather than accepting it uncritically. [12:38], [15:03]
  • **Overreliance Erodes Cognitive Skills**: Overreliance on AI risks losing problem-solving capabilities, map-reading skills, confidence in thinking, productive struggle, attention spans, self-efficacy, intrinsic motivation, and collaboration skills due to brain plasticity like the 'Google effect'. [26:33], [28:45]
  • **Shift Assessments to Oral Exams**: Generative AI makes written reports unreliable for assessment, so practitioners are moving toward oral assessments to verify student understanding and reorganize their thinking constructively. [36:49], [37:31]
  • **Programming Still Essential Despite GenAI**: Students must learn programming and informatics basics to understand AI's workings, analyze problems, abstract, design, implement, test, and evaluate, rather than solely relying on GenAI prompts to generate code. [39:21], [41:09]
  • **History Lesson Critiques AI Accuracy**: In a scenario, students questioned a GenAI role-playing Marie Antoinette, checked its internet-sourced answers for inaccuracies, and discussed why it produced false information to build historical evidence evaluation and AI criticality skills. [50:20], [51:14]

Topics Covered

  • Japan Mandates AI Literacy for Essential Workers
  • Treat Generative AI as External Author
  • AI Risks Eroding Children's Cognitive Skills
  • Prioritize Long-Term Memory Over AI Outputs
  • Teach Programming Despite AI Code Generation

Full Transcript

It's a great time to open this session. Sorry, I opened without the title: this session's title is "AI Generation: Rethinking Primary and Secondary Education." Thank you for gathering here.

First, let me introduce the discussion members. Sitting next to me is Professor Masami Hagiya from the University of Tokyo. Next is Professor Don Passey from Lancaster University, and then Professor Mary Webb from King's College London. I will ask them to talk a lot later, so please allow me to go on.

This session is a conversation about education, learning, and teacher empowerment in the context of the generative AI era. As you know, generative AI is now changing many things, and one of those things is education. So in this session we want to move beyond the discussion of AI just as a tool; the discussion focuses on human agency and dignity in the age of AI. We also aim to provide a compass for educators and policy makers on how to design K-12 education that protects and nurtures children's agency. The central narrative will explore whether AI becomes a crutch that fosters dependency or a scaffold that supports autonomous growth. So please give your attention to each panelist.

This session has mainly three parts. The first is about context: the social shifts and redefining the goals of education. The second is about risk: the unseen risks to the dignity and development of children. And the final part will be the vision: protecting and expanding agency. That is almost all of our discussion content.

Finally, one thing I should let you know: it is a pity that we had one more member, Nataliya Kosmyna, who is not here because of health trouble. So these are all the members of the discussion.

Okay, so let's start the first part, about context and social shifts and redefining the goals of education. So please start.

>> Yes. I'd like to talk about the context in Japan. I'm currently the director of the Institute for AI and Beyond, which is a partner of AI House Davos; that is why I am here. But I have also been involved for about a decade in high school education in informatics, and currently I'm a member of a working group updating the national curricula for primary and secondary education in Japan. National curricula in Japan are updated every 10 years, so it is a typical example of how change in education takes a long time. We are now discussing the curricula starting from 2030.

>> I'm a member of a working group for the subjects related to informatics, from primary school to high school, and the subjects are expected to teach many things: for example, ethical issues such as copyright and privacy, media literacy, digital citizenship, and also technology, including computer architecture, networking, programming, and information systems. And now AI has been added to the list of materials.

That is because, as you know, Japan is a good example of a matured and now declining country: the population is decreasing, but productivity does not increase. So the government, or the Ministry of Education, considers that AI is the key to raising productivity, and they recently began using the phrase "advanced essential worker," which probably means digitally transformed essential workers with AI literacy, in fields like medicine, healthcare, agriculture, construction, or maintenance of infrastructure.

They want to foster such advanced essential workers, and of course such workers do not necessarily graduate from universities. So education in primary schools and high schools is very important to teach AI to those essential workers, and now AI has been added to the list of materials for the subjects.

However, carelessly teaching AI has a danger of reducing the agency or dignity of children. So the teacher's role is very important: to use AI to promote agency and creativity. That is why the subject of this panel is quite important for me, because I have to talk with the officials of the Ministry of Education to draft the next national curricula.

The risk of using AI, which might endanger the dignity or agency of children, will be discussed in this panel, and I think the teacher's role is very important. Teachers should carefully use AI to promote the agency or creativity of children. On the other hand, as you know, AI can teach basic knowledge and basic skills to children; AI can be a kind of individual teacher. That will save teachers' time, so that teachers can focus on fostering agency, creativity, and so on.

So that is my opinion. Finally, let me mention one issue about using AI: the assessment, or evaluation, of children and students. As you know, it has become very difficult to assess students according to outputs such as reports written by students, because generative AI can easily write good reports on behalf of the students. So I think it is a big issue how to assess students in this era of generative AI, and maybe we have to develop methodologies to assess students with the help of AI.

That is what I want to say at the moment. Thank you.

>> Okay, thank you. So, still at the beginning of the section on context, the next speaker is Professor Passey. Please give your speech.

>> Yes.

>> Okay. Thank you, Toshinori, and thank you, Masami. I'm in what you might call a beneficial position coming second: I can always argue with whoever has gone before me. But of course, with someone coming after me, that person is able to argue with what I say. So that's the context that we're in with regard to AI.

My field is technology enhanced learning. My interest is in learning; fundamentally I come from a learning background, and I'm interested in how technology supports or hinders learning and teaching. I've been looking at this through the concept of emerging technologies for many years; I won't tell you how many. I've seen technologies come and go, I've seen technologies change, and I've seen technologies emerge.

And generative AI is one of those that's happening now. It's a very recent technology. AI itself is not new; it has been around for some time. I worked with AI in 2000, with a government department, looking at how AI might support aspects of teaching management. That has evolved. Now we're in the field of Gen AI, generative AI, and that's different. It's exactly what it says: it's generative. It generates something. You give it prompts and it will generate something. It therefore is what I would regard as an author. What you get out from that is an authored piece of work.

And the question is: how do you regard another author? How do you take forward another author's perspective and work with that? And how do children do that, in terms of what they ask a Gen AI system to do and in terms of what is generated? So for me, the learning is all to do with thinking about what Gen AI is: what is it actually doing for you, and how do you define it, so that you can think about what it means for learning.

For me, it comes from that concept of authorship. If we were reading something from someone else, if we were looking at an image from someone else, if we were reading a novel from someone else, how would we regard that? What would we do with it? Would we accept it? Would we not accept it? Would we question it? Would we think about how it might be improved? So I think this whole idea of Gen AI really needs to be put in the context of how we might define Gen AI outcomes. There are other issues that go with that, but I would say: think about it as an author, and ask what you would do if you gained an output from another author.

My background currently, by the way, is that I'm seeing the uses of AI in education right across nursery, primary and secondary education, and I'm seeing what I would regard as some good examples, and I'm seeing some alternatives. The thing I'm particularly fond of seeing is that idea of regarding it as an author's output, and the fact that teachers are able to think about this with their children: questioning what is there and being critical of what is there and its outcome.

So I think that questioning and criticality approach is fundamental now. That's something that is already happening in education, but I think it needs to be promoted in education; it needs to be brought to the fore. We've heard about concepts like critical thinking for some time. That's not a new idea or a new concept, but it's important in our current context, where we're seeing authorship which is actually generated by individuals on a vast scale, and that can happen very, very quickly; several outputs could be created quite easily within a lesson period. So how do we regard that? How do we question what is there? How do we get children to question what is there, and how do we ask them to be critical? And how do we do that for children who come through from nursery to primary to secondary? How do we maintain that questioning approach?

I suppose it goes back a long time in terms of education. Socrates brought in the concept of educational questioning; he used a questioning approach in education. So if we went back to Socrates, in a sense we're looking at what was happening at that time. He was bringing on board a concept of education, and one of the things he was mainly concerned about was this idea of bringing forward questioning and criticality: not just accepting things as they were and as they are, but asking what we think about this. Do we think it's right? Do we think it's wrong? Do we think it's as good as it could be? Could it be better? How would we improve it? Now, interestingly, we're already seeing some Gen AI systems that are taking that approach. We're actually seeing the start of Gen AI systems that are bringing forward questions rather than answers. Again, this is something that we're seeing as emerging.

And what I would say about Gen AI is that it is new: it's only been around for a few years. And yet even within those few years, we've seen a large amount of development. At the start of those few years, we didn't see Gen AI systems that were generating questions; we saw them generating outputs only. So we're seeing quite rapid development in terms of Gen AI, and we will see that over the next two years, the next four years, and so on. We can reckon on changes with regard to technologies, and particularly with regard to software and apps, roughly every 18 months. So as educationalists, as practitioners, as policy makers with regard to Gen AI, we're going to have to be prepared to think, about every 18 months to two years, about what's going to happen: is this going to affect what we think, how we approach this, and what it means in terms of how we work with children in classrooms?

I do want to go on to talk about this idea of what education is about and what education is for, because I think what we're seeing in terms of Gen AI is sometimes a desire to ask education to prepare children within a technological environment so that they can move into a technological environment. I think that is fair. What I think is unfair is to suggest that all children will end up in the technology industry. Education is wider than that. It's to do with preparing us for society; it's to do with preparing us to be ready to work in an environment with other people.

And that is a particularly important area. I think we mustn't forget that we're not looking to develop children to work with technology. We're developing children to work with other children and with other people. And in our particular societies, we need that. We need that to be continued. We need to work with each other. We need to be able to communicate with each other, and we need to be able to think and reason forward. And therefore that questioning and criticality, I think, is hugely important.

Now, I haven't talked much about the technology itself. I've talked about the concepts of how I think Gen AI is already influencing what might be happening in terms of learning and education, and where that might be going. There is more, and I don't know whether you want me to open this up now or later.

>> Okay. Yes.

>> Tell me: do I stop now or do I go forward?

>> Oh yes, it's the very right time to move on. They are talking about several timelines: we need to think about the very short timeline, the mid timeline, and the long timeline, and education is a very complex phenomenon, I think. Especially, Don is arguing that with the emergence of Gen AI we have to rethink very basic questions: what a dialogue is, what a human dialogue is, and how important human connection is. So after this discussion, the next point is risk: the unseen risks to dignity and development. Please give your speech, Mary.

>> Okay, thanks, Toshinori, and thanks, Don. And I'm not going to disagree...

>> Yet.

>> I reserve judgment on that. Yes, I've been asked to talk about risks. But first of all, I want to say I'm actually very optimistic about AI. I did my PhD on AI in education in the 1990s, and since then I've done quite a lot of work on technology enhanced learning, but also on computer science education. So one of my particular focuses at the moment is that I'm a member of the steering committee of Informatics for All in Europe; I represent IFIP on that committee. Our goal is to develop informatics education across European countries, and European countries are very variable. So we have developed a framework that helps them develop their own curricula, in their own countries, to suit their particular situations. But we are absolutely clear that informatics is really important. So I'm coming from that background, and I may come back to that in a moment. But I've been asked to talk about risks.

As I said, I'm optimistic. I think AI already is really important in the world. It's going to be really important in solving all the problems that we are creating for ourselves, and it's also already important in education; we're already using it. One of the things I've been doing over the last couple of years is working with PhD students who are researching language learning, usually in older students, usually undergraduates, but also looking at the use, at school level, of some of these programs that help you learn languages or develop pronunciation. These things already work well: teachers can send their students to use these programs, and that works well. There are some areas of learning where systems with built-in AI (it's probably a mixture now, but it does include generative AI) are working well, and students are developing these skills.

But there are lots of risks with AI, and what I'm going to talk about in particular is the risks to students' cognitive development, and with that comes their agency.

So what we have to do is enable students to learn and to develop their cognitive skills and abilities and their understanding of key concepts. We are not going to say that they don't need knowledge and understanding just because generative AI can tell them the answers. And we've got to gradually introduce them to the capabilities of AI so that they're able, as Don said, to critically evaluate the outputs from the AI. But the risks are quite significant: they'll become over-reliant on AI if we get this wrong. And of course we have to bear in mind that students don't start in school; they start at home, and they're at home a lot of the time, and they are going to come into contact with AI there.

I was in a lesson the other day, with 14-year-olds, taught by one of my beginning teachers, a teacher of computer science. He was trying to develop his class's understanding of what AI is, which is incredibly difficult, and he was finding it hard to get them to understand, but he had a really good activity. He was asking them to assume that Google Maps was not available to them, but they had to find their way across London. Then they said, well, we'd look on the internet. No, there's no internet available. The next step was, okay, we'll take a taxi. Yes, well, you could take a taxi. But how would you do it if there weren't any taxis available and you had to get yourselves across London? And gradually they got into thinking about using paper-based maps, using underground maps, and so on. But this is one of the skills that we've probably already lost, because they won't see their parents using maps; most likely they'll see them using Google Maps to find their way around, because Google Maps has been using AI for a very long time (mostly predictive AI in the past, of course, but for years). So that's a skill that we've probably lost, unless we focus on saying, yes, we do still need those map-reading skills. So we have to think about what skills are still needed.

If we look at the actual risks that we know about, we know that brains change as a result of technology. There was some research quite some time ago by a researcher called Sparrow, who identified that if people had access to the internet, they didn't bother to remember things; that's called the Google effect. So we know that brains can change; brains are very plastic. So if students become over-reliant on AI, there is a danger that they'll lose their problem-solving capabilities, that they won't develop those. There's also the danger that, if they see that AI is doing so well (and I think it is doing really well now; it's improving so rapidly), then why would they bother to think? They would lose confidence in their own thinking. We have to be aware that they may just think, "I can just let the AI do it, so why should I bother?"

Also, learning is not easy; there's this idea of productive struggle. It is a struggle to learn, and there's a psychologist who came up with this idea that what we want is either productive success in doing a task or productive failure in doing a task. Productive failure is also okay, because you're learning while you're doing that task. And if you fail, that's one of the things that we do: we fail, and then we have to try again. So we want students to be doing this, to be struggling, because otherwise they will lose this capability.

Then there's the problem of maybe shorter attention spans. And then there's the problem of motivation. Students need to be motivated to learn, and that's difficult when AI does things so well. So there's a chance that their sense of self-efficacy, their sense that they can succeed and that it's worthwhile succeeding, will be undermined by AI. It might not be clear to them how it's worthwhile for humans to do anything, because the AI can do everything. Maybe we're thinking some way ahead, but I don't think it's all that far ahead before AI can do so many things that we ask why we bother. So that would potentially reduce what we call their intrinsic motivation; they won't necessarily be motivated to learn.

And then, I think Don said a bit about this, so I won't say much about it. Okay.

>> And I need to stop, do I? I was just going to mention that they might lose their collaboration skills and those kinds of interrelational skills unless we really focus on that. So the answer to this is that we have to make sure that we don't let them lose those skills, and that is going to come down to (I think we've all said this) teachers. Teachers are going to be really important going forward.

>> Yes.

>> Yes, thank you. I want to ask you all about one thing. Listening to each of your opinions about Gen AI and education, one question occurred in my mind: is "being productive" a good message for kids or not? What does being productive mean, and how should the term "productive" be understood in the context of education, to protect the human dignity of kids in the Gen AI era? Could you give your opinions?

>> I may not answer your question, but...

>> It's a live show; it's okay.

>> Okay. For the subjects we are currently discussing for the national curricula, the overall goal is to solve problems and eventually create new values; problem solving and value creation are set as the final goals of those subjects. That can motivate children and students to learn, and for those goals generative AI is a very efficient tool, because in a short period of time children will find problems, find ways to solve them, and actually implement the solutions using generative AI. They may not need programming; they can just ask generative AI to write the code. So generative AI makes that process of problem solving quite efficient.

>> Yeah, in that sense, really productive.

>> So in that way teachers are quite tempted to use generative AI in their classes, because they can have well-motivated students

>> Yeah.

>> and quickly reach the solutions, the final goal. That is the current situation, but it has risks, of course. It will make children quite dependent on generative AI, and they may lose basic knowledge or skills.

>> That's a problem, I think.

>> Oh yes. Okay, Don, please.

>> Yeah. In terms of production, in terms of productivity, in terms of output: from a learning perspective, I think what's important here is to think about whether that productive output is something which is going to go into working memory, into short-term memory, or into long-term memory.

Now, if I were to ask you as individuals: where do you think your major learning experiences lie, and what is important to you in terms of your use of that learning? Would you say that it's to do with your working memory? Would you say it's to do with your short-term memory? Or would you say it's to do with your long-term memory? Now, it's interesting that I'm already seeing some nods with regard to long-term memory.

So we have to be careful, with regard to motivation, with regard to productive outcome, and with regard to the generation of learning, that we're not focusing only on working memory, because generative AI is something which gives us an immediate answer.

>> Mhm.

>> How far do we keep and hold on to that, in terms of it going into long-term memory? And what does it mean for us in terms of long-term memory? That's why I would come back to this idea of questioning and criticality. Being questioning of something and being critical of something means that you're moving it, potentially, from working memory toward short-term and then into long-term memory.

>> Mhm.

>> So if we want people to develop in terms of their long-term learning, then we cannot just focus on short-term memory. We have to be careful with that with regard to any technology use, but certainly with regard to Gen AI, because there is the potential that it could lead us to focus learning on something which is readily lost and which we actually need to re-encounter in order to use it again.

You know, there is a psychologist, from not too far away from here (he was in Austria rather than Switzerland, but never mind), who said that, basically, if you don't reuse something within a month, you will lose it. You will have to retry it; you will have to replay it. Now, that appears to be true.
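The "reuse it within a month or lose it" effect described here is commonly modeled as an exponential forgetting curve. The sketch below is a hypothetical illustration, not something presented in the panel; the function names and the `stability` and `boost` parameters are assumptions chosen only to show the shape of the model, in which each successful review flattens the decay.

```python
import math

def retention(days: float, stability: float = 7.0) -> float:
    """Exponential forgetting-curve model: fraction of material
    retained after `days` without review. `stability` (in days)
    is a hypothetical memory-strength parameter."""
    return math.exp(-days / stability)

def review(stability: float, boost: float = 2.0) -> float:
    """Each successful review is assumed to multiply stability,
    flattening the curve (the 'reuse it or lose it' effect)."""
    return stability * boost

# Without any review, little survives a month:
print(round(retention(30), 3))  # 0.014

# After two reviews (stability 7 -> 14 -> 28 days), much more does:
print(round(retention(30, review(review(7.0))), 3))  # 0.343
```

Under this toy model, material that is never re-encountered decays to near zero within the month the speaker mentions, while each re-encounter extends how long it survives, which is the case he makes for questioning and criticality as a way of re-engaging with generated answers.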

Think about how you use a technology, for example, and how you use a function of a technology. If you don't use it regularly, do you remember it like riding a bicycle, or do you forget it? Have you moved it from working memory into long-term memory? In other words, riding a bicycle you might move to long-term memory, but the use of a particular function which you've only used once or twice is not going to go into long-term memory. So we do have to bear that in mind with regard to education generally and with regard to learning generally. We are not going to be able to develop young people into adults who will be ready for society if we only focus on short-term memory. That is not going to work for us in the long term. It won't enable questioning. It won't enable criticality. It will enable dependency. So I think one has to be careful about where that dependency and generative output come in.

>> Mhm.

>> I' I'd also say that with regard to productive output with Gen AI, one of the things which is happening there with regard to assessment is that

uh my experience of seeing this in practice is that practitioners are moving more and more towards oral assessment.

So rather than depending upon written assessment, they're needing to depend more upon oral assessment. Now, actually

from a learning perspective, you can argue that that's useful.

Actually, being able to ask people questions and get them to think and provide an answer is constructive.

It's actually generating; it's actually getting them to reorganize their thinking, sometimes into an output, which can be very useful. So I think with regard to Gen AI, we need to be aware that forms of assessment might need to shift, in terms of moving some of it at least from written towards oral,

>> so that at least we have some basis of different forms of assessment, different forms of ideas of what is emerging and what the outcome is. Yes. Um,

so I think that's probably what I would say on that. I could go on to talk more about the parental aspects, etc. But maybe I should stop there and hand over to Mary.

>> Okay. So Mary, please.

>> Okay. Um, I'm not going to disagree again this time. So that's all good. But

I just want to add to that, actually. When I was talking about being productive in learning, so that they actually learn something: what's important there is the process that they're going through, not the outcome from it so much. As they're going through this process of doing this task, they are learning, and that's what we want to happen. And of

course, if it happens really quickly because they get the answer from generative AI, then that is useless.

That is actually unproductive success, which is not what we want. Um because

they're not learning. They're being

successful in the task because they've got the answer from Gen AI, but they're not being productive in their learning.

So, I think that's what I'd want to say about that. I could say other things, actually. Can I say something about computer science?

>> Yeah, why not?

>> Okay, because there is a debate going on. In the group that I work with, Informatics for All, we all agree that informatics is still really important and that students should learn to program. Yeah.

>> And that gradually they should start to use Gen AI to support them in programming but they have to develop this understanding of programming. Mhm.

>> Whereas there are other people in the world who are also computer scientists, and some of them are computer science educators, who feel that we might not need to learn to program, and I think they are completely wrong. I don't think there's anybody here like that, so I'm not disagreeing with people here, but I do disagree with some of the people that I interact with, because they think, okay, GenAI can do it all. But that's deeply problematic, because we want programmers, but we also want people who can understand

GenAI. And in order to understand GenAI, they need to know something about how it works, and for that they need to know something about informatics, and about how computer scientists developed AI in the first place, which was through programming. Obviously it's very advanced at the moment, and one of the issues is how we get them to understand these more advanced things. But there are simulations that people can use to begin to understand how these things work. And if they've got the basics of informatics,

if they understand something about how we think about a problem, how we analyze that problem in the real world, how we abstract from that to think about how we might develop programs to support it, and then design the program, implement it in some programming language in which we don't get bogged down in the syntax, because that's been one of the reasons why people struggle with learning to program. And then once we've implemented that program, they test it, test bits of it, look at the whole thing, and evaluate it against what they were trying to achieve. That's what programming is. Don will probably say that's what computational thinking is, but I don't like 'computational thinking'. I prefer the word programming, not just coding. I mean, we could just write a simple piece of code in HTML to do something; that would be coding. But we want to go through this whole problem-solving programming process, and we still think it's important.
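The process described here, analyze a real-world problem, abstract it, design and implement a solution, then test and evaluate it against the goal, can be sketched in miniature. This is a hypothetical classroom-style exercise for illustration, not one from the talk:

```python
# Hypothetical exercise illustrating the problem-solving programming process.

# Analyze: given daily rainfall readings, find the wettest day.
# Abstract: represent the readings as (day, millimetres) pairs.

def wettest_day(readings):
    """Design/implement: scan the pairs, keeping the largest rainfall seen."""
    if not readings:
        raise ValueError("no readings to evaluate")
    best_day, best_mm = readings[0]
    for day, mm in readings[1:]:
        if mm > best_mm:
            best_day, best_mm = day, mm
    return best_day

# Test: check parts and the whole, then evaluate against the original aim.
sample = [("Mon", 2.0), ("Tue", 11.5), ("Wed", 7.3)]
assert wettest_day(sample) == "Tue"
assert wettest_day([("Mon", 0.0)]) == "Mon"
```

Each comment marks one stage of the cycle; the asserts at the end are the testing and evaluation steps that close the loop back to the original problem.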

>> Yeah, I have no objection. But I have found many people, including computer scientists, who say that we don't need programming anymore, just let the generative AI write the code.

But let me think about another notion, like machine code. Recently, even in computer science departments, you don't teach programming in concrete machine code, such as for Intel processors. Of course, they teach the abstract or basic notions of what machine instructions are and how machine programs are written, but they don't do exercises in writing machine code.

>> And similarly, if you specify

good specs, good prompts for solving problems, then generative AI can write, for example, Python programs,

>> probably almost correct Python programs. So in that case you don't need programming. But I agree that the abstract notions of algorithms, computation models, and so on should be taught, because they are basic for information technology. Still, if you carefully write prompts for generative AI, you can almost correctly produce programs that solve that sort of problem. So many people have claimed

that kind of methodology. And so it's quite a difficult situation, in which we ask whether programming is really necessary or not. I myself agree with your opinion, but, as I said, teachers are quite tempted to let children or students use it to solve problems.

>> So it's a kind of balance. Yeah. So it's a question of how you use the hours for informatics subjects.
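The "almost correct" point is worth making concrete. Below is a hypothetical example, not from the talk: a generated-looking Python function may read cleanly, yet it is the human steps of testing and evaluating, which require the informatics basics just discussed, that establish whether it actually meets the specification.

```python
# Hypothetical example: a function of the kind a well-prompted generative
# AI might produce. Reading it is not enough; we still test and evaluate.

def mean_score(scores):
    """Return the arithmetic mean of a list of numeric scores."""
    if not scores:  # the edge case a first draft often omits
        raise ValueError("no scores given")
    return sum(scores) / len(scores)

# Testing and evaluating against the spec - the step a prompt alone
# does not replace.
assert mean_score([70, 80, 90]) == 80.0
try:
    mean_score([])          # the empty-input behaviour must be deliberate,
except ValueError:
    pass                    # not an accidental ZeroDivisionError
```

Without the guard, the empty-list case would surface as a `ZeroDivisionError` at some later point; spotting that requires exactly the analyze-test-evaluate habits the panelists argue students still need.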

>> So thank you. And it's only 10, oh, sorry, 9 minutes. Yes, so please give your question.

>> Can we ask a question?

>> Yeah.

>> Let's have >> Yeah.

>> Two seconds. No problem.

>> Mhm.

>> With or without >> Yeah, very brief. I'm Ukrainian. I've been doing a lot of educational work; previously a businessman. So I share the concerns that you've mentioned, most of them, and I'm definitely for developing, as I call it, common sense. That's what we're lacking in current education, because we're not producing

engineers. It's not typing or programming that's the problem; it's the ideas the person should have. Also, as a side note: you mentioned that we will need checks for oral exams and things like that. I'm also the president of an esports organization, that's gaming, and we had a problem that in female tournaments only males were participating, so we actually developed a system that checks a lot of online and webcam presence, which might be used for something in AI and examination.

Just an idea. But my question is: if knowledge now is so on-demand, including in primary schools and so on, should primary and secondary education stop rewarding memorization and start rewarding curiosity, reasoning, real understanding, and so on, and assess that through some kind of tests and verbal exams throughout? The obvious answer is yes. But my question is how?

>> Mhm.

>> So a short answer is welcome. So please, Don and Mary. Yeah.

>> Okay. So, yes. Coming back, in essence, to some of the speakers who were saying it earlier: I wouldn't like to say that computer scientists, or people who program, don't think. I think they do think, and they have thinking processes, and those thinking processes are to do with aspects like critical thinking, computational thinking, etc. So there is a basis there in terms of how they are thinking, which is being applied, in essence, to problem solving, and programming is something which is leading to that problem solving. It's something that is enabling that. Now,

going back to your question, I would say yes, what we need to do is very genuinely focus on questioning, critical thinking, and problem solving. We need to understand what computational thinking is. We need to make sure that it is clearly defined and developed. We need to make sure that the basis of that can then be translated through practices such as programming. We need to ensure that all of that is in place. But it's not only going to be to do with computing and informatics in schools. It's also going to be to do with other subject areas.

>> And what we don't have at the moment is we don't have the research basis within those other subject areas to quite understand how we move forward with that.

>> Sorry.

>> Can we do it tomorrow?

>> GenAI is new. Whenever you get new fields emerging, you tend to get two attitudes formed: those who are advocates and those who are skeptics, and others then fall in between, but they're in smaller numbers. We've got to enable those groups to come together. We've got to enable both the concerns about it and the benefits of it to be brought together and discussed. And that's to do with this idea of how long it will take. It will take us some time.

>> It took us 25 years with the internet.

>> Mhm.

>> Can we talk about it in a minute?

>> Sure. Let's

>> Okay. So, one more question, and please ask Mary this one. So, yes, please.

>> Sorry. Sorry.

>> Sorry, Benjamin. Just a very quick question. Since you mentioned that you're changing the curricula in 2030, while everything is moving every 18 to 24 months, and I'm a father of four children,

>> are there any recommendations you have immediately for teachers right now, where you say: GenAI and everything is moving at such a fast pace, we will change the curricula, but this is our recommendation right now, to be changed on Monday?

>> Yeah I can answer that. Okay,

>> Let them experiment, let them try things out, give them opportunities, and don't put too much pressure on them. It is a bit challenging, because some of the GenAI systems are not very safe, so we have to provide systems that are reasonably safe, but there are some coming along, and there are some already in schools. Just give teachers the freedom to experiment with these things, and let them talk to

each other as well so that you have groups of teachers thinking about well what can we do with the genai? How can

we make this useful to students? How can

we make it such that the students can then critique the um the AI in some way when it comes up with answers? And there

are I could talk about examples, but I don't think I've got much time, have I?

Really?

>> Three minutes. Yeah. Yeah.

>> Can I just, this is an example. It's not actually from a classroom; it's an example from a group of expert educators who got together to think about what might happen in the future, but it can already happen now, actually, in some cases. So

we created this scenario based on what might happen in a history lesson. And I'm using history because it's a humanities subject, and I think it is really important that we develop these things not just in science and computer science. So, in history, people have to learn how to evaluate historical evidence. That's one of the key skills in history. And so

in this example lesson, the teacher talked to the students about some of the work they'd already been doing on evaluating evidence in history, and then said: you're going to interact with this generative AI, who is going to be Marie Antoinette. And they had to question this AI, and this AI was behaving as Marie Antoinette, but it was an AI that was getting information off the internet.

So, a lot of it was inaccurate. And then the students had to check the answers to see how accurate this AI was in the answers it was giving. How accurate was it in terms of Marie Antoinette?

And then at the end of the lesson, the discussion was not just about historical evidence and how you evaluate it, but also about why the generative AI was coming up with a lot of this false information: because it was getting data off the internet, all sorts of different bits of evidence that were not necessarily accurate. So she was able, in this scenario that we came up with, not only to teach about history but also to teach about generative AI, and to get them doing the critical evaluation of it.

>> Okay. So thank you. One minute and 30. Yes. You? Yes. Very short question. Sorry.

>> Yeah.

>> Sorry. Yeah. Yeah.

>> Sorry. Very quickly. I'm a mother of four, as I said, and I'm a Brazilian teacher, and I'd like to know how you see classical education, based on the trivium of grammar, logic, and rhetoric, the Greek model, in the context of mitigating the risks you mentioned.

>> Okay, Tim. Yeah, only one sentence please. Sorry.

>> One second. Sorry.

>> I'm sorry.

>> One sentence. Okay. I think >> I think we have to take the context that we're working in.

>> Sorry.

>> Take what we believe to be the the foundations.

What is it that we are aiming for and what are the key features of it that we wish to uphold?

And then how do we apply any technology to that situation to benefit it? Not to

replace it, but to benefit it.

>> Okay.

>> And to benefit it for the young people, >> not necessarily for ourselves.

>> Yeah. How does it help those young people? And as a mother of four, you'll be well aware of that.

>> Okay. So, it's just the right time to stop and conclude our session. So thank you for coming here, thank you for giving your talks, and thank you all. Yeah.
