LongCut logo

It's a jungle out there: Where do South African universities stand on AI?

By Academy of Science of South Africa

Summary

## Key takeaways - **AI reshapes higher education, demanding new approaches.**: Artificial intelligence is fundamentally altering how universities teach, learn, research, and operate, necessitating a reevaluation of existing practices and the development of new strategies to harness its potential while addressing its challenges. [00:14] - **Universities need clear AI frameworks, not rigid policies.**: Given AI's rapid evolution, universities benefit more from flexible frameworks that provide high-level guidance and allow for adaptation, rather than fixed policies that could quickly become outdated. [15:12] - **Embrace AI ethically, responsibly, and human-centrically.**: Institutions must adopt AI with a focus on ethical and responsible use, ensuring a human-centered approach that prioritizes equity, accessibility, and the preservation of fundamental human skills and abilities. [13:45], [23:32] - **AI literacy is a critical competency for all.**: Developing critical AI literacies is essential for both students and staff, encompassing an understanding of how AI works, its ethical implications, potential biases, and appropriate usage within academic contexts. [13:54], [21:24] - **Rethinking assessment in the age of AI is crucial.**: The rise of AI necessitates a shift in assessment strategies, moving beyond traditional methods and potentially focusing on tasks that foster knowledge creation and critical evaluation, rather than simple recall. [16:36], [43:04] - **Collaboration and sharing are key to navigating AI.**: Universities benefit from sharing resources, policies, and learnings regarding AI implementation, fostering a collaborative ecosystem to address common challenges and advance collective understanding and best practices. [22:29], [30:00]

Topics Covered

  • Educating Leaders: Bridging the AI Knowledge Gap in Higher Education
  • Rethinking Education: The Impact of LLMs on Teaching, Learning, and Assessment
  • Embracing AI Ethically: A Human-Centered Approach for Universities
  • African Solutions for African AI Problems
  • Regulate AI Now: A Call for Government Action

Full Transcript

behalf of ASF

it is my great pleasure to welcome you

all to today's webinar on the use of

artificial intelligence in higher

education the jungle so to speak out

there we are gathered here at an

exciting and pivotal time a time when AI

is reshaping the way we teach learn

research and manage our institutions

this webinar provides us with an

opportunity to explore not only the

possibilities that AI offer

but also the challenges and

responsibilities that come with it. As

educators, researchers, and leaders, we

share a common goal, and that is to

harness the power of technology in ways

that enhance human potential, promote

inclusion, and uphold academic

integrity.

I encourage you to engage actively,

share your insights, learn from one

another as we discuss how AI can

strengthen higher education, making it

more adaptive, innovative and future

ready. Thank you for being here and I

wish you an inspiring and productive

session.

in a recent

uh community of experts meeting that we

had on the use of AI to improve

efficiency both in high education in

government and also in the private

sector. There was one thing that came

out as a common denominator

and that was that we accept that people

know and are all wellinformed about AI

in each one of these different spheres.

In the short term, it was recommended

that both business leaders, government

officials, and academics be exposed to

what AI actually is. So, I'm looking

forward to your insights this afternoon

to see how we move forward on this

extremely important topic. Thank you

very much and welcome.

>> Well, thank you very much.

>> Yeah. So, Dr. Becker, I was I'm just

handing over to you.

>> Oh, thank you. I I appreciate that.

Thank you. Thank you for the kind

welcome and again thank you. Thank you

everyone. Um I see as we uh as we speak

the numbers are are are running up.

Welcome everyone and again thank you

profit. Um folks uh it it seems we've

had so many people um registering

that um uh the colleagues at ASF had

decided uh or have been forced um to

change the uh the the license and I

think that that means that for

participants u you can't see the names

necessarily of of other participants um

so and that accommodates a a larger um

cohort of people to join in. So, so we

apologize for that. Um, and I see as we

as we speak it's it's still running up.

Right folks, we're part of a larger ASF

series, this Lakota AILA series where we

started by asking what are large

language models? How does that fit into

AI? Why is this being thrust upon us all

of a sudden? And we then pivoted through

a whole series of cycles. So, um, what

are the problems? what are the biases?

What are the training issues with that?

What are the good use cases? Um what are

the bad use cases? We looked at a whole

uh series of of of discussions on uh

cheating and and whether we are

superpowering um that we looked at

opportunities within the research and

what are the good opportunities, what

are the unexplored avenues of machine

learning. And then most recently we

looked at companionship and chat bots

and tools. We even looked at an AI

supervisor that generated quite a lot of

of um interest.

So I want us to think back at the the

story that is um oft uh raised um by

legal scholars uh but now increasingly

in the ML space as well. the 1890 essay

uh in the Harvard Law Review about the

right to privacy or privacy if you

insist.

This article 1890

the right to privacy came in the light

of a new invention the instantaneous

photograph.

In other words,

we've always had an idea of this uh

concept, this abstract idea of privacy

or privacy, but technology and changes

in technology forced us to rethink

something that we thought was settled.

We had to clarify what we mean. We used

to think that privacy was a function of

property and space you know the home um

and certain personal contents my letters

for instance but we came to redefine it

as personhood. So the likeness of a

person that should not and in in good

ethics cannot be taken without consent.

And then we had to change our legal

framework in that case, but also

with normative impacts on on how we

think in in in terms of the ethical

space about how we mitigate or protect

against those abuses.

And in a way, that's why we're here

today. LLMs are doing the same thing. We

are all of us are rethinking teaching

and learning research and certainly

assessments of our students.

So

what we're going to do today in terms of

uh a plan is we will engage with um a

series of experts that we've got. I'll

introduce them shortly and for about 45

minutes or so we'll have a let's call it

a a facilitator Q&A. So um there might

be uh one presentation of two slides

here or a picture or maybe a website

shown as illustration but generally the

idea is that we've brought for you a

series of experts across uh epistemic

communities across institutions within

South South Africa um to help us think

through where we are in terms of

formulating and guiding once we've done

that. So hopefully before the hour is

up, we'll switch over to a panelistled

discussion as we've done throughout this

uh series. In other words, what we will

do is allow for the panelists to

question one another for points of

clarification, points of priority um or

um if there are any um outright

disputes, then we'll put that on the

table. Um we are the opposite of all of

social media. we want to engage and

engage in good faith um and that's what

we hope to do and then after that we

will engage within audience Q&A now if

it is possible and I'll be told uh by a

voice somewhere um if it's not um please

engage in the meantime through the chat

function um I myself I'm not very good

at listening and making notes and

answering in the chat but I know that

some of the panelists are and certainly

um everyone has the uh you know the

liberty to answer and there's there's

normally quite a robust discussion

there.

So here to report on the state of play

um these uh uh policies that we have um

we have firstly uh Suka Walgie from uh

uh UCT where uh she's the director of

the center of information and um in

teaching and learning. We have these u

uh centers in all universities as you

know with long names that have heavy

acronyms. This is no different. Um, so

KA leads the university capacity

development project on assessment and AI

literacy and also co-leads the um AI

teaching innovation grants program.

We have Dr. Nicola Pallet uh who's a

senior lecturer and edtech specialist at

roads. Um she's based at the center for

higher education research, teaching and

learning and works on I love this phrase

critical and compassionate approaches to

education technologies. Also leads the

higher education uh learning and

teaching association of southern Africa.

We have uh professor an Fhou uh

professor of philosophy at Northwest

University. He holds two PhDs one from

Stalamosh and one from the FU uh who's

director of Northwest University. um AI

hub and also the founder of the AI

circle of southern Africa for higher

education.

We have uh Miss uh Pindiwe Kamulan who's

a chartered accountant also an executive

um for digital teaching and learning

over at UNISA. Um, Paneer runs the

digital transformation initiative um and

has piloted

um the compulsory academic integrity

course for UNISA's 260,000

students.

And we have I'm not sure if you've

joined us yet. Oh yes, I see he has

which is wonderful. Dr.

uh augule um who is an AI researcher at

based at the University of Mungumalanga

with wide experience wide exposure um he

was among many other things part of the

team that developed the UMP's AI usage

guidelines so we have someone here from

the oldest university in South Africa

and someone from the youngest university

um I I trust that is Mumalanga I think

Sly is found in the same year right um

from the eldest and youngest the biggest

certainly um so and we're going to swing

into the questions immediately

and the question one is we've got folks

here from different universities from

different institutions with different

kinds of thinking

could you explain the basic principles

of your institutions's position on

generative AI large language models and

the like and the reasoning behind it and

we're going to Suka first. Suka, the

floor is yours.

>> Thank you. Good afternoon everybody.

It's an absolute pleasure to be here. Um

and thank you for the invitation. So

before I address um UC our position

UCT's position on gender of AI

specifically, I do want to emphasize

um that UCT like other universities has

had a long established foundation in AI

research um that predates generative AI

and the explosion. Um for example, we've

had AI research unit based in the

computer science department and so on

and we're currently establishing a new

AI institute. So the reason for saying

that is that that universities do

already have you know expertise in

foundational research as well as applied

AI research. So what I'm going to say is

is is within that context and hopefully

it will become clearer as to what

strategies we might use um but

particularly it's around looking at your

own expertise within as well as we

respond. So while um generative AI has

brought AI into the teaching and

learning spotlight, it is important to

remember that AI has been around for

many decades um you know 60 years and

including in teaching and learning and

so I think um our institutional

engagement has to be much broader and

deeper than than looking at um because

sort of responding just to chat GPT and

education applications

um and and that that's really the

framing. But that said um for teaching

and learning specifically and from where

I am um operating as the director of the

center of innovation learning and

teaching um silt at UCT I am also the

chair of our AI and education working

group soon to become or has become our

AI in the education committee of

practice um for the institution. Um we

have um and the question was really like

what is your institutional position and

our position is in framed in our

recently ratified framework. It's called

a UT framework for AI in education,

generative and other AI in teaching and

learning and assessment. And we

deliberately did that because uh we had

a 6 to 8 month consultation around you

know trying to answer the question what

does it mean for UCT to uh you know what

is it what is UC's position the question

you're asking in relation to generative

AI and we came up with this framework

through stakeholder consultation and

there was considerable debate around

whether it should be just gen AI or

broadly broader than that and what this

means and so I think when you do this

sort of work you c you you quickly get

away from some of the technical issues

to actually some of the underlying

philosophical underpinnings of what what

we are doing and why we are doing this.

So um I'll post after my little um in

interlude I'll post some links into the

chat um but not to be distracted now. So

okay what are the principles? So I'm

going to just quickly read them out and

they will be very familiar. So our six

guiding principles around um uh

responding to a generative AI is ethical

and responsible. So we promote ethical

and responsible use of AI. We foster

critical AI literacies as a core

competency. We're maintaining a human-

centered approach to education, ensuring

equity and accessibility and AI use.

Number five, balancing innovation with

responsible implementation. And then

lastly, con supporting continuous

learning and agility to adapt to

advancements. Now, I want to be clear,

each of these principles probably sound

very familiar. Ethical use, critical AI,

literacies equity accessibility but

they are themselves deeply contested

concepts that require ongoing

reflection, institutional discussion,

and meaning making. That's really the

point I want to say. Just because we say

these things, and we do um they've come

through consultation, it's not the end

of the story. It's the beginning of

actually what does it mean? What does

ethical use mean when AI models are

trained on contested data? How do we

define equity when students have

differential access to premium tools and

so on? And I think that's what I'm

saying here is that while we have a

framework, the work we're actively

engaged in is not settled questions but

ongoing dialogue. And this is really how

we want to move forward with the broader

community as we shape uh UC's response

to um AI what it means for us very

quickly as well um why did we go for a

framework rather than a policy um so

policy suggests fixed rules compliance

mechanisms but given the rapid pace of

AI development we recognize that would

probably be counterproductive and get

bogged down but the framework provides

highle guidance and it articulates our

position at this moment in time

essentially it's where we stand in 2025

around what we want to do. It's also

high level principles but we also have a

roadmap that addresses practical

concerns because I'm sure for those of

you who have done policy work and

consultation you get the push back

saying but yes it's just words what does

it actually mean in practice so we built

in a road map within the framework that

acknowledges constraints um resources

required and so on um and um I will as I

said I'll post the framework into the

chat effectively to operationalize the

um the framework we have three pillars

and most of our AI work fits under one

of these three pillars. The first is

promoting AI literacies. So this

addresses the practical need for

capacity building but also kind of

philosophical commitment to critical and

ethical engagement. What does literacies

mean? Who gets to say what they are? So

this just isn't about functional skills.

Um but uh it at the same time um uh has

an 18-month sort of plan for what types

of um training will be rolled out and

what what we think is required. Our

second pillar is ensuring assessment

integrity. This came up very strongly as

a requirement. It was top of mind and

and we're responding to that. So it's

around responding to immediate practical

pressures around assessment security.

What do we do with assessments? how can

we secure um you know and so on and and

each discipline and course and

qualification needs to take a a

different approach appropriate to the

learning outcomes but it's also around

what is assessment and actually is it

really about curricula that we're

talking about not about assessment and

again the road map with this pillar sit

says what is available right now for

assessment redesign assistance but also

what we think the big questions are

coming up and then third pillar is

around AI enabled innov innovation. So

this is balancing the issues challenges

around integrity, the need for

literacies with um a a call to say you

know can AI help us innovate in our um

teaching and learning spaces um and so

can we innovate in curriculum and

pedagogy and assessment and this

pillars's road map includes pilots for

AI teaching and learning use cases as

well as processes for developing the

best set of AI capabilities that will be

needed in our teaching and learning

ecosystem in the future. So the

reasoning behind this framework is we're

deliberately pragmatic and intentional

in terms of design rather than reactive

and we wanted to move away from the

sense of crisis and and attention on

associating AI purely with academic

misconduct. We recognized early that

neither banning AI nor uncritically

embracing it is going to serve either

our our students or ourselves. And we

acknowledge that AI is a dualpurpose

technology both deeply problematic that

can undermine traditional pedagogies but

also has um much to to offer us and

that's really that's where I'm going to

stop there in the interest of time.

Thank you.

>> Thanks AA. Lots lots of questions from

that and uh what a wonderful scene that

you've said there. Um let's pivot

straight into Nicola. The floor is

yours.

Thank you, Martin. So, Roads University

and I must say I was at um UCT

previously and it it has been quite a

big shift going from one research

intensive to another that's much smaller

um and very differently resourced and I

think that will come out through the

discussion and I want to remind people

that that is a very very important

factor is the unequal um resourcing

across our inst institutions and we're

also positioned very differently. So our

center is located uh Churtle Center for

High Education Research uh teaching and

learning um and the edtech team that I'm

part of specifically we are situated

within the faculty of education and

um we provide both operational support

um um you know to to colleagues in other

faculties but also do quite a bit of

academic research um and supervision of

post-graduate students ourselves. Um so

I think that makes us quite um a unique

um animal but so how we've approached it

um is more situated within higher

education scholarship. Um yes we are

also focused on uh critical AI

literacies um in our guidelines. So yeah

I should actually have started there. So

we have we chose to develop guidelines

rather than a fixed policy which would

allow us to uh revise these on a

continuous basis in response to the

these you know this rapidly evolving um

AI landscape. Um so we have actually got

three guidelines one for students

um one for teaching and learning with AI

tools for lecturers and another on

guidelines for assessment in the time of

AI.

So back to the higher education framing

um we encourage colleagues to engage

with the top question which is that um

you know we've got to think about what

higher education is really for and we

encourage colleagues to think about and

students around disciplinary knowledge

building. Um and we aim to support

academic practices that foster students

capacity to become knowledge creators

rather than just information consumers.

Um preparing them for lifelong learning

in a world that is fundamentally going

going to be changed by AI. Um our core a

core principle is also to um yes also

have ongoing conversations and in the

process educate our community. Um and

this involves

um helping students and staff understand

how generative AI works, recognizing

inequalities and biases associated with

these tools and examining ethical issues

um practicing appropriate use. Um we are

also uh very anti- you know that that AI

is not just about plagiarism detection

and how do we move from a punitive

approach um in terms of AI use to a more

you know in the words of Sarah Eaton you

know going from developmental to a

restorative approach

um which is quite tricky and I think

that will come out later again but yes

main things I think is is around um what

is higher education for and um

encouraging

lecturers, students to think about AI

within the context of disciplinary

knowledge building.

>> Thank you, Nicola. Um an

>> yes. Can you see and hear me? Um Martin.

>> Yes. Loud and clear.

Yes, I just posted in the chat um also

the link to the website of uh Northwest

University's AI website because I'm

going to refer to a lot of documents.

I'm not going to show them but um

participants are welcome to have a look

there. Yeah. And thank you for the

invite. My experience is that uh we all

learn together as universities. Uh

that's I think the yeah the nice thing

about AI. It's new for all of us and we

have to learn. So I'm uh glad to be part

of this conversation and also learn from

my colleagues. So I'm going to answer

your question short and sweet Martin. We

have a lot of people. Uh so we have I

think basically two principles at

Northwest that guide our

our um institutions policy or take or

approach to AI. the two principles first

one is to embrace AI with ethical with

in an ethical and responsible way so

it's really to embrace it but it's just

not uh we cannot just say to embrace it

it must be qualified with ethical

responsible and I think a choice for

this to embrace it fully um is because

we know this is the future AI is not

going to go away we need to explore the

positive the potential of AI especially

in academic context for research for

teaching learning but we also have to

learn how to use it critically uh

ethically responsibly uh because there

are a lot of risks and dangers um

involved with AI use. The second um

principle is that we want to be and

remain human within our use of AI. It's

a human centered approach. uh and we

make a deliberate choice for for

following that or on a principled level

because AI can uh break down what's

valuable for us as human beings um yeah

our basic skills and abilities um can be

sidelined if we get overdependent over

reliance or if we do a lot of cognitive

downloading we can even be manipulated

by AI if we do not engage with it

critically So our approach is or the

principle is AI should serve us as human

beings and contribute to our societies

our dreams and not the other way around

that we serve AI and become part of its

algorithms and its data and um make

people rich through it or get

manipulated through different ideologies

and both I must say both these

principles to embrace it ethically

responsibly and to remain human first of

for in our use with AI um is part of our

policy um our strategy um our whole

approach and of course the challenge is

then how to implement this uh on a

practical level I'll stop there Martin

thank you

>> thank you thank you you took us back to

Wall Street there with money what money

should serve people shouldn't serve

money um fantastic thank you very much

uh meander you are up next.

>> Good afternoon everyone. I hope everyone

is having a great day. Um you know uh

it's always a disadvantage to be the

last speaker because everybody then just

sums up what they have been doing in

their space which is similar to yours.

uh but but but we look at promoting um

augmented intelligence and how does in

human form we need AI in order to

advance superior outputs within our

work. Um therefore the institution has

been very good at promoting ethical

utilization of AI through the drafting

of its policy and procedures. uh we did

so through a bottom up approach uh

whereby there were several consultations

that took place within the colleges

within the administrative spheres in

order to inform our policy and our

framework. So unlike uh UCT we actually

developed guidelines and not framework

so that we can operationalize the

dayto-day activities that may be

required within the spheres. Um we look

at the principles of integrity first

ensuring that there is ethical and

responsible utilization of AI providing

students with those examples that they

need and what we are seeing right now is

that there might not be a deeper

exploration of um the utilization of AI

tools and maybe chair just to decrease

uh to inform uh the participants that uh

Google has announced uh its availability

ility to students uh for free of their

Gemini Pro. So students might then have

much advanced uh tools that they can

work from. uh we see this because um I

think we we we we still dealing with the

challenges of the unreadiness of school

levers right um coming into the

university space and AI giving us those

opportunities of multilinguism ensuring

student can also personalized their

learning. So from our side we do have

two critical centers uh that have

advanced uh AI uh through the academic

development and open virtual hub which

has advanced some of the AI trainings

through MOS and student webinars and

from a academic and an administrative

perspective we're still relying on our

CPD department to produce um the

relevant training to ensure that we can

advance the AI academic literacy. Um we

have introduced the compulsory academic

course. I can touch on it a bit later

on. Uh for our NQF5 to NQF8 students. Um

as the uh Martin has said through the

introduction we were very successful

with over 260,000 students who have um

participated within the course and we

have had a positive um outcomes out of

it because it seems that students

themselves are unaware on how to utilize

AI but also are unaware on what is

ethical utilization of AI within our um

policies. these um and and and and the

guidelines what we have established is a

multi

can you hear me

>> in order for us to then align those key

roles and responsib

Oh, I'm losing you.

>> Can anyone else confirm that it's not

the bandwidth on my end?

>> I can confirm.

>> Okay. All right. Meip me and Dway, we we

may have to come back to you. Uh, okay.

Lots of thumbs up. um there and she was

just in full swing um which is a real

pity um hopefully. Me do you want to try

do you want to try again? We lost the

the the last 30 seconds which was fairly

crucial.

>> Yes. So so so our AI task team um was

approved by Senate in September 2023. Um

it has six sub teams looking at uh the

AIdriven research, AI powered student

support, AI assisted assessment,

enhanced teaching. Uh we separated the

enhanced teaching and learning with how

do you infuse module design utilizing AI

and also our powered um data analytics.

Uh there have been some great output.

One of those has been obviously the AI

policy and guidelines that have come out

of the team and also we looking at the

data aspect as to how do you track the

activities from your students to ensure

that you can then nudge them if you see

non-participation

on their online but more so um obviously

the university is a distance learning um

institution and and we then have to then

protect our online assessments ments

through various AI proctoring tools that

we are utilizing but also the college of

science engineering and technology um

through one of their centers which looks

at augmented intelligence and data

science um they have developed uh their

the owna proctoring tool which they'll

be rolling out soon. So there's various

activities that we have rolled out but

most importantly of our stance is to

ensure that we advance AI use um by our

academics and our students. Thank you.

>> Thank you very much. I mean so many

questions that we could explore um

thinking about feedback you know on

260,000 students doing um uh AI literacy

training or you know about the pro uh

proctoring tools. So may hopefully we

can come back to that. Um finally we've

got um

uh Doc the floor is yours.

>> For having me uh uh it's a it's a

pleasure being part of this uh august

body. So just to quickly uh without

wasting time, we we understand that UMP

that the proliferation of artificial

intelligence and generative AI

uh is becoming something that we cannot

run away from and as a result uh we have

our position. We do not uh also develop

uh policy instead we created uh

generative AI uh guidelines. So uh our

position in one line is basically that

we enable generative AI to enhance

learning, research and administration

but only with disclosure, human

accountability and also uh fairness as

well as privacy by design uh especially

uh to to our staffs and at UNMP the

there are six core principles and what

each of those principles put uh behind

our generative AI guideline means is

that academic integrity and transparency

is very very important. So uh staff and

student needs to use uh generative AI

where uh it is permitted but they need

to always disclose and uh to say when

and how they use it. So as a result uh

clear acknowledgement is is required and

lecturers uh can set task by task rules.

So when generative AI is allowed in

coursework, we typically cap uh it as a

generative AI content maybe EG around

25%ative

AI and require proper also we we take

into contribution human accountability

and oversight because we realize that AI

assist but it does not or decide. So as

a result uh human remain responsible for

accuracy for ethics and for outcomes and

they must always verify for bias and uh

hallucination especially in retail and

uh they've got to close uh it where they

use it. Another thing is that uh there's

fairness and nondiscrimination. As a

result, we make sure that we monitor AI

use uh in anything that we do uh

admissions, greeting, hiring and also in

our staff research to avoid embedding

bias and decision that must not just be

I mean decision that must be touched and

uh must be reviewable. Another thing is

that when using generative AI always

ensure that privacy and popular

compliance is embedded in your use that

is do not impute uh personal and

sensitive data into generative AI tools.

Uh so uh also we take into consideration

governance training and support. uh the

policy our policy is owned by the TVC

teaching and learning uh Menco and also

the CIO but we noticed that even when

you are going to use generative AI part

of the research that we do is that one

needs to understand how to prompt the

tool. So one of the challenges that many

people have is that proper prompting of

generative AI is very very important. As

a result, we have product engineering uh

uh workshops for for for our staffs and

students every every now and then. So,

uh if I have to leave you with let's say

three words in our uh in the position in

the UN's position on the use of

generative AI, it is enable, disclose

and protect. Uh that is basically our

gen AI etos in action. Thank you. Over

to you moderator. Thank Thank you very

much. Well, let's swing immediately into

the next question. This the the second

of uh three questions. Um and we're

going to lift the hood a little bit and

get into the the the the messy details

here. Um and this is a space among among

friends. So we're talking about our

specific institutions.

Two questions about this. The first is

what is the relationship between the

guidelines? We've had guidelines uh and

uh framework. Um what is the

relationship between that for students

maybe undergraduate students and

post-graduate students? Is there a

distinction or not? Are they all lumped

together? What about staff? What about

uh do we split that and see staff as

well there's academic staff maybe

there's re research staff maybe they're

different um maybe admin staff are seen

as different um so what are these uh

divisions these cleavages the the

shibiliths that we use within our

organizations in terms of the guidance

that we are providing

that's part A and part B are the schisms

maybe not between uh undergrad and

postgrad or between uh teaching and uh

assessment but between different let's

call it disciplinary cultures. So

between the folks from law versus this

is the folks from humanities the

engineers and those from arts. So this

is uh we're bearing our souls to one

another um as we learn. Um and I'm going

to ask uh Pinder can you can you lead us

on this please?

Thank you Maxine. I think from UNISA we

don't have a differentiated policy um to

address the specific domains right um we

are hoping that the colleges themselves

will then uh inform through the

established guideline then extrapolated

further to find relevance within their

specializations and domain. Um as

earlier indicated

uh from uh uh an institutional approach

uh having identified uh and the

increasing usage of AI among students we

then um rolled out the compulsory

academic integrity course and one of the

reasons was that we were seeing that the

the problem is actually multi-layered

one that students are not um well

advanced in terms of academic writing.

One of the speakers um I think it was

Nicolet, she spoke about um not looking

at it at a punitive measure but having

an educational approach. So how do you

advance academic skills knowing that

your incoming students are also

struggling with um uh comprehension,

right? So we advanced uh the academic

integrity course has five topics within

it. So what is UNICEA's values mission

in relation to academic integrity?

Secondly, what is academic integrity?

And part of the feedback that student

provided was that they they were

actually not aware that the examples

that we had provided to them as this is

unethical that they thought it was

actually an okay manner to utilize AI

and provide us with outputs. Secondly,

even when we speak about um AI

utilization, which is one of the modules

within the course, uh you find that um

uh the ethical use of drafting,

outlining,

um having input in the final outcome,

what is acceptable and what is not

acceptable. It's quite not clear. One of

the studies we engaging with the

students on uh actually revealed that

students themselves are unaware how to

utilize these tools. They're very good

in asking the first question of what or

how or copy and paste what their uh

lecturer would have said but they are

unable to progress into enriched

learning by prompting it further to say

provide me an example explain it to me

like I'm a 5-year-old or do that sense

check to say what you saying you

understand and that's what we were

trying to build um there is a gap that

we do uh recognize that from an

academics perspective, we haven't

advanced a lot of um AI literacy uh

skills uh they are left on their own on

the various tools to determine which way

they will want to personalize, gamify or

visualize their learning content. Uh

it's something that we are working on as

part of the AI uh task teams. But we

have provided sufficient um uh uh

prompting guidelines uh as a starting

base for the student at least for them

to understand the gray areas and the

gray matters that are there. So um I I I

I think we have done well because we

have advanced a chatbot within the

module so that the student can then

engage in within the boundaries of the

learning objectives and outcomes of that

module for them to then understand it at

a much deeper and personalized level. I

will leave it at that uh for our

institution as we have un understood

that there's some strengths in what we

have done so far but there's still much

more that needs to be done and

particularly uh when we look at the UK

and the US the manner in which they have

gified

uh their learning um if we consider as

well uh whether the bloom taxonomy is

still relevant from a pedagogical

perspective because we then need to look

at the higher order of creation and

evaluation rather than knowledge recall

that we already know that AI provides

easily. Thank you.

>> Thank you. Thanks, Pend. We're talking

about your institution, your guidelines,

how they sit uh how they break into uh

different facets. Uh what are things

like at Northwest?

>> Uh yeah, thank you Martin. So just to

answer your questions um straight, do

policies and guidelines at our

university

allow for disciplinary differences. The

short answer is yes and it's needed

because um for example engineering the

emphasis is not so much on academic

writing while in humanities it would be

so certain forms of AIU should be

allowed in different faculties not only

in faculties in disciplines and it

differ from lecturer to lecturer. So

there should be a lot of flexibility

with these guidelines and policies. But

we can ask then the question why then

guidelines and policy if it's so open or

flexible. So um I think in a

previous discussion we um we already

realized in South Africa we we don't

have a university yet who have a AI

policy. I I looked into it and I'm will

be glad if somebody correct me but um

there's no university yet. At Northwest

we have a framework policy and hopefully

by middle November we will have a policy

approved by council and only half of the

universities South Africa um have

guidelines which is quite um yeah I must

say weird because we need these

guidelines um and policies. Perhaps you

can ask why a policy is it really

needed? what's the will it solve the

problems of AI? Uh and I thought about

that because I've put a lot of effort in

developing our policy now and have a lot

of consultation about it. So I looked at

other policies at at Northwest um and

asked why are those needed? Um and

there's a policy that I thought about

now of course the example is not it's

not fair in a way and not fully

applicable but if you think about the

sexual harassment policy some

universities have them. Why do they have

them? What role do they fulfill? If you

have a sexual harassment policy, it does

not take away sexual harassment or solve

the problem at the university, but it

does give some guidelines how to deal

with it in that institution. Who's

responsible? Uh how should they report

it? Where should it go? What is our

stance? Um so in that sense a a policy

is crucial to point out who's the

responsible person where's the office

where these things are centralized where

is it coordinated and of course such a

policy cannot be in described in too

much detail. There should be an openness

for interpretation. It should link with

other policies uh that's already in

place codes of conducts behavioral

policies things like that. And the same

for AI policy. It we need a policy to

bring everything together. Guidelines

are good. Most universities worldwide

have guidelines. But guidelines, the

implementation of it, the revision of

it, the you know who's dealing with it,

the teaching learning office, the

research one, is it at it? So with a

policy you can easily get it thing

together and ensure that it's enough

differences for faculties lecturers but

also enough coherence to guide the

university in a certain direction. If

you allow me you I talking quick and

fast um Martin but I do want to share my

screen quickly that fine.

>> Yeah go ahead not a problem

>> just to show you what we have. Can you

see the quality now?

Yes, on my screen I know it's small but

this is how our policy look like that we

hope to approve now in middle November

and you will see the policy statement

basically said that we want to guard the

human centered ethical sustainable

lawful effective use all the risk

everything of AI this is what the policy

must do um and it's short we cannot go

too long in AI policy it's basically

principles but what's important

important for me it it indicates the

roles and responsibility of people how

to implement and make this policy or the

AI governance at the university

practical that there's not sort of

contradiction between different

departments we you will see that I

highlighted it here we leave specific um

open openness for differences between um

faculties between lecturers but again

that should be guided within the in the

policy and what I want to lastly just

indicate we have a AI up at North

University that's sort of the hands and

feet of the policy otherwise the policy

will just go to the shelf and gather

dust and we have a AI steering committee

who's driving the policy we have it rule

on the use of AI and all these um

stakeholders come together to talk about

these policies and guidelines in

relation to our academic integrity

policy and how we deal with um the

practical issues of AI responsible and

ethical use um and how can we uh really

have an educative approach if it comes

to the ethical responsible use of AI

>> and I'll stop there. Thank you.

>> I'm going to pause you there. Thank you.

Natural pause.

Um Suka, what would how would you

respond?

Thank you. So I'm going to build a bit

of what Anna has said because there's a

lot of resonance between I think some of

our approaches.

Um so I mentioned the framework earlier

and the rationale for it. Um and I think

the framework is has happened relatively

recently but early on from 23 we

recognized that different stakeholders

need different guidelines and entry

points into the work of AI. So we did

develop um complimentary guidelines I

suppose. specific guides, um, a teaching

and learning guide for academic staff,

assessment guide, prompting guide, a

researchers guide, and student guide.

And I did post a link earlier, and these

are creative commons um, and people can

reuse them. Um, and I think they're not

hierarchical. So, it sort of answers

your question that that that we we do

need to think about differential needs

um, and creating space for further

disciplinary specific discussion and

adaptations. Um but I think maybe one

way of thinking about it is to take an

ecosystem approach to um responding to

generative AI. And so we and I suppose

maybe the story is that we have we had

these guidelines from 2023

um I see Nicholas says um you know there

they need updating and so on. We've been

updating and she's right um that the

field moves so quickly. What we said at

the beginning of 2023 looks you know a

bit naive in 2024 maybe. So we've been

updating the guidelines every 3 to six

months. We have a team doing that and so

on. Um but then what the guidelines are

not enough which is why we went towards

the framework and I think that's what NA

is referring to and it's about messaging

that it's not the responsibility of one

individual somewhere or a student. It's

the whole university ecosystem that

needs to be supportive around a

transitioning into whatever world we're

going into. And so in the framework for

example we've mapped out how different

actors across the institution need to

operate executive leadership um the

committees that are responsible the

support departments faculty depart

faculties and departments themselves

even down to teaching and learning

committees and what their roles are in

and then individual teaching staff staff

is support teaching and students and we

did that by going to each of these

stakeholders and saying what is your

responsibility what are you willing to

sign up for in in this in this

framework. You know, we've got the

guidelines, but they're just the start.

You know, they they are, you know, take

this as a guideline in itself, but we

needed something stronger. And I think

that's where we thought we needed this

this this framework. Um, and what we

also do in the framework is then say,

well, there's no oneizefits all. And we

explicitly state that disciplines need

to take ownership and agency over what

generative AI means for their field for

what they teach what is going to be in

the future curriculum and therefore how

assessment needs to be changed or not

really how assessment needs to be

changed. How do you now assess? What are

defendable decisions that you can make

about assessment given the fact that we

know that students will be using

generative AI? Of course, we can secure

certain assessments and we have where

it's absolutely crucial that students

develop particular foundational skills

that they are in a situation invigilated

and observable and so on. All of those

things are in play. But I think what

what we have absolutely said is that yes

disciplinary differences it you know it

would be nonsensical to have an a policy

for AI just as it would be for internet

or social media that is just a blanket.

So it really is important to start

deliating the different stakeholders and

the different disciplines. So it cuts in

in various different ways. And so I

suppose the challenge though and again I

I keep coming back to none of there's

possibly no happy ending here. You know

there's lots of like productive tensions

and more questions coming out. You know

we need enough consistency in our

institution approaches to maintain say

institutional standards around academic

integrity. So we can say things like

secure your assessments for particular

you know maybe for lower um for the for

for sort of the undergraduate um you

know first years but we also need enough

flexibility for disciplines to determine

appropriate use within the pedagogical

context and I think that's that's really

the conversation going forward and in

terms of you know we already have

academic misconduct policy and

assessment policy so in a sense we

should not be reinventing the wheel with

new policies we need to look at what our

existing policies already have and then

adapt or adjust um accordingly. So I

think that's that's really how I would

respond to to that particular question.

Thanks.

>> Thank you. Thank you. Thank you very

much. Um Hola, how would you how would

you respond and share a little bit about

the inner workings over at NPU?

>> Okay, thank you. Uh so I would say our

guidelines fit together which means that

we use a tiered framework

uh a single uh institutionwide set of

principles. Then we have role specific

uh guidance for staff including our

researchers and for student we have

other ones with uh course and discipline

level rules which are set by uh which

are set by where appropriate. So uh

institutionalwide

umbrella we have general guidelines that

are that apply to academics

student management and non-academic

staff. And these general guidelines

spelled out the core principles which

are fairness, accountability,

transparency, privacy and robustness. We

also have staff and researchers

guideline separate uh which cover uh

chain research administration and

communication and this guideline

requires uh disclosure authorship

uh authorship clarity to ensure that AI

is not the author and also verification

of finding before publication and then

we have student uh guidelines which is

basically for undergraduate for

undergraduate and post-graduate. So what

the student guidelines does is that it

emphasizes academic integrity. Uh also

lectural level permission are emphasized

for each assessment. Proper citation is

required where the use of generative AI

is uh is is is enabled and also it must

be popular aligned privacy. Now the

question is who owns the policy?

Formally uh I'll say the framework of of

the guideline. Formally the owner is the

DVC teaching and learning and also uh uh

Mango which is management committee of

the institution as well as the CIO act

compliance officer uh with consultation

via Mango. So we have AI subcommittee

within the senate and that keeps the

governance central while implementation

is local. So uh where the difference is

allowed is that our baseline is

basically consistent which is disclose

disclose use protect privacy ensure

fairness uh keep human uh in the loop

but we we permit local v variation by

task and and disciplines. Also, we

allowed our lecturers uh to set where

and how Gen AI is allowed for specific

assignment especially in in technical uh

modules like uh software development and

all those. We allow our students to

engage in what we call vibe coding via

generative AI. But where they do that

they need to uh site that this was done

using this otherwise if they if they ask

questions and they are unable to answer

how that was done that's a challenge. So

basically our joint AI policy is state

uh we have a single institutionwide set

of policies or uh principles that

applies to everyone and uh each also

have practical guidance and also like I

said uh lecturers set task level rules

uh to reflect on disciplinary norms. So

governance as I said sits with the DVC

uh teaching and learning with MCO and

CIO acting as oversight. So whatever the

discipline is the constant are the same

as I said just make sure that you

disclose the use you protect privacy and

you avoid bias and above all make sure

that you keep human accountability human

must be kept in the loop so we back this

standard disclosure I'm going to I'm

going to stop you there thank you thank

you thank you for that Nicola

Have we got you? Oh, there you are.

>> Yes, I'm here.

>> Yeah, disciplinary cultures and the

relationship between these guidelines. I

think for us lecturers are encouraged to

use the guides um you know to think

about

you know not to see them as generic but

rather um what does appropriate use look

like within a specific disciplines and

to include an AI statement in their

course outlines that speaks to

appropriate uses of GNAI and how these

support disciplinary knowledge building

or even how particular uses might

undermine that. um where lectures might

want to discourage particular uses. Um

this is also um you know something that

I think with AC across whether it's

postgrads um students or researchers

um I think what what is common across

these is around evaluative judgment

and um you know epistemic access. So for

students

postgrads to achieve epistemic access

involves internalizing the standards and

ways of knowing in their discipline you

know often and this is the you know

about the ways of being an academic

often we see AI statements in particular

journals where they're making that

explicit around how you can use AI in

your research articles.

Um but we see the cap capacity to to

judge the validity of an AI response um

as crucial and that that can only really

be developed through a strong foundation

of in disciplinary knowledge. So yes you

know and we and we do encourage that um

as part of discipline specific

practices.

Um so for example you know colleagues in

law note how you know often geni tools

tend to hallucinate a lot in the um

South African legal context because of a

lack of data um and some other fields as

well. It's really important that um

whether you're a student, lecturer,

researcher that you verify um AI outputs

and you know get so it goes be and

develop that evaluative judgment. So

it's not all about learning about your

specific uh disciplines geni practices

and in a way it's not often you know not

just within a discipline um you know or

department

um it could be you know particular

fields where these norms um are emerging

um yeah

>> thank you Nicola um oh okay and that's a

that's a full house so we're going to do

one more round and it is now three of

us. So I'm going to for the first time

in the series take up my stopwatch um

and uh give speakers two minutes

each

to do the following um and that is to

reflect on what has worked particularly

well or what has gone badly wrong. Again

we're among friends. The formal way that

I want to say this is can you share some

learnings or challenges or ideally both

regarding generative AI the guidelines

or the policies as it relates to

research integrity research use. This is

uh reflecting the questions I see um in

a house I s briefly saw a question by

Jennifer waterme um about this thing is

what you know what what works well and

what hasn't and um let's put that out as

sharpies as sharpish as possible so that

we can get get some engagement from the

panelists among one another and then

from the audience as well. Um so um can

we go back to you Nicola? Is that okay?

We you were last now you're first. Um

what what has worked well or what has

worked quite badly?

>> I think what is working really well is

the commitment to ongoing conversations

with both staff and students. I mean

we've been doing a lot of stuff

informally you know discussions and

residences you know uh conversations

with academics. It doesn't even have to

always be that workshop. Um, but it's

important to, I think, you know, build

trust through those ongoing

conversations.

And that feeds back into updating our

guidelines to make sure they're relevant

and effective. We've also been doing a

students um as partners approach to

revising our guidelines. And I think

this is something that I see across

universities, people using this um

student as partners approach. Um I think

that's that's been going really I mean

that that that's got a lot of potential.

Um I think across institutions what I

see is willingness to share. Um as

mentioned you know I'm part of a number

of professional organizations and

networks. I'm part of the digital

learning and teaching team in Halasa. So

I'm um the lead for that team and then

also involved with the USAP digital

education learning and teaching um

community of practice and I can share a

link to uh that site that's got a

wonderful collection of OEARS that all

the colleagues here um a lot of fellow

panelists have contributed to. So I

think that's working well. We're not

starting from scratch. we can learn from

each other and that is happening in

these organizations.

>> Super. And that was 1 minute 30 seconds.

Amazing. Thank you, Nicola. Um, hola.

What has worked well and what is not

working well? What's going wrong at MPU

with regards to AI policy?

>> Okay. So what is working well for us is

uh basically I would say we have one

umbrella many playbooks where we have a

single institutional frame uh uh we have

a which gives the same north star while

rules specific guidelines testers and

researchers uh and student exactly what

to do. Human accountability stays

central. Uh privacy first we know that

and skills and support matter where we

have regular workshops uh especially on

how our students will perform on our th

perform prompting. Uh the challenges

that we have where the friction is is

basically hallucination and over trust.

So uh assessment gray zone uh threshold

trackers and task by task permission

which can confuse uh how we handle them.

data handling traps and bias and

fairness uh the policy commitment to

just those are some of the uh

challenges. So uh the I think uh in uh

observation that we have is that

research integrity which is authorship

clarity uh documentation uh uh trail so

whereby we make sure that staff must

keep clear record of of of what they do.

So this is basically we we believe this

is part of a scholarly uh good scholarly

hygiene. Uh, thank you. Over to you.

>> Thanks. I appreciate that. Um, and dear

over it to Unisa. What has worked well?

What hasn't worked well?

>> I think the showcases have uh definitely

had some positives amongst um academics

where we illustrate and demonstrate the

various tools how to utilize them for

visualizations.

um the our AI policy and guidelines

unfortunately it's still under uh Senate

review uh for for approval and because

um it has not been widely distributed

then we're finding that there's still a

lot of confusion amongst both academics

and students as to how to utilize these

tools uh effectively. But one of the

positives we we do uh subscribe our

students to the uh writing tools. Uh we

use open rightful. We also have

integrated uh kinosis within our word uh

tool uh so that students can then

advance their research utilizing uh this

tool. There's a lot of fear I think uh

you we also seen this in in in social

media. follow students a lot and you see

them uh uh lamenting AI outcome, right?

As if an AI outcome is a bad thing.

While we understand and I think maybe at

the beginning when these tools got

introduced, there was a lot of

hallucinations involved. Um and it has

then generated mistrust and lesser

utilization of it. uh you would see uh

especially with the older academics also

uh we struggling with uh the punitive

approach where you do see those elements

of um AI usage. uh we still utilize Turn

It In and we know that even with Turn It

In and other detective tools there are

false positives that are uh are

identified and that becomes a

problematic disciplinary manner matter

where we find um there is a high

increases of disciplinary cases through

the recognition of these tools. So, so

there's still uh quite a lot of work

that we still need to do from an

educational perspective, but I think uh

the showcases, the webinars uh have

assisted us in in ensuring that uh

colleges and stakeholders are aware of

the positives of these.

>> That's two minutes and lovely. Thank you

so much. Uh Suka,

>> thank you. Um, so I think what I would

probably start saying maybe is that

frameworks, policies are probably

necessary but not sufficient. Um, you

know, from where we're sitting, they're

helping to start conversations, which is

great. They legitimize certain

initiatives, provide institutional

backing, you can see the gaps and

advocate for resources, but I think the

real work happens in the messy middle of

implementation and contestation. And I

think that's where we're at. So, it's um

you know, we're we're in the midst of of

this right now. And I think one of the I

mean, maybe it's both both something

that is um a positive and a negative is

is to let go of certainty here. um we're

used to having answers, clear policies,

established practices, reliable

assessment methods. But with generative

AI, the more we find out, the the more

we unear practices, how students are

using them. Um we're operating in

genuine uncertainty. I think we we don't

know if a student has used AI unless

you're actually watching them. We can't

agree on appropriate what appropriate

use means possibly yet because we're not

sufficiently we haven't had enough time

to get our heads around it. we're still

redesigning assessments while the

technology is evolving. So I think

that's that's one of the you know it's a

it's it's both the balancing act of of

dealing with uncertainty and that can be

a negative for some people a positive

for others. So I wouldn't I I would say

that one of the things that has worked

well I suppose is that um we've tried to

shift it away from academic misconduct

and and AI use be and students and

cheaters and that away from that

narrative. Not sure we're fully there

yet but but because we have an

innovation pillar we have had a really

great response to our AI teaching

innovation grants call. We've said here

is money apply for it and we had a great

response for people saying well actually

maybe I am interested in seeing what I

can do for my students in this in my

course using AI. So we've got people

building chat bots, people experimenting

with marking and grading and so on. And

we've done that without like trying to

keep away from vendors, you know, and

saying this is a vendor because that's

also the other environment I think these

tools are showing up in all of our

technologies without us asking for them.

And I think the challenge is as

universities we need to control the

narrative around the use of AI and

particular tools. So I think again I

think that for us that has worked well

because we're trying to genuinely

surface use cases from the bottom up um

rather than the narrative of um larger

vendors saying well you know AI will

solve all your grading and marking

problems because that is also quite a

dangerous narrative. So that I'll stop

there but that's really top of mind at

the moment. Thanks

>> most helpful. Um an

>> um yes thank you Martin what worked well

I think um something definitely unique

of Northwest University is that we

established the AI up in the beginning

of this year it wasn't my initiative

although I'm the director now of the AI

up but it works well suddenly there's a

central office where some anybody at the

university can ask for help where

there's coordination taking place uh

where I can sit on different forums um

communicate we have our own website uh

so it's really helpful to have a

centralized place for teaching learning

research and all other possible AI

application in admin or finance or

wherever um at the AI up our guidelines

I think it took a lot of effort but we

have that in place and it works well we

really open to revise it often that will

be part of the the nature of it I think

our educ ative approach to um using AI

for the students and our integrity

system works really well if somebody's

reported for irresponsible use of AI say

um the example was mentioned by Nicolola

that fake sources were used or something

we see that students don't uh after

doing remedial courses don't make the

same mistakes again we we we train them

to use AI more responsibly

uh we also had a lot training. We

developed two courses um at Northwest.

One for students, we call it AI for

academic and career success. A twohour

online course which is working really

well. We get good feedback from students

and we had a course now for for changing

assessments AI and assessments where we

taught now in 10 workshop about 600

lecturers a dayong lecturer how to

change their assessments and integrate

AI. We don't want to police the use of

AI. would rather want to integrate it

and change our assessments that um allow

uh students to use AI but also allow

lecturers to feel confident that there's

learning taking place. What's not

working well of course uh everything

everything is not perfect. I think we

struggle um with a lot of things and

it's not in a specific order but the

digital divide is still a reality. We

have really poor students that don't

have access to these devices that some

wealthy students have. Even the example

of Indie where he mentioned about Gemini

Pro that Google made available that's

for free for students. That's just a

catch that you need to register with

your credit card before you can use it

for a year. How many students have a

credit card and after a year you will

have to start to pay. So we don't have

that yet. The second difficult thing is

that the problem from teaching learning

moved now to the the post-graduate

research domain. Um we struggle there

with supervisors that's on sure uh

post-graduate students that are unsure

uh uh examiners that are unsure. So

there's a lot of work there to be done

and we I think that's our new focus um

for the new year to see what we can do

there. Um yeah so that's in short. I

think we also struggle like other

universities. Not everybody is buying in

and using AI effectively. Um but we we

trying with training. Thanks Martin.

>> Thank you. Thank you for that. And then

I'm going to ask you, have you got a

question for any of the other panelists?

Um but I'm going to ask you to make it

specific.

I know initially we said we might we

might have sort of what do you guys

think questions but um our engagement is

is long and I really want the questions

from the floor to be addressed as well.

So is there something specific that you

want to to ask someone

>> but these are just for curiosity so you

can decide if it's good enough to ask.

Um

>> it will be I'm sure

>> I would like to know two things. what

are other universities doing about the

digital divide in terms of making AI

tools available uh for all students um

and also what are the experience at

other universities um of not having a

central office or place or because I

hear there's some contradictory sort of

approaches and even competition you know

developing AI courses who should take

responsibility so I'm because it's

working so well with us I'm just

wondering how other universities or

cutting with that.

>> Okay, we lots lots to deal with there.

Um before before anyone answers, Nicola,

what question would you like to throw

into the mix? Um An has violated my

dictat here. Um but nonetheless, is it

possible to direct it at someone? If

not, then that's fine and we make it a

general question.

Yeah, I guess just to say around funding

because um so far to my knowledge maybe

maybe other colleagues are aware of our

national software consortiums um funding

particular AI tools making um you across

you know inter you know university deals

that can benefit um all of us because we

know a lot of the really good bespoke

tools I mean feedback fruits mind joy um

you know just some of them that that I

know of and have exper experience of are

really really expensive and our current

university budgets you know it comes as

a shock to them when they see the prices

um associated with these. So yeah, if

you know anything of software

consortiums and how you're dealing with

the increased costs of um the tools that

you want to implement.

>> That's a that's a great question. Um

Khan, if you had a question to your

fellow panelists, what would it be? Or

to someone in specific? Sorry, I I

should say that

>> I think the question that I would have

asked is basically uh what somebody had

actually asked in uh in the in the in

the panel. I can't remember uh the

person or the question which says uh

which asked us to look into uh

the the uh our view on how AI is

impacting South Africa differently for

the country because uh what do we think

we can do on this thing because I think

uh it is high time we start looking for

African solution to African problems. So

as a result of this uh how can we all

come together as uh

an institution under the umbrella of

South African universities and then work

together on one project that can

actually benefit the entire uh South

African society in terms of AI. How can

we where where can we get this data

because we have a lot of data in Africa

but I think the challenge that we have

is that we still don't know how to mine

it for now. Thank you.

>> Thank you very much Pender. What

question would you ask of a fellow

panelist?

I think to um uh Suka because they had

taken the decision of disabling their AI

um detective tools um uh just on its

impact now on how they are measuring

integrity and um equity and and and and

learning outcomes given that aspect uh I

I think it's one of the challenges that

we are confronted with with

overutilization of AI tools and it'll be

interesting to find out from her and

other panelists as to how are they then

measuring its impact and its effect on

academic integrity.

>> Lovely. Thank you. Thanks for that. So

now um your question.

>> Thanks. Yeah. Um I'm interested from

fellow panelists. Um, so we're thinking

of um we're not thinking we're we're

rolling out um mandatory AI literacies

training for students and I wondered if

the panelists had any experience of of

what that might look like. It's not

something that we we have done before in

rolling out anything mandatory.

>> That's that's fantastic. Okay. Now what

I think we do since it is 20 past um

instead of just an open engagement

because then we're going to be here um

is we allow questions from um and Louise

I would need to check with you and with

it with the technical team at ASF. Is

that possible? Can we can we actually

allow participants to ask questions? Um

is it going to block us if we do that?

Not not sure if that is No, no, we know

it it's possible because we had an

earlier comment, right? Um by somebody

who had a hand up.

So, we've got a whole bunch of questions

in the in the Q&A. Um I've seen that uh

some of the panelists have been have

been answering them. We've got these

one two three four five six seven

questions. Um and then uh we're starting

to o open this to the floor now. So

we've got a question around um how does

it work if you don't have a central

office? Do we see that it is a bit of a

bun fight uh between different

institutional bodies or not? Uh number

one. Number two, how do we deal with a

digital divide and the fact that um we

have students with no resources uh

coming into university and and expected

um to perform like those who do in many

cases. Um we've got a great question

about the funding and software and what

consortiums are there, what

opportunities are there. Um, we've got

had a the question about what makes

South Africa different or what are the

commonalities with other African

countries or other uh countries

elsewhere? We've had a question about

how well does compulsory AI training

work um and and how would we measure the

the the success? So, do people do it and

and can you actually measure outcomes?

And and maybe the rider to that is how

well do AI proctoring tools work? Let's

forget about differences of sort of

philosophical stance, but do they do

they actually work? Um and um let's let

let's maybe start there. Um any one of

the panelists want to take any question?

>> I won't call on any

>> please. Yes, go ahead. If I may then

take the one that was asked about

mandatory AI literacy because we rolled

it out for our students. I think that

that that works very well because the

students that have subsequently found

themselves in the disciplinary office

have then stated that post doing the

course it has assisted them in

identifying that ethical uh utilization

of AI. So, so I think it it will be a

good response for UCT. The challenge

that I still foresee is that the

opportunity still still is there uh for

misuse of the um AI tools and then how

do we then prevent that. Uh I must say

from our side uh based on the tool that

we have developed with our professors

within the college of science and

engineering uh we have successfully

managed to block out these nefarious AI

tools that screen reads the students um

page and answer on the students behalf.

So we assured that we are uh

strengthening um the ability of the

proctoring tool and utilizing uh a

second camera for AI algorithms to

assess the student behavior then we'll

be able to be assured that with informal

examinations that that is the student

and that student didn't partake in

utilization of AI but I I I do agree

with um Sakina that uh the the mandatory

course it does provide uh the benefits

that they would want to see from a

student heightened level of awareness.

>> Fantastic. Um, so we've had comments on

uh AI proctoring and a little bit about

AI um sort of training. Um does anyone

want to comment on the digital divide

maybe on what makes South Africa

different

or similar?

and then

okay. Yes, go ahead. Yes.

>> So maybe also Nicholas's question as

well around software because I think

it's a little bit related. So I mean I

totally agree we you know at the South

African universities we don't have the

the dollars and the pounds to pay high

heavy licensing fees for tools that you

know are quite new maybe not tested not

evaluated so I mean our approach is to

use what we have so I mean we have

Microsoft so we we've rolling out or

letting people copilot chat it comes

with our subscription and it's in a safe

sort of data environment it may not be

you know whatever Everybody is used to

you know people are using bring your own

eye all over the place your own AI you

know um chat GPT and um claude and so on

but we are trying to at least offer a

baseline set of AI capabilities based on

what we have at the moment in the

institution and also within what our

learning management system offers and

these things are showing up but it's not

it's not a you know long-term it could

lead to a sort of digital divide between

universities in South Africa and

universities elsewhere um where having

access to these tools is a kind of um

you know like a perk or a you know some

students you know globally have access

to you know particular types of tools so

I think it is actually a real concern so

I'm not I'm not at the moment at least

we're we're sort of looking at actually

what we have in our environment and how

we can make make um best use of it um in

terms of whether South African

universities you know whether things are

different I mean that's that's a multi

multi-layered question. Maybe one

example um is that um you know I've I

was running um an assessment discussion

forum and somebody from our African

feminist uh studies department said

they're actually using um uh the large

language models in teaching because the

African feminist is not very well

represented in the data. So they

actually can use it in a way to critique

um large language models because it

shows what is not in there and what is

not representative. So they're using as

an example in curriculum around um you

know epistemic access and so on. So

there's sort of examples around

awareness of what is in the data, what's

it's been trained on. Uh and and that

that we found other use cases where um

you know epistemologies that have

emanated from from from our context are

not well represented and that's actually

being used um in in a positive way. Um

maybe that's that's that's a small

example, but these are multi-purpose

technologies and the the use cases in

which you can build them build on them

are are quite extensive. So I think it's

quite differentiated. Thanks.

>> Thank you. That's a lovely nuance there.

Um

I want to ask if it's possible um to

Natalie Swaniple in the audience. Sorry

that I'm calling on you like this. Um

would you be comfortable to unmute and

ask your question?

Otherwise I'll do it. Um, I'm happy to

do it. Sorry, Natalie, you might have

stepped away to make a cup of tea.

Okay, so I want to put this to the to to

to panelists. Um, what Natalie is saying

is we we we're asking the wrong

question. We shouldn't be using LMS at

all. They are there's an ethical Well,

maybe maybe I'm caricaturing this, so I

should apologize in advance, but she's

saying, hold on. large language models

are trained on stolen data. There is a

massive environmental harm um in terms

of energy use, in terms of water use. Um

there are these biases baked in because

of the data that is trained on. Um and

it is producing AI slop and a whole

bunch of uh other nefarious things. Are

we sweeping that under the rug? Who

wants to respond to that?

>> I I do not think uh we sweeping that

under the under the blood. But remember

on the when we're talking about the data

that the large language models are

trained about or on rather those are

datas that are in the public domain.

Majority of them are data that are in

the public domain and uh I think from I

I made a suggestion in one of the uh

articles that I wrote that uh there's a

uh also in one of the webinars that I've

participated in which is uh I can't

remember one of the book publishing

company I made a suggestion that

government in every country needs to

start regulating uh the use of

artificial intelligence which will

include generative AI just like they've

they've regulated the use of social

media you know uh without that if if

it's like in a in a society where

there's no law nobody can be can be

accused of being of of being sinner you

know so if government start regulating

uh uh AI then one of the things that

government needs to do uh is to ensure

that the data that the large language

model are trained upon uh consent are

given before they can use such data

otherwise uh there could be consequences

just like South Africa has come up with

papia just like the Europe has come up

with GDPR just like America has come up

with EPA and all those things so I think

the bulk of the work is is is going to

be on the government they've regulated

social media they've regulated

cryptocurrency They've regulated

banking. I think it's high time the

government of every country start uh

regulating the use of artificial

intelligence and how data that are

trained on large language models are

being used. And there's no there's no

there's no big deal if South Africa uh

take the the front seat. We can do it

before America even and that will make

us to be a strong country because in

terms of artificial intelligence

research I think in Africa South Africa

is doing well more than other countries

in Africa. Thank you.

>> Thank you very much.

>> So we still have a question around the

the digital divide. Um Tuko said

something about that. We still have this

open issue around um South Africa. Is it

is it is it different and what

opportunities there are? Um we've made

some gestures towards that. But I want

to ask I want to go back to Sica um and

ask two questions about um the UCT

approach um that you are spearheading.

The one is um can you give us guidance?

Um how do you respond to the UNISA idea

of making AI literacy or AI literacy

literacies training compulsory? Is that

something that you're moving towards and

will be implementing? Um and maybe just

on the one side on the other side maybe

uh just give us a bit more information

about the idea of the AI innovation

grant is that specifically to to

lecturers and is the idea um to spur

sort of creative ideas or best practices

or is that something else entirely?

>> Thanks. So yes, the um the mandatory

capabilities training for AI uh for

students, we actually have it in the

framework and we are ideating how best

to roll that out. So I was just looking

for ideas. Um but it's likely so we

already have AI training for students in

our learning management system, but at

the moment it's voluntary and they're

encouraged to go and and do it and it's

self-paced. Um so yeah, uh that's very

helpful. Um I think it's just something

that um we want to really discuss you

know how disciplinary specific um it

needs to be whether it it's sufficient

to have a kind of baseline level of of

AI literacies that looks at some core

issues around what what AI is how to use

it prompting and so on so that all

students when they come in at

orientation then they have a refresher

so yes that that's that's on the on the

road map um and we know there's some we

you know maybe students won't be able to

take final exams unless they've done it

or something. So, we need some carrot

and stick stick approaches to that. Um,

and then the AI innovation grants. I'm

just going to pop a link into the chat.

Um, but yes, we managed to secure some

funds through the DBC teaching and

learning at the end of last year. Um,

and we put out a call to the entire

teaching community. So it was academic

staff and staff who support um teaching

and said think about um how you might

use AI in for a teaching and learning

problem and um we sent it out and we got

a really great response. We had a

committee who evaluated the proposals

and we're actually using it um as both

an incentive to encourage people. So,

you know, because because we've got a

whole like range of people interested,

but you also need to support the early

adopters in your institution um to to

encourage people who actually want to be

on the edge and who want to try things

out um and possibly fail, you know, and

so on. And and we want to really kind of

like get the nuances. What does it mean

if you are going to do some AI assisted

grading with human oversight? What does

that actually look like? you know,

should you what what are the ethical

issues you look at? Should you should

you tell your students? At what point do

your students know? I mean, we think yes

to all of those. Um and so what we're

also doing is we're monitoring these um

grants quite carefully. They're invited

to come and speak to us to to share

their challenges. So, it's a very safe

space for people to say actually I don't

think that worked very well or I'm

having real challenges with this. And we

wanted and we've got 14 of those um

across the institution different use

cases. We're also discovering what it

takes for staff to say build a bot, you

know, do vibe coding, what tools work

for them. So we were tool agnostic. We

said use whatever you like. We'll pay

for the license. We wanted a broad range

of things. And it's been a lovely sort

of generative kind of space for people

who really just want to experiment and

ideulate and innovate and and and we've

set up a supportive infrastructure not

without its problems you know it it

sometimes butts up against some some

existing practices within departments

and there's negotiation but it is the

messy work um but it's also I think

advice is to to also gather enough

people around who want to push the edge

a bit as well here so that we're ready

for when things come so that we don't

have to take everything from the north

or from vendors that we build our own um

capabilities in the innovation space as

well so that we're not just users and

consumers but also builders and um uh

people who who who can imagine around

teaching and learning. And we're

interested particularly in things that

are you are important to our context

such as supporting multilingualism,

curriculum change, epistemic access,

those things that are very important to

institutional mission and that's what we

asked people to um think about when they

put in their proposals. Um yeah, thank

you.

>> Thank you. Um,

I'm looking at uh one of the uh

questions in the in in in the Q&A there.

Um, so I put it to you in a in a very

practical sense. um colleagues um is

anyone oh it now sounds as if I'm saying

is is anyone still listening to jazz but

is anyone still and I don't mean it in a

porative way is anyone using um AI

detection tools and then specifically I

think to an to Nicola who both um

emphasized the idea of a restorative

um approach um to say a little bit more

about that and Um I have heard s sort of

stories about how um you know lecturers

um would uh be called out for um having

used it or researchers rather um and

then go through some sort of restorative

approach and have that um with quite a

quite a happy ending. Um but you'd

rather have it from the horse's mouth

than from me. So anything about um from

any one of our panelists um are you

using detection tools? Why are you using

and is it working? And then maybe

something from Nicola or Ana about the

restorative approach and in practical

terms how does that actually work? How

do you restore somebody?

Martin, can I say something?

>> Yes, please go.

Yeah. So the just two comments uh the

energy usage is high. I just want to go

back to that. Uh the energy use uh as is

is immensely high. Uh that however will

lead to new kinds of computers based on

biology and I don't want to say much

about that but these are in being

developed at the moment at Microsoft and

others which will be far less energy

intensive. The second thing that I think

we need to consider is with digitization

and we talk about the digital divide but

that's a temporary thing in the future

what will happen with digitization in

high education it will do two things it

will demonetize higher education and it

will democratize higher education and I

think those are two very positive uh

things that will happen at school level

but also in higher education. I have

absolutely no doubt about that and we as

universities need to prepare ourselves

to lead that way. Otherwise we will find

private institutions even individuals

setting up their own uh learning and

teaching programs and all they need to

do really uh is to get accreditation of

what they offer. So just those two

comments quickly on the energy usage and

also on the positive effects of

digitization

in terms of assessment if we have to

check whether someone is has used AI to

answer the question and if we worry

about that we are asking the wrong

question

uh we should be ask we should be asking

ourselves what are the questions that we

are asking um if it's purely about

memory if you go to the lower cognitive

levels according to Bloom's taxonomy uh

then we asking the right questions the

wrong question. So so my uh point of

view here

is that the first thing we should ask

ourselves what is the kind of question

that we are asking because then it

becomes irrelevant whether someone is

using a library Google

AI or any other tool for that matter.

Thank you.

Thank you. That's lovely. I much

appreciate that. Thank you Jean.

Can I come

to your to your question? Is that fine

about detection tool?

>> Yes, go ahead. Yeah.

>> Let me just say um to professor clutter

quickly, I agree with his last statement

that if you have to see detect if AI was

used, you asked the wrong question. But

it's extremely difficult with uh AI

tools that develop so quickly so fast to

ask a question that AI cannot answer in

a way. even if you keep all the looms

taxonomy levels uh in play. So I think

six months ago or a year ago you could

have asked I remember last year this

time we had some questions you can ask

you know and I will not be able to

answer it now it's impossible if you

have any assignment given as a takeaway

or something that students have to do at

home like uh uh research for PhD or

master they will have access to uh AI

that will be able to answer it how good

it is that that's the question if has

verified the sources. I see you nod your

head glitter. So I'm glad about that. So

there's the skills I think we need to to

train um students and of course we

struggle to to train and help or assist

um lecturers to develop new assessment

ways of assessment ways of learning. Um

that's what I hear from Sakina as well

that we all try to change our assessment

to to move away from detection tools. I

think that is what we want to do um

because it's contentious. It's a

negative way of of dealing with AI to

try to catch students who use it. But at

Northwest University, we do use it. we

do use it um as a secondary or ancillary

tool in detecting irresponsible use of

AI because that's still something we

have to deal with that sometimes a

lecturer will say you may not use AI and

then we get uh AI tools being used. We

see it with the prompts even in the

answer or we see it with fake sources or

sometimes we put the Trojan horse into

the assessment and then we see the

results of that. So what we do we use

detection tools and we use in a

secondary manner um and if we detect

that students use it irresponsibly in

other words they depended too much on it

or they didn't declare it or something

like that we engage in educative manner

and I think that's what's uh critically

important with detection tools that it

should not be used in a punitive way

that it's immediately a disciplinary as

if this is the final proof uh definitive

proof that somebody used uh AI. We

rather see that um it can help to flag

sometimes irresponsible use if you have

a detection tool. Take for example I can

compare it with a medical doctor who has

to make diagnosis of his patient. So he

uses different tests. He will see the

patient look pale or this or this. So he

can say yes I think you have this

disease but you can use a blood test for

example and that blood test can be say

95% accurate. So why not use that to

also confirm if there's some problem

here that it can be flat and then if the

blood test come back you can say yes so

we have this test I know it's only 95%

so let's look now deeper into your uh

symptoms and what the possible problem

can be to help the patient to have a

better diagnosis. So it's not to punish

the students that we use detection

tools, but we do want to flag that there

might be a problem with some assignments

and then the lecturer will have to go

and look and see yes there are sources

here that doesn't fit it at all or

there's definitive prompts in it. Um we

need to acknowledge that there's false

positives sometimes. I I must also add

on that point we had about in the last

few years I think at least 3,000

students reported for uh the

irresponsible use of of AI and not one

of them they have an appeal option not

one of them was a false positive but we

don't know how many was uh false

negatives so how many students got away

with it you see so the detection tools

are not the answer but if we move away

too quickly and this is my last

sentence. If we move away too quickly

from detection tools, we put a huge

responsibility on the shoulders of of

lecturers that might miss potential

misuse irresponsible use of AI. And

while we searching for alternatives,

while we um changing our assessments to

get away from a policing um culture, I

think um it's still needed to um use

these tools. Um yeah, the combination

and this time to move fully away.

>> Sorry that I'm cutting you there. That's

a very long last sentence.

Hola, please go ahead.

>> Thank you. So I just basically wanted to

say uh I wanted to support what

professor professor Ky uh said and I

think as academics and researchers we

need to start moving towards application

uh in our teaching rather than theory uh

because basically the AI has come to

stay. There's nothing we can do to it.

It has come to become part of us. So

we've we've got to find a way of

integrating it into our academics. So uh

we've got to find a way of using it for

the benefit of the academic society and

the society at large. Uh irrespective of

how we want to uh that's why we said

it's important whether it is late now

for government for government to

regulate the use of AI or not but it is

important at least for the regulation to

come in place so that we know uh there's

a difference between guideline and

policy. So guideline is just a we say

this is how you must use it. If it's not

become if it hasn't become policy it's

difficult to discipline anybody that is

caught using Ahi. So I think we just

need to find a way of integrate AI into

academic uh uh curricula in South Africa

and in Africa at large because otherwise

if we do not we are going to find

ourselves behind far far far far behind

what the west and the Europes are doing.

Thank you.

>> Thank you very much. Is there any one of

the panelists who who wants to have a a

a final one minute before I close out?

If there's something burning, please.

Oh, Nicola, go ahead.

>> Yes, big big burning one and I think we

need to make more of it is how this

previous and I think it's still

something that's playing out in many of

our institutions is this punitive

approach and you know around detection.

many spaces they say oh but it's to have

a conversation with students but in

actual fact it really has damaged the

relationship of trust between lecturers

and students. So going forward it's like

how do we get students to use AI in

critical and discipline specific ways

but also repair that trust um that we

see is so broken. I mean we ran a survey

with students and you know when you have

any other comments so many of them said

well this is what happened to me and one

person even mentioned um suicidal

ideiation. So this is a real uh real big

issue and I think we've got to as much

as we love technology and want to focus

on that, it's really important to foster

community building and long-term

relationships

um trust and to try and really actively

seek to repair the harms that have been

done.

>> Thank you for that intervention, Nicola.

Um colleagues, I'm going to close out

and then um return the floor to our

colleagues um from ASF. Um I hope that

this is not an abuse of of of power, but

I will not summarize as much as provide

a few comments. Um

firstly I think we should acknowledge

that um we have literacy and um we've

got digital literacy uh built on that

and then this idea of critical AI

literacies and we're in a country and in

a continent um but specifically in a

country where we struggle with literacy

um so there are these wonderful

opportunities that will come that you

know the university will change in many

ways um but I think as a baseline

acknowledgement. If we can't fix

literacy, if we can't fix primary uh

school education, then then we're going

to have a very hard time of it. A second

point to make is that we should

acknowledge that yes, there is value in

saying that if AI can answer it is it is

the wrong question. But that's not

always a good way to think about it. And

the reason is knowledge builds. So we do

need to do the basics first. Um and AI

can do that. So uh I have a young boy

and um a young son and he um he

everything he does can be done by by AI

and it's good that he does these things

because they are foundational. Um so as

we go through um the processes yes there

might be um you know the the calculator

analogy that is a bit problematic. We

all we're all familiar with that. But

you do your long division and once you

can do long division then you're allowed

to take out the calculator and use it.

So so I think in the if we are to be

developing critical thinkers there um

it's asking the wrong question to always

say that if AI can do it we're doing the

the the wrong thing. Okay. We've had a

fantastic smogus board. We've had how to

change assessments. Um, we've had

proturnit in and anti-turnitin. We've

had let's go vibe coding and we've had

hold let's hold back on that. We've had

let's implement remedial courses um and

let's have um mandatory uh

AI literacy for our students towards

which I'm I'm leaning. Um it was

interesting for me to note the let's

call it the ideological leaning you know

the assumption are humans good or are

humans inherently flawed and they'll

take shortcuts um that we've seen

between universities and obviously these

have um historical inertia that they

had. We had examples of some lighter

touch and some some not necessarily

heavier-handed but definitely more

proactive dealing. Um we've had

interesting nuances between guidelines

and frameworks. Um and of course the

very strong point that was made that

students need clarity. So we can have

guidelines and we can have a frameworks

but students want clarity. Um and if I

may add my two cents to that um is what

I'm advocating in the vitz sense. Um and

Vitz was left out by design uh um from

from this one is to have for different

modules so that the university has three

positions. There's a university position

um each faculty um has a strategy um and

within the schools the the various

modules have uh what I'm advocating a

traffic light system. So red we don't

use it at all because this is

foundational knowledge. So we use

procted exam or we switch it off or

whatever. Orange is that there's some

use and then uh green we embrace it and

that depends on where we are in Bloom's

taxonomy depends on what the skills are

of the students but to have that uh sort

of flexibility.

My final appeal to everyone here to the

universities to the members of ASF is

that we we share our resources, we share

our policies, we share our motivations

so that we understand where we come from

and that we support each other in our

academic ecosystem. And on that note, a

great thank you to ASF which has been

championing these debates and these

engagements. Um, and I turn it over to

Luis Feltzman. Thank you very much

everyone.

Thank you very much, Martin. And I think

you had a slip of a tongue there. So,

it's Susan Feltsman.

Good.

>> Oh, sorry. Susan,

>> it's okay. So, such a lot of people

actually, you know, get confused between

the two of them who've been working um

together forever. So, um thank you very

much, Martin, for so eloquently, um

guiding us through um this discussion

this afternoon. I don't think I have a

very easy task um to wrap up such a sort

of dynamic um webinar that we had this

afternoon. Absolutely fascinating etc.

And I really want to applaud um our

speakers for being so honest and to

allow us to have an honest discussion

about these things. Um I cannot refrain

to say it was it was actually really

very interesting to learn um that there

are still slight differences of opinion

and approaches within the different

universities and through our selection

of our participants um rather our um

speakers. We we we hoped that we could

highlight sort of the difference between

the different institutions. But although

we recognizing that there are

differences because of um resources and

just opinion and applying different

lenses, I think it is important just to

select a few strands where there were

really commonality um and we all

recognize those as well like like

ethical um approaches to um AI, the

literacies, the importance of training

and which I absolutely agree and I can

see different institutions apply this

very differently And of course then the

the phrase of the day and thank you very

much for that for our colleague from the

UAP and that is human in the loop very

important. We use that that um phrase

very um often when we talk with editors

etc and we talk about peer review. Um

and then of course equity and access and

the whole dilemma around digitization

and of course then the digital divide

which in my opinion is um really still a

major problem and I think that's

manifesting in in the different

institutions

um and then of course continuous support

but the one thing that really struck me

was the consultative process that each

and every institution went through

almost a bottom up approach and then

continuous discussion and emphasizing

that I think that that was very

important to me and to realize this is

not the beginning and the end um of it

all. It certainly is just the beginning

of many things to come. Um I also really

like the way where um there how the

differentiation took place. Um

interesting to see that the differences

of approaches in the differentiation

issue. Um but I think that that really

needs to be probed and to think about

maybe that is differentiation is still

the answer for where we are at the

moment. And then in terms of the honesty

that was expressed today and that is

really to say um that that we have to

have educative approach. It's our

responsibility to educate and to train

um our stakeholders at hand and also to

say to each other there are still so

many issues that we're actually um

ignoring and there's still a huge

uncertainty in this field regardless of

the amount of work that we do and then

the uncertainty that is um sort of

accompanied by it and then highlighting

the whole issue about trust. Um science

is in a very difficult position. science

is not necessarily trusted and how do we

in this age of new technologies

um

nurture um trust in science and and that

we actually grow them. So I think that

is very um important. Um we cannot stay

behind um we have to follow suit with

the rest of the world. Um, somebody said

to me the other day, it's not that AI

will replace the um, work as such, but

it will replace the human that has not

um, been working and engaging and uh,

with AI as such. So, so I think that's

that's also important. I just by closing

would like to mention that ASF will be

establishing a um, AI forum and then

certainly we've got your names and we

will invite you once um, the forum has

been established.

um that you can participate and that we

in different ways perhaps can continue

this discussion. I think this is very

important and certainly not the last. Um

thank you very much to our speakers um

this afternoon. It has been great. Um

thank you for um participating and and

really putting your heart on the table

so we could talk about these issues.

Thank you for that. And Martin, thank

you very much for so eloquently taking

us again through all of this. Um thank

you for ASA for granting this

opportunity and thank you to my

colleagues for arranging this webinar

and of course very important our

participants. Um we had a lovely bumper

number today joining and thank you very

much and we hope to see you soon and to

see you in the next webinar as well.

Thank you so much.

>> Thank you. Bye.

Loading...

Loading video analysis...