
AI For Everyone: The Full 1-Hour Masterclass by Andrew Ng (2026) #ai

By Xin Nova 小新

Summary

## Key takeaways

- **AI's $13-22 Trillion Value by 2033**: According to a study by the McKinsey Global Institute, AI will create an additional 13 to 22 trillion US dollars of value annually by the year 2033. Of this, three to four trillion dollars is predicted to come from generative AI, but the larger portion will come from other forms such as supervised learning. [00:39], [00:50]
- **AI Impacts Every Industry, Even Hairdressing**: A lot of the value to be created lies outside the software industry, in sectors such as retail, travel, transportation, automotive, and materials manufacturing. Andrew's best counterexample was the hairdressing industry, until a robotics professor told him a robot could do his hairstyle. [01:21], [01:46]
- **ANI One-Trick Ponies Drive Value**: A large amount of the value we see from AI today is artificial narrow intelligence, or ANI: AIs that do one thing, such as a smart speaker or self-driving car. These are one-trick ponies, but when you find the appropriate trick, this can be incredibly valuable. [02:10], [02:25]
- **AGI Decades Away, No Robot Anxiety**: AGI is the goal of building AI that could do any intellectual task that a human can, or be superintelligent, but realistically we're still very far from AGI. It will take multiple technological breakthroughs and may be decades, maybe even hundreds of years, away. [02:56], [03:48]
- **Supervised Learning Powers Online Ads**: The most lucrative form of supervised learning may be online advertising, where AI inputs information about an ad and about you and figures out whether you would click on that ad. This turns out to be very lucrative. [08:03], [08:15]
- **Neural Nets Scale with Data & Compute**: With neural networks and deep learning, performance keeps getting better with more data if you train larger networks. The rise of fast computers and GPUs has enabled companies to train large neural nets on large data sets for high performance. [12:37], [13:42]

Topics Covered

  • AI Transforms Every Industry, Even Hairdressing
  • ANI Delivers Value, AGI Far Away
  • Supervised Learning Powers Most AI Value
  • Data Overhype: Iterate with AI Early
  • AI-First Companies Master Data Strategy

Full Transcript

Welcome to AI for Everyone. AI is changing the way we work and live, and this non-technical course will teach you how to navigate the rise of AI. Whether you want to know what's behind the buzzwords, or whether you want to perhaps use AI yourself, either in a personal context or in a corporation or other organization, this course will teach you how. And if you want to understand how AI is affecting society and how you can navigate that, you'll also learn that from this course. In this first week, we'll start by cutting through the hype and giving you a realistic view of what AI really is. Let's get started. Many

experts agree that AI will create a huge amount of value. For example, according to a study by the McKinsey Global Institute, AI will create an additional 13 to 22 trillion US dollars of value annually by the year 2033. Of this 13 to 22 trillion, three to four trillion dollars is predicted to come from what's called generative AI, which is a relatively new type of AI technology that can produce high-quality content, specifically text, images, and audio. But the larger portion of value will come from other forms of AI than generative AI, for example, a technology called supervised learning. We'll focus more on these other, more mature types of AI technology in this course.

AI is already creating tremendous amounts of value in the software industry, and the McKinsey study points out that a lot of the value to be created in the future lies outside the software industry, for example, in sectors such as retail, travel, transportation, automotive, materials, manufacturing, and so on. I actually have a hard time thinking of an industry that I don't think AI will have a huge impact on in the next several years. My friends

and I used to challenge each other to name an industry where we don't think AI will have a huge impact. My best example was the hairdressing industry, because I didn't know how to use AI or robotics to automate hairdressing. But I once said this on stage, and one of my friends, who is a robotics professor, was in the audience that day. She actually stood up, looked me in the eye, and said, "You know, Andrew, most people's hairstyles I couldn't get a robot to cut that way. But your hairstyle, Andrew, that a robot can do."

There's a lot of excitement, but also a lot of unnecessary hype, about AI. One of the reasons for this is that AI is actually two separate ideas. A large amount of the value we see from AI today is artificial narrow intelligence, or ANI. These are AIs that do one thing, such as a smart speaker or a self-driving car, or AI for web search, or AI applications in farming or in a factory. These types of AI are one-trick ponies. But when you find the appropriate trick, this can be incredibly valuable. With the rise of generative AI, things like ChatGPT and Bard, we're also starting to see AI that's a bit more general purpose. For example, ChatGPT can be a copy editor, brainstorming partner, text summarizer, and help with many other tasks. These

models have been an exciting development and are further expanding what we can now do with AI. In addition, AI also refers to the concept of AGI, or artificial general intelligence. This is the goal of building AI that could do any intellectual task that a human can, or maybe even be superintelligent and do even more things than any human can. I'm seeing tons of progress in artificial narrow intelligence as well as in generative AI, and it feels like AI research is slowly taking baby steps, tiny baby steps, toward AGI, which is exciting. But realistically, we're still very far from AGI.

Unfortunately, the rapid progress in ANI and generative AI, which are incredibly valuable, has caused people to conclude that there's a lot of progress in AI, which is actually true. But that in turn has caused people to falsely think that we might be on the verge of AGI as well, which is leading to some overblown and unnecessary fears about evil sentient robots coming to take over humanity. I think AGI is an exciting goal for researchers to work on, but it'll take multiple technological breakthroughs before we get there. And it may be decades, maybe even hundreds of years. I hope, but I'm not sure, that we will get there in our lifetimes. But given how far away AGI is, I think there is no need for undue anxiety about it.

This week, you'll learn what AI can do and how to apply it to your problems. Later in this course, you'll also see, in detail, some case studies of how AI, these one-trick ponies, can be used to build really valuable applications such as smart speakers and self-driving cars. In this

week, you'll learn what AI is. You may have heard of machine learning, and the next video will teach you what machine learning is. You'll also learn what data is, and what types of data are valuable, but also what types of data are not valuable. You'll learn what it is that makes a company an AI company, or an AI-first company, so that perhaps you can start thinking about ways to improve your company's or other organization's ability to use AI. And importantly, you'll also learn this week what machine learning can and cannot do. In our society, newspapers as well as research papers tend to talk only about the success stories of machine learning and AI, and we hardly ever see any failure stories, because they just aren't as interesting to report on. But for you to have a realistic view of what AI and machine learning can and cannot do, I think it's important that you see examples of both, so that you can make more accurate judgments about what you may, and maybe should not, try to use these technologies for. Finally, a lot

of the recent rise of machine learning has been driven by the rise of deep learning, sometimes also called neural networks. In the final two optional videos of this week, you can also see an intuitive explanation of deep learning, so that you'll better understand what these techniques can do, particularly for a set of narrow ANI tasks. So that's what you'll learn this week, and by the end of this week you'll have a sense of AI technologies and what they can and cannot do.

In the second week, you'll learn how these AI technologies can be used to build valuable projects. You'll learn what it feels like to build an AI project, as well as what you should do to make sure you select projects that are technically feasible as well as valuable to you or your business or other organization. After learning what it takes to build AI projects, in the third week, you'll learn how to build AI in your company. In particular, if you want to take a few steps toward making your company good at AI, you'll see the AI Transformation Playbook and learn how to build AI teams and also build complex AI products.

Finally, AI is having a huge impact on society. In the fourth and final week, you'll learn about how AI systems can be biased and how to diminish or eliminate such biases. You'll also learn how AI is affecting developing economies and how AI is affecting jobs, and you'll be better able to navigate this rise of AI for yourself and for your organization.

By the end of this four-week course, you'll be more knowledgeable and better qualified than even the CEOs of most large companies in terms of your understanding of AI technology, as well as your ability to help yourself or help your company or other organization navigate the rise of AI. And so I hope that after this course, you'll be in a position to provide leadership to others as well as they navigate these issues.

Now, one of the major technologies driving the recent rise of AI is machine learning. But what is machine learning? Let's take a look in the next video.

The rise of AI has been largely driven by one tool in AI called machine learning. In this video, you'll learn what machine learning is, so that by the end, you'll hopefully be able to start thinking about how machine learning might be applied to your company or to your industry. The most commonly used type of machine learning is a type of AI that learns A to B, or input to output, mappings. And this is called

supervised learning. Let's see some examples. If the input A is an email, and the output B you want is "is this email spam or not" (0/1), then this is the core piece of AI used to build a spam filter. Or if the input A is an audio clip, and the AI's job is to output the text transcript, then this is speech recognition. More examples: if you want to input English and have it output a different language, Chinese, Spanish, something else, then this is machine translation. Or the most lucrative form of supervised learning, of this type of machine learning, may be online advertising, where all the large online ad platforms have a piece of AI that inputs some information about an ad and some information about you, and tries to figure out whether you will click on this ad or not. And by showing you the ads that you're most likely to click on, this

turns out to be very lucrative. Maybe not the most inspiring application, but certainly one having a huge economic impact today. Or if you want to build a self-driving car, one of the key pieces of AI is an AI that takes as input an image and some information from the radar or from other sensors, and outputs the positions of other cars, so your self-driving car can avoid them. Or in manufacturing (I've actually done a lot of work in manufacturing), you would take as input a picture of something you've just manufactured, such as a picture of a cell phone coming off an assembly line. This is a picture of a phone, not a picture taken by a phone. And you want to output: is there a scratch, is there a dent, is there some other defect on this thing you've just manufactured? And this is visual inspection, which is helping manufacturers reduce or prevent defects in the things that they're making.
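All of these examples share the same A-to-B shape. As a toy illustration of that idea, here is a minimal, hypothetical word-count "spam filter" in Python: A is the email text, B is the label 1 (spam) or 0 (not spam). This is a sketch for intuition only, not a production classifier.

```python
# Toy illustration of supervised learning as an A-to-B mapping:
# A = email text, B = 1 (spam) or 0 (not spam).
# Hypothetical word-count model; real spam filters are far more sophisticated.

def train(examples):
    """Learn how often each word appears in spam vs. non-spam emails."""
    counts = {0: {}, 1: {}}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def predict(counts, text):
    """Output B: score each label by its word counts and pick the higher one."""
    scores = {0: 0, 1: 0}
    for word in text.lower().split():
        for label in (0, 1):
            scores[label] += counts[label].get(word, 0)
    return 1 if scores[1] > scores[0] else 0

training_data = [
    ("win a free prize now", 1),
    ("claim your free money", 1),
    ("meeting notes for tomorrow", 0),
    ("lunch plans this week", 0),
]
model = train(training_data)
print(predict(model, "free prize money"))  # 1 (spam)
print(predict(model, "notes for lunch"))   # 0 (not spam)
```

The point is only the interface: once you have many (A, B) pairs, a learning algorithm produces a function that maps new inputs A to predicted outputs B.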

Supervised learning also lies at the heart of generative AI systems like the ChatGPT and Bard chatbots that generate text. These systems work by learning from huge amounts of text, say downloaded from the internet, so that when given a few words as the input, the model can predict the next word that comes after. These models, which are called large language models or LLMs, generate new text by repeatedly predicting what is the next word they should output. Given the widespread attention on LLMs, let's look briefly on the next slide in greater detail at how they work.

Large language models are built by using supervised learning to train a model to repeatedly predict the next word. For example, if an AI system has read on the internet a sentence like "my favorite drink is lychee bubble tea," then this single sentence would be turned into a lot of A-to-B data points for the model to learn to predict the next word.
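How one sentence becomes many A-to-B pairs can be sketched in a few lines of Python. This is a toy illustration of the data-construction idea only, not how LLMs are actually trained at scale.

```python
# Toy sketch: turn one sentence into many (A, B) next-word training pairs.

def sentence_to_pairs(sentence):
    words = sentence.split()
    pairs = []
    for i in range(1, len(words)):
        prompt = " ".join(words[:i])  # input A: the words so far
        next_word = words[i]          # output B: the next word
        pairs.append((prompt, next_word))
    return pairs

for a, b in sentence_to_pairs("my favorite drink is lychee bubble tea"):
    print(f"A: {a!r} -> B: {b!r}")
# e.g. A: 'my favorite drink'    -> B: 'is'
#      A: 'my favorite drink is' -> B: 'lychee'
```

A real LLM applies this same idea, at enormous scale, to hundreds of billions of sentences.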

Specifically, given this sentence, we now have one data point that says: given the phrase "my favorite drink," what do you predict is the next word? In this case, the right answer is "is." And given "my favorite drink is," what do you predict is the next word? The correct answer is "lychee," and so on, until you have used all the words in the sentence. So this one sentence is turned into multiple inputs A and outputs B for the model to learn: given a few words as input, what is the next word? When you train a very large AI system on a lot of data, say hundreds of billions or even over a trillion words, then you get a large language model like ChatGPT that, given an initial piece of text called a prompt, is very good at generating some additional words in response to that prompt.

The description I presented here does omit some technical details, like how the model learns to follow instructions rather than just predict the next word found on the internet, and also how developers make the model less likely to generate inappropriate outputs, such as ones that exhibit discrimination or hand out harmful instructions. If you're interested, you can learn more about these details in the course Generative AI for Everyone. At the heart of LLMs, though, is this technology that learns from a lot of data to predict what is the next word using supervised learning.

So in summary, supervised learning just learns input-output, or A-to-B, mappings. On one hand, input-output A-to-B seems quite limiting, but when you find the right application scenario, this turns out to be incredibly valuable. Now

the idea of supervised learning has been around for many decades, but it's really taken off in the last few years. Why is this? When my friends ask me, "Hey Andrew, why is supervised learning taking off now?", there's a picture I draw for them, and I want to show you this picture now, so that you may be able to draw it for others who ask you the same question. Let's say on the horizontal axis you plot the amount of data you have for a task. For speech recognition, this might be the amount of audio data and transcripts you have. In a lot of industries, the amount of data you have access to has really grown over the last couple of decades, thanks to the rise of the internet and the rise of computers. A lot of what used to be, say, pieces of paper is now instead recorded on a digital computer, so we've just been getting more and more data.

Now let's say on the vertical axis you plot the performance of an AI system. It turns out that if you use a traditional AI system, then as you feed it more data, its performance gets a bit better, but beyond a certain point it does not get that much better. It's as if your speech recognition system did not get that much more accurate, or your online advertising system didn't get that much more accurate at showing the most relevant ads, even as you showed them more data.

AI has really taken off recently due to the rise of neural networks and deep learning. I'll define these terms more precisely in a later video, so don't worry too much about what they mean for now. But with modern AI, with neural networks and deep learning, what we saw was that if you train a small neural network, then as you feed it more data, performance keeps getting better for much longer. If you train an even slightly larger neural network, say a medium-sized one, then the performance may look better still. And if you train a very large neural network, then the performance just keeps on getting better and better. For applications like speech recognition, online advertising, and building self-driving cars, where having a high-performance, highly accurate system is important, this has enabled these AI systems to get much better and make, say, speech recognition products much more acceptable to users, and much more valuable to companies and to users.

Now, here are a couple of implications of this figure. If you want the best possible levels of performance, to hit the highest level of performance, then you need two things. One is it really helps to have a lot of data.

So that's why you sometimes hear about big data: having more data almost always helps. And the second thing is you want to be able to train a very large neural network. So the rise of fast computers, including Moore's Law, but also the rise of specialized processors such as graphics processing units, or GPUs, which you'll hear more about in a later video, has enabled many companies, not just the giant tech companies but many, many other companies, to train large neural networks on a large enough amount of data to get very good performance and drive business value. In fact, it was also this type of scaling, increasing the amount of data and the size of the models, that was instrumental to the recent breakthroughs in training generative AI systems, including the large language models that we discussed just now.

The most important idea in AI has been machine learning, and specifically supervised learning, which means A-to-B, or input-output, mappings.

What enables it to work really well is data. In the next video, let's take a look at what data is, what data you might already have, and how to think about feeding it into AI systems. Let's go on to the next video.

You may have heard that data is really important for building AI systems. But what is data, really? Let's take a look. Let's look at an example of a table of data, which we also call a data set. If you're trying to figure out how to price houses that you're trying to buy or sell, you might collect a data set like this. And this can be just a spreadsheet, like an Excel spreadsheet of data, where one column is the size of the house, say in square feet or square meters, and the second column is the price of the house. So if you're trying to build an AI system or machine learning system to help you set prices for houses, or figure out if a house is priced appropriately, you might decide that the size of the house is A and the price of the house is B, and have an AI system learn this input-to-output, or A-to-B, mapping.

Now, rather than just pricing a house based on the size, you might say, well, let's also collect data on the number of bedrooms of this house. In that case, A can be both of these first two columns, and B can be just the price of the house. So given a table of data, given a data set, it's actually up to you, up to your business use case, to decide what is A and what is B. Data is often unique to

your business. This is an example of a data set that a real estate agency might have if they're trying to help price houses. And it's up to you to decide what is A and what is B, and how to choose these definitions of A and B to make it valuable for your business.
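The point that the same table supports different choices of A and B can be sketched in code. The numbers below are made up, and the "model" is deliberately crude (an average price per square foot), purely to show the A/B selection step.

```python
# Hypothetical housing table: the same data set supports different A/B choices.
houses = [
    {"size_sqft": 1000, "bedrooms": 2, "price": 200_000},
    {"size_sqft": 1500, "bedrooms": 3, "price": 320_000},
    {"size_sqft": 2000, "bedrooms": 3, "price": 390_000},
]

# Choice 1: predict price (B) from size and bedrooms (A).
A = [(h["size_sqft"], h["bedrooms"]) for h in houses]
B = [h["price"] for h in houses]

# Choice 2 (a different business question): predict affordable size (B) from budget (A).
A2 = [h["price"] for h in houses]
B2 = [h["size_sqft"] for h in houses]

# A very crude learned rule for choice 1: average price per square foot.
price_per_sqft = sum(B) / sum(size for size, _ in A)
print(round(price_per_sqft * 1200))  # rough price estimate for a 1200 sq ft house
```

Same spreadsheet, two different A-to-B mappings, depending on the business use case.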

As another example, if you have a certain budget and you want to decide what size of house you can afford, then you might decide that the input A is how much someone spends, and B is just the size of the house in square feet. That would be a totally different choice of A and B, one that tells you, given a certain budget, what size of house you should maybe be looking at.

Here's another example of a data set. Let's say that you want to build an AI system to recognize cats in pictures. I'm not sure why you might want to do that, but maybe you have a fun mobile app and you want to tag all the pictures of cats. So you might collect a data set where the input A is a set of different images, and the output B is a set of labels that say: this first picture is a cat, this one is not a cat, this is a cat, this is not a cat. Then an AI can input a picture A and output B, is it a cat or

not? So you can tag all the cat pictures on your photo feed or on your mobile app. In the machine learning tradition, there are actually a lot of cats. I think some of this started when I was leading the Google Brain team and we published the results of a somewhat infamous "Google cat," where an AI system learned to detect cats from watching YouTube videos. But since then, there's been a tradition of using cats as a running example when talking about machine learning. With apologies to all the dog lovers out there. I love dogs, too.

So data is important, but how do you get data? How do you acquire data? Well, one way to get data is manual labeling. For example, you might collect a set of pictures like these over here. Then you might either yourself, or have someone else, go through these pictures and label each of them: the first one is a cat, the second one is not a cat, the third one is a cat, the fourth one is not a cat. By manually labeling each of these images, you now have a data set for building a cat detector. To do that, you actually need more than four pictures.

You might need hundreds of thousands of pictures. But manual labeling is a tried-and-true way of getting a data set where you have both A and B.

Another way to get a data set is from observing user behaviors or other types of behaviors. So for example, let's say you run a website that sells things online, an e-commerce or electronic commerce website, where you offer things to users at different prices and you can just observe whether they buy your product or not. Just through the act of either buying or not buying your product, you may be able to collect a data set like this, where you store the user ID, the time the user visited your website, the price at which you offered the product to the user, as well as whether or not they purchased it. So just by using your website, users can generate this data for you.
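A data set collected this way might look like the sketch below. The rows are hypothetical, and the point is only that ordinary site activity already has the A-to-B shape: for instance, A = the price offered, B = purchased or not.

```python
# Hypothetical behavior log from an e-commerce site.
# Every visit yields one row "for free", with no manual labeling needed.
purchase_log = [
    {"user_id": 4783, "time": "2024-01-05 10:15", "price": 19.99, "purchased": 1},
    {"user_id": 1225, "time": "2024-01-05 10:22", "price": 24.99, "purchased": 0},
    {"user_id": 3308, "time": "2024-01-05 11:02", "price": 19.99, "purchased": 1},
]

# One possible A/B choice: A = price offered, B = did the user purchase?
A = [row["price"] for row in purchase_log]
B = [row["purchased"] for row in purchase_log]

purchase_rate = sum(B) / len(B)
print(f"purchase rate: {purchase_rate:.2f}")  # prints "purchase rate: 0.67"
```

The same pattern applies to machine logs in a factory: temperature and pressure readings as A, "did the machine fail" as B.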

This was an example of observing user behaviors. We can also observe the behaviors of other things, such as machines. If you run a large machine in a factory and you want to predict if the machine is about to fail or have a fault, then just by observing the behavior of the machine you can record a data set like this: there's a machine ID, there's the temperature of the machine, there's the pressure within the machine, and then did the machine fail or not. If your application is preventative maintenance, say you want to figure out if the machine is about to fail, then you could, for example, choose the sensor readings as the input A and the failure outcome as the output B, to try to figure out if a machine is about to fail, in which case you might do preventative maintenance on the machine.

The third and very common way of acquiring data is to download it from a website or to get it from a partner. Thanks to the open internet, there are just so many data sets that you can download freely, ranging from computer vision or image data sets, to self-driving car data sets, to speech recognition data sets, to medical imaging data sets, and many, many more. So if your application needs a type of data you can just download off the web, keeping in mind licensing and copyright, then that could be a great way to get started on an application. And finally, if you work with a partner, say you're working with a factory, then they may already have collected a big data set of machines, temperatures, pressures, and whether the machines failed or not, that they could

give to you.

Data is important, but it's also a little bit overhyped and sometimes misused. Let me describe to you two of the most common misuses, or bad ways to think about data. When I speak to CEOs of large companies, a few of them have actually said to me, "Hey, Andrew, give me three years to build up my IT team. We're collecting so much data, and after three years I'll have this perfect data set, and then we'll do AI." It turns out that's a really bad strategy. Instead, what I recommend to every company is: once you've started collecting some data, go ahead and start showing it, or feeding it, to an AI team, because often the AI team can give feedback to your IT team on what types of data to collect and what types of IT infrastructure to keep on building. For example, maybe an AI team can look at your factory data and say, "Hey, you know what? If you can collect data from this big manufacturing machine not just once every 10 minutes, but instead once every minute, then we could do a much better job of building a preventative maintenance system for you." So there's often this interplay, this back and forth, between IT and AI teams, and my advice is usually to try to get feedback from AI earlier, because it can help guide the development of your IT infrastructure. Second, misuse

of data. Unfortunately, I've seen some CEOs read about the importance of data in the news and then say, "Hey, I have so much data, surely an AI team can make it valuable." Unfortunately, this doesn't always work out. More data is usually better than less data, but I wouldn't take it for granted that just because you have many terabytes or gigabytes of data, an AI team can magically make it valuable. So my advice is: don't throw data at an AI team and assume it will be valuable. In fact, in one extreme case, I saw one company go and acquire a whole string of other companies in medicine on the thesis, on the hypothesis, that their data would be very valuable. And now, a couple of years later, as far as I know, the engineers have not yet figured out how to take all this data and actually create value out of it. So sometimes it works and sometimes it doesn't. But I would not overinvest in just acquiring data for the sake of data unless you also get an AI team to take a look at it, because they can help guide you to think through what data is actually the most valuable. Finally, data

is messy. You may have heard the phrase "garbage in, garbage out." If you have bad data, then the AI will learn inaccurate things. Here are some examples of data problems. Let's say you have this data set of the size of houses, the number of bedrooms, and the price. You can have incorrect labels, or just incorrect data. For example, one house is probably not going to sell for just $1. Or data can also have missing values, such as a whole bunch of unknown values. So your AI team will need to figure out how to clean up the data, or how to deal with these incorrect labels and missing values.

There are also multiple types of data. For example, sometimes you hear about images, audio, and text. These are types of data that humans find very easy to interpret. There's a

term for this: this is called unstructured data. There are certain types of AI techniques that work with images to recognize cats, or audio to recognize speech, or text to understand whether an email is spam. And then there are also data sets like the one on the right. This is an example of structured data, and that basically means data that lives in a giant spreadsheet. The techniques for dealing with unstructured data are a little bit different from the techniques for dealing with structured data.
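The kinds of structured-data problems described earlier, an implausible $1 price label and unknown values, can be sketched as a small cleaning pass. The rows, the fill-in default, and the price threshold below are all made-up illustrations, not a general-purpose recipe.

```python
# Hypothetical messy housing rows: one missing value, one incorrect label.
raw = [
    {"size_sqft": 1000, "bedrooms": 2,    "price": 200_000},
    {"size_sqft": 1500, "bedrooms": None, "price": 320_000},  # missing value
    {"size_sqft": 2000, "bedrooms": 3,    "price": 1},        # incorrect label
]

cleaned = []
for row in raw:
    if row["price"] < 1_000:      # drop rows with obviously wrong price labels
        continue
    if row["bedrooms"] is None:   # fill missing values with a chosen default
        row = {**row, "bedrooms": 2}
    cleaned.append(row)

print(len(cleaned))  # 2 rows survive
```

In practice an AI team would choose these rules from the data itself (for example, filling missing values with a column median rather than a fixed constant).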

Generative AI today is used primarily to generate unstructured data such as text, images, and audio, rather than structured data. In contrast, supervised learning can work very well for both types: unstructured and structured. In this video, you learned what data is, and you also saw how not to misuse data, for example by overinvesting in an IT infrastructure in the hope that it will be useful for AI in the future, but without actually checking that it really will be useful for the AI applications you want to build. And finally, you saw that data is messy, but a good AI team will be able to help you deal with all of these problems.

Now, AI has a complicated terminology, where people throw around

terms like AI, machine learning, data science. What I want to do in the next

science. What I want to do in the next video is share with you what these terms actually mean so that you'll be able to confidently and accurately talk about these concepts with others. Let's go on to the next video. You might have heard

terminology from AI such as machine learning or data science or neuronet networks or deep learning. What do these terms mean? In this video, you'll see

terms mean? In this video, you'll see what does this terminology of the most important concepts of AI so that you speak with others about it and start thinking how these things could apply in your business. Let's get started. Let's

your business. Let's get started. Let's

say you have a housing data set like this with the size of the house, number of bedrooms, number of bathrooms, whether the houses newly renovated as well as the price. If you want to build

a mobile app to help people price houses, so this would be the input A and this would be the output B, then this would be a machine learning system. In

particular, it' be one of those machine learning systems that learns input to outputs or A to B mappings. So machine

learning often results in a running AI system. So it's a piece of software that

system. So it's a piece of software that any time of day, any time of night, you can automatically input a these properties of a house and output speed.
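To make the A-to-B idea concrete, here is a minimal sketch of such a system, fitting a linear model to a made-up housing data set (all numbers are hypothetical, and a real system would use far more data):

```python
import numpy as np

# Hypothetical training data. Each input A row is
# [size in sq ft, bedrooms, bathrooms, newly renovated 0/1];
# each output B is a price in thousands of dollars.
A = np.array([
    [ 800, 2, 1, 0],
    [1000, 2, 1, 1],
    [1200, 2, 1, 0],
    [1500, 3, 2, 0],
    [2000, 3, 2, 1],
    [2500, 4, 3, 1],
], dtype=float)
B = np.array([150, 200, 190, 280, 370, 460], dtype=float)

# Learn a linear A-to-B mapping by least squares; the extra column
# of ones lets the model learn an intercept.
A1 = np.hstack([A, np.ones((len(A), 1))])
weights, *_ = np.linalg.lstsq(A1, B, rcond=None)

# The resulting "running AI system": input a house's properties,
# output an estimated price, any time of day or night.
def predict_price(size, bedrooms, bathrooms, renovated):
    return float(np.array([size, bedrooms, bathrooms, renovated, 1.0]) @ weights)

print(predict_price(1800, 3, 2, 1))
```

A linear fit is only one of many ways to learn an A-to-B mapping; the point is just that the learned mapping becomes a piece of software you can run on new inputs.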

So if you have an AI system running, serving hundreds of thousands or millions of users, that's usually a machine learning system. In contrast,

here's something else you might want to do, which is to have a team analyze your data set in order to gain insights. So, a team might come up with a conclusion like, "Hey, did you know that for two houses of a similar size, a similar square footage, if the house has three bedrooms, then it costs a lot more than a house with two bedrooms, even if the square footage is the same? Or did you know that newly renovated homes have a 15% premium?" And this could help you make decisions, such as, given a similar square footage, do you want to build a two-bedroom or a three-bedroom house in order to maximize value? Or is it worth an investment to renovate a home in the hope that the renovation increases the price you can sell the house for? So these would be examples of data science projects, where the output of a data science project is a set of insights that can help you make business decisions, such as what type of house to build or whether to invest in renovation.

The boundaries between these two terms, machine learning and data science, are actually a little bit fuzzy, and these terms are not used consistently even in industry today. But

what I'm giving here is maybe the most commonly used definitions of these terms. But you will not find universal adherence to these definitions. To

formalize these two notions a bit more: machine learning is the field of study that gives computers the ability to learn without being explicitly programmed. This is a definition by Arthur Samuel from many decades ago. Arthur Samuel was one of the pioneers of machine learning, famous for building a checkers-playing program that could play checkers even better than he, the inventor, could play the game. So a machine learning project will often result in a piece of software that runs, that outputs B given A. In contrast, data science is the science of extracting knowledge and insights from data. And so the output of a data science project is often a slide deck, a PowerPoint presentation that summarizes conclusions for executives to take business actions, or that summarizes conclusions for a product team to decide how to improve a website.

Let me give an example of machine learning versus data science in the online advertising industry. Today, the large ad platforms all have a piece of AI that quickly tells them what ad you are most likely to click on. So that's a machine learning system, and this turns out to be an incredibly lucrative AI system that inputs information about you and about the ads, and outputs whether you would click on this ad or not. These systems are running 24/7, and these are machine learning systems that drive ad revenue for these companies. So there's a piece of software that runs. In contrast, I've also done data science projects in the online advertising industry. If analyzing data tells you, for example, that the travel industry is not buying a lot of ads, but that if you send more salespeople to sell ads to travel companies, you could convince them to use more advertising, then that would be an example of a data science project, a data science conclusion, that results in executives deciding to ask the sales team to spend more time reaching out to the travel industry. So even in one company, you may have different machine learning and data science projects, both of which can be incredibly valuable. You

may have also heard of deep learning. So

what is deep learning? Let's say you want to predict housing prices. You want

to price houses. So you have an input that tells you the size of the house, number of bedrooms, number of bathrooms, and whether it's newly renovated. One of the most effective ways to price houses given this input A would be to feed it to this thing here in order to have it output the price. This big thing in the middle is called a neural network. And sometimes we also call it an artificial neural network. And that's to distinguish it from the neural network that is in your brain. So the human brain is made up of neurons. And so when we say artificial neural network, that's just to emphasize that this is not the biological brain but instead a piece of software. And what an artificial neural network does is take input A, which is all of these four things, and then output B, which is the estimated price of the house.

Now, in a later optional video this week, I'll show you more of what this artificial neural network really is. But all of human cognition is made up of neurons in your brain passing electrical impulses, passing little messages to each other. And when we draw a picture of an artificial neural network, there's a very loose analogy to the brain. And these little circles are called artificial neurons, or just neurons for short, that also pass little messages to each other. And this big artificial neural network is just a big mathematical equation that tells it, given the inputs A, how to compute the price B. In case it seems like there are a lot of details here, don't worry about it. We'll talk more about these details later. But the key takeaways are that a neural network is a very effective technique for learning A-to-B, or input-to-output, mappings. And today the terms neural network and deep learning are used almost interchangeably. They mean essentially the same thing. Many decades ago, this type of software was called a neural network. But in recent years, we found that, you know, deep learning was just a much better-sounding brand. And so that, for better or worse, is a term that's been taking off recently.

So what do neural networks, or artificial neural networks, have to do with the brain? It turns out almost nothing. Neural networks were originally inspired by the brain, but the details of how they work are almost completely unrelated to how biological brains work. So I am very cautious today about making any analogies between artificial neural networks and the biological brain, even though there was some loose inspiration there. So AI has many different tools.
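To make "a big mathematical equation" concrete, here is a minimal sketch of a tiny artificial neural network with hand-picked, untrained weights (all numbers are made up purely for illustration; a real network would have its weights learned from data):

```python
import numpy as np

# A tiny artificial neural network, written out as one big equation.
# The weights below are made up, not trained.
W1 = np.array([[0.10, 0.0, 0.02, 0.0],
               [0.00, 0.3, 0.00, 0.2],
               [0.05, 0.1, 0.10, 0.0]])   # 3 hidden neurons, 4 inputs
b1 = np.zeros(3)
W2 = np.array([1.0, 0.5, 2.0])            # 1 output neuron
b2 = 0.0

def predict(a):
    """Input A: [size (thousands of sq ft), bedrooms, bathrooms, renovated].
    Output B: estimated price. Each neuron is a weighted sum passed
    through a simple nonlinearity (ReLU here)."""
    h = np.maximum(0.0, W1 @ a + b1)      # hidden layer: the little circles
    return float(W2 @ h + b2)             # output layer

print(predict(np.array([2.0, 3.0, 2.0, 1.0])))
```

Stacking more of these layers, with many more neurons, is all that "deep" means; training then consists of adjusting the weights so the equation's outputs match the labeled examples.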

In this video you learned what machine learning and data science are, and also what deep learning and neural networks are. You might also hear in the media other buzzwords like generative AI, unsupervised learning, reinforcement learning, graphical models, planning, knowledge graphs, and so on. You don't need to know what all of these other terms mean; they are just other tools for getting AI systems to make computers act intelligently. I'll try to give you a sense of what some of these terms mean in later videos as well. But the most important tools and terms I hope you remember from this are machine learning and data science, as well as deep learning and neural networks, which are a very powerful way to do machine learning and sometimes data science.

If we were to draw a Venn diagram showing how all these concepts fit together, this is what it might look like. AI is this huge set of tools for making computers behave intelligently. Of AI, the biggest subset is the set of tools from machine learning. But AI does have tools other than machine learning, such as some of the buzzwords listed at the bottom. And of machine learning, the part that's most important these days is neural networks, or deep learning, which is a very powerful set of tools for carrying out supervised learning, or A-to-B mappings, as well as some other things. But there are also other machine learning tools that are not deep learning tools.

So how does data science fit into this picture? There is inconsistency in how the terminology is used. Some people will tell you data science is a subset of AI. Some people will tell you AI is a subset of data science. It depends a bit on who you ask. But I would say that data science is maybe a cross-cutting subset of all of these tools, one that uses many tools from AI, machine learning, and deep learning, but has some other separate tools as well, and that solves a set of very important problems in driving business insights.

In this video you saw what machine learning, data science, deep learning, and neural networks are. I hope this gives you a sense of the most common and important terminology used in AI, and that you can start thinking about how these things might apply to your company. Now, what does it mean for a company to be good at AI? Let's talk about that in the next video.

What makes a company good at AI? And perhaps even more importantly, what will it take for your company to become great at using AI? I had previously led the Google Brain team and Baidu's AI Group, which had respectively helped Google and Baidu become great AI companies. So what can you do for your company? There's a lesson I had learned watching the rise of the internet that I think will be relevant to how all of us navigate the rise of AI. Let's take a look.

A lesson we learned from the rise of the internet was that if you take your favorite shopping mall, you know, my wife and I sometimes shop at Stanford Shopping Center, and you build a website for the shopping mall, maybe sell things on the website, that by itself does not turn the shopping mall into an internet company. In fact, a few years ago, I was speaking with the CEO of a large retail company who said to me, "Hey Andrew, I have a website. I sell things on the website. Amazon has a website. Amazon sells things on a website. It's the same thing." But of course, it wasn't. A shopping mall with a website isn't the same thing as a first-class internet company.

So what is it that defines an internet company, if it isn't just whether or not you sell things on a website? I think an internet company is a company that does the things the internet lets you do really well. For example, we engage in pervasive A/B testing, meaning we routinely throw up two different versions of a website and see which one works better, because we can, and so we learn much faster. Whereas in a traditional shopping mall, you know, it's very difficult to have two shopping malls in two parallel universes, and you can maybe change things around only every quarter or every six months. Internet companies tend to have very short iteration times. You can ship a new product every week or maybe even every day, because you can. Whereas a shopping mall can be redesigned and rearchitected only every several months.
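The A/B testing idea can be sketched as a small simulation, with made-up click-through rates standing in for real visitor behavior (both rates and the traffic volume are hypothetical):

```python
import random

random.seed(42)
# Hypothetical A/B test: each visitor is shown version A or B of a page,
# and we record whether they click. The true rates are made up.
true_rate = {"A": 0.10, "B": 0.12}
shown = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}

for _ in range(20000):
    version = random.choice("AB")        # randomly assign each visitor
    shown[version] += 1
    if random.random() < true_rate[version]:
        clicks[version] += 1

# Compare observed click-through rates to decide which version works better.
ctr = {v: clicks[v] / shown[v] for v in "AB"}
print(ctr)
```

In practice you would also run a statistical significance test before declaring a winner, but the core loop, randomize, measure, compare, is this simple, which is why internet companies can learn so quickly.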

Internet companies also tend to push decision-making down from the CEO to the engineers and to other specialized roles such as the product managers. This is in contrast to a traditional shopping mall, where you can maybe have the CEO just decide all the key decisions, and then everyone just does what the CEO says. And it turns out that traditional model doesn't work in the internet era, because only the engineers and other specialized roles like product managers know enough about the technology and the product and the users to make great decisions. So these are some of the things that internet companies do in order to make sure they do the things that the internet lets them do really well.

This is a lesson we learned from the internet era. How about the AI era? I think that today you can take any company and have it use a few neural networks, a few deep learning algorithms. That by itself does not turn the company into an AI company. Instead, what makes a great AI company, sometimes an AI-first company, is: are you doing the things that AI lets you do really well? For example, AI companies are very good at strategic data acquisition. This is why many of the large consumer tech companies have free products that do not monetize; these allow them to acquire data that they can monetize elsewhere. So I've led strategy teams where we would deliberately launch products that do not make any money, just for the sake of data acquisition, and thinking through how to get data is a key part of the great AI companies.

AI companies tend to have unified data warehouses. If you have 50 different databases or 50 different data warehouses under the control of 50 different vice presidents, then it'll be impossible for an engineer to get the data into one place so that they can connect the dots and spot the patterns. So many great AI companies have preemptively invested in bringing the data together into a single data warehouse, to increase the odds that the teams can connect the dots, subject of course to privacy guarantees and also to data regulations such as GDPR in Europe.

AI companies are very good at spotting automation opportunities. We're very good at seeing, oh, let's insert a supervised learning algorithm, an A-to-B mapping, here, so that we don't have to have people do these tasks. Instead, we can automate them. AI companies also have many new roles, such as the MLE or machine learning engineer, and new ways of dividing up tasks among the different members of a team. So, for a company to become good at AI means architecting the company to do the things that AI makes it possible to do really well.

Now, for a company to become good at AI does require a process. In fact, 10 years ago, Google and Baidu, as well as companies like Facebook and Microsoft, which I was not a part of, were not great AI companies the way that they are today. So how can a company become good at AI? It turns out that becoming good at AI is not a mysterious, magical process. Instead, there is a systematic process through which many companies, almost any big company, can become good at AI. This is the five-step AI transformation playbook that I recommend to companies that want to become effective at using AI. I'll give a brief overview of the playbook here and then go into detail in a later week.

Step one is to execute pilot projects to gain momentum. So just do a few small projects to get a better sense of what AI can and cannot do, and to get a better sense of what doing an AI project feels like. And this you could do in-house, or you can also do with an outsourced team. But eventually you then need to do step two, which is to build an in-house AI team and provide broad AI training, not just to the engineers but also to the managers, division leaders, and executives, on how to think about AI. After doing this, or as you're doing this, you'll have a better sense of what AI is. And then it's important for many companies to develop an AI strategy. And finally, to align internal and external communications so that all your stakeholders, from employees to customers to investors, are aligned with how your company is navigating the rise of AI.

AI has created tremendous value in the software industry and will continue to do so. It will also create tremendous value outside the software industry. If you can help your company become good at AI, I hope you can play a leading role in creating a lot of this value. In this video, you saw what it is that makes a company a good AI company, and also, briefly, the AI transformation playbook, which I go into in much greater detail in a later week as a roadmap for helping companies become great at AI. If you're interested, there is also an AI transformation playbook published online that goes into these five steps in greater detail, but you'll see more of these in a later week as well. Now, one

of the challenges of doing AI projects, such as the pilot projects in step one, is understanding what AI can and cannot do. In the next video, I want to show you and give you some examples of what AI can and cannot do, to help you better select projects for AI that may be effective for your company. Let's go on to the next video.

In this video and the next video, I hope to help you develop intuition about what AI can and cannot do. In practice, before I commit to a specific AI project, I'll usually have either myself or engineers do technical diligence on the project to make sure that it is feasible. This means looking at the data, looking at the input and output, A and B, and just thinking through whether this is something AI can really do. What I've seen, unfortunately, is that some CEOs can have an overinflated expectation of AI and can ask engineers to do things that today's AI just cannot do. One of the challenges is that the media, as well as the academic literature, tends to only report on positive results or success stories using AI. And when we see a string of success stories and no failure stories, people sometimes think AI can do everything. And unfortunately, that's just not true. So, what I want to do in this and the next video is to show you a few examples of what today's AI technology can do, but also what it cannot do. And I hope that this will help you hone your intuition about what might be more or less promising projects to select for your company.

Previously you saw this list of AI applications, from spam filtering to speech recognition to machine translation and so on. One

imperfect rule of thumb you can use to decide what supervised learning may or may not be able to do is that pretty much anything you could do with a second of thought, we can probably now or soon automate using supervised learning, with this input-output mapping. So, for

example, in order to determine the position of other cars, you know, that's something that you can do with less than a second of thought. In order to tell if a phone is scratched, you can look at it and kind of tell in less than a second. In order to understand, or at least transcribe, what was said, you know, it doesn't take that many seconds of thought. And while this is an imperfect rule of thumb, it maybe gives you a way to quickly think of some examples of tasks that AI systems can do.

Whereas in contrast, something that AI today cannot do would be accurately predicting the stock market, say, predicting the future price of some company's stock given only the historical price of that stock. This is something that a person probably can't do in 1 second, or even in longer than a second. And it's probably not possible to get machine learning to do this accurately either. Let's look in greater detail at this. Say the task we want to tackle is: given the recent price of a stock, that's the input A, predict the price at a future point in time, say a month into the future; that's the output B. What would happen if you were to apply machine learning to this? Well, a simple algorithm might try to fit a straight line to the data. But depending on what period of time you fit the data to, you might get a red line like this. Or if you fit it to this narrow window, you might get this blue line. So depending on the details of the implementation, this particular algorithm might give wildly varying output values. And the main problem with this application is that the past history of a stock price is just not very predictive of the future stock price, which is why attempts to use machine learning this way haven't been successful.
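The window-dependence just described can be sketched with synthetic data, using a random walk in place of a real stock price (the data here is made up, which is exactly the point: a random walk has no learnable trend):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "stock price": a random walk standing in for real price history.
days = np.arange(200)
price = 100 + np.cumsum(rng.normal(0, 1, size=200))

# Fit a straight line to the full history, and another to a narrow
# recent window, like the red and blue lines on the slide.
slope_full, intercept_full = np.polyfit(days, price, 1)
slope_recent, intercept_recent = np.polyfit(days[-30:], price[-30:], 1)

# The two fits generally disagree, and neither has real predictive power.
print(slope_full, slope_recent)
```

Re-running with different seeds or window sizes gives different, often contradictory slopes, which is the "wildly varying output values" problem in miniature.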

Future stock prices are so random that it's just hard for AI to predict them accurately. By the way, for completeness, I should say that predicting a stock's price based only on the historical price of that same stock seems to be impossible. But there are stock traders that sometimes find other inputs. For example, if they manage to legally obtain some web traffic or foot traffic data that helps them estimate what the company's sales were, then that, in combination with the historical price data, might make it possible for the algorithm to have some predictive power. However, these other inputs are typically complex or costly to acquire, and they still can't overcome the intrinsically somewhat random nature of the stock market. To hone your intuition

about how to quickly filter feasible or not feasible projects, here are a couple of other rules of thumb about what makes a machine learning problem easier or more likely to be feasible. One,

learning a simple concept is more likely to be feasible. But what does it mean for something to be a simple concept?

There's no formal definition of it, but if it is something that takes you less than a second of mental thought or maybe just a very small number of seconds of mental thought to come up with a conclusion, then that would be

suggestive of it being a simple concept.

So, if you're looking outside the window of a self-driving car to spot the other cars, well, that would seem like a relatively simple concept. Whereas, in

contrast, trying to come up with clever signals to predict a given company's sales, well, that seems like less of a simple concept. Second, a machine learning problem is more likely to be feasible if you have lots of data available. Here, data means both the input A and the output B that you want the system to learn in your A-to-B, or input-to-output, mapping. So, for example, in trying to determine whether a phone is scratched or not, the input A would be a set of images of phones, and the output B could be a label identifying each phone as scratched or not scratched. Then, if you have thousands of pictures of phones with both A and B, the odds of you building a machine learning system to detect scratches accurately would be much higher.

AI is the new electricity and is transforming every industry. But

it's also not magic, and it can't do everything under the sun. I hope that this video started to help you hone your intuitions about what it can and cannot do, and increase the odds of your selecting feasible and valuable projects for maybe your teams to try working on. In order to help you continue developing your intuition, I would like to show you more examples of what AI can and cannot do. Let's go on to the next video.

One of the challenges of becoming good at recognizing what AI can and cannot do is that it does take seeing a few examples of concrete successes and failures of AI. And if you work on an average of, say, one new AI project a year, then to see three examples would take you three years of work experience. And that's just a long time. What I hope to do, both in the previous video and in this video, is to quickly show you a few examples of AI successes and failures, of what it can and cannot do, so that in a much shorter time you can see multiple concrete examples to help hone your intuition and select valuable projects. So let's take a look at a few more examples. Let's say

you're building a self-driving car.

Here's something that AI can do pretty well, which is to take a picture of what's in front of your car, maybe just using a camera, maybe using other sensors as well such as radar or LiDAR, and then to figure out what is the position, or where are the other cars. So, this would be an AI where the input A is a picture of what's in front of your car, or maybe both a picture as well as radar and other sensor readings, and the output B is: where are the other cars? And today, the self-driving car industry has figured out how to collect enough data and has pretty good algorithms for doing this reasonably well. So, that's what AI today can do.

Here's an example of something that today's AI cannot do or at least would be very difficult using today's AI, which is to input a picture and output the intention or whatever the human is trying to gesture at your car. So,

here's a construction worker holding out a hand to ask your car to stop. Here's a

hitchhiker trying to wave a car over.

Here's a bicyclist raising the left hand to indicate that they want to turn left.

And so if you were to try to build a system to learn an A to B mapping where the input A is a short video of a human gesturing at your car and the output B is what's the intention or what does

this person want? That today is very difficult to do. Part of the problem is that the number of ways people gesture at you is very very large. Imagine all

the hand gestures someone could conceivably use to ask you to slow down or go or stop. The number of ways that people could gesture at you is just very, very large. And so it's difficult to collect enough data, from enough thousands or tens of thousands of different people gesturing at you in all of these different ways, to capture the richness of human gestures. So learning from a video to what this person wants is actually a somewhat complicated concept. In fact, even people sometimes have a hard time figuring out what someone waving at your car wants. And

then second, because this is a safety-critical application, you would want an AI that is extremely accurate in terms of figuring out whether a construction worker wants you to stop or wants you to go, and that makes it harder for an AI system as well. And so today, if you collect just, say, 10,000 pictures of other cars, many teams will be able to build an AI system that at least has a basic capability at detecting other cars. In contrast, even if you collect pictures or videos of 10,000 people, it's quite hard to track down 10,000 people waving at your car. And even with that data set, I think it's quite hard today to build an AI system that recognizes human intention from gestures at the very high level of accuracy needed in order to drive safely around these people. So that's why today many self-driving car teams have some component for detecting other cars, and they do rely on that technology to drive safely. But very few self-driving car teams are trying to count on an AI system to recognize a huge diversity of human gestures, and counting just on that to drive safely around people. Let's

look at one more example. Say you want to build an AI system to look at X-ray images and diagnose pneumonia. So all of these are chest X-rays. So the input A could be the X-ray image and the output

B can be the diagnosis. Does this

patient have pneumonia or not? So that's

something that AI can do. Something that

AI cannot do would be to diagnose pneumonia from just 10 images and a medical textbook chapter explaining pneumonia. A human can look at a small set of images, maybe just a few dozen images, and read a few paragraphs from a medical textbook and start to get a sense. But I actually don't know, given a medical textbook, what is A and what is B, or how to really pose this as an AI problem that I know how to write a piece of software to solve, if all you have is just 10 images and a few paragraphs of text that explain what pneumonia in a chest X-ray looks like. Whereas a young medical doctor might learn quite well reading a medical textbook and just looking at, you know, maybe dozens of images, an AI system isn't really able to do that

today. To summarize, here are some of

today. To summarize, here are some of the strengths and weaknesses of machine learning. Machine learning tends to work

learning. Machine learning tends to work well when you're trying to learn a simple concept such as something that you could do with less than a second of mental thought and when there's lots of data available. Machine learning tends

data available. Machine learning tends to work poorly when you're trying to learn a complex concept from small amounts of data. A second

underappreciated weakness of AI is that it tends to do poorly when it's asked to perform on new types of data that's different than the data it has seen in your data set. Let me explain with an example. Say you built a supervised

example. Say you built a supervised learning system that uses A to B to learn to diagnose pneumonia from images like these. These are, you know, pretty

like these. These are, you know, pretty high quality chest X-ray images. But

now, let's say you take this AI system and apply it at a different hospital or a different medical center where maybe the X-ray technician somehow strangely had the patients always lie at an angle or sometimes there are these defects.

Not sure if you can see the little scratches in the image, these little other objects lying on top of the patient. If the AI system has learned from data like that on your left, maybe taken from a high quality medical center, and you take this AI system and apply it to a different medical center that generates images like those on the right, then its performance will be quite poor as well. A good AI team will be able to ameliorate, or to reduce, some of these problems, but doing this is not

that easy. And this is one of the areas where AI is actually much weaker than humans. If a human has learned from images on the left, they're much more likely to be able to adapt to images like those on the right, as they figure out that the patient is just lying at an angle. But an AI system can be much less robust than human doctors in generalizing, or figuring out what to do with new types of data like this. I hope

these examples are helping you hone your intuitions about what AI can and cannot do. In case the boundary between what it can and cannot do still seems fuzzy to you, don't worry. That's completely

normal, completely okay. In fact, even today, I still can't look at a project and immediately tell whether it's feasible or not. I often still need a small number of weeks of technical diligence before forming

stronger conviction about whether something is feasible or not. But I hope that these examples can at least help you start imagining some things in your company that might be feasible and might

be worth exploring more. The next two videos after this are optional and give a non-technical description of what neural networks are and what deep learning is. Please feel free to watch those. And then next week we'll go much more deeply into the process of what building an AI project would look like.

Look forward to seeing you next week.

The terms deep learning and neural network are used almost interchangeably in AI. And even though they're great for machine learning, there's also been a bit of hype and a bit of mystique about them. This video will demystify deep learning so that you have a sense of what deep learning and neural networks really are. Let's use an example from demand prediction. Let's say you run a website that sells t-shirts, and you want to know, based on how you price the t-shirts, how many units you expect to sell. You might then create a data set like this, where the higher the price of the t-shirt, the lower the demand. So

you might fit a straight line to this data showing that as the price goes up, the demand goes down. Now demand can never go below zero. So maybe you say that the demand will flatten out at zero

and beyond a certain point you expect, you know, pretty much no one to buy any t-shirts. It turns out this blue line is maybe the simplest possible neural network. You have as input the price A, and you want it to output the estimated demand B. So the way you would draw this as a neural network is that the price will be input to this little round thing there, and this little round thing

outputs the estimated demand. In the

terminology of AI, this little round thing here is called a neuron, or sometimes it's called an artificial neuron. And all it does is compute this blue curve that I've drawn here on the left.
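As a rough sketch, this single neuron can be written as a tiny function: it takes the price as input and outputs an estimated demand that falls as the price rises and flattens out at zero. The slope and intercept below are made-up illustrative numbers, not values from the lecture.

```python
# A single artificial neuron for demand prediction (illustrative numbers).
# Demand falls as price rises, then flattens out at zero, since demand
# can never be negative -- the same shape as the blue curve in the video.
def estimate_demand(price):
    return max(0.0, 1000.0 - 20.0 * price)

print(estimate_demand(10.0))   # mid-range price -> positive demand
print(estimate_demand(100.0))  # very high price -> demand flattens at 0
```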

This is maybe the simplest possible neural network, with a single artificial neuron that just inputs the price and outputs the estimated demand. If you think of this orange circle, this artificial neuron, as a little Lego brick, then a neural network is just what you get when you take a lot of these Lego bricks and stack them on top of each other until you have a big tower, or a big network, of these neurons. Let's look at a more complex example. Suppose that

instead of knowing only the price of the t-shirts, you also have the shipping cost that the customers will have to pay to get the t-shirts, maybe you spend

more or less on marketing in a given week. And you can also make the t-shirt out of a thick, heavy, expensive cotton or a much cheaper, more lightweight material. These are some of the factors that you think will affect the demand for your t-shirts. Let's see what a more complex neural network might look like. You know that your customers care a lot about affordability. So let's say you have one neuron, and let me draw this one in blue, whose job it is to estimate

the affordability of the t-shirts. And affordability is mainly a function of the price of the shirts and of the shipping costs. A second thing that will affect the demand for your t-shirts is awareness. How much are consumers aware that you're selling this t-shirt? So the

main thing that affects awareness is going to be your marketing. So let me draw here a second artificial neuron that inputs your marketing budget. How

much you spent on marketing, and outputs how aware consumers are of your t-shirt.

Finally, the perceived quality of your product will also affect demand. And

perceived quality would be affected by marketing, where the marketing tries to convince people this is a high quality t-shirt, and sometimes the price of something also affects perceived quality. So I'm going to draw here a third artificial neuron that inputs price, marketing, and material, and tries to estimate the perceived

quality of your t-shirts. Finally, now that the earlier neurons, these three blue neurons, have figured out the affordability, the consumer awareness, and the perceived quality, you can then have one more neuron over here that takes as input these three factors and outputs the estimated demand. So this is a

neural network, and its job is to learn to map from these four inputs, that's the input A, to the output B, the demand. So it learns this input-to-output, or A to B, mapping. This is a fairly small neural network with just four artificial neurons. In practice, neural networks used today are much larger, with easily thousands, tens of thousands, or even far more neurons. Now there's just one

final detail of this description that I want to clean up, which is that in the way I've described a neural network, it was as if you had to figure out that the key factors are affordability, awareness, and perceived quality. One of the wonderful things about using neural networks is that to train a neural network, in other words, to build a machine learning system using a neural network, all you have to do is give it the input A and the output B, and it figures out all of the things in the middle by itself. So to build a neural network, what you would do is feed it lots of data with the input A, and have a neural network that just looks like this, with a few blue neurons feeding to a yellow output neuron. And then you have to give it data with the demand B

as well. And it's the software's job to figure out what these blue neurons should be computing, so that it can completely automatically learn the most accurate possible function mapping from

the input A to the output B. And it

turns out that if you give this enough data and train a neural network that is big enough, it can do an incredibly good job mapping from inputs A to outputs B. So that's a neural network: a group of artificial neurons, each of which computes a relatively simple function. But when you stack enough of them together, like Lego bricks, they can compute incredibly complicated functions that give you very accurate mappings from the input A to the output B. Now, in this video, you saw an example of neural networks applied to demand prediction. Let's go on to the next video to see a more complex example of neural networks applied to face recognition.
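As a hedged sketch, the little four-neuron demand network from this video can be hand-wired in a few lines. In a real neural network the weights below would be learned automatically from A-to-B data; here they are made-up numbers purely to show the structure: three intermediate neurons feeding one output neuron.

```python
def relu(x):
    """The flatten-at-zero shape each artificial neuron computes."""
    return max(0.0, x)

def estimate_demand(price, shipping, marketing, material):
    # Layer 1: the three "blue" neurons (weights are illustrative, not learned)
    affordability = relu(10.0 - 0.5 * (price + shipping))
    awareness = relu(0.8 * marketing)
    quality = relu(0.5 * marketing + 0.5 * material)
    # Output neuron: combines the three intermediate values into demand
    return relu(2.0 * affordability + 1.5 * awareness + 1.0 * quality)

print(estimate_demand(price=10, shipping=2, marketing=5, material=1))
```

Training replaces the hand-picked weights with numbers learned from data; the shape of the network, layers of simple neurons feeding later neurons, stays the same.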

In the last video, you saw how a neural network can be applied to demand prediction. But how can a neural network look at a picture and figure out what's in the picture, or listen to an audio clip and understand what is said in it? Let's

take a look at a more complex example of applying a neural network to face recognition. Say you want to build a system to recognize people from pictures. How can a piece of software look at this picture and figure out the identity of the person in it? Let's zoom

in to a little square like that to better understand how a computer sees pictures. Where you and I see a human eye, a computer instead sees a grid of pixel brightness values that tells it, for each of the pixels in the image, how bright is that

pixel. If it were a black and white or grayscale image, then each pixel would correspond to a single number telling you how bright that pixel is. If it's a color image, then each pixel would

actually have three numbers corresponding to how bright are the red, green, and blue elements of that pixel.

So the neural network's job is to take as input a lot of numbers like these and tell you the name of the person in the picture. In the last video, you saw how a neural network can take as input four numbers corresponding to the price, shipping cost, amount of marketing, and the cloth material of a t-shirt, and output demand. In this example, the neural network just has to input a lot more numbers, corresponding to all of the pixel brightness values of this picture.

If the resolution of this picture is 1,000 pixels by 1,000 pixels, then that's a million pixels. So, if it were a black and white or grayscale image,

this neural network would take as input a million numbers corresponding to the brightness of all 1 million pixels in this image. Or if it were a color image, it would take as input three million numbers, corresponding to the red, green, and blue values of each of these 1

million pixels in this image. Similar to

before, you will have many, many of these artificial neurons computing various values. And it's not your job to figure out what these neurons should compute. The neural network will figure it out by itself. But typically, when you give it an image, the neurons in the earlier parts of the neural network will learn to detect edges in pictures, and then ones a little bit later will learn to detect

parts of objects. So it may learn to detect eyes and noses and the shape of cheeks and the shape of mouths and then the later neurons further to the right will learn to detect different shapes of

faces, and it will finally put all of this together to output the identity of the person in the image. And again, part of the magic of neural networks is that you don't really need to worry about

what it is doing in the middle. All you

need to do is give it a lot of data with pictures like this as the input A, as well as the correct identity as the output B, and the learning algorithm will figure out by itself what each of these neurons in the middle

should be computing. Congratulations on

finishing all the videos for this week.

You now know how machine learning and data science work. I look forward to seeing you in next week's videos as well, where you learn how to build your own machine learning or data science

project. See you next week.
