
Google’s AI Course for Beginners (in 10 minutes)!

By Jeff Su

Summary

Key takeaways

  • AI is a broad field, ML is a subfield: Artificial Intelligence (AI) is a broad field of study, similar to physics. Machine Learning (ML) is a subfield within AI, much like thermodynamics is a subfield of physics. [00:39], [00:48]
  • Supervised vs. unsupervised learning: Supervised learning models use labeled data to make predictions and refine them by comparing to the training data. Unsupervised learning models use unlabeled data to identify natural groupings within the data, without refinement. [01:54], [03:18]
  • Deep learning and semi-supervised learning: Deep learning, a subset of machine learning, uses artificial neural networks inspired by the human brain. Semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data for training, as seen in fraud detection. [03:33], [04:04]
  • Generative AI creates new content: Generative AI models learn patterns from data and then create entirely new outputs like text, images, or audio, unlike discriminative models, which only classify existing data. [05:18], [05:51]
  • LLMs are pre-trained, then fine-tuned: Large Language Models (LLMs) are pre-trained on vast datasets for general language tasks and then fine-tuned with industry-specific data to perform specialized functions in fields like healthcare or finance. [07:21], [08:05]

Topics Covered

  • Unpacking AI: Understanding its Core Subfields
  • Labeled vs. Unlabeled Data: Two ML Paradigms
  • Deep Learning: Efficient Training with Less Labeled Data
  • Generative AI: Creating New Content, Not Just Classifying
  • LLM Fine-tuning: Customizing AI for Industry-Specific Needs

Full Transcript

If you don't have a technical background but you still want to learn the basics of artificial intelligence, stick around, because we're distilling Google's 4-hour AI course for beginners into just 10 minutes. I was initially very skeptical, because I thought the course would be too conceptual (we're all about practical tips on this channel) and, knowing Google, the course might just disappear after a year. But I found the underlying concepts actually made me better at using tools like ChatGPT and Google Bard, and cleared up a bunch of misconceptions I didn't know I had about AI, machine learning, and large language models.

So, starting with the broadest possible question: what is artificial intelligence? It turns out, and I'm so embarrassed to admit I didn't know this, AI is an entire field of study, like physics, and machine learning is a subfield of AI, much like how thermodynamics is a subfield of physics. Going down another level, deep learning is a subset of machine learning, and deep learning models can be further broken down into discriminative models and generative models. Large language models (LLMs) also fall under deep learning, and right at the intersection between generative models and LLMs is the technology that powers the applications we're all familiar with: ChatGPT and Google Bard. Let me know in the comments if this was news to you as well.

Now that we have an understanding of the overall landscape and you see how the different disciplines sit in relation to each other, let's go over the key takeaways you should know for each level. In a nutshell, machine learning is a program that uses input data to train a model; that trained model can then make predictions based on data it has never seen before. For example, if you train a model on Nike sales data, you can then use that model to predict how well a new shoe from Adidas would sell, based on Adidas sales data.

Two of the most common types of machine learning models are supervised and unsupervised learning models. The key difference between the two is that supervised models use labeled data and unsupervised models use unlabeled data. In this supervised example, we have historical data points that plot the total bill amount at a restaurant against the tip amount, and here the data is labeled: a blue dot means the order was picked up, and a yellow dot means the order was delivered. Using a supervised learning model, we can now predict how much tip to expect for the next order, given the bill amount and whether it's picked up or delivered.
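The supervised setup above can be sketched as a tiny hand-rolled linear model. The bill/tip numbers, weights, and learning rate below are all invented for illustration (the course doesn't prescribe any code); a real project would use a proper library, but the idea is the same: predict, compare to the labeled answer, adjust.

```python
# Toy supervised learning: predict the tip from the bill amount and
# whether the order was delivered. Data and hyperparameters are made up.
data = [  # (bill, delivered: 1 = delivered / 0 = picked up, tip)
    (20.0, 0, 3.0), (40.0, 0, 6.0), (60.0, 0, 9.0),
    (20.0, 1, 4.0), (40.0, 1, 8.0), (60.0, 1, 12.0),
]

w_bill, w_delivered, bias = 0.0, 0.0, 0.0
lr = 0.0001  # learning rate

for _ in range(20000):  # repeatedly compare predictions to labels
    for bill, delivered, tip in data:
        pred = w_bill * bill + w_delivered * delivered + bias
        err = pred - tip            # gap between prediction and label
        w_bill -= lr * err * bill   # nudge weights to shrink that gap
        w_delivered -= lr * err * delivered
        bias -= lr * err

def predict(bill, delivered):
    return w_bill * bill + w_delivered * delivered + bias
```

Calling `predict(50, 1)` then estimates the tip for a hypothetical $50 delivered order.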

Unsupervised learning models look at the raw data and see if it naturally falls into groups. In this example, we plotted employee tenure at a company against income. We see this group of employees has a relatively high income-to-years-worked ratio versus this group. Note that all of these are unlabeled data points; if they were labeled, we would see male/female, years worked, company function, etc. We can now ask this unsupervised learning model to solve a problem, like: if a new employee joins, are they on the fast track or not? If they appear on the left, then yes; if they appear on the right, then no.
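The grouping step described above can be sketched as a minimal k-means clustering loop. The tenure/income numbers and the two-cluster choice are hypothetical, purely to mirror the example.

```python
# Toy unsupervised learning: group employees by (tenure, income) with no labels.
points = [(1, 90), (2, 110), (3, 130),    # high income relative to years worked
          (8, 60), (9, 70), (10, 80)]     # lower income-to-tenure ratio

def dist2(p, q):
    # squared Euclidean distance between two points
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

centroids = [points[0], points[-1]]  # naive initialization
for _ in range(10):
    # assign every point to its nearest centroid
    clusters = [[], []]
    for p in points:
        nearest = min((0, 1), key=lambda i: dist2(p, centroids[i]))
        clusters[nearest].append(p)
    # move each centroid to the mean of its cluster
    centroids = [
        (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
        for c in clusters
    ]

def assign(p):
    # e.g. which group does a new hire fall into?
    return min((0, 1), key=lambda i: dist2(p, centroids[i]))
```

No labels are ever supplied; the groups simply emerge from the data, which is exactly the unsupervised idea.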

Pro tip: another big difference between the two models is that after a supervised learning model makes a prediction, it will compare that prediction to the training data used to train the model, and if there's a difference, it tries to close that gap. Unsupervised learning models do not do this.

By the way, this video is not sponsored, but it is supported by those of you who subscribe to my paid productivity newsletter on Google tips; link in the description if you want to learn more.

Now that we have a basic grasp of machine learning, it's a good time to talk about deep learning, which is just a type of machine learning that uses something called artificial neural networks. Don't worry: all you have to know for now is that artificial neural networks are inspired by the human brain and look something like this: layers of nodes and neurons, and the more layers there are, the more powerful the model.
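The "layers of nodes and neurons" picture can be sketched as a forward pass through a tiny network. The weights below are arbitrary illustrative numbers, not a trained model.

```python
import math

def layer(inputs, weights, biases):
    # one layer: every node weighs all of its inputs, adds a bias, and
    # squashes the result with a sigmoid activation
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                            # input: 2 features
hidden = layer(x, [[0.4, 0.3], [-0.6, 0.9]], [0.1, 0.0])   # hidden layer: 2 nodes
output = layer(hidden, [[1.2, -0.7]], [0.05])              # output layer: 1 node
```

Stacking more calls to `layer` adds depth, which is the sense in which more layers make a more powerful model.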

Because we have these neural networks, we can now do something called semi-supervised learning, whereby a deep learning model is trained on a small amount of labeled data and a large amount of unlabeled data. For example, a bank might use deep learning models to detect fraud. The bank spends a bit of time to tag, or label, 5% of transactions as either fraudulent or not fraudulent, and leaves the remaining 95% of transactions unlabeled, because it doesn't have the time or resources to label every transaction. The magic happens when the deep learning model uses the 5% of labeled data to learn the basic concepts of the task (okay, these transactions are good and these are bad), applies those learnings to the remaining 95% of unlabeled data, and, using this new aggregate data set, makes predictions for future transactions. That's pretty cool.
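That labeled/unlabeled flow can be miniaturized as a self-training sketch. The transaction amounts and the crude threshold rule are invented stand-ins for a real fraud model; only the three-step structure (learn from the labeled slice, pseudo-label the rest, retrain on the aggregate) mirrors the idea.

```python
# Semi-supervised sketch: a few labeled transactions plus many unlabeled ones.
labeled = [(120.0, "ok"), (95.0, "ok"), (4800.0, "fraud"), (5200.0, "fraud")]
unlabeled = [80.0, 150.0, 4500.0, 60.0, 5100.0, 200.0, 4900.0]  # amounts only

# Step 1: learn a crude rule from the small labeled set
ok_amounts = [a for a, y in labeled if y == "ok"]
fraud_amounts = [a for a, y in labeled if y == "fraud"]
threshold = (max(ok_amounts) + min(fraud_amounts)) / 2

# Step 2: pseudo-label the large unlabeled set with that rule
pseudo = [(a, "fraud" if a > threshold else "ok") for a in unlabeled]

# Step 3: retrain on the aggregate (labeled + pseudo-labeled) data
all_data = labeled + pseudo
threshold = (max(a for a, y in all_data if y == "ok")
             + min(a for a, y in all_data if y == "fraud")) / 2

def classify(amount):
    return "fraud" if amount > threshold else "ok"
```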

And we're not done, because deep learning can be divided into two types: discriminative models and generative models. Discriminative models learn from the relationship between labels and data points, and only have the ability to classify those data points: fraud or not fraud. For example, you have a bunch of pictures (data points), and you purposefully label some of them as cats and some of them as dogs. A discriminative model will learn from the labels "cat" and "dog", and if you submit a picture of a dog, it will predict the label for that new data point: a dog.
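A toy discriminative classifier along those lines: it learns only from labeled examples and can only ever assign one of those labels to a new data point. The (weight, ear length) features and all the numbers are made up for illustration.

```python
# Discriminative sketch: nearest-centroid classification of labeled animals.
# Features are hypothetical (weight_kg, ear_length_cm) pairs.
training = [((4.0, 6.0), "cat"), ((3.5, 7.0), "cat"),
            ((20.0, 10.0), "dog"), ((25.0, 12.0), "dog")]

def centroid(label):
    # average position of all training points carrying this label
    pts = [f for f, y in training if y == label]
    return tuple(sum(v) / len(pts) for v in zip(*pts))

centroids = {y: centroid(y) for y in ("cat", "dog")}

def predict_label(features):
    # assign whichever labeled group the new data point sits closest to
    return min(centroids,
               key=lambda y: sum((a - b) ** 2
                                 for a, b in zip(features, centroids[y])))
```

Note what it cannot do: the output is always one of the existing labels, never a new image, which is exactly the contrast with generative models that comes next.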

We finally get to generative AI. Unlike discriminative models, generative models learn the patterns in the training data; then, after they receive some input (for example, a text prompt from us), they generate something new based on the patterns they just learned. Going back to the animal example, the pictures (data points) are not labeled as cat or dog, so a generative model will look for patterns: oh, these data points all have two ears, four legs, and a tail, like dog food, and bark. When asked to generate something called a "dog", the generative model generates a completely new image based on the patterns it just learned.

There's a super simple way to determine if something is generative AI or not: if the output is a number, a classification (spam / not spam), or a probability, it is not generative AI. It is gen AI when the output is natural language (text or speech), an image, or audio. Basically, generative AI generates new samples that are similar to the data it was trained on.
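That rule of thumb can be written down as a tiny check. The output-kind names below are my own shorthand for the examples mentioned in the video, not an official taxonomy.

```python
# Rule of thumb from the video: numbers, classifications, and probabilities
# point to a discriminative model; text, speech, images, audio, and video
# point to generative AI.
def is_generative_ai(output_kind):
    discriminative_outputs = {"number", "classification", "probability"}
    generative_outputs = {"text", "speech", "image", "audio", "video"}
    if output_kind in discriminative_outputs:
        return False
    if output_kind in generative_outputs:
        return True
    raise ValueError(f"unrecognized output kind: {output_kind}")
```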

Moving on to different generative AI model types: most of us are familiar with text-to-text models like ChatGPT and Google Bard. Other common model types include text-to-image models like Midjourney, DALL-E, and Stable Diffusion; these can not only generate images but edit them as well. Text-to-video models, surprise surprise, can generate and edit video footage; examples include Google's Imagen Video, CogVideo, and the very creatively named Make-A-Video. Text-to-3D models are used to create game assets; a little-known example would be OpenAI's Shap-E model. And finally, text-to-task models are trained to perform a specific task; for example, if you type "Gmail, summarize my unread emails," Google Bard will look through your inbox and summarize your unread emails.

Moving over to large language models: don't forget that LLMs are also a subset of deep learning, and although there is some overlap, LLMs and gen AI are not the same thing. An important distinction is that large language models are generally pre-trained on a very large data set and then fine-tuned for specific purposes. What does that mean? Imagine you have a pet dog. It can be pre-trained with basic commands like sit, come, down, and stay; it's a good boy and a generalist. But if that same good boy goes on to become a police dog, a guide dog, or a hunting dog, it needs to receive specific training, so it's fine-tuned for that specialist role. A similar idea applies to large language models: they're first pre-trained to solve common language problems like text classification, question answering, document summarization, and text generation. Then, using smaller industry-specific data sets, these LLMs are fine-tuned to solve specific problems in retail, finance, healthcare, entertainment, and other fields. In the real world, this might mean a hospital uses a pre-trained large language model from one of the big tech companies and fine-tunes that model with its own first-party medical data to improve diagnostic accuracy from X-rays and other medical tests.
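The pre-train-then-fine-tune idea can be miniaturized: a frozen, general-purpose "base" representation plus a small head trained on a tiny domain-specific data set. The feature map, the numbers, and the task (flagging abnormal test results) are all invented for illustration; real fine-tuning uses large models and frameworks, but the division of labor is the same.

```python
import math

def base_features(x):
    # stands in for a pre-trained model's frozen representation
    return [x, x * x]

# small domain-specific labeled set: (measurement, 1 = abnormal)
domain_data = [(0.2, 0), (0.4, 0), (1.6, 1), (1.8, 1)]

w, b = [0.0, 0.0], 0.0
for _ in range(2000):  # fine-tune only the head; base_features stays frozen
    for x, y in domain_data:
        f = base_features(x)
        p = 1 / (1 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))  # sigmoid
        for i in range(2):
            w[i] -= 0.1 * (p - y) * f[i]  # logistic-regression gradient step
        b -= 0.1 * (p - y)

def flag_abnormal(x):
    f = base_features(x)
    return w[0] * f[0] + w[1] * f[1] + b > 0
```

Only the small head (`w`, `b`) is updated by the domain data, which is why an institution with a modest data set can still specialize a big pre-trained model.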

This is a win-win scenario, because large companies can spend billions developing general-purpose large language models and then sell those LLMs to smaller institutions, like retail companies, banks, and hospitals, who don't have the resources to develop their own large language models but do have the domain-specific data sets to fine-tune those models.

Pro tip: if you do end up taking the full course (I'll link it down below; it's completely free), when you're taking notes you can right-click on the video player and copy the video URL at the current time, so you can quickly navigate back to that specific part of the video. There are five modules total, and you get a badge after completing each module. The content overall is a bit more on the theoretical side, so you definitely want to check out this video on how to master prompting next. See you in the next video; in the meantime, have a great one!
