Algorithms and Data Structures Tutorial - Full Course for Beginners

By freeCodeCamp.org

Summary

Key takeaways

  • **Algorithms are more than just code**: Algorithms are defined as a set of steps or instructions for completing a task, applicable to everyday routines like recipes or driving directions, not just computer programs. [02:59:29]
  • **Algorithmic thinking is key**: Understanding algorithms involves not just knowing existing solutions but also recognizing when to apply them by analyzing the problem at hand. [05:34:39]
  • **Linear Search: Simple but Slow**: Linear search, like checking every book in a bookstore sequentially, has a worst-case runtime of O(n) because it might need to examine every element. [14:24:42], [25:27]
  • **Binary Search: Faster with Sorted Data**: Binary search, which repeatedly halves the search range, is significantly faster with a worst-case runtime of O(log n), but requires the data to be sorted first. [30:31:34], [36:39]
  • **Big O simplifies complexity**: Big O notation provides a theoretical upper bound on an algorithm's complexity, simplifying its performance as a function of input size (n) into a single variable like O(n) or O(log n). [39:34], [41:51]
  • **Linked Lists: Flexible inserts, slower access**: Linked lists offer efficient O(1) insertion at the head but require O(n) time for searching or inserting at arbitrary positions due to sequential traversal. [32:36:07], [33:33]

Topics Covered

  • Why should you learn established algorithms?
  • Algorithmic thinking is more valuable than memorizing code.
  • Always measure an algorithm by its worst-case scenario.
  • Logarithmic algorithms scale with incredible efficiency.
  • An algorithm's memory usage can be a hidden cost.

Full Transcript

this is a full-length course from

treehouse we at free code camp are

longtime fans of their learning platform

they were kind enough to let our

non-profit make this course freely

available on our youtube channel if you

like this course treehouse has a lot

more courses like this one the link is

in the description

along with time codes to the different

sections in this course

hi my name is pasan i'm an instructor

here at treehouse and welcome to

introduction to algorithms

whether you are a high school or college

student a developer in the industry or

someone who is learning to code you have

undoubtedly run into the term algorithm

for many people this word is kind of

scary it represents this body of

knowledge that seems just out of reach

only people with computer science

degrees know about algorithms now to

others this brings up feelings of

imposter syndrome

you might already know how to code but

you're not a real developer because you

don't know anything about algorithms

personally it made me frame certain jobs

as above my skill level because the

interview contained algorithm questions

well whatever your reasons are in this

course our goal is to dispel all those

feelings and get you comfortable with

the basics of algorithms

like any other subject i like to start

my courses with what the course is and

is not

in this course we're going to cover the

very basic set of knowledge that you

need as a foundation for learning about

algorithms

this course is less about specific

algorithms and more about the tools you

will need to evaluate algorithms

understand how they perform

compare them to each other and make a

statement about the utility of an

algorithm in a given context

now don't worry none of this will be

theoretical and we will learn these

concepts by using well-known algorithms

in this course we will also be writing

code so i do expect you to have some

programming experience if you intend to

continue with this topic

you can definitely stick around even if

you don't know how to code but you might

want to learn the basics of programming

in the meantime

in this course we will be using the

python programming language python reads

a lot like regular english and is the

language you will most likely encounter

when learning about algorithms these

days

if you don't know how to code or if you

know how to code in a different language

check out the notes section of this

video for links to other content that

might be useful to you

as long as you understand the

fundamentals of programming you should

be able to follow along pretty well

if you're a javascript developer or a

student who's learning javascript for

example chances are good that you'll

still be able to understand the code we

write later i'll be sure to provide

links along the way if you need anything

to follow up on

let's start with something simple

what is an algorithm

an algorithm is a set of steps or

instructions for completing a task

this might sound like an over

simplification but really that's

precisely what an algorithm is

a recipe is an algorithm your morning

routine when you wake up is an algorithm

and the driving directions you follow to

get to a destination is also an

algorithm

in computer science the term algorithm

more specifically means the set of steps

a program takes to finish a task

if you've written code before any code

really generally speaking you have

written an algorithm

given that much of the code we write can

be considered an algorithm what do

people mean when they say you should

know about algorithms

now consider this

let's say i'm a teacher in a classroom

and i tell everyone i have an assignment

for them on their desks they have a

picture of a maze and their task is to

come up with a way to find the quickest

way out of the maze

everyone does their thing and comes up

with a solution

every single one of these solutions is a

viable solution and is a valid example

of an algorithm the steps one needs to

take to get out of the maze but from

being in classrooms or any group of any

sort you know that some people will have

better ideas than others we all have a

diverse array of skill sets

over time our class picks the best of

these solutions and any time we want to

solve a maze we go with one of these

solutions this is what the field of

algorithms is about

there are many problems in computer

science but some of them are pretty

common regardless of what project you're

working on

different people have come up with

different solutions to these common

problems and over time the field of

computer science has identified several

that do the job well for a given task

when we talk of algorithms we're

referring to two points

we're primarily saying there's an

established body of knowledge on how to

solve particular problems well and it's

important to know what the solutions are

now why is it important

if you're unaware that a solution exists

you might try to come up with one

yourself and there's a likelihood that

your solution won't be as good or

efficient whatever that means compared

to those that have been thoroughly

reviewed

but there's a second component to it as

well

part of understanding algorithms is not

just knowing that an algorithm exists

but understanding when to apply it

understanding when to apply an algorithm

requires properly understanding the

problem at hand and this arguably is the

most important part of learning about

algorithms and data structures

as you progress through this content you

should be able to look at a problem and

break it down into distinct steps

when you have a set of steps you should

then be able to identify which algorithm

or data structure is best for the task

at hand

this concept is called algorithmic

thinking and it's something we're going

to try and cultivate together as we work

through our content

lastly learning about algorithms gives

you a deeper understanding about

complexity and efficiency in programming

having a better sense of how your code

will perform in different situations is

something that you'll always want to

develop and hone

algorithmic thinking is why algorithms

also come up in big tech interviews

interviewers don't care as much that you

are able to write a specific algorithm

in code but more about the fact that you

can break a seemingly insurmountable

problem into distinct components and

identify the right tools to solve each

distinct component

and that is what we plan on doing in

this course though we're going to focus

on some of the tools and concepts you'll

need to be aware of before we can dive

into the topic of algorithms if you're

ready let's get started

hey again in this video we're going to

do something unusual we're going to play

a game and by we i mean me and my two

friends here brittany and john

this game is really simple and you may

have played it before it goes something

like this i'm going to think of a number

between 1 and 10 and they have to guess

what the number is easy right

when they guess a number i'll tell them

if their guess is too high or too low

the winner is the one with the fewest

tries all right john let's start with

you i'm thinking of a number between one

and ten what is it

between you and me the answer is three

uh quick question does the range include

one and ten

that is a really good question so what

john did right there was to establish

the bounds of our problem

no solution works on every problem and

an important part of algorithmic

thinking is to clearly define what the

problem set is and clarify what values

count as inputs

yeah 1 and ten are both included is it

one too low is it two too low is it

three correct

okay so that was an easy one it took

john three tries to get the answer let's

switch over to brittany and play another

round using the same number as the

answer okay brittany i'm thinking of a

number between 1 and 10 inclusive so

both 1 and 10 are in the range what

number am i thinking of

is it 5 too high

2 too low

is it 3 correct all right so what we had

there was two very different ways of

playing the same game

somehow with even such a simple game we

saw different approaches to figuring out

a solution

to go back to algorithmic thinking for a

second this means that with any given

problem there's no one best solution

instead what we should try and figure

out is what solution works better for

the current problem

in this first pass at the game they both

took the same amount of turns to find

the answer so it's not obvious who has

the better approach and that's mostly

because the game was easy

let's try this one more time now this

time the answer is 10.

all right john you first is it one too

low is it two still too low is it three

too low is it four too low is it five

still too low is it six too low is it

seven too low is it eight too low is it nine

too low is it ten correct you got it okay

so now same thing but with britney this

time

is it five too low

eight too low is it nine still too low

it's ten

all right so here we start to see a

difference between their strategies when

the answer was three they both took the

same number of turns this is important

when the number was larger but not that

much larger 10 in this case we start to

see that brittany's strategy did better she

took four tries while john took 10.

we've played two rounds so far and we've

seen a different set of results based on

the number they were looking for

if you look at john's way of doing

things then the answer being 10 the

round we just played is his worst case

scenario he will take the maximum number

of turns 10 to guess it

when we picked a random number like

three it was hard to differentiate which

strategy was better because they both

performed exactly the same

but in john's worst case scenario a

clear winner in terms of strategy

emerges

in terms of algorithmic thinking we're

starting to get a sense that the

specific value they're searching for may

not matter as much as where that value

lies in the range that they've been

given

identifying this helps us understand our

problem better

let's do this again for a range of

numbers from one to one hundred we'll

start by picking five as an answer to

trick them

okay so this time we're going to run

through the exercise again this time

from one to one hundred and both one and

one hundred are included is it one at

this point without even having to run

through it we can guess how many tries

john is going to take since he starts at

one and keeps going he's going to take

five tries as we're about to see is it

five cool correct

okay now for brittany's turn

is it 50 too high is it 25 still too

high

is it 13 too high is it seven

too high

is it four too low

is it six too high is it five correct

let's evaluate john took five tries

brittany on the other hand takes seven

tries so john wins this round but again

in determining whose strategy is

preferred there's no clear winner right

now

what this tells us is that it's not

particularly useful to look at the easy

answers where we arrive at the number

fairly quickly because it's at the start

of the range

instead let's try one where we know john

is going to do poorly let's look at his

worst case scenario where the answer is

100 and see how britney performs in such

a scenario

okay john let's do this one more time

one through 100 again

is it one we can fast forward this scene

because well we know what happens john

takes 100 tries

hi brittany you're up

is it 50 too low is it 75 too low 88 too

low

94 too low is it 97 too low

99 too low

100.

okay so that took brittany seven turns

again and this time she is the clear

winner if you compare their individual

performances for the same number set

you'll see that britney's approach

leaves john's in the dust

when the answer was five so right around

the start of the range john took five

turns but when the answer was 100 right

at the end of the range he took 100

tries it took him 20 times the amount of

tries to get that answer compared to

britney

on the other hand if you compare

britney's efforts when the number was 5

she took seven tries but when the number

was 100 she took the same amount of

tries this is pretty impressive if we

pretend that the number of tries is the

number of seconds it takes britney and

john to run through their attempts this

is a good estimate for how fast their

solutions are

ok we've done this a couple times and

brittany and john are getting tired

let's take a break in the next video

we'll talk about the point of this

exercise

in the last video we ran through an

exercise where i had some of my

co-workers guess what number i was

thinking so what was the point of that

exercise you might be thinking hey i

thought i was here to learn about

algorithms

the exercise we just did was an example

of a real life situation you will run

into when building websites apps and

writing code

both approaches taken by john and

brittany to find the number i was

thinking of are examples of searching

for a value

it might be weird to think that there's

more than one way to search but as you

saw in the game the speed at which the

result was obtained differed between

john and brittany think about this

problem from the perspective of a

company like facebook

at the time of this recording facebook

has 2.19 billion active users

let's say you're traveling in a

different country and meet someone you

want to add on facebook

you go into the search bar and type out

this person's name

if we simplify how the facebook app

works it has to search across these 2.19

billion records and find the person you

are looking for

the speed at which you find this person

really matters imagine what kind of

experience it would be if when you

search for a friend facebook put up a

spinning activity indicator and said

come back in a couple hours

i don't think we'd use facebook as much

if that was the case

from the company's perspective working

on making search as fast as possible

using different strategies really

matters

now i said that the two strategies

britney and john used were examples of

search

more specifically these are search

algorithms

the strategy john took where he started

at the beginning of the range and just

counted one number after the other is a

type of search called linear search

it is also called sequential search

which is a better description of how it

works or even simple search since it

really is quite simple

but what makes his approach an algorithm

as opposed to just looking for something

remember we said that an algorithm is a

set of steps or instructions to complete

a task

linear search is a search algorithm and

we can define it like this

we start at the beginning of the list or

the range of values

then we compare the current value to the

target

if the current value is the target value

that we're looking for we're done

if it's not we'll move on sequentially

to the next value in the list and then

repeat step 2.

if we reach the end of the list then the

target value is not in the list

this definition has nothing to do with

programming and in fact you can use it

in the real world for example

i could tell you to walk into a

bookstore and find me a particular book

and one of the ways you could do it is

using the linear search algorithm

you could start at the front of the

bookstore and read the cover or the

spine of every book to check that it

matches the book that you're looking for

if it doesn't you go to the next book

and repeat until you find it or run out

of books

what makes this an algorithm is the

specificity of how it is defined

in contrast to just jumping into a

problem and solving it as we go along an

algorithm follows a certain set of

guidelines and we use the same steps to

solve the problem each time we face it

an important first step to defining the

algorithm isn't the algorithm itself but

the problem we're trying to solve

our first guideline is that an algorithm

must have a clear problem statement

it's pretty hard to define an

instruction set when you don't have a

clear idea of what problem you're trying

to solve

in defining the problem we need to

specify how the input is defined and

what the output looks like when the

algorithm has done its job

for linear search the input can be

generally described as a series of

values and the output is a value

matching the one we're looking for

right now we're trying to stay away from

anything code related so this problem

statement definition is pretty generic

but once we get to code we can actually

tighten this up
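as a preview of what that tightened-up version might look like, here is a minimal python sketch of the linear search steps just described; the function name and the choice to return an index (or None when nothing is found) are my own, not the course's:

```python
def linear_search(values, target):
    """Return the index of target in values, or None if it's absent."""
    # Start at the beginning of the list and compare each value
    # to the target, moving sequentially through the list.
    for index, value in enumerate(values):
        if value == target:
            return index  # found the target: we're done
    # We reached the end of the list without a match,
    # so the target value is not in the list.
    return None
```

calling `linear_search([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 3)` examines three values before returning, mirroring john's three guesses in the first round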

once we have a problem an algorithm is a

set of steps that solves this problem

given that the next guideline is that an

algorithm definition must contain a

specific set of instructions in a

particular order

we really need to be clear about the

order in which these instructions are

executed

taking our simple definition of linear

search if i switched up the order and

said move sequentially to the next value

before specifying that first comparison

step if the first value were the target

one our algorithm wouldn't find it

because we moved to the second value

before comparing

now you might think okay that's just an

avoidable mistake and kind of common

sense

the thing is computers don't know any of

that and just do exactly as we tell them

so specific order is really important

the third guideline is that each step in

our algorithm definition must not be a

complex one and needs to be explicitly

clear what i mean by that is that you

shouldn't be able to break any of

the steps down further into additional

subtasks

each step needs to be a distinct one we

can't define linear search as search

until you find this value because that

can be interpreted in many ways and

further broken down into many more steps

it's not clear

next and this one might seem obvious but

algorithms should produce a result

if it didn't how would we know whether

the algorithm works or not

to be able to verify that our algorithm

works correctly we need a result

now when using a search algorithm the

end result can actually be nothing which

indicates that the value wasn't found

but that's perfectly fine

there are several ways to represent

nothing in code and as long as the

algorithm can produce some results we

can understand its behavior

the last guideline is that the algorithm

should actually complete and cannot take

an infinite amount of time

if we let john loose in the world's

largest library and asked him to find a

novel we have no way of knowing whether

he succeeded or not unless he came back

to us with a result

okay so quick recap what makes an

algorithm an algorithm and not just

something you do

one it needs to have a clearly defined

problem statement input and output

when using linear search the input needs

to be just a series of values but to

actually use brittany's strategy there's

one additional precondition so to speak

if you think about her strategy it

required that the numbers be sorted in

ascending order

this means that where the input for john

is just a series of values to solve the

problem the input to brittany's

algorithm needs to be a sorted series of

values

so clearly defined problem statement

clearly defined input and clearly

defined output

second the steps in the algorithm need

to be in a very specific order

the steps also need to be distinct you

should not be able to break it down into

further subtasks

next the algorithm should produce a

result

and finally the algorithm should

complete in a finite amount of time

these guidelines not only help us define

what an algorithm is but also help us

verify that the algorithm is correct

executing the steps in an algorithm for

a given input must result in the same

output every time

if in the game i played the answer was

50 every time then every single time

john must take 50 turns to find out that

the answer is 50. if somehow he takes 50

turns in one round then 30 the next then

we technically don't have a correct

algorithm

consistent results for the same set of

values is how we know that the algorithm

is correct

i should stress that we're not going to

be designing any algorithms on our own

and we'll start off and spend most of

our time learning the tried and true

algorithms that are known to efficiently

solve problems

the reason for talking about what makes

for a good algorithm though is that the

same set of guidelines makes for good

algorithmic thinking which is one of the

most important skills we want to

cultivate

when we encounter a problem before

rushing in and thinking about solutions

what we want to do is work through the

guidelines

first we break down the problem into any

possible number of smaller problems

where each problem can be clearly

defined in terms of an input and an

output

now that we know how to generally define

an algorithm let's talk about what it

means to have a good algorithm

an important thing to keep in mind is

that there's no one single way to

measure whether an algorithm is the

right solution because it is all about

context

earlier we touched on two concepts

correctness and efficiency

let's define correctness more clearly

because before we can evaluate an

algorithm on efficiency we need to

ensure its correctness

before we define our algorithms we start

by defining our problem

in the definition of that problem we

have a clearly defined input satisfying

any preconditions and a clearly defined

output

an algorithm is deemed correct if on

every run of the algorithm against all

possible values in the input data we

always get the output we expect

part of correctness means that for any

possible input the algorithm should

always terminate or end

if these two are not true then our

algorithm isn't correct

if you were to pick up an algorithms

textbook and look up correctness you

will run into a bunch of mathematical

theory this is because traditionally

algorithm correctness is proved by

mathematical induction which is a form

of reasoning used in mathematics to

verify that a statement is correct

this approach involves writing what is

called a specification and a correctness

proof

we won't be going into that in this

course

proof through induction is an important

part of designing algorithms but we're

confident that you can understand

algorithms both in terms of how and when

to use them without getting into the

math so if you pick up a textbook and

feel daunted don't worry i do too but we

can still figure things out without it

all right so once we have a correct

algorithm we can start to talk about how

efficient an algorithm is

remember that this efficiency ultimately

matters because efficient algorithms help us solve

problems faster and deliver a better end

user experience in a variety of fields

for example algorithms are used in the

sequencing of dna and more efficient

sequencing algorithms allow us to

research and understand diseases better

and faster but let's not get ahead of

ourselves we'll start simple by

evaluating john's linear search

algorithm in terms of its efficiency

first what do we mean by efficiency

there are two measures of efficiency

when it comes to algorithms time and

space sounds really cool and very sci-fi

huh

efficiency measured by time something

you'll hear called time complexity is a

measure of how long it takes the

algorithm to run

time complexity can be understood

generally outside the context of code

and computers because how long it takes

to complete a job is a universal measure

of efficiency the less time you take the

more efficient you are

the second measure of efficiency is

called space complexity and this is

pretty computer specific

it deals with the amount of memory taken

up on the computer

good algorithms need to balance between

these two measures to be useful

for example you can have a blazingly

fast algorithm but it might not matter

if the algorithm consumes more memory

than you have available

both of these concepts time and space

complexity are measured using the same

metric but it is a very technical

sounding metric so let's build up to it

slowly and start simple

a few videos ago i played a game with

brittany and john where they tried to

guess the number i was thinking of

effectively they were searching for a

value

so how do we figure out how efficient

each algorithm is and which algorithm

was more suited to our purposes

if we consider the number of tries they

took to guess or search for the value as

an indicator of the time they take to

run through the exercise this is a good

indicator of how long the algorithm runs

for a given set of values

this measurement is called the running

time of an algorithm and we'll use it to

define time complexity

in the game we played four rounds let's

recap those here focusing on john's

performance

in round one we had 10 values the target

was 3 and john took 3 turns

in round 2 we had 10 values the target

was 10 and john took 10 turns

in round 3 we had 100 values the target

was 5 and

john took five tries and finally in

round four when the target was 100 given

100 values john took 100 tries

on paper it's hard to gauge anything

about this performance

when it comes to anything with numbers

though i like to put it up on a graph

and compare visually

on the vertical or y-axis let's measure

the number of tries it took john to

guess the answer or the running time of

the algorithm on the horizontal or

x-axis what do we put

for each turn we have a number of values

as well as a target value

we could plot the target value on the

horizontal axis but that leaves some

context and meaning behind it's far more

impressive that john took five tries

when the range went up to 100 than when

he took three tries for a maximum of 10

values

we could plot the maximum range of

values but then we're leaving out the

other half of the picture

there are data points however that

satisfy both requirements

if we only plot the values where the

target the number john was looking for

was the same as the maximum range of

values we have a data point that

includes both the size of the data set

as well as his effort

there's an additional benefit to this

approach as well

there are three ways we can measure how

well john does or in general how well

any algorithm does

first we can check how well john does in

the best case or good scenarios from the

perspective of his strategy

in the range of 100 values the answer

being a low number like three at the

start of the range is a good scenario he

can guess it fairly quickly an answer of one is his

best case scenario

or we could check how well he does on

average we could run this game a bunch

of times and average out the running

time

this would give us a much better picture

of john's performance over time but our

estimates would be too high if the value

he was searching for was at the start of

the range or far too low if it was at

the end of the range

let's imagine a scenario where facebook

naively implements linear search when

finding friends

they looked at the latest u.s census saw

that 50% of names start with the letters

a through j which is the first 40% of the

alphabet and thought okay on average

linear search serves us well

but what about the rest of those whose

names start with the letter after j in

the alphabet

searching for my name would take longer

than the average and much longer for

someone whose name starts with the

letter z

so while measuring the run time of an

algorithm on average might seem like a

good strategy it won't necessarily

provide an accurate picture

by picking the maximum in the range

we're measuring how our algorithm does

in the worst case scenario

analyzing the worst case scenario is

quite useful because it indicates that

the algorithm will never perform worse

than we expect there's no room for

surprises

back to our graph we're going to plot

the number of tries a proxy for running

time of the algorithm against the number

of values in the range which will

shorten to n

n here also represents john's worst case

scenario when n is 10 he takes 10 turns

when n is 100 he takes 100 turns

but these two values alone are

insufficient to really get any sort of

visual understanding moreover it's not

realistic

john may take a long time to work

through 100 numbers but a computer can

do that in no time

to evaluate the performance of linear

search in the context of a computer we

should probably throw some harder and

larger ranges of values at it

the nice thing is by evaluating a worst

case scenario we don't actually have to

do that work

we know what the result will be for a

given value of n using linear search it

will take n tries to find the value in

the worst case scenario so let's add a

few values in here to build out this

graph
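we can generate those worst-case data points by counting comparisons directly rather than guessing; a small sketch (the helper name and the sample sizes here are my own choices):

```python
def linear_search_tries(values, target):
    """Count how many values linear search examines before stopping."""
    tries = 0
    for value in values:
        tries += 1
        if value == target:
            break
    return tries

# Worst case: the target sits at the very end of the range,
# so the number of tries equals n, the size of the range.
for n in [10, 100, 1000, 10000]:
    values = list(range(1, n + 1))
    print(n, linear_search_tries(values, n))
```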

okay so we have a good picture of what

this is starting to look like as the

values get really large the running time

of the algorithm gets large as well

we sort of already knew that

before we dig into this runtime any

deeper let's switch tracks and evaluate

brittany's work

by having something to compare against

it should become easier to build a

mental model around time complexity

the algorithm john used linear search

seemed familiar to us and you could

understand it because it's how most of

us search for things in real life anyway

brittany's approach on the other hand

got results quickly but it was a bit

harder to understand so let's break it

down

just like john's approach britney

started with a series of values or a

list of numbers as her input

where john just started at the beginning

of the list and searched sequentially

brittany's strategy is to always start

in the middle of the range

from there she asks a comparison

question

is the number in the middle of the range

equal to the answer she's looking for

and if it's not is it greater than or

less than the answer

if it's greater than the answer she can

eliminate all the values greater than the

one she's currently evaluating if it's

less than the answer she can eliminate

all the values less than the one she's

currently evaluating

with the range of values that she's left

over with she repeats this process until

she arrives at the answer

let's visualize how she did this by

looking at round three

in round three the number of values in

the range was 100 the answer was 5.

the bar here represents the range of

values 1 at the left 100 at the right

and this pointer represents the value

brittany chooses to evaluate

so she starts in the middle at 50. she

asks is it equal to the answer i say

it's too high so this tells her that the

value she is evaluating is greater than

our target value which means there's no

point in searching any of the values to

the right of 50 that is values greater

than 50 in this range so she can discard

those values altogether

she only has to consider values from 1

to 50 now

the beauty of this strategy and the

reason why brittany was able to find the

answer in such few turns is that with

every value she evaluates she can

discard half of the current range

on her second turn she picks the value

in the middle of the current range which

is 25. she asks the same question i say

that the value is too high again and

this tells her that she can discard

everything greater than 25 and the range

of values drops from 1 to 25.

again she evaluates the number in the

middle roughly so that'd be 13 here i

tell her this is still too high she

discards the values greater moves to

value at 7 which is still too high

then she moves to 4 which is now too low

she can discard everything less than 4

which leaves the numbers 4 through 7.

here she picked 6 which was too high

which only leaves one value 5.

this seems like a lot of work but being

able to get rid of half the values with

each turn is what makes this algorithm

much more efficient

now there's one subtlety to using binary

search and you might have caught on to

this

for this search method to work as we've

mentioned the values need to be sorted

with linear search it doesn't matter if

the values are sorted since a linear

search algorithm just progresses

sequentially checking every element in

the list if the target value exists in

the list it will be found but let's say

this range of values 100 was unsorted

brittany would start at the middle with

something like 14 and ask if this value

was too low or too high i say it's too

high so she discards everything less

than 14.

now this example starts to fall apart

here because well brittany knows what

numbers are less than 14 and greater

than one she doesn't need an actual

range of values to solve this a computer

however does need that

remember search algorithms are run

against lists containing all sorts of

data it's not always just a range of

values containing numbers

in a real use case of binary search

which we're going to implement in a bit

the algorithm wouldn't return the target

value because we already know that it's

a search algorithm so we're providing

something to search for instead what it

returns is the position in the list that

the target occupies without the list

being sorted a binary search algorithm

would discard all the values to the left

of 14 which over here could include the

position where our target value is

eventually we'd get a result back saying

the target value doesn't exist in the

list which is inaccurate

earlier when defining linear search

i said that the input was a list

of values and the output was the target

value or more specifically the position

of the target value in the list

so with binary search there's also that

precondition the input list must be

sorted so let's formally define binary

search

first the input a sorted list of values

the output the position in the list of

the target value we're searching for or

some sort of value to indicate that the

target does not exist in the list

remember our guidelines for defining an

algorithm let me put those up again

really quick

the steps in the algorithm need to be in

a specific order the steps also need to

be very distinct

the algorithm should produce a result

and finally the algorithm should

complete in a finite amount of time

let's use those to define this algorithm

step one we determine the middle

position of the sorted list

step two we compare the element in the

middle position to the target element

step three if the elements match we

return the middle position and end

if they don't match in step 4 we check

whether the element in the middle

position is smaller than the target

element

if it is then we go back to step 2 with

a new list that goes from the middle

position of the current list to the end

of the current list

in step five if the element in the

middle position is greater than the

target element then again we go back to

step two with a new list that goes from

the start of the current list to the

middle position of the current list

we repeat this process until the target

element is found or until a sub list

contains only one element

if that single element sublist does not

match the target element then we end the

algorithm indicating that the element

does not exist in the list
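those five steps can be sketched in code; here is a minimal python version written ahead of the implementation the course builds later (the function name and the index arithmetic are my own choices, not the course's):

```python
def binary_search(values, target):
    """return the position of target in the sorted list values, or -1"""
    low = 0
    high = len(values) - 1
    while low <= high:
        mid = (low + high) // 2        # step 1: determine the middle position
        if values[mid] == target:      # steps 2 and 3: compare, end on a match
            return mid
        elif values[mid] < target:     # step 4: search the upper half
            low = mid + 1
        else:                          # step 5: search the lower half
            high = mid - 1
    return -1                          # the target does not exist in the list
```

calling binary_search(list(range(1, 101)), 5) returns position 4, and for a hundred values it never needs more than seven comparisons, matching brittany's worst case of seven tries.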

okay so that is the magic behind how

brittany managed to solve the round much

faster

in the next video let's talk about the

efficiency of binary search


we have a vague understanding that

brittany's approach is better in most

cases but just like with linear search

it helps to visualize this

much like we did with linear search when

determining the efficiency of an

algorithm and remember we're still only

looking at efficiency in terms of time

time complexity as it's called we always

want to evaluate how the algorithm

performs in the worst case scenario now

you might be thinking well that doesn't

seem fair because given a series of data

if the target value we're searching for

is somewhere near the front of the list

then linear search may perform just as

well if not slightly better than binary

search and that is totally true

remember a crucial part of learning

algorithms is understanding what works

better in a given context

when measuring efficiency though we

always use the worst case scenarios as a

benchmark because remember it can never

perform worse than the worst case

let's plot these values on the graph we

started earlier with the number of tries

or the runtime of the algorithm on the y

axis and the maximum number of values in

the series or n on the horizontal axis

to represent the worst case scenario we

have two data points when n equals 10

brittany took four tries using binary

search and when n equals 100 it took

seven tries

but even side by side these data points

are sort of meaningless

remember that while there is quite a

difference between the run time of

linear search and binary search at an n

value of 100 for a computer that

shouldn't matter

what we should check out is how the

algorithm performs at levels of n that

might actually slow a computer down

as n grows larger and larger how do

these algorithms compare to one another

let's add that to the graph

okay now a picture starts to emerge

as n gets really large the performance

of these two algorithms differs

significantly

the difference is kind of staggering

actually

even with the simple game we saw that

binary search was better but now we have

a much more complete idea of how much

better

for example when n is 1000 the runtime

of linear search measured by the number

of operations or turns is also 1000.

for binary search it takes just 10

operations

now let's look at what happens when we

increase n by factor of 10

at 10 000 linear search takes 10 000

operations while binary search takes 14

operations

an increase by a factor of 10 and

binary search only needs four more

operations to find a value

if we increase it again by a factor of

10 once more to an n value of 100 000

binary search takes only 17 operations

it is blazing fast

what we've done here is plotted on a

graph how the algorithm performs as the

input set it is working on increases

in other words we've plotted the growth

rate of the algorithm also known as the

order of growth

different algorithms grow at different

rates and by evaluating their growth

rates we get a much better picture of

their performance because we know how

the algorithm will hold up as n grows

larger

this is so important in fact it is the

standard way of evaluating an algorithm

and brings us to a concept called big o

you might have heard this word thrown

about and if you found it confusing

don't worry we've already built up a

definition in the past few videos we

just need to bring it all together

let's start with a common statement

you'll see in studies on algorithms

big o is a theoretical definition of the

complexity of an algorithm as a function

of the input size

wow what a mouthful this sounds really

intimidating but it's really not let's

break it down

big o is a notation used to describe

complexity and what i mean by notation

is that it simplifies everything we've

talked about down into a single variable

an example of complexity written in

terms of big o looks like this

as you can see it starts with an

uppercase letter o that's why we call it

big o it's literally a big o

the o comes from order of magnitude of

complexity so that's where we get the

big o from now complexity here refers to

the exercise we've been carrying out in

measuring efficiency

if it takes brittany 4 tries when n is

10

how long does the algorithm take when n

is 10 million

when we use big o for this the variable

used which we'll get to distills that

information down so that by reading the

variable you get a big picture view

without having to run through data

points and graphs just like we did

it's important to remember that

complexity is relative

when we evaluate the complexity of the

binary search algorithm we're doing it

relative to other search algorithms not

all algorithms

big o is a useful notation for

understanding both time and space

complexity but only when comparing

amongst algorithms that solve the same

problem

the last bit in that definition of big o

is a function of the input size and all this

means is that big o measures complexity

as the input size grows because what's

important is not how an algorithm

performs on a single data set but on all

possible data sets

you will also see big o referred to as

the upper bound of the algorithm and

what that means is that big o measures

how the algorithm performs in the worst

case scenario

so that's all big o is

nothing special it's just a notation

that condenses the data points and

graphs that we've built up down to one

variable okay so what do these variables

look like

for john's strategy linear search we say

that it has a time complexity of big o

and then n so that's again big o with an

n inside parentheses

for brittany's strategy binary search we

say that it has a time complexity of big

o of log n that's big o with something

called a log and an n inside parentheses

now don't worry if you don't understand

that we'll go into that in more detail

later on in the course

each of these has a special meaning but

it helps to work through all of them to

get a big picture view so over the next

few videos let's examine what are called

common complexities or common values of

big o that you will run into and should

internalize

in our discussions of complexity we made

one assumption that the algorithm as a

whole had a single measure of complexity

that isn't true and we'll get at how we

arrive at these measures for the entire

algorithm at the end of this exercise

but each step in the algorithm has its

own space and time complexity

in linear search for example there are

multiple steps and the algorithm goes

like this

start at the beginning of the list or

range of values compare the current

value to the target if the current value

is the target value that we're looking

for we're done

if it's not we'll move on sequentially

to the next value in the list and repeat

step two

if we reach the end of the list then the

target value is not in the list
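those steps translate almost line for line into python; here is a minimal sketch (the course hasn't shown code for this yet, so the names are my own):

```python
def linear_search(values, target):
    """return the position of target in values, or -1 if it is absent"""
    for position, value in enumerate(values):  # step 1: start at the beginning
        if value == target:                    # step 2: compare to the target
            return position                    # step 3: a match, so we're done
    return -1                                  # reached the end: not in the list
```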

let's go back to step two for a second

comparing the current value to the

target

does the size of the data set matter for

this step

when we're at step two we're already at

that position in the list and all we're

doing is reading the value to make a

comparison reading the value is a single

operation and if we were to plot it on a

graph of runtime per operations against

n it looks like this a straight line

that takes constant time regardless of

the size of n since this takes the same

amount of time in any given case we say

that the run time is constant time it

doesn't change

in big o notation we represent this as

big o with a 1 inside parentheses now

when i first started learning all this i

was really confused as to how to read

this even if it was in my own head

should i say big o of one

when you see this written you're going

to read this as constant time so reading

a value in a list is a constant time

operation

this is the most ideal case when it

comes to run times because input size

does not matter and we know that

regardless of the size of n the

algorithm runtime will remain the same
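for instance reading a value by its position in a python list is a constant time operation; it costs the same no matter how long the list is (this assumes, as cpython documents, that a list is an array under the hood):

```python
values = list(range(1_000_000))

# each of these reads takes the same amount of time,
# regardless of len(values): big o of 1
first = values[0]
middle = values[500_000]
last = values[-1]
```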

the next step up in complexity so to

speak is the situation we encountered

with the binary search algorithm

traditionally explaining the time

complexity of binary search involves

math i'm going to try to do it both with

and without

when we played the game using binary

search we notice that with every turn we

were able to discard half of the data

but there's another pattern that emerges

that we didn't explore

let's say n equals 10. how long does it

take to find an item at the 10th

position of the list we can write this

out starting with 10 values we check 5

then 8 then 9 and finally 10.

here it takes us four tries to cut down

the list to just one element and find

the value we're looking for

let's double the value of n to 20 and

see how long it takes for us to find an

item at the 20th position so we start at

20 and then we pick 10 from there we go

to 15 17 19 and finally 20.

so here it takes us five tries

okay let's double it again so that n is

40 and we try to find the item in the

40th position

so when we start at 40 the first

midpoint we're going to pick is 20 from

there we go to 30 then 35 37 39 and then

40.

notice that every time we double the

value of n the number of operations it

takes to reduce the list down to a

single element only increases by 1.

there's a mathematical relationship to

this pattern and it's called a logarithm

of n

you don't really have to know what

logarithms truly are but i know that

some of you like underlying explainers

so i'll give you a quick one

if you've taken algebra classes you may

have learned about exponents here's a

quick refresher

2 times 1 equals 2. now this can be

written as 2 raised to the first power

because it is our base case two times

one is two

now two times two is four this can be

written as two raised to the second

power because we're multiplying two

twice first we multiply two times one

then the result of that times 2.

2 times 2 times 2 is 8 and we can write

this as 2 raised to the 3rd power

because we're multiplying 2 3 times

in 2 raised to 2 and 2 raised to 3 the 2

and 3 there are called exponents and

they define how the number grows

with 2 raised to 3 we start with the

base value and multiply itself 3 times

the inverse of an exponent is called a

logarithm so if i say log to the base 2

of 8 equals 3 i'm basically saying the

opposite of an exponent

instead of saying how many times do i

have to multiply this value i'm asking

how many times do i have to divide 8 by

two to get the value one

this takes three operations

what about the result of log to the base

two of sixteen that evaluates to four

so why does any of this matter

notice that this is sort of how binary

search works

log to the base 2 of 16 is 4.

if n was 16 how many tries does it take

to get to that last element

well we start in the middle at 8 that's

too low so we move to 12 then we move to

14 then to 15 and then to 16 which is 5

tries or log to the base 2 of 16 plus 1.

in general for a given value of n the

number of tries it takes to find the

target in the worst case scenario

is log of n plus one
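we can check that log n plus one pattern directly with a few lines of python (a quick sketch using the standard math module; the function name is my own):

```python
import math

def worst_case_tries(n):
    # the number of times n can be halved, plus the final single-element check
    return int(math.log2(n)) + 1

# doubling n adds just one more try
for n in [10, 16, 20, 40, 100, 1000, 10000, 100000]:
    print(n, worst_case_tries(n))
```

this reproduces the numbers from the game and the graph: 4 tries for 10 values, 7 for 100, 10 for 1000, and only 17 for 100 000.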

and because this pattern is overall a

logarithmic pattern we say that the

runtime of such algorithms is

logarithmic

if we plot these data points on our

graph a logarithmic runtime looks like

this

in big o notation we represent a

logarithmic runtime as big o of log n

which is written as big o with log n

inside parentheses or even sometimes as

ln n inside parentheses

when you see this read it as logarithmic

time

as you can see on the graph as n grows

really large the number of operations

grows very slowly and eventually

flattens out

since this line is below the line for a

linear runtime which we'll look at in a

second you might often hear algorithms

with logarithmic runtimes being called

sublinear

logarithmic or sub-linear runtimes are

preferred to linear because they're more

efficient but in practice linear search

has its own set of advantages which

we'll take a look at in the next video

next up let's look at the situation we

encountered with the linear search

algorithm

we saw that in the worst case scenario

whatever the value of n was john took

exactly that many tries to find the

answer

as in linear search when the number of

operations to determine the result in

the worst case scenario is at most the

same as n

we say that the algorithm runs in linear

time

we represent this as big o of n now you

can read that as big o of n like i just

said or you can say linear time which is

more common

when we put that up on a graph against

constant time and logarithmic time we

get a line that looks like this

any algorithm that sequentially reads

the input will have linear time

so remember anytime you know a problem

involves reading every item in a list

that means a linear run time as you saw

from the game we played brittany's

strategy using binary search was clearly

better and we can see that on the graph

so if we had the option why would we use

linear search which runs in linear time

remember that binary search had a

precondition the input set had to be

sorted

while we won't be looking at sorting

algorithms in this course as you learn

more about algorithms you'll find that

sorting algorithms have varying

complexities themselves just like search

does so we have to do additional work

prior to using binary search

for this reason in practice linear

search ends up being more performant up

to a certain value of n because the

combination of sorting first and then

searching using binary search adds up

the next common complexity you will hear

about is when an algorithm runs in

quadratic time if the word quadratic

sounds familiar to you it's because you

might have heard about it in math class

quadratic is a word that means an

operation raised to the second power or

when something is squared

let's say you and your friends are

playing a tower defense game and to

start it off you're going to draw a map

of the terrain

this map is going to be a grid and you

pick a random number to determine how

large this grid is let's set n the size

of the grid to four

next you need to come up with a list of

coordinates so you can place towers and

enemies and stuff on this map so how

would we do this

if we start out horizontally we'd have

coordinate points that go (1, 1) (1, 2) (1, 3)

and (1, 4).

then you go up one level vertically and

we have points (2, 1) (2, 2) (2, 3) and (2, 4).

go up one more and you have the points

(3, 1) (3, 2) (3, 3) and (3, 4) and on that last row

you have the points (4, 1) (4, 2) (4, 3) and (4, 4).

notice that we have a pattern here

for each row we take the value and then

create a point by pairing it with every

column value

the range of values goes from 1 to the

value of n

so we can generally think of it this way

for the range of values from 1 to n for

each value in that range we create a

point by combining that value with the

range of values from 1 to n again

doing it this way for each value in the

range of 1 to n we create an n number of

values and we end up with 16 points

which is also n times n or n squared

this is an algorithm with a quadratic

runtime because for any given value of n

we carry out n squared number of

operations
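the coordinate generation just described is a pair of nested loops, which is the classic shape of a quadratic algorithm (a sketch; the function name is my own):

```python
def grid_coordinates(n):
    """generate all n * n coordinate points for an n by n map"""
    points = []
    for row in range(1, n + 1):          # n rows...
        for column in range(1, n + 1):   # ...times n columns per row
            points.append((row, column))
    return points

coords = grid_coordinates(4)  # 16 points, from (1, 1) up to (4, 4)
```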

now i picked a relatively easy so to

speak example here because in english at

least we often denote map sizes by

height times width so we would call this

a 4 by 4 grid which is just another way

of saying 4 squared or n squared

in big o notation we would write this as

big o of n squared or say that this is

an algorithm with a quadratic runtime

many sorting algorithms have a worst case

quadratic runtime which you'll learn

about soon

now in addition to quadratic runtimes

you may also run into cubic runtimes as

you encounter different algorithms in

such an algorithm for a given value of n

the algorithm executes n raised to the

third power number of operations

these aren't as common as quadratic

algorithms though so we won't look at

any examples but i think it's worth

mentioning

thrown up on our graph quadratic and

cubic runtimes look like this

so this is starting to look pretty

expensive computationally as they say we

can see here that for small changes in n

there's a pretty significant change in

the number of operations that we need to

carry out

the next worst case runtime we're going

to look at is one that's called

quasi-linear and it's sort of easier to

understand for lack of a better word by

starting with the big o notation

quasi-linear runtimes are written out as

big o of n times log n

we learned what log n was right a

logarithmic runtime where as n grew the

number of operations only increased by a

small factor with a quasi-linear runtime

what we're saying is that for every

value of n we're going to execute a log

n number of operations hence the run

time of n times log n

so you saw earlier with the quadratic

runtime that for each value of n we

conducted n operations it's sort of the

same in that as we go through the range

of values in n we're executing log n

operations

in comparison to other runtimes a

quasi-linear algorithm has a runtime

that lies somewhere between a linear

runtime and a quadratic runtime

so where would we expect to see this

kind of runtime in practical use

well sorting algorithms is one place you

will definitely see it

merge sort for example is a sorting

algorithm that has a worst case runtime

of big o of n log n

let's take a look at a quick example

let's say we start off with a list of

numbers that looks like this and we need

to sort it

merge sort starts by splitting this list

into two lists down the middle

it then takes each sub list and splits

that in half down the middle again

it keeps doing this until we end up with

a list of just a single number

when we're down to single numbers we can

do one sort operation and merge these

sub-lists back in the opposite direction

the first part of merge sort cuts those

lists into sub-lists with half the

numbers

this is similar to binary search where

each comparison operation cuts down the

range to half the values

you know the worst case runtime in

binary search is log n so these

splitting operations have the same

runtime big o of log n or logarithmic

but splitting into half isn't the only

thing we need to do with merge sort we

also need to carry out comparison

operations so we can sort those values

and if you look at each step of this

algorithm we carry out an n number of

comparison operations and that brings

the worst case runtime of this algorithm

to n times log n also known as quasi

linear don't worry if you didn't

understand how merge sort works that

wasn't the point of this demonstration

we will be covering merge sorts soon in

a future course
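still, to make the split-then-merge shape concrete, here is a compact python sketch of merge sort (for reference only; the course covers it properly later):

```python
def merge_sort(values):
    if len(values) <= 1:               # a single element is already sorted
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])    # log n levels of halving...
    right = merge_sort(values[mid:])
    return merge(left, right)          # ...with about n comparisons per level

def merge(left, right):
    """merge two sorted lists into one sorted list"""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # one side may have values left over
    merged.extend(right[j:])
    return merged
```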

the run times we've looked at so far are

all called polynomial runtimes an

algorithm is considered to have a

polynomial runtime if for a given value

of n its worst case runtime is in the

form of n raised to the k power where k

just means some value so it could be n

squared where k equals 2 for a quadratic

runtime n cubed for a cubic runtime and

so on

all of those are in the form of n raised

to some power

anything that is bounded by this and

what i mean by that is if we had a

hypothetical line on our graph of n

raised to the k power anything that

falls under this graph is considered to

have a polynomial runtime

algorithms with an upper bound or a

runtime with a big o value that is

polynomial are considered efficient

algorithms and are likely to be used in

practice

now the next class of runtimes that

we're going to look at are runtimes

that we don't consider efficient and

these are called exponential runtimes

with these runtimes as n increases

slightly the number of operations

increases exponentially and as we'll see

in a second these algorithms are far too

expensive to be used

an exponential runtime is an algorithm

with a big o value of some number raised

to the nth power

imagine that you wanted to break into a

locker that had a padlock on it let's

assume you forgot your code

this lock takes a two digit code and

each digit of the code ranges from zero to

nine

you start by setting the dials to zero

and then with the first dial remaining

on zero you change the second dial to

one and try and open it if it doesn't

work you set it to two then try again

you would keep doing this and if you

still haven't succeeded with the second

dial set to 9 then you go back to that

first dial set it to 1 and start the

second dial over

the range of values you'd have to go

through is 0 0 to 9 9 which is 100

values

this can be generalized as 10 to the

second power since there are 10 values

on each dial raised to two dials

searching through each individual value

until you stumble on the right one is a

strategy called brute force and brute

force algorithms have exponential run

times

here there are two dials so n is 2 and

each dial has 10 values so again we can

generalize this algorithm as 10 raised

to n where n represents the number of

dials
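the brute force strategy can be sketched with python's itertools; here the comparison against `secret` is a hypothetical stand-in for actually trying the lock:

```python
from itertools import product

def brute_force(num_dials, secret):
    """try all 10 ** num_dials combinations, counting the attempts"""
    tries = 0
    for combo in product(range(10), repeat=num_dials):
        tries += 1
        if combo == secret:   # stand-in for testing the padlock
            return tries
    return None

# the worst case is the last combination in the sequence
print(brute_force(2, (9, 9)))     # 100 tries, i.e. 10 squared
print(brute_force(3, (9, 9, 9)))  # 1000 tries with just one more dial
```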

the reason that this algorithm is so

inefficient is because with just one

more dial on the lock the number of

operations increases significantly

with three dials the number of

combinations in the worst case scenario

where the correct code is the last value

in the range is 10 raised to 3 or 1 000

values

with an additional wheel it becomes 10

raised to 4 or 10 000 values

as n increases the number of operations

increases exponentially to a point where

it's unsolvable in a realistic amount of

time

now you might think well any computer

can crack a four digit numerical lock

and that's true because n here is

sufficiently small but this is the same

principle that we use for passwords

in a typical password field implemented

well users are allowed to use letters of

the english alphabet so up to 26

characters numbers from 0 to 9 and a set

of special characters of which there can

be around 33

so typically that means each character

in a password can be one out of 69

values

this means that for a one character

password it takes 69 raised to the power of

1 which equals 69 operations in the

worst case scenario to figure out the

password

just increasing n to 2 increases the

number of operations needed to guess the

password to 69 squared or

4761 operations

now usually on a secure website there

isn't really a limit but in general

passwords are limited to around 20

characters in length

with each character being a possible 69

values and there being 20 characters the

number of operations needed to guess the

password in the worst case scenario is

69 raised to the 20th power or

approximately 6 followed by 36 zeros

number of operations

an intel cpu with five cores can carry

out roughly about 65 000 million

instructions per second that's a funny

number i know to crack our 20-digit

passcode in this very simplistic model

it would take this intel cpu

2 raised to the 20th power years to brute

force the password

so while this algorithm would eventually

produce a result it is so inefficient

that it's pointless

this is one of the reasons why people

recommend you have longer passwords

since brute forcing is exponential in

the worst case each character you add

multiplies the number of combinations

by another factor of 69

the next class of exponential algorithms

is best highlighted by a popular problem

known as the traveling salesman

the problem statement goes like this

given a list of cities and the distance

between each pair of cities what is the

shortest possible route that visits each

city and then returns to the origin city

this seems like a simple question but

let's start with a simple case three

cities a b and c

to figure out what the shortest route is

we need to come up with all the possible

routes

with three cities we have six routes in

theory at least some of these routes can

be discarded because abc is the same as

c b a but in the opposite direction

but as we know sometimes going from a

to c through b may go through a

different route than c to a through b so

we'll stick to the six routes and from

there we could determine the shortest no

big deal

now if we increase this to four cities

we jump to 24 combinations

the mathematical relationship that

defines this is called a factorial and

is written out as n followed by an

exclamation point

factorials are basically n times n minus

one repeated until you reach the number

one so for example the factorial of

three is three times two times one which

is six which is the number of

combinations we came up with for three

cities

the factorial of four is four times

three times two times one or 24 which is

the number of combinations we arrived at

with four cities
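we can sanity-check those counts with a couple of lines of python (a sketch using itertools.permutations; the function name is my own):

```python
from itertools import permutations

def route_count(cities):
    # every ordering of the cities is a candidate route: n! of them
    return len(list(permutations(cities)))

print(route_count(["a", "b", "c"]))        # 6 routes for three cities
print(route_count(["a", "b", "c", "d"]))   # 24 routes for four cities
```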

in solving the traveling salesman

problem by brute force the algorithm

will have a factorial runtime or a

combinatorial runtime as it's also

called

at low values of n algorithms with a

factorial runtime may be used but with

an n value of say 200 it would take

longer than humans have been alive to

solve the problem

for sake of completeness let's plot a

combinatorial runtime on our graph so

that we can compare

an algorithm such as one that solves the

traveling salesman problem has a worst

case run time of big o of n factorial

studying exponential runtimes like this

is useful for two reasons

first in studying how to make such

algorithms efficient we develop

strategies that are useful across the

board and can potentially be used to

make existing algorithms even more

efficient

second it's important to be aware of

problems that take a long time to solve

knowing right off the bat that a problem

is somewhat unsolvable in a realistic

time means you can focus your efforts on

other aspects of the problem

as beginners though we're going to steer

clear of all this and focus our efforts

on algorithms with polynomial runtimes

since we're much more likely to work

with and learn about such algorithms

now that we know some of the common

complexities in the next video let's

talk about how we determine the

complexity of an algorithm because there

are some nuances

over the last few videos we took a look

at common complexities that we would

encounter in studying algorithms but the

question remains how do we determine

what the worst case complexity of an

algorithm is

earlier i mentioned that even though we

say that an algorithm has a particular

upper bound or worst case runtime each

step in a given algorithm can have

different run times

let's bring up the steps for binary

search again

assuming the list is sorted the first

step is to determine the middle position

of the list

in general this is going to be a

constant time operation

many programming languages hold on to

information about the size of the list

so we don't actually need to walk

through the list to determine the size

now if we didn't have information about

the size of the list we would need to

walk through counting each item one by

one until we reached the end of the list

and this is a linear time operation but

realistically this is a big o of 1 or

constant time

step 2 is to compare the element in the

middle position to the target element

we can assume that in most modern

programming languages this is also a

constant time operation because the

documentation for the language tells us

it is

step 3 is our success case and the

algorithm ends

this is our best case and so far we have

only incurred two constant time

operations

so we would say that the best case run

time of binary search is constant time

which is actually true

but remember that best case is not a

useful metric

step 4 if we don't match is splitting

the list into sub-lists

assuming the worst case scenario the

algorithm would keep splitting into

sub-lists until a single element list is

reached with the value that we're

searching for

the run time for this step is

logarithmic since we discard half the

values each time
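to make that concrete, here's a quick sketch (not from the course) counting how few splits it takes to shrink a list of n items down to one:

```python
def splits_to_one(n):
    # count how many times we can discard half the values
    # before only a single element remains
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(splits_to_one(8))     # 3
print(splits_to_one(1024))  # 10
```

doubling the list adds only one more split, which is the logarithmic behavior in action.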

so in our algorithm we have a couple

steps that are constant time and one

step that is logarithmic overall

when evaluating the run time for an

algorithm we say that the algorithm has

as its upper bound the same runtime as

the least efficient step in the

algorithm

think of it this way let's say you're

participating in a triathlon which is a

race that has a swimming running and a

cycling component

you could be a phenomenal swimmer and a

really good cyclist but you're a pretty

terrible runner

no matter how fast you are at swimming

or cycling your overall race time is

going to be impacted the most by your

running race time because that's the

part that takes you the longest

if you take an hour 30 to finish the

running component 55 minutes to swim and

38 minutes to bike it won't matter if

you can fine tune your swimming

technique down to finish in 48 minutes

and your cycle time to 35 because you're

still bounded at the top by your running

time which is still more than double

your improved bike time

similarly with the binary search

algorithm it doesn't matter how fast we

make the other steps they're already as

fast as they can be

in the worst case scenario the splitting

of the list down to a single element

list is what will impact the overall

running time of your algorithm

this is why we say that the time

complexity or run time of the algorithm

in the worst case is big o of log n or

logarithmic

as i alluded to though your algorithm

may hit a best case runtime and in

between the two best and worst case have

an average run time as well

this is important to understand because

algorithms don't always hit their worst

case but this is getting a bit too

complex for us for now we can safely

ignore average case performances and

focus only on the worst case in the

future if you decide to stick around

we'll circle back and talk about this

more

now that you know about algorithms

complexities and big o let's take a

break from all of that and write code in

the next video


so far we've spent a lot of time in

theory and while these things are all

important things to know you get a much

better understanding of how algorithms

work when you start writing some code as

i mentioned earlier we're going to be

writing python code in this and all

subsequent algorithm courses

if you do have programming experience

but in another language check the notes

section of this video for an

implementation in your language

if you don't have any experience i'll

try my best to explain as we go along

on the video you're watching right now

you should see a launch workspaces

button

we're going to use a treehouse coding

environment called workspaces to write all

of our code

if you're familiar with using python in

a local environment then feel free to

keep doing so workspaces is an

in-browser coding environment and will

take care of all the setup and

installation so you can focus on just

writing and evaluating code workspaces

is quite straightforward to use on the

left here we have a file navigator pane

which is currently empty since we

haven't created a new file

on the top we have an editor where we

write all our code and then below that

we have a terminal or a command line

prompt where we can execute the scripts

that we write let's add a new file here

so at the top in the editor area we're

going to go to file new file and we'll

name this linear

underscore search

dot pi

in here we're going to define our linear

search algorithm as a standalone

function

we start with the keyword def which

defines a function or a block of code

and then we give it the name linear

underscore search

this function will accept two arguments

first the list we're searching through

and then the target value we're looking

for both of these arguments are enclosed

in a set of parentheses and there's no

space between the name of the function

and the arguments

after that we have a colon

now there might be a bit of confusion

here since we already have this target

value what are we searching for unlike

the game we played at the beginning

where john's job was to find the value

in a true implementation of linear

search we're looking for the position in

the list where the value exists

if the target is in the list then we

return its position

and since this is a list that position

is going to be denoted by an index value

now if the target is not found we're

going to return none the choice of what

to return in the failure case may be

different in other implementations of

linear search

you can return -1 since that isn't

typically an index value

you can also raise an exception which is

python speak for indicating an error

occurred

now i think for us the most

straightforward value we can return here

is none now let's add a comment to

clarify this so hit enter to go to the

next line

and then we're going to add

three

single quotes

and then below that on the next line

we'll say returns

the position or the index

position

of the target

if found

else returns none

and then on the next line we'll close

off those three quotes

this is called a doc string and is a

python convention for documenting your

code the linear search algorithm is a

sequential algorithm that compares each

item in the list until the target is

found

to iterate or loop or walk through our

list sequentially we're going to use a

for loop

now typically when iterating over a list

in python we would use a loop like this

we'd say for item in list

this assigns the value at each index

position to that local variable item

we don't want this though since we

primarily care about the index position

instead we're going to use the range

function in python to create a range of

values that start at 0 and end at the

number of items in the list

so we'll say for i and i stands for index here

in range

starting at 0 and going all the way up

to the length of the list

we can get the number of items in the

list using the len function

now going back to our talk on complexity

and how individual steps in an algorithm

can have their own run times this is a

line of code that we would have to be

careful about

python keeps track of the length of a

list so this function call here len list

is a constant time operation now if this

were a naive implementation let's say we

wrote the implementation of the list

and we iterate over the list every time

we call this length function then we've

already incurred a linear cost

okay so once we have a range of values

that represent index positions in this

list we're going to iterate over that

using the for loop and assign each index

value to this local variable i using

this index value we can obtain the item

at that position using subscript

notation on the list

now this is also a constant time

operation because the language says so

so we'll do if list so once we have this

value which we'll get by using subscript

notation so we'll say list i

once we have this value we'll check if

it matches the target so if the value at

i

equals target

well if it does then we'll return that

index value because we want the position

and once we hit this return statement

we're going to terminate our function if

the entire for loop is executed and we

don't hit this return statement then the

target does not exist in the list so at

the bottom here we'll say return none
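put together, the function dictated above looks like this (the parameter is renamed from list to lst here, an editorial choice so it doesn't shadow python's built-in name):

```python
def linear_search(lst, target):
    '''
    Returns the index position of the target if found, else returns None
    '''
    # walk every index from 0 up to the length of the list
    for i in range(0, len(lst)):
        if lst[i] == target:
            return i  # found it: return the position and stop
    return None       # the loop finished without a match
```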

even though all the individual

operations in our algorithm run in

constant time

in the worst case scenario this for loop

here will have to go through the entire

range of values and read every single

element in the list

therefore giving the algorithm a big o

value of n or running in linear time now

if you've written code before you've

definitely written code like this a

number of times and i bet you didn't

know but all along you were implementing

what is essentially a well-known

algorithm

so i hope this goes to show you that

algorithms are a pretty approachable topic

like everything else this does get

advanced but as long as you take things

slow there's no reason for it to be

impossible remember that not any block

of code counts as an algorithm to be a

proper implementation of linear search

this block of code must return a value

must complete execution in a finite

amount of time and must output the same

result every time for a given input set

so let's verify this with a small test

let's write a function called verify

that accepts an index value

if the value is not none it prints the

index position if it is none it informs

us that the target was not found in the

list so def verify

and this is going to take an index value

and we'll say if index is not none

then we'll print

target

found at index

followed by the index

else we'll say target

not found in list

okay using this function let's define a

range of numbers now so this will be a

list numbers

and we'll just go from 1 to

let's say 10.

now if you've written python code before

you know that i can use a list

comprehension to make this easier but

we'll keep things simple
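for the curious, here's what those shorter alternatives look like (a side note, not required for the course):

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # typed out by hand
numbers_comprehension = [n for n in range(1, 11)]  # list comprehension
numbers_from_range = list(range(1, 11))            # converting a range directly

print(numbers == numbers_comprehension == numbers_from_range)  # True
```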

we can now use our linear search

function to search for the position of a

target value in this list so we can say

result

equal

linear underscore search

and we're going to pass in the numbers

list that's the one we're searching

through and we want to look for the

position where the value 12 exists

and then we'll verify

this result

if our algorithm works correctly the

verify function should inform us that

the target did not exist so make sure

you save the file which you can do by

going up to file and save or hitting

command s

and then below in the terminal

you're going to type out python

linear search or you can hit tab and it

should auto complete linear_search.py

as you can see correct the target was

not found in the list so the output of

our script is what we expect

for our second test let's search for the

value 6 in the list so you can copy this

command c to copy and then paste it

again and we'll just change 12 here to 6

and then come back down to the terminal

hit the up arrow to execute the same

command again and hit enter you'll

notice that i forgot to hit save so it

did not account for that new change

we'll try that again

and there you'll see that if it works

correctly which it did the index should

be number five run the program on your

end and make sure everything works as

expected
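for reference, here is the verify helper assembled in one place (a self-contained sketch: the two calls below pass in the literal values our searches produced rather than calling linear search again):

```python
def verify(index):
    if index is not None:
        print("Target found at index: ", index)
    else:
        print("Target not found in list")

verify(None)  # the result of searching for 12
verify(5)     # the result of searching for 6
```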

our algorithm returned a result in each

case it executed in a finite time and

the results were the ones we expect in

the next video let's tackle binary

search

in the last video we left off with an

implementation of linear search

let's do the same for binary search so

that we get an understanding of how this

is represented in code

so we'll do this in a new file back to

file new file

and we'll name this one binary

search

dot

py

like before we're going to start with a

function named binary search so we'll

say def

binary underscore search

that takes a list and a target

if you remember binary search works by

breaking the array or list down into

smaller sets until we find the value

we're looking for

we need a way to keep track of the

position of the list that we're working

with so let's create two variables first

and last to point to the beginning and

end of the array so first equal

zero now if you're new to programming

list positions are represented by index

values that start at zero instead of one

so here we're setting first to zero to

point to the first element in the list

last is going to point to the last

element in the list so we'll say last

equal

len list

minus one now this may be confusing to

you so a quick sidebar to explain what's

going on

let's say we have a list containing 5

elements if we called len on that list

we should get 5 back because there are 5

elements

but remember that because the position

numbers start at 0 the last value is not

at position 5 but at 4. in nearly all

programming languages getting the

position of the last element in the list

is obtained by determining the length of

the list and deducting 1 which is what

we're doing

okay so we know what the first and last

positions are when we start the

algorithm

for our next line of code we're going to

create a while loop

a while loop takes a condition and keeps

executing the code inside the loop until

the condition evaluates to false

for our condition we're going to say to

keep executing this loop until the value

of first is less than or equal to the

value of last

so while first less than or equal to

last

well why you ask why is this our

condition well let's work through this

implementation and then a visualization

should help

inside the while loop we're going to

calculate the midpoint of our list since

that's the first step of binary search

midpoint equal

so we'll say first

plus last

and then we'll use the floor division

double slash here

divided by two

now the two forward slashes here are

what python calls a floor division

operator what it does is it rounds down

to the nearest whole number so if we

have an eight element array first is

zero last is 7 if we divided 0 plus 7

which is 7 by 2 we would get 3.5 now 3.5

is not a valid index position so we

round that down to 3 using the floor

division operator okay so now we have a

midpoint the next step of binary search

is to evaluate whether the value at this

midpoint is the same as the target we're

looking for so say if list

value at midpoint

equals the target

well if it is then we'll go ahead and

return the midpoint

so we'll say return

midpoint

the return statement terminates our

algorithm and over here we're done this

is our best case scenario

next we'll say else if

list at midpoint

or value at midpoint is less than the

target now here if the value is less the

value at midpoint is less than the

target then we don't care about any of

the values lower than the midpoint so we

redefine first

to point to the value after the midpoint

so we'll say midpoint plus 1.

now if the value at the midpoint is

greater than the target then we can

discard the values after the midpoint

and redefine last to point to the value

prior to the midpoint so we'll say else

last equal midpoint

minus 1.

let's visualize this we're going to

start with a list of nine integers

to make this easier to understand let's

specify these integers to be of the same

value as its index position so we have a

range of values from 0 to 8.

our target is the worst case scenario

we're looking for the position of the

value 8. at the start our algorithm sets

first to point to the index 0 and last

to point to the length of the list minus

1 which is 8.

next we hit our while loop the logic of

this loop is going to be executed as

long as the value of first is not

greater than the value of last or as

we've defined it we're going to keep

executing the contents of the loop as

long as first is less than or equal to

last

on the first pass this is true so we

enter the body of the loop

the midpoint is first plus last divided

by two and rounded down so we get a nice

even four the value at this position is

four now this is not equal to the target

so we move to the first else if

four is less than eight so now we

redefine first to point to midpoint plus

one which is five

first is still less than last so we run

through the body of the loop again the

midpoint is now six

six is less than eight so we move first

to point to seven

seven is still less than or equal to

eight so we go for another iteration of

the loop

the midpoint is seven oddly enough and

seven is still less than the target so

we move first to point to eight first is

equal to last now but our condition says

keep the loop going as long as first is

less than or equal to last so this is

our final time through the loop

the midpoint is now 8 which makes the

value at the midpoint equal to the

target and we finally exit our algorithm

and return the position of the target

now what if we had executed all this

code and never hit a case where midpoint

equal the target well that would mean

the list did not contain the target

value so after the while loop at the

bottom

will return

none
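before analyzing the run time, here is the whole function as dictated, assembled in one place (again with the parameter renamed from list to lst so it doesn't shadow python's built-in):

```python
def binary_search(lst, target):
    first = 0
    last = len(lst) - 1
    while first <= last:
        midpoint = (first + last) // 2  # floor division rounds down
        if lst[midpoint] == target:
            return midpoint
        elif lst[midpoint] < target:
            first = midpoint + 1  # discard everything at or below the midpoint
        else:
            last = midpoint - 1   # discard everything at or above the midpoint
    return None  # first passed last without a match: target isn't in the list
```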

we have several operations that make up

our binary search algorithm so let's

look at the runtime of each step we

start by assigning values to first and

last

the value assigned to last involves a

call to the len function to get the size

of the list but we already know this is

a constant time operation in python so

both of these operations run in constant

time

inside the loop we have another value

assignment and this is a simple division

operation so again the runtime is

constant

in the next line of code we're reading a

value from the list and comparing the

midpoint to the target both of these

again are constant time operations the

remainder of the code is just a series

of comparisons and value assignments and

we know that these are all constant time

operations as well

so if all we have are a series of

constant time operations why does this

algorithm have in the worst case a

logarithmic runtime

it's hard to evaluate by just looking at

the code but the while loop is what

causes the run time to grow

even though all we're doing is a

comparison operation by redefining first

and last

over here or rather in the last two

steps over here we're asking the

algorithm to run as many times as it

needs until first is greater

than last

now each time the loop does this the

size of the data set the size of the

list grows smaller by a certain factor

until it approaches a single element

which is what results in the logarithmic

runtime
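you can watch the list shrink by counting the loop's iterations; this instrumented variant is a sketch, not from the course:

```python
def binary_search_steps(lst, target):
    # same loop as binary search, but returns how many
    # iterations the loop performed instead of an index
    first, last = 0, len(lst) - 1
    steps = 0
    while first <= last:
        steps += 1
        midpoint = (first + last) // 2
        if lst[midpoint] == target:
            return steps
        elif lst[midpoint] < target:
            first = midpoint + 1
        else:
            last = midpoint - 1
    return steps

# worst case: searching for the last element
print(binary_search_steps(list(range(100)), 99))        # a handful of steps
print(binary_search_steps(list(range(100000)), 99999))  # still under twenty
```

a hundred thousand elements need fewer than twenty iterations, while linear search would need a hundred thousand.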

okay just like with linear search let's

test that our algorithm works so we'll

go back to linear search.hi

and we're going to copy paste

so command c to copy if you're on a mac

then go back to binary search and at the

bottom

oops

we're going to paste in that verify

function

okay we'll also go back and grab this

numbers

you know what let's go ahead and copy

all of these things so numbers and

the two verify cases we'll paste that in

as well

and the only thing we need to change

here is instead of calling linear search

this is going to call binary search

okay we'll hit command s to save the

file and then i'm going to drag up my

console and we'll run python

binary_search.py

and hit enter and you'll see like just

like before we get the same results back

now note that an extremely important

distinction needs to be made here

the numbers list that we've defined

for our test cases

right here

has to be sorted the basic logic of

binary search relies on the fact that if

the target is greater than the midpoint

then our potential values lie to the

right or vice versa since the values are

sorted in ascending order if the values

are unsorted our implementation of

binary search may return none even if

the value exists in the list

and just like that you've written code

to implement two search algorithms how

fun was that

hopefully this course has shown you that

it isn't a topic to be afraid of and

that algorithms like any other topic

with code can be broken down and

understood piece by piece

now we have a working implementation of

binary search but there's actually more

than one way to write it so in the next

video let's write a second version

i'm going to create a new file

as always file new file

and we'll name this recursive

underscore binary underscore search dot py

okay so we're going to add our new

implementation here so that we don't get

rid of that first implementation we

wrote let's call this new function

recursive binary search

unlike our previous implementation this

version is going to behave slightly

differently in that it won't return the

index value of the target element if it

exists

instead it will just return a true value

if it exists and a false if it doesn't

so recursive

underscore binary underscore search

and like before this is going to take a

list it accepts a list and a target to

look for in that list

we'll start the body of the function by

considering what happens if an empty

list is passed in in that case we would

return false so i would say if the

length of the list which is one way to

figure out if it's empty if it's equal

to zero

then we'll return false

now you might be thinking that in the

previous version of binary search we

didn't care if the list was empty well

we actually did but in a roundabout sort

of way so in the previous version of

binary search our function had a loop

and that loop condition was true when

first was less than or equal to last so

as long as it's less than or equal to

last we continue the loop

now if we have an empty list then first

is greater than last and the loop would

never execute and we return none at the

bottom

so this is the same logic we're

implementing here we're just doing it in

a slightly different way if the list is

not empty we'll implement an else clause

now here we'll calculate the midpoint

by dividing the length of the list by 2

and rounding down

again there's no use of first and last

here so we'll say length of list

and then using the floor division

operator we'll divide that by 2.

if the value at the midpoint which we'll

check by saying if list

using

subscript notation we'll say midpoint as

the index now if this value at the

midpoint is the same as the target

then we'll go ahead and return true

so far this is more or less the same

except for the value that we're

returning

let me actually get rid of all that

okay

all right so if this isn't the case

let's implement an else clause now here

we have two situations so first if the

value at the midpoint is less than the

target so if

value at midpoint

is less than the target

then we're going to do something new

we're going to call this function again

this recursive binary search function

that we're in the process of defining

we're going to call that again and we're

going to give it the portion of the list

that we want to focus on in the previous

version of binary search we moved the

first value to point to the value after

the midpoint

now here we're going to create a new

list using what is called a slice

operation and create a sub list that

starts at midpoint plus 1 and goes all

the way to the end

we're going to specify the same target

as a search target and when this

function call is done we'll return the

value so we'll say return the return is

important

then we'll call this function again

recursive

binary search

and this function takes a list and here

we're going to use that subscript

notation to perform a slice operation by

using two indexes a start and an end so

we'll say our new list that we're

passing in needs to start at midpoint

plus one

and then we'll go all the way to the end

and this is a

bit of python syntactic sugar so to speak if i

don't specify an end index python knows

to just go all the way to the end all

right so this is our new list that we're

working with

and we need a target we'll just pass it

through if you're confused bear with me

just like before we'll visualize this at

the end
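as a quick illustration of that slice notation (a sketch, not from the course):

```python
numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8]
midpoint = 4

print(numbers[midpoint + 1:])  # [5, 6, 7, 8]  from index 5 to the end
print(numbers[:midpoint])      # [0, 1, 2, 3]  from the start up to, not including, index 4
```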

okay we have another else case here

and this is a scenario where the value

at the midpoint is greater than the

target

which means we only care about the

values in the list from the start going

up to the midpoint now in this case as

well we're going to call the binary

search function again and specify a new

list to work with this time the list is

going to start at the beginning and then

go all the way up to the midpoint so it

looks the same we'll say return

recursive

binary search

we're going to pass in a list here so if

we just put a colon here

without a start index python knows to

start at the beginning and we're going

to go all the way up to the midpoint

the target here is the same

and this is our new binary search

function so let's see if this works
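assembled, the recursive version dictated above reads like this (again with lst in place of list):

```python
def recursive_binary_search(lst, target):
    if len(lst) == 0:
        return False  # base case: an empty list can't contain the target
    else:
        midpoint = len(lst) // 2
        if lst[midpoint] == target:
            return True
        else:
            if lst[midpoint] < target:
                # search the sub list to the right of the midpoint
                return recursive_binary_search(lst[midpoint + 1:], target)
            else:
                # search the sub list to the left of the midpoint
                return recursive_binary_search(lst[:midpoint], target)
```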

actually

yes

down here we'll make some space

and we'll define a verify function

we're not going to copy paste the

previous one

because we're not returning none or an

integer here so we'll verify the result

that we pass in and we'll say print

target found

and this is just going to say true or

false whether we found it

okay so like before we need a numbers

list

and we'll do something one two three

four all the way up to eight

okay and now let's test this out so

we'll call

our recursive

binary search function

and we'll pass in the numbers list

and the target here is 12.

we're going to verify this

verify the result make sure it works and

then we'll call it again this time

making sure that we give it a target

that is actually in the list so here

we'll say 6

and we'll verify this again

make sure you hit command s to save

and then in the console below we're

going to type out

python recursive_binary_search.py

run it and you'll see that we've

verified that search works

while we can't verify the index position

of the target value which is a

modification to how our algorithm works

we can check with a couple of valid

inputs that search works as intended

so why write a different search

algorithm here a different binary search

algorithm and what's the difference

between these two implementations anyway

the difference lies in these last four

lines of code that you see here

we did something unusual here now before

we get into this a small word of advice

this is a confusing topic and people get

confused by it all the time

don't worry that doesn't make you any

less of a programmer in fact i have

trouble with it often and always look it

up including when i made this video

this version of binary search is a

recursive binary search

a recursive function is one that calls

itself

this is hard for people to grasp

sometimes because there are few easy

analogies that make sense but you can

think of it sort of this way so let's

say you have this book that contains

answers to multiplication problems

you're working on a problem and you look

up an answer

in the book the answer for your problem

says add 10 to the answer for problem 52

okay so you look up problem 52 and there

it says add 12 to the answer for problem

85

well then you go and look up the answer

to problem 85 and finally instead of

redirecting you somewhere else that

answer says 10. so you take that 10 and

then you go back to problem 52 because

remember the answer for problem 52 was

to add 12 to the answer for problem 85

so now that you have

the answer to problem 85 you add

10 to 12 to get 22 the answer to problem 52.

then you go back to your original

problem where it said to add 10 to the

answer for problem 52 so you add 10 to

22 and you get 32 to end up with your

final answer so that's a weird way of

doing it but this is an example of

recursion

the solution to your first lookup in the

book was the value obtained by another

lookup in the same book which was

followed by yet another lookup in the

same book the book told you to check the

book until you arrived at some base

value
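the book analogy can be modeled directly in a few lines (a toy sketch, not from the course; the dictionary layout and entry names are made up for illustration):

```python
# each entry either holds a concrete answer (the base case) or
# says "add this amount to another problem's answer" (the recursive case)
book = {
    "your problem": ("add", 10, "problem 52"),
    "problem 52": ("add", 12, "problem 85"),
    "problem 85": ("answer", 10),
}

def look_up(problem):
    entry = book[problem]
    if entry[0] == "answer":
        return entry[1]           # base case: a concrete answer
    _, amount, other = entry      # recursive case: look up the other problem first
    return amount + look_up(other)

print(look_up("your problem"))  # 32
```

look_up calls itself until it hits the entry with a concrete answer, then the results flow back up the chain of callers.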

our function works in a similar manner

so let's visualize this with an example

of list

like before we have a nine element list

here with values zero through eight

the target we're searching for is the

value eight

we'll check if the list is empty by

calling len on it this list is not empty

so we go to the else clause next we

calculate the midpoint 9 divided by 2 is

4.5 rounded down is 4 so our first

midpoint value is 4.

we'll perform our first check is the

value at the midpoint equal to the

target

not true so we go to our else clause

we'll perform another check here is the

value at the midpoint less than the

target now in our case this is true

earlier when we evaluated this condition

we simply change the value of first

here we're going to call the recursive

binary search function again and give it

a new list to work with

the list starts at midpoint plus 1 so at

index position 5 all the way to the end

notice that this call to recursive

binary search inside of recursive binary

search includes a return statement

this is important and we'll come back to

that in a second

so now we're back at the top

of a new call to recursive binary search

with effectively a new list although

technically just a sub list of the first

one

the list here contains the numbers 5 6

7 and 8.

starting with the first check the list

is not empty so we move to the else

the midpoint in this case length of the

list 4 divided by 2 is 2.

is the value of the midpoint equal to

the target well the value at that

position is 7 so no in the else we

perform the first check is the value at

the midpoint less than the target indeed

it is so we call recursive binary search

again and provided a new list

this list starts at midpoint plus 1 and

goes to the end so in this case that's a

single element list

since this is a new call to recursive

binary search we start back up at the

top

is the list empty no the midpoint is

zero

is the value at the midpoint the same as

the target it is so now we can return

true

remember a minute ago i pointed out that

when we call recursive binary search

from inside the function itself it's

preceded by a return statement

that plays a pretty important role here

so back to our visualization

we start at the top and recall binary

search with a new list but because

that's got a return statement before it

what we're saying is hey when you run

binary search on this whatever value you

get back return it to the function that

called you

then at the second level we call binary

search again along with another return

statement like with the first call we're

instructing the function to return a

value back to the code that called it

at this level we find the target so the

function returns true back to the caller

but since this inner function was also

called by a function with instructions

to return it keeps returning that true

value back up until we reach the very

first function that called it going back

to our book of answers recursive binary

search instructs itself to keep working

on the problem until it has a concrete

answer

once it does it works its way backwards

giving the answer to every function that

called it until the original caller has

an answer
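the chain of returns described above is easier to see in code. here is a minimal sketch of the recursive binary search being walked through, written in Python (the course's language); names like recursive_binary_search follow the transcript, but the exact body is an illustration rather than the course's verbatim code.

```python
# A minimal sketch of the recursive binary search walked through above.
# Each call either hits a base case or returns the result of a smaller call,
# so the final True/False travels back up to the original caller.
def recursive_binary_search(values, target):
    if len(values) == 0:
        return False  # base case: an empty list means the target isn't here
    midpoint = len(values) // 2
    if values[midpoint] == target:
        return True  # base case: found the target
    if values[midpoint] < target:
        # the return here hands the inner call's answer back up the chain
        return recursive_binary_search(values[midpoint + 1:], target)
    else:
        return recursive_binary_search(values[:midpoint], target)
```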

now like i said at the beginning this is

pretty complicated so you should not be

concerned if this doesn't click honestly

this is not something you're going

to fully understand on your first

try i'm really not lying when i say i

have a pretty hard time with recursion

now before we move on i do want to point

out one thing

even though the implementation of

recursion is harder to understand

it is easier in this case to understand

how we arrive at the logarithmic run

time since we keep calling the function

with smaller lists let's take a break

here in the next video let's talk a bit

more about recursion and why it matters


in the last video we wrote a version of

binary search that uses a concept called

recursion

recursion might be a new concept for you

so let's formalize how we use it

a recursive function is one that calls

itself

in our example the recursive binary

search function called itself inside the

body of the function

when writing a recursive function you

always need a stopping condition

and typically we start the body of the

recursive function with this stopping

condition it's common to call this

stopping condition the base case

in our recursive binary search function

we had two stopping conditions

the first was what the function should

return if an empty list is passed in

it seems weird to evaluate an empty list

because you wouldn't expect to run

search on an empty list but if you look

at how our function works recursive

binary search keeps calling itself and

with each call to itself the size of the

list is cut in half

if we searched for a target that didn't

exist in the list then the function

would keep halving itself until it got

to an empty list

consider a three element list with

numbers one two three where we're

searching for a target of four

on the first pass the midpoint index is

1 where the value 2 is less than the

target so the function would call itself

with the single element list containing 3

on the next pass the midpoint is 0 and

the target is still greater so the

function would call itself this time

passing in an empty list because an

index of 0 plus 1 in a single element

list doesn't exist

when we have an empty list this means

that after searching through the list

the value wasn't found

this is why we define an empty list as a

stopping condition or a base case that

returns false if it's not an empty list

then we have an entirely different set

of instructions we want to execute

first we obtain the midpoint of the list

once we have the midpoint we can

introduce our next base case or stopping

condition

if the value at the midpoint is the same

as the target then we return true

with these two stopping conditions we've

covered all possible paths of logic

through the search algorithm you can

either find the value or you don't

once you have the base cases the rest of

the implementation of the recursive

function is to call the function on

smaller sub-lists until we hit one of

these base cases going back to our

visualization for a second we see that

recursive binary search calls itself a

first time which then calls itself again

for the initial list we started with the

function only calls itself a few times

before a stopping condition is reached

the number of times a recursive function

calls itself is called the recursion depth

now the reason i bring all of this up is

because if after you start learning

about algorithms you decide you want to

go off and do your own research you may

start to see a lot of algorithms

implemented using recursion

the way we implemented binary search the

first time is called an iterative

solution

now when you see the word iterative it

generally means the solution was

implemented using a loop structure of

some kind

a recursive solution on the other hand

is one that involves a set of stopping

conditions and a function that calls

itself computer scientists and computer

science textbooks particularly from back

in the day

favor what are called functional

languages and are often written in them

in functional languages we try to avoid

changing data that is given to a

function

in our first version of binary search we

created first and last variables using

the list and then modified first and

last as we needed to arrive at a

solution

functional languages don't like to do

this all this modification of variables

and prefer a solution using recursion

a language like python which is what

we're using is the opposite and doesn't

like recursion in fact python has a

maximum recursion depth after which our

function will halt execution python

prefers an iterative solution now i

mentioned all of this for two reasons

if you decide that you want to learn how

to implement the algorithm in a language

of your choice that's not python then

you might see a recursive solution as

the best implementation in that

particular language

i'm an ios developer for example and i

work with a language called swift

swift is different from python in that

it doesn't care about recursion depth

and does some neat tricks where it

doesn't even matter how many times your

function calls itself

so if you want to see this in swift code

then you need to know how recursion

works

well and now you have some idea now the

second reason i bring it up is actually

way more important and to find out on to

the next video

at the beginning of this series i

mentioned that there were two ways of

measuring the efficiency of an algorithm

the first was time complexity or how the

run time of an algorithm grows as n

grows larger

the second is space complexity

we took a pretty long route to build up

this example but now we're in a good

place to discuss space complexity

space complexity is a measure of how

much working storage or extra storage is

needed as a particular algorithm grows

we don't think about it much these days

but every single thing we do on a

computer takes up space in memory in the

early days of computing considering

memory usage was of paramount importance

because memory was limited and really

expensive

these days we're spoiled our devices are

rich with memory this is okay when we

write everyday code because most of us

aren't dealing with enormously large

data sets

when we write algorithms however we need

to think about this because we want to

design our algorithms to perform as

efficiently as they can as the size of the

data set n grows really large

like time complexity space complexity is

measured in the worst case scenario

using big-o notation

since you are familiar with the

different kinds of complexities let's

dive right into an example

in our iterative implementation of

binary search the first one we wrote

that uses a while loop let's look at

what happens to our memory usage as n

gets large

let's bring up that function

let's say we start off with a list of 10

elements now inspecting the code we see

that our solution relies heavily on

these two variables first and last

first points to the start of the list

and last to the end

when we eliminate a set of values we

don't actually create a sub list instead

we just redefine first

and last as you see here

to point to a different section of the

list

since the algorithm only considers the

values between first and last when

determining the midpoint

by redefining first and last as the

algorithm proceeds we can find a

solution using just the original list

this means that for any value of n

the space complexity of the iterative

version of binary search is constant or

that the iterative version of binary

search takes constant space

remember that we would write this as big

o of one

this might seem confusing because as n

grows we need more storage to account

for that larger list size

now this is true but that storage is not

what space complexity cares about

measuring

we care about what additional storage is

needed as the algorithm runs and tries

to find a solution

if we assume something simple say that

for a given size of a list represented

by a value n it takes n amount of space

to store it whatever that means

then for the iterative version of binary

search regardless of how large the list

is at the start middle and end of the

algorithm process the amount of storage

required does not get larger than n

and this is why we consider it to run in

constant space
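the iterative version being described, with its first and last variables, looks roughly like this; the exact code is a sketch rather than the course's verbatim listing.

```python
# A sketch of the iterative binary search discussed above.
# first and last are simply redefined on each pass over the same list,
# so no extra storage grows with n: constant, O(1), space.
def binary_search(values, target):
    first = 0
    last = len(values) - 1
    while first <= last:
        midpoint = (first + last) // 2
        if values[midpoint] == target:
            return midpoint
        elif values[midpoint] < target:
            first = midpoint + 1  # discard the left half by moving first
        else:
            last = midpoint - 1   # discard the right half by moving last
    return None
```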

now this is an entirely different story

with the recursive version however in

the recursive version of binary search

we don't make use of variables to keep

track of which portion of the list we're

working with

instead we create new lists every time

with a subset of values or sub-lists

with every recursive function call

let's assume we have a list of size n

and in the worst case scenario the

target element is the last in the list

calling the recursive implementation of

binary search on this list and target

would lead to a scenario like this

the function would call itself and

create a new list that goes from the

midpoint to the end of the list

since we're discarding half the values

the size of the sub list is n by 2.

this function will keep calling itself

creating a new sub list that's half the

size of the current one until it arrives

at a single element list and a stopping

condition
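the reason those calls cost memory is that slicing a Python list copies the selected elements into a brand new list. a quick illustration, with made-up values:

```python
# In Python, slicing a list copies the selected elements into a new list,
# which is where the recursive version's extra space comes from.
original = [1, 2, 3, 4, 5, 6, 7, 8]
right_half = original[4:]  # a brand new 4-element list

print(right_half)              # [5, 6, 7, 8]
print(right_half is original)  # False: a separate object in memory

# Mutating the copy leaves the original untouched
right_half[0] = 99
print(original[4])  # still 5
```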

this pattern that you see here where the

size of the sublist is reduced by a

factor on each execution of the

algorithmic logic well we've seen that

pattern before do you remember where

this is exactly how binary search works

it discards half the values every time

until it finds a solution now we know

that because of this pattern the running

time of binary search is logarithmic

in fact the space complexity of the

recursive version of binary search is

the same

if we start out with a memory allocation

of size n that matches the list

on each function call of recursive

binary search we need to allocate

additional memory of size n by 2 n by 4

and so on until we have a sub list that

is either empty or contains a single

value because of this we say that the

recursive version of the binary search

algorithm takes logarithmic space with

a big o of log n

now there's an important caveat here

this totally depends on the language

remember how i said that a programming

language like swift can do some tricks

to where recursion depth doesn't matter

the same concept applies here if you

care to read more about this concept

it's called tail call

optimization because if you think of a

function as having a head and a tail

if the recursive function call is the

last line of code in the function as it

is in our case

we call this tail recursion since it's

the last part of the function that calls

itself

now the trick that swift does to reduce

the amount of space and therefore

computing overhead to keep track of these

recursive calls is called tail call

optimization or tail call elimination

it's one of those things that you'll see

thrown around a lot in algorithm

discussions but may not always be

relevant to you

now what if any of this is relevant to

us well python does not implement tail

call optimization so the recursive

version of binary search takes

logarithmic space
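python's recursion cap mentioned earlier can be observed directly. this snippet is a small demonstration, with a hypothetical count_down function standing in for any recursive call chain:

```python
import sys

# Python caps recursion depth; a call chain that exceeds the cap
# raises RecursionError instead of recursing forever.
print(sys.getrecursionlimit())  # commonly 1000 by default

def count_down(n):
    if n == 0:
        return "done"  # stopping condition
    return count_down(n - 1)

# Well under the limit, this works fine:
count_down(500)

# Far past the limit, Python stops us:
try:
    count_down(10**6)
except RecursionError:
    print("hit python's maximum recursion depth")
```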

if we had to choose between the two

implementations given that time

complexity or run time of both versions

the iterative and the recursive version

are the same we should definitely go

with the iterative implementation in

python since it runs in constant space

okay that was a lot but with

all of this we've now established two

important ways to distinguish between

algorithms that handle the same task and

determine which one we should use

we've arrived at what i think is a good

spot to take a long break and let all of

these new concepts sink in but before

you go off to the next course let's take

a few minutes to recap everything we've

learned so far

while we did implement two algorithms in

this course in actual code much of what

we learned here was conceptual and will

serve as building blocks for everything

we're going to learn in the future so

let's list all of it out

the first thing we learned about and

arguably the most important was

algorithmic thinking

algorithmic thinking is an approach to

problem solving that involves breaking a

problem down into a clearly defined

input and output along with a distinct

set of steps that solves the problem by

going from input to output

algorithmic thinking is not something

you develop overnight by taking one

course so don't worry if you're thinking

i still don't truly know how to apply

what i learned here

algorithmic thinking sinks in after you

go through several examples in a similar

fashion to what we did today

it also helps to apply these concepts in

the context of a real example which is

another thing we will strive to do

moving forward

regardless it is important to keep in

mind that the main goal here is not to

learn how to implement a specific data

structure or algorithm off the top of

your head i'll be honest i had to look

up a couple code snippets for a few of

the algorithms myself in writing this

course

but in going through this you now know

that binary search exists and can apply

to a problem where you need a faster

search algorithm

unlike most courses where you can

immediately apply what you have learned

to build something cool learning about

algorithms and data structures will pay

off more in the long run

the second thing we learned about is how

to define and implement algorithms we've

gone over these guidelines several times

i won't bore you here again at the end

but i will remind you that if you're

often confused about how to effectively

break down a problem in code to

something more manageable following

those algorithm guidelines is a good

place to start

next we learned about big o and

measuring the time complexity of

algorithms this is a mildly complicated

topic but once you've abstracted the

math away it isn't as hazy a topic as it

seems

now don't get me wrong the math is

pretty important but only for those

designing and analyzing algorithms

our goal is more about how to understand

and evaluate algorithms

we learned about common run times like

constant linear logarithmic and

quadratic runtimes these are all fairly

new concepts but in time you will

immediately be able to distinguish the

runtime of an algorithm based on the

code you write and have an understanding

of where it sits on an efficiency scale

you will also in due time internalize

runtimes of popular algorithms like the

fact that binary search runs in

logarithmic time and constant space

and be able to recommend alternative

algorithms for a given problem

all in all over time the number of tools

in your tool belt will increase

next we learned about two important

search algorithms and the situations in

which we select one over the other

we also implemented these algorithms in

code so that you got a chance to see

them work

we did this in python but if you are

more familiar with a different language

and haven't gotten the chance to check

out the code snippets we've provided you

should try your hand at implementing it

yourself it's a really good exercise to

go through

finally we learned about an important

concept and a way of writing algorithmic

code through recursion recursion is a

tricky thing and depending on the

language you write code with you may run

into it more than others

it is also good to be aware of because

as we saw in our implementation of

binary search

whether recursion was used or not

affected the amount of space we used

don't worry if you don't fully

understand how to write recursive

functions i don't truly know either the

good part is you can always look these

things up and understand how other

people do it

anytime you encounter recursion in our

courses moving forward you'll get a full

explanation of how and why the function

is doing what it's doing

and that brings us to the end of this

course i'll stress again that the goal

of this course was to get you prepared

for learning about more specific

algorithms by introducing you to some of

the tools and concepts you will need

moving forward

so if you're sitting there thinking i

still don't know how to write many

algorithms or how to use algorithmic

thinking that's okay we'll get there

just stick with it

as always have fun and happy coding


hi my name is pasan i'm an instructor

at treehouse and welcome to the

introduction to data structures course

in this course we're going to answer one

fundamental question why do we need more

data structures than a programming

language provides

before we answer that question some

housekeeping if you will

in this course we're going to rely on

concepts we learned in the introduction

to algorithms course

namely big-o notation space and time

complexity and recursion

if you're unfamiliar with those concepts

or just need a refresher check out the

prerequisites courses listed

in addition this course does assume that

you have some programming experience

we're going to use data structures that

come built into nearly all programming

languages as our point of reference

while we will explain the basics of how

these structures work we won't be going

over how to use them in practice

if you're looking to learn how to

program before digging into this content

check the notes section of this video

for helpful links

if you're good to go then awesome let's

start with an overview of this course

the first thing we're going to do is to

explore a data structure we are somewhat

already familiar with arrays

if you've written code before there's a

high chance you have used an array

in this course we're going to spend some

time understanding how arrays work what

are the common operations on an array

and what are the run times associated

with those operations

once we've done that we're going to

build a data type of our own called a

linked list

in doing so we're going to learn that

there's more than one way to store data

in fact there's way more than just one

way

we're also going to explore what

motivates us to build specific kinds of

structures and look at the pros and cons

of these structures

we'll do that by exploring four common

operations accessing a value searching

for a value inserting a value and

deleting a value

after that we're actually going to

circle back to algorithms and implement

a new one a sorting algorithm

in the introduction to algorithms

course we implemented a binary search

algorithm a precondition to binary

search was that the list needed to be

sorted

we're going to try our hand at sorting a

list and open the door to an entirely

new category of algorithms

we're going to implement our sorting

algorithm on two different data

structures and explore how the

implementation of one algorithm can

differ based on the data structure being

used

we'll also look at how the choice of

data structure potentially influences

the run time of the algorithm

in learning about sorting we're also

going to encounter another general

concept of algorithmic thinking called

divide and conquer

along with recursion divide and conquer

will be a fundamental tool that we will

use to solve complex problems all in due

time in the next video let's talk about

arrays

a common data structure built into

nearly every programming language is the

array

arrays are a fundamental data structure

and can be used to represent a

collection of values but it is much more

than that arrays are also used as

building blocks to create even more

custom data types and structures

in fact in most programming languages

text is represented using the string

type and under the hood strings are just

a bunch of characters stored in a

particular order in an array

before we go further and dig into arrays

what exactly is a data structure

a data structure is a way of storing

data when programming it's not just a

collection of values and the format

they're stored in but the relationship

between the values in the collection as

well as the operations applied on the

data stored in the structure

an array is one of very many data

structures in general an array is a data

structure that stores a collection of

values where each value is referenced

using an index or a key

a common analogy for thinking about

arrays is as a set of train cars

each car has a number and these cars are

ordered sequentially

inside each car the array or the train

in this analogy stores some data

while this is the general representation

of an array it can differ slightly from

one language to another but for the most

part all these fundamentals remain the

same in a language like swift or java

arrays are homogeneous containers which

means they can only contain values of

the same type

if you use an array to store integers in

java it can only store integers

in other languages arrays are

heterogeneous structures that can store

any kind of value in python for example

you can mix numbers and text with no

issues
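for example, mixing types in a Python list works without complaint; this tiny snippet is just an illustration of that heterogeneity:

```python
# Python lists are heterogeneous: values of different types can be mixed.
mixed = [42, "hello", 3.14, True]
print([type(v).__name__ for v in mixed])  # ['int', 'str', 'float', 'bool']
```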

now regardless of this nuance the

fundamental concept of an array is the

index

this index value is used for every

operation on the array from accessing

values to inserting updating and

deleting

in python the language we're going to be

using for this course it's a tiny bit

confusing

the type that we generally refer to as

an array in most languages is best

represented by the list type in python

python does have a type called array as

well but it's something different so

we're not going to use it

while python calls it a list when we use

a list in this course we'll be talking

about concepts that apply to arrays as

well in other languages so definitely

don't skip any of this there's one more

thing

in computer science a list is actually a

different data structure than an array

and in fact we're going to build a list

later on in this course

generally though this structure is

called a linked list as opposed to just

list so hopefully the terminology isn't

too confusing

to properly understand how arrays work

let's take a peek at how arrays are

stored under the hood

an array is a contiguous data structure

this means that the array is stored in

blocks of memory that are right beside

each other with no gaps

the advantage of doing this is that

retrieving values is very easy

in a non-contiguous data structure we're

going to build one soon the structure

stores a value as well as a reference to

where the next value is

to retrieve that next value the language

has to follow that reference also called

a pointer to the next block of memory

this adds some overhead which as you

will see increases the runtime of common

operations a second ago i mentioned that

depending on the language arrays can

either be homogeneous containing the

same type of value or heterogeneous

where any kind of value can be mixed

this choice also affects the memory

layout of the array

for example in a language like c swift

or java where arrays are homogeneous

when an array is created since the kind

of value is known to the language

compiler and you can think of the

compiler as the brains behind the

language

it can choose a contiguous block of

memory that fits the array size and

values created

if the values were integers assuming an

integer took up space represented by one

of these blocks then for a five item

array the compiler can allocate five

blocks of equally sized memory

in python however this is not the case

we can put any value in a python list

there's no restriction

the way this works is a combination of

contiguous memory and the pointers or

references i mentioned earlier

when we create a list in python there is

no information about what will go into

that array which makes it hard to

allocate contiguous memory of the same

size

there are several advantages to having

contiguous memory

since the values are stored beside each

other accessing the values happens in

almost constant time so this is a

characteristic we want to preserve

the way python gets around this is by

allocating contiguous memory and storing

in it not the value we want to store but

a reference or a pointer to the value

that's stored somewhere else in memory

by doing this it can allocate equally

sized contiguous memory since regardless

of the value size the size of the

pointer to that value is always going to

be equal this incurs an additional cost

in that when a value is accessed we need

to follow the pointer to the block of

memory where the value is actually

stored but python has ways of dealing

with these costs that are outside the

scope of this course

now that we know how an array stores its

values let's look at common operations

that we execute on an array

regardless of the kind of data structure

you work with all data structures are

expected to carry out four kinds of

operations at minimum

we need to be able to access and read

values stored in the structure we need

to be able to search for an arbitrary

value

we also need to be able to insert a

value at any point into the structure

and finally we need to be able to delete

values from the structure

let's look at how these operations are

implemented on the array structure in

some detail starting with access

elements in an array are identified

using a value known as an index and we

use this index to access and read the

value

most programming languages follow a

zero-based numbering system when it

comes to arrays and all this means is

that the first index value is equal to

zero not one

generally speaking when an array is

declared a base amount of contiguous

memory is allocated as the array storage

computers refer to memory through the

use of an address but instead of keeping

a reference to all the memory allocated

for an array the array only has to store

the address of the first location

because the memory is contiguous using

the base address the array can calculate

the address of any value by using the

index position of that value as an

offset

if you want to be more specific think of

it this way

let's say we want to create an array of

integers and then each integer takes up

a certain amount of space in memory that

we'll call m

let's also assume that we know how many

elements we're going to create so the

size of the array is some number of

elements we'll call n

the total amount of space that we need

to allocate is n times the space per

item m

if the array keeps track of the location

in memory where the first value is held

so let's label that m0 then it has all

the information it needs to find any

other element in the list

when accessing a value in an array we

use the index

to get the first element in the list we

use the zeroth index to get the second

we use the index value 1 and so on

given that the array knows how much

storage is needed for each element it

can get the address of any element by

starting off with the address for the

first element and adding to that the

index value times the amount of storage

per element

for example to access the second value

we can start with m0 and to that add m

times the index value 1 giving us m1 as

the location in memory for the second

address

this is a very simplified model but

that's more or less how it works

this is only possible because we know

that array memory is contiguous with no

gaps
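the base-plus-offset arithmetic can be made concrete with hypothetical numbers; the addresses and sizes below are made up purely to show the formula:

```python
# Illustrating base-plus-offset address arithmetic with made-up numbers:
# the base address and element size here are hypothetical.
base_address = 1000   # m0: where the first element lives
element_size = 4      # m: bytes per element, e.g. a 4-byte integer

def address_of(index):
    # address = base + index * space-per-element
    return base_address + index * element_size

print(address_of(0))  # 1000: the first element
print(address_of(1))  # 1004: the second element
print(address_of(4))  # 1016: the fifth element
```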

let's switch over to some code

as i mentioned earlier we're going to be

using python in this course

if you don't know how to code or you're

interested in this content but know a

language other than python check the

notes section of this video for more

information while the code will be in

python the concepts are universal and

more importantly simple enough that you

should have no issue following along in

your favorite programming language

and to get started click on the launch

workspaces button on the video page that

you're watching right now

this should spin up an instance of a

treehouse workspace an in-browser coding

environment right now your workspace

should be empty and that's expected so

let's add a new file in here i'm going

to go to file new file

and we'll call this arrays

dot py

creating a list in python is quite

simple so we'll call this new underscore

list

we use a set of square brackets around a

set of values to create a list so one

and we comma separate them so space two

and space three this allocates a base

amount of memory for the array to use or

when i say array know that in python i

mean a list

since this is python the values aren't

stored in the array itself

instead the values 1 2 and 3 are stored

elsewhere in memory and the array stores

references to each of those objects

to access a value we use a subscript

along with an index value so to get the

first value we use the index 0 and if we

were to assign this to another variable

we would say result

equal new list

we write out new list since this is the

array that we're accessing the value

from and then a subscript notation which

is a square bracket

and then the index value

as we saw since the array has a

reference to the base location in memory

the position of any element can be

determined pretty easily

we don't have to iterate over the entire

list

all we need to do is a simple

calculation of an offset from the base

memory since we're guaranteed that the

memory is contiguous

for this reason access is a constant

time operation on an array or a python

list

this is also why an array crashes if you

try to access a value using an index

that is out of bounds of what the array

stores

if you've used an array before you've

undoubtedly run into an error or a crash

where you try to access a value using an

index that was larger than the number of

elements in the array since the array

calculates the memory address on the fly

when you access a value with an out of

bounds index as it's called the memory

address returned is not one that's part

of the array structure and therefore

cannot be read by the array now in

python this is represented by an index

error and we can make this happen by

using an index we know our array won't

contain

now i'm writing out my code here inside

of a text editor which obviously doesn't

run the code so let's drag up this

console area here

and i'm going to write python

to bring up the python interpreter

and in here we can do the same thing so

i can say new

list equal one

comma two comma three and now this is an

interpreter so it's actually going to

evaluate our code

all right so now we have a new list if i

type out new list it gets printed out

into the console

okay i can also do new list square

bracket 0 and you'll see that i get the

value 1 which is the value stored at the

zeroth index

now to highlight that index error we can

do new list

and inside the square brackets we can

provide an index that we know our array

doesn't contain so here i'll say index

10

and if i hit enter you'll see it say

index error list index out of range
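the steps from the workspace above, collected into one runnable snippet:

```python
# Creating a list, reading by index, and triggering an IndexError.
new_list = [1, 2, 3]

# constant-time access using subscript notation and an index
result = new_list[0]
print(result)  # 1

# an out-of-bounds index raises an IndexError
try:
    new_list[10]
except IndexError as e:
    print(e)  # list index out of range
```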

and those are the basics of how we

create and read values from an array in

the next video let's take a look at

searching

in the last video we learned what

happens under the hood when we create an

array and read a value using an index

in this video we're going to look at how

the remaining data structure operations

work on arrays

if you took the introduction to

algorithms course we spent time learning

about two search algorithms linear

search and binary search

while arrays are really fast at

accessing values they're pretty bad at

searching

taking an array as is the best we can do

is use linear search for a worst case

linear runtime linear search works by

accessing and reading each value in the

list until the element in question is

found

if the element we're looking for is at

the end of the list then every single

element in the list will have been

accessed and compared

even though accessing and comparing are

constant time operations

this for every element results in an

overall linear time

let's look at how search works in code

in python we can search for an item in

an array in one of two ways

we can use the in operator to check

whether a list contains an item so i can

say if

one in new underscore

list

then print

true

the in operator actually calls a

__contains__ method that is defined on the

list type which runs a linear search

operation

in addition to this we can also use a

for loop to iterate over the list

manually and perform a comparison

operation

so i can say

for

n in new list

if n equals one then print

true

and then after that break out of the

loop

this is more or less the implementation

of linear search
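the for loop above can be wrapped up into a function — a minimal sketch (the name linear_search and the return-the-index convention are my own choices for illustration, not from the video):

```python
def linear_search(lst, target):
    """Return the index of target in lst, or None if it's absent.

    Worst case we compare against every element: O(n).
    """
    for index, value in enumerate(lst):
        if value == target:
            return index  # found it, stop early
    return None  # checked every element, no match


print(linear_search([10, 20, 30], 30))  # 2
```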

if the array were sorted however we

could use binary search but because sort

operations incur a cost of their own

languages usually stay away from sorting

the list and running binary search since

for smaller arrays linear search on its

own may be faster

now again remember that

since this is an editor this is just a

text file none of these lines of code

are evaluated so you can try that out in

here so we'll copy that we can come down

here and say python and hit enter and

then when it starts up we can paste in

our list

and now we can try what we just did so

if one in new list

print

true

and there you go it prints true now

because we've already learned about

linear and binary search in a previous

course there's nothing new going on here

what's more interesting to look at in my

opinion is inserting and deleting values

in an array let's start with inserting

in general most array implementations

support three types of insert operations

the first is a true insert using an

index value where we can insert an

element anywhere in the list this

operation has a linear runtime imagine

you wanted to insert an item at the

start of the list when we insert into

the first position what happens to the

item that is currently in that spot

well it has to move to the next spot at

index value one what happens to the

second item at index position one

that one moves to the next spot at index

position two

this keeps happening until all elements

have been shifted forward one index

position

so in the worst case scenario inserting

at the zeroth position of an array every

single item in the array has to be

shifted forward and we know that any

operation that involves iterating

through every single value means a

linear runtime
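you can see the shifting behavior described above with python's built-in list.insert — a small illustration, not code from the video:

```python
nums = [10, 20, 30]

# inserting at index 0 forces every existing element one slot to the
# right, which is why a true insert is O(n) in the worst case
nums.insert(0, 5)
print(nums)  # [5, 10, 20, 30] — 10, 20 and 30 all shifted forward
```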

now the second way we can insert an item

into an array is by appending appending

although technically an insert operation

in that it inserts an item into an

existing array doesn't incur the same

runtime cost because appends simply add

the item to the end of the list

we can simplify and say that this is

constant time this is a constant time

operation but it depends on the language

implementation of array

to highlight why that matters let's

consider how lists in python work in

python when we create a list the list

doesn't know anything about the size of

the list and how many elements we're

going to store

creating a new empty list like so

numbers equals two empty brackets

so this creates a list and allocates a

space of size n plus one

since n here is zero there are no

elements in this array in this list

space is allocated for a one element

list to start off

because the space allocated for the list

and the space used by the list are not

the same

what do you think happens when we ask

python for the length of this list so i

can say len numbers

we correctly get 0 back

this means that the list doesn't use the

memory allocation as an indicator of its

size because as i mentioned it has

allocated space for a one element list

but it returns zero so it determines it

in other ways

okay so numbers this list currently has

space for one element

let's use the append method defined on

the type to insert a number at the end

of the list so you can say numbers dot

append and i'll pass in 2.

now the memory allocation and the size

of the list are the same since the list

contains one element

now what if i were to do something like

this numbers.append

there needs to be a dot

and i'll add another value 200.

now since the list only has an

allocation for one item at this point

before it can add the new element to the

list it needs to increase the memory

allocation and thereby the size of the

list it does this by calling a list

resize operation list resizing is quite

interesting because it shows the

ingenuity in solving problems like this

python doesn't resize the list to

accommodate just the element we want to

add

instead in this case it would allocate

four blocks of memory to increase the

size to a total of four contiguous

blocks of memory

it does this so that it doesn't have to

resize the list every single time we add

an element but at very specific points

the growth pattern of the list type in

python is 0 4 8 16 25 35 46 and so on

this means that as the list size

approaches these specific values

resize is called again for example

when the size of the list is four

and we append four more

values up to a size of eight

each of those append operations do not

increase the amount of space taken

at specific points however when resizing

is triggered space required increases as

memory allocation increases

this might signify that the append

method has a non-constant space

complexity but it turns out that because

some operations don't increase space and

others do

when you average all of them out append

operations take constant space

we say that it has an amortized constant

space complexity this also happens with

insert operations

if we had a four element array we would

have four elements and a memory

allocation of four

an insert operation at that point

doesn't matter where it happens on the

list but at that point it would trigger

a resize

inserting is still more expensive though

because after the resize every element

needs to be shifted over one
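you can actually watch the resizing happen by checking the list's reported memory with sys.getsizeof as you append — a small experiment of my own, not from the video (the exact byte values and growth points vary by python version, so it only records when the allocation grows):

```python
import sys

numbers = []
previous_size = sys.getsizeof(numbers)
resize_points = []  # lengths at which the allocation actually grew

for n in range(16):
    numbers.append(n)
    current_size = sys.getsizeof(numbers)
    if current_size != previous_size:
        # a resize was triggered: python over-allocated room
        # for several future appends
        resize_points.append(len(numbers))
        previous_size = current_size

# most appends reuse the over-allocated space; only a few trigger a resize
print(resize_points)
```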

the last insert operation that is

supported in most languages is the

ability to add one list to another

in python this is called an extend and

looks like this

so i'll say numbers now let me

actually clear out the console

actually you know what let's exit python

we'll clear this out so we're back at

the top and we'll start again

so i'll say numbers

and we'll set it to an empty list and

now we can say numbers dot extend

and as an argument we're going to pass

in a new list entirely so here we'll say

4 comma 5 comma 6

and then once i hit enter if i were to

print out numbers you'll see that it now

contains the values 4 5 and 6.

so extend takes another list to add

extend effectively makes a series of

append calls on each of the elements in

the new list until all of them have been

appended to the original list this

operation has a run time of big o of k

where k represents the number of

elements in the list that we're adding

to our existing list

the last type of operation we need to

consider are delete operations deletes

are similar to inserts in that when a

delete operation occurs the list needs

to maintain correct index values so

where an insert shifts every element to

the right a delete operation shifts

every element to the left

just like an insert as well if we delete

the first element in the list every

single element in the list needs to be

shifted to the left
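the mirror image of the insert example — deleting the first element shifts everything left (again just built-in list behavior for illustration, not code from the video):

```python
nums = [5, 10, 20, 30]

# deleting index 0 shifts every remaining element one slot to the
# left to keep index values correct: O(n) in the worst case
del nums[0]
print(nums)  # [10, 20, 30]
```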

delete operations have an upper bound of

big o of n also known as a linear

runtime now that we've seen how common

operations work on a data structure that

we're quite familiar with let's switch

tracks and build our own data structure

[Music]

over the next few videos we're going to

build a data structure that you may have

worked with before a linked list

before we get into what a linked list is

let's talk about why we build data

structures instead of just using the

ones that come built into our languages

each data structure solves a particular

problem

we just went over the basics of the

array data structure and looked at the

cost of common operations that we carry

out on arrays

we found that arrays were particularly

good at accessing reading values happens

in constant time but arrays are pretty

bad at inserting and deleting both of

which run in linear time

linked lists on the other hand are

somewhat better at this although there

are some caveats and if we're trying to

solve a problem that involves far more

inserts and deletes than accessing a

linked list can be a better tool than an

array

so what is a linked list

a linked list is a linear data structure

where each element in the list is

contained in a separate object called a

node a node models two pieces of

information an individual item of the

data we want to store and a reference to

the next node in the list

the first node in the linked list is

called the head of the list while the

last node is called the tail

the head and the tail nodes are special

the list only maintains a reference to

the head although in some

implementations it keeps a reference to

the tail as well

this aspect of linked lists is very

important and as you'll see most of the

operations on the list need to be

implemented quite differently compared

to an array

the opposite of the head the tail

denotes the end of the list

every node other than the tail points to

the next node in the list but tail

doesn't point to anything this is

basically how we know it's the end of

the list nodes are what are called

self-referential objects the definition

of a node includes a link to another

node and self-referential here means the

definition of node includes the node

itself linked lists often come in two

forms a singly linked list where each

node stores a reference to the next node

in the list or a doubly linked list

where each node stores a reference to

both the node before and after if an

array is a train with a bunch of cars in

order then a linked list is like a

treasure hunt

when you start the hunt you have a piece

of paper with the location of the first

treasure you go to that location and you

find an item along with a location to

the next item of treasure

when you finally find an item that

doesn't also include a location you know

that the hunt has ended

now that we have a high level view of

what a linked list is let's jump into

code and build one together we'll focus

on building a singly linked list for

this course there are advantages to

having a doubly linked list but we don't

want to get ahead of ourselves

let's start here by creating a new file

we're going to put all our code for our

linked list so we'll call this linked

underscore list

dot pi and first we're going to create a

class to represent a node

say class

node

now node is a simple object in that it

won't model much so first we'll add a

data variable

it's an instance variable here called

data and we'll assign the value none

initially

and then we'll add one more we'll call

this next node and to this we'll assign

none as well so we've created two

instance variables data to hold on to

the data that we're storing and next

node to point to the next node in the

list

now we need to add a constructor to make

this class easy to create so we'll add

an init

method here that takes self and some

data to start off

and all we're going to do is assign

data to that instance variable we

created so that's all we need to model

node

before we do anything else though let's

document this so right after the class

definition let's create a docstring so

three quotes

next line and we'll say an object

for storing

a single

node of a linked list

and then on the next line we'll say

models two attributes

data

and

the link to the next

node in the list

and then we'll close this doc string off

with three more quotation marks okay

using the node class is fairly

straightforward so we can create a new

instance of node with some data to store

now the way we're going to do this is

we're going to bring up the console

and we're going to type out like we've

been typing out before python followed

by the name of the script that we wrote

which is linked list linked underscore

list.pi but before we do that we're

going to pass an argument to the python

command we're going to say dash i so

python dash i and then the name of the script

linked underscore list dot pi so what

this does is this is going to run the

python repl

the read evaluate print loop in the

console but it's going to load the

contents of our file into that so that

we can use it

so i'll hit enter and we have a new

instance going and now we can use the

node in here so we can say n1

equal node

and since we defined that constructor we

can pass it some data so we'll say 10

here

now if we try to inspect this object the

representation returned isn't very

useful

which will make things really hard to

debug as our code grows so for example

if i type out n1 you'll see that

we have a valid instance here but it's

not very helpful the way it's printed

out

so we can customize this by adding a

representation of the object using the

repr function now in the terminal

still we'll type out exit

like that hit enter to exit the console

and then down here

let's add in some room

okay and here we'll say def

double underscore

repr another set of double

underscores

and then this function takes the

argument self

and in here we can provide a string

representation of what we want printed

to the console when we inspect that

object inside of it inside of a console

so here we'll say return

again

this is a string representation so

inside quotes we'll say

node so this represents a node instance

and the data it contains here we'll say

percent s

which is a python way of substituting

something into a string string

interpolation and outside of the string

we can say percent again

and here we're saying we want to replace

this percent s with

self.data okay
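assembled in one place, the node class as dictated so far looks roughly like this — a reconstruction from the narration, not a verbatim copy of the instructor's file (the exact repr string is a guess consistent with the console output described):

```python
class Node:
    """
    An object for storing a single node of a linked list.
    Models two attributes - data and the link to the next node in the list
    """
    data = None
    next_node = None

    def __init__(self, data):
        self.data = data

    def __repr__(self):
        # string substituted into the console when the object is inspected
        return "<Node data: %s>" % self.data


n1 = Node(10)
n2 = Node(20)
n1.next_node = n2  # n1 now points to n2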

let's hit save and before we move on

let's verify that this works so i'm

going to come in here

type clear to get rid of everything

and then we'll do what we did again and

you can just hit the up arrow a couple

times to get that command

all right so hit enter and now just so

you know every time you run this you

start off you know from scratch so n1

that we created earlier not there

anymore so let's go ahead and create it

n1 equal node

10

and we can type n1 again and hit enter

and you have a much better

representation now so we can see that we

have a node and it contains the data 10.

we can also create another one n2 equal

node that contains the data 20 and now

we can say n1.next n1.nextnode

equal n2 so n1 now points to n2 and if

we say n1.nextnode

you'll see that it points to that node

the node containing 20.

nodes are the building blocks for a list

and now that we have a node object we

can use it to create a singly linked

list so again i'm going to exit out of

this

and then go back to the text editor

and here we'll create a new class so

class

linked

list

the linked list class is going to define

a head and this attribute models the

only node that the list is going to have

a reference to so here we'll say head

and we'll assign none initially and then

like we did earlier let's create a

constructor

so double underscore init double

underscore this takes self

and then inside like before we'll say

self dot head

equal none this is the same

as doing this so we can actually get rid

of that and just use the constructor

okay so again this head attribute models

the only node that the list will have a

reference to since every node points to

the next node to find a particular node

we can go from one node to the next in a

process called list traversal

so in the class constructor here we've

set the default value of head to none so

that new lists created are always empty

again you'll notice here that i didn't

explicitly declare the head attribute at

the top of the class definition

and don't worry that's not an oversight

the self.head in the initializer means

that it's still created okay so that's

all there is to modeling a linked list

now we can add methods that make it

easier to use this data structure

first a really simple docstring to

provide some information

so here we'll create a docstring

three quotation marks

and then we'll say singly linked list

and then close it off

a common operation carried out on data

structures is checking whether it

contains any data or whether it's empty

at the moment to check if a list is

empty we would need to query these

instance variables head and so on every

time

ideally we would like to not expose the

inner workings of our data structure to

code that uses it

instead let's make this operation more

explicit by defining a method so we'll

say def is empty

and this method takes self as an

argument and here we'll say return

self.head double equal none

all we're doing here is checking to see

if head is none

if it is this condition evaluates to

true which indicates the list is empty

now before we end this video let's add

one more convenience method to calculate

the size of our list the name

convenience method indicates that what

this method is doing is not providing

any additional functionality that our

data structure can't handle right now

but instead making existing

functionality easier to use

we could calculate the size of our

linked list by traversing it every time

using a loop until we hit a tail node

but doing that every time is a hassle

okay so we'll call this method

size and as always it takes self

unlike calling len on a python list not

to be confused with a linked list which

is a constant time operation our size

operation is going to run in linear time

the only way we can count how many items

we have is to visit each node and call

next until we hit the tail node

so we'll start by getting a reference to

the head we'll say current

equal self.head let's also define a

local variable named count with an

initial value of 0 that will increment

every time we visit a node once we hit

the tail count will reflect the size of

that list

next we'll define a while loop that will

keep going until there are no more nodes

so say while current

while current is the same as writing out

while current does not equal none but

it's more succinct so we'll go with this

former

if the latter is more precise for you

you can go with that

now inside this loop we'll increment the

count value so count plus equal one

plus equal if you haven't encountered it

before is the same as writing count

equal count plus one so if count is zero

initially so it's zero plus one is one

and then we'll assign that back to count

okay so count plus equal one

next we're going to assign the next node

in the list to current so current equal

current dot next

node

this way once we get to the tail and

call next node current will equal none

and the while loop terminates so the end

we can return

count

as you can see we need to visit every

node to determine the size meaning our

algorithm runs in linear time so let's

document this

up in our docstring which we'll add

now to size

we'll say

returns

the number of nodes in the list

takes

linear time

let's take a break here we can now

create lists check if they're empty and

check the size
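pulling the walkthrough together, the linked list class at this point might look like this — a reconstruction from the narration (a minimal Node is included so the sketch stands on its own):

```python
class Node:
    """An object for storing a single node of a linked list."""
    def __init__(self, data):
        self.data = data
        self.next_node = None

    def __repr__(self):
        return "<Node data: %s>" % self.data


class LinkedList:
    """
    Singly linked list
    """
    def __init__(self):
        self.head = None  # the only node the list keeps a reference to

    def is_empty(self):
        return self.head == None

    def size(self):
        """
        Returns the number of nodes in the list
        Takes linear time
        """
        current = self.head
        count = 0
        while current:  # same as: while current != None
            count += 1
            current = current.next_node  # traverse to the next node
        return count


l = LinkedList()
n1 = Node(10)
l.head = n1
print(l.size())  # 1
```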

in the next video let's start

implementing some common operations

at the moment we can create an empty

list but nothing else let's define a

method to add data to our list

technically speaking there are three

ways we can add data to a list

we can add nodes at the head of the list

which means that the most recent node we

created will be the head and the first

node we created will be the tail

or we could flip that around most recent

nodes are the tail of the list and the

first node to be added is the head i

mentioned that one of the advantages of

linked lists over arrays is that

inserting data into the list is much

more efficient than to the array

this is only true if we're inserting at

the head or the tail

technically speaking this isn't an

insert and you'll often see this method

called add prepend if the data is added

to the head or append if it's added to

the tail

a true insert is where you can insert

the data at any point in the list which

is our third way of adding data we're

going to circle back on that if we

wanted to insert at the tail then the

list needs a reference to the tail node

otherwise we would have to start at the

head and walk down the length of the

list or traverse it to find the tail

since our list only keeps a reference to

the head we're going to add new items at

the head of the list

now before we add our new method i

forgot that i didn't show you in the

last video how to actually use the code

we just added and how to check every

time you know when we add new code that

it works correctly

so like before we're gonna bring up the

console and here we're gonna say python

dash i

linked underscore list dot pi which

should load it

load the contents of our file

and now we'll start here by creating a

linked list so l equal linked list

and then we'll use a node so n1 equal

node

with the value 10

and now we can assign n1 to the nodes or

to the linked list's head attribute so l

dot head equal n1

and then

we can see if size works correctly so if

we call l dot size and since this is a

method we need a set of parentheses at

the end

and enter you'll see that we get back

one correctly okay so it works

now let's add our new method which we're

going to call add

add is going to accept some data to add

to the list inside of a node

so we'll say def

add

and every python method takes self as an

argument and then we want to add some

data to this node so we're going to say

data for the second argument

inside the method first we'll create a

new node to hold on to the data so new

underscore node equal

node with the data

before we set the new node as the head

of the list we need to point the new

node's next property at whatever node is

currently at head this way when we set

the new node as the head of the list we

don't lose a reference to the old head

so new underscore node dot next node

equal self.head

now if there was no node at head this

correctly sets next node to none

now we can set the new node as the head

of the node so say self.head equal

new underscore node because the insert

operation is simply a reassignment of

the head and next node properties this

is a constant time operation so let's

add that in as a docstring

first what the method does so it adds a

new node

containing data

at the head of the list

this operation takes

constant time which is our best case

scenario
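as code, the add method just described — again a reconstruction from the narration, shown with a minimal Node and size so it runs on its own:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next_node = None


class LinkedList:
    """Singly linked list"""
    def __init__(self):
        self.head = None

    def size(self):
        """Returns the number of nodes in the list. Takes O(n) time."""
        current = self.head
        count = 0
        while current:
            count += 1
            current = current.next_node
        return count

    def add(self, data):
        """
        Adds a new Node containing data at the head of the list.
        Takes O(1) time.
        """
        new_node = Node(data)
        new_node.next_node = self.head  # don't lose the old head
        self.head = new_node            # new node becomes the head


l = LinkedList()
l.add(1)
l.add(2)
l.add(3)
print(l.size())  # 3
```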

okay let's test this out so i'm going to

bring the console back up we'll exit out

of

our current repl

and we'll load

the contents of the file again

and now we don't need to create a node

like we did earlier so we can say l

equal linked

list

l.add one

okay let's see if this works we'll call

size

and if it worked

the linked list should now have a size

of one there we go you can also do

l.add 2

l.add three

and l dot size should now be three there

we go now if we i were to type l and

just hit print

again what we get in the repl is

nothing useful

so like before we'll implement the

repr function for our linked list

now

i'm just going to copy paste this in and

we'll walk through it

okay so this is what our implementation

of repr looks like for the linked

list object you can grab this code from

the notes section of this video

okay so at the top you'll see a

docstring where it says it returns a string

representation of the list and like

everything we need to do with a linked

list we need to traverse it so this is

going to take linear time we start by

creating an empty list now i need to

distinguish this is a python list not a

linked list so we create an empty list

called nodes and two nodes we're going

to add strings that have a description

that provide a description of each node

but we're not going to use the

description that we implemented in the

node class because we're going to

customize it a bit here

next we start by assigning self.head to

current so we sort of have a pointer to

the head node as long as current does

not equal none which means we're not at

the tail we're going to implement some

logic

so in the first scenario if the node

assigned to current is the same as the

head

then we're going to append this string

to our nodes list

and the string is simply

going to say that hey this is a head

node and it contains some data which

will extract using current.data

next scenario is if the node assigned to

current's next node is none meaning

we're at the tail node then we'll assign

a different kind of string so it's the

same as earlier except we're saying tail

here and then finally in any other

scenario which means we're not at the

head or the tail we'll simply

print the node's value inside and again

we'll extract it using current.data with

every iteration of the loop we'll move

current forward by calling

current.nextnode and reassigning it

and then at the very end when we're done

we'll join all the strings that are

inside the nodes list together using the

python join method and we'll say that

with every join so when you join these

two strings together to make one string

you need to put this set of characters

in between all right so let's see what

this looks like

so i'm going to come down here exit out

of the console again

clear it out

load the contents of the file again and

let's try that so we'll say l equal

linked list

all right so l dot add one l dot add two

l dot add three that seems enough

and then now if i type out l and hit

enter we get a nice string

representation of the list so you can

see that we add every new node to the

head so we added one first one ends up

being the tail because it keeps getting

pushed out

then two and then finally three so three

is at the head
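since the instructor pastes this method in from the course notes rather than typing it out, here's a sketch reconstructed from the walkthrough — the exact label strings and the '-> ' separator are inferred from the console output described, so treat them as assumptions:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next_node = None


class LinkedList:
    def __init__(self):
        self.head = None

    def add(self, data):
        new_node = Node(data)
        new_node.next_node = self.head
        self.head = new_node

    def __repr__(self):
        """
        Returns a string representation of the list.
        Takes O(n) time.
        """
        nodes = []  # a plain python list of per-node description strings
        current = self.head
        while current:
            if current is self.head:
                nodes.append("[Head: %s]" % current.data)
            elif current.next_node is None:
                nodes.append("[Tail: %s]" % current.data)
            else:
                nodes.append("[%s]" % current.data)
            current = current.next_node  # move the pointer forward
        return '-> '.join(nodes)  # the separator goes between every pair


l = LinkedList()
l.add(1)
l.add(2)
l.add(3)
print(l)  # [Head: 3]-> [2]-> [Tail: 1]
```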

so far we've only implemented a single

method which functions much like the

append method on a python list or an

array

except it adds it to the start of the

linked list it prepends it

like append this happens in constant

time in the next video let's add the

ability to search through our list for

the search operation we're going to

define a method that takes a value to

search for and returns either the node

containing the value if the value is

found or none if it isn't

so right after actually you know what

we'll make sure repr is the last

function our last method

in our class so we'll add it above it so

here we'll say def search

self

and then key

in the last video we implemented the

repr method to provide a string

representation of the list

so we're going to use similar logic here

to implement the search function we'll

start by setting a local variable

current to point to the head of the list

while the value assigned to current is a

valid node that is it isn't none

we'll check if the data on that node

matches the key that we're searching for

so while current

we'll say if

current.data

is the key

then we'll return current

if it does match we'll go ahead and

return it like we've done here but if it

doesn't we'll assign the next node in

the list to current and check again so

say else

current equal current dot next node

once we hit the tail node and haven't

found the key

current gets set to none and the while

loop exits

at this point we know the list doesn't

contain the key so we can return

none

okay that completes the body of our

method

let's add a docstring to document this

so up at the top we'll say search

for the first node

containing data that matches

the key

now this is important because if our

linked list contains more than one node

with the same value it doesn't matter

we're going to return the first one with

this implementation

we'll also say here that it returns the

node or none

if not found

in the worst case scenario we'll need to

check every single node in the list

before we find the key or fail and as a

result this operation runs in linear

time so i'll say takes

o of n or linear time
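the finished search method, with the return-none line outside the loop as corrected on screen — a reconstruction from the narration with a minimal Node and add so it runs on its own:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next_node = None


class LinkedList:
    def __init__(self):
        self.head = None

    def add(self, data):
        new_node = Node(data)
        new_node.next_node = self.head
        self.head = new_node

    def search(self, key):
        """
        Search for the first node containing data that matches the key.
        Returns the node or None if not found. Takes O(n) time.
        """
        current = self.head
        while current:
            if current.data == key:
                return current  # first match wins
            else:
                current = current.next_node
        # note: outside the while loop — the indentation bug fixed on screen
        return None


l = LinkedList()
for value in (10, 2, 45, 15):
    l.add(value)
result = l.search(45)
```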

so far we haven't seen anything that

indicates this data structure has any

advantage over an array or a python list

but we knew that

i mentioned the strength of linked lists

comes in inserts and deletes at specific

positions we'll check that out in the

next video but as always before we end

this one let's make sure everything

works

so we'll load the contents of the file

again

l equal linked

list

and then we'll say l.add 10

l dot add

2 doesn't matter l dot add

45 and one more l dot add

15.

now we can say

l.search and we need to give it a value

so we'll say 45 and this returns a node

or none so we'll say n equal

and then we'll hit enter if this works

n should be a node

okay weirdly n

does not work here

at least it says it's not a node which

means i made a mistake in typing out our

code

and looking at it immediately it's

fairly obvious so this return none needs

to be outside of the while loop okay so

i'm going to hit save now so make sure

it's on the same indentation here which

means it's outside the while loop

and then we'll run through this again

okay so l is linked list

l.add 10

l dot add 2

l.add

45 and what was the last one we did i

believe it was 15

and now we should be able to say

l.search remember we're assigning this

to a node to a variable so l.search

45

and there you go we get that node back

and we can hit l

and we'll see a representation of our

list

okay so again in the next video inserts

and deletes at specific positions

insert operations on linked lists are

quite interesting

unlike arrays where when you insert an

element into the array all elements

after the particular index need to be

shifted with a linked list we just need

to change the references to next on a

few nodes and we're good to go

since each node points to the next one

by swapping out these references we can

insert a node at any point in the list

in constant time

much like binary search though there's a

catch

to find the node at that position we

want to insert we need to traverse the

list and get to that point

we just implemented our search algorithm

for the linked list type and we know

that this runs in linear time so while

actually inserting is fast finding the

position in the list you want to insert

it is not

this is why i mentioned that there were

some caveats to inserting

anyway let's see what this looks like in

code

we'll define a method named insert that

takes data to insert along with an index

position so we'll do this after search

right here

say def

insert

and this takes some data to insert and a

position to insert it at

you may be thinking wait a minute linked

lists don't have index positions right

and you're correct but we can mimic that

behavior by just counting the number of

times we access next node

if the index value passed into this

argument is 0 that means we want to

insert the new node at the head of the

list this is effectively the same

behavior as calling add which means the

logic is the same so we don't need to

repeat it we can call the add method we

wrote earlier so we'll say if

index equals 0 or if index is 0

then self dot add

data

if the index is greater than 0 then we

need to traverse the list to find the

current node at that index

so if index is greater than zero

now before we do that we need to create

a new node containing the data we want

to insert so we'll say new equal node

with some data

i'm going to assign index the argument

passed to our function to a local

variable named position and the head of

the list to a variable named current

position

equal index

current equal self.head

every time we call

current.nextnode meaning we're moving to

the next node in the list we'll decrease

the value of position by 1.

when position is zero we'll have arrived

at the node that's currently at the

position we want to insert in

in reality though we don't want to

decrease it all the way to zero

imagine we have a list with five nodes

and we want to insert a node at position

3. to insert a node at position 3 we

need to modify the nodes at positions 2

and 3.

node 2's next node attribute is going to

point to the new node and the new node's

next node attribute will point to node

3.

in this way an insert is a constant time

operation we don't need to shift every

single element we just modify a few next

node references

in a doubly linked list we can use node

3 to carry out both of these operations

node 3 in a doubly linked list would

have a reference to node 2 and we can

use this reference to modify all the

necessary links

in a singly linked list though which is

what we have if we kept decreasing

position until we're at 0 we arrive at

node 3.

we can then set the new node's next node

property to point to node 3 but we have

no way of getting a reference to node 2

which we also need

for this reason it's easier to decrease

position to just 1 when it equals 1 and

stop at node 2. so in here we'll say

while

position

is greater than one

now while the position is greater than

one we'll keep calling next node and

reassigning the current node so current

equal node.next

node and at the same time we'll

decrement position so position

equal to position

minus one which you can also succinctly

write as minus equal

one

this way when the position equals one

the loop exits and current will refer to

the node at the position before the

insert point so outside the while loop

we'll say previous equal current

and next equal current dot next

node

to make things more clear what i've done

here is name the node before the new one

previous and the node after the new one

next

all that's left to do now is to insert

the new node between previous and next

so we'll say previous dot next

node

equal

new

and then new dot next node

equal next

now it seems like there's an issue with

variable naming here and i'm most

probably conflicting with some globally

named next variable so actually go ahead

and call this next node

and

previous node so that we don't mess

things up here

previous node

so the dot next node is obviously the

attribute on a node but this is just a

local variable let's document this

method so up at the top

we'll add a docstring and it will say

inserts a new node

containing

data at index

position

insertion takes

constant time

but finding the node

at the insertion point

takes linear

time

let's add this to the next line

there we go

and then we'll say therefore it takes an

overall

linear time

this is why even though we can easily

insert a new node without having to

shift the rest ultimately adding to

either the head or the tail if you have

a reference is much more efficient

we have one more operation to add to our

linked list that will make it a robust

data structure

much like inserts removing a node is

actually quite fast and occurs in

constant time but to actually get to the

node that we want to remove and modify

the next connections we need to traverse

the entire list in our worst case so in

the worst case this takes linear time

let's add this operation to our data

structure

there are two ways we can define the

remove method one where we provide a key

to remove as an argument and one where

we provide an index now in the former

the key refers to the data the node

stores so in order to remove that node

we would first need to search for data

that matches the key i'm going to

implement that first method which we'll

call remove and i'll leave it up to you

to get some practice in and implement a

remove at index method to complete our

data structure so we'll add this after

the insert method right here

remove

is going to accept a key which we'll

need to search for before we can remove

a node earlier we defined a search

method that found a node containing data

that matches a key but we can't use that

method as is for the implementation of

remove when we remove a node much like

the insert operation we need to modify

the next node references

the node before the match needs to point

to the node after the match

if we use the search method we defined

earlier we get the node we want to

remove as a return value but because

this is a singly linked list we can't

obtain a reference to the previous node

like i said earlier if this was a doubly

linked list we could use the search

method since we would have a reference

to that previous node

we'll start here by setting a local

variable named current to point to the

head let's also define a variable named

previous

that will set to none to keep track of

the previous node as we traverse the

list

finally let's declare a variable named

found that we'll set to false

found is going to serve as a stopping

condition for the loop that we'll define

we'll use the loop to keep traversing

the linked list as long as found is

false meaning we haven't found the key

that we're looking for once we've found

it we'll set found to true and the loop

terminates so let's set up our loop so

we'll say while current

and

not

found

here we're defining a while loop that

contains two conditions

first we tell the loop to keep iterating

as long as current does not equal none

when current equals none this means

we've gone past the tail node and the

key doesn't exist

the second condition asks the loop to

keep evaluating as long as not found

equals true now this might be tricky

because it involves a negation here

right now found is set to false so not

found not false equals true this not

operator flips the value

when we find the key and we set found to

true

not true not found will equal false

then and the loop will stop

the and in the while loop means that

both conditions current being a valid

node and not found equalling true both

have to be true

if either one of them evaluates to false

then the loop will terminate

now inside the loop there are three

situations that we can run into

first the key matches the current node's

data and current is still at the head of

the list

this is a special case because the head

doesn't have a previous node and it's

the only node being referenced by the

list let's handle this case so we'll say

if current.data

double equals the key and current

is self.head which you can write out as

current equal self.head or current is

self.head

now if we hit this case

we'll indicate that we found the key by

setting found to true

and then this means that on the next

pass

this is going to evaluate to false

because not true will be false

and then the loop terminates once we do

that we want to remove the current node

and since it's the head node all we need

to do is point head to the second node

in the list which we can get by

referencing the next node attribute on

current self.head equal current.nextnode

so when we do this there's nothing

pointing to that first node so it's

automatically removed the next scenario

is when the key matches data in the node

and it's a node that's not the head

so here we'll say elif current dot

data double equals key

if the current node contains the key

we're looking for we need to remove it

to remove the current node we need to go

to the previous node and modify its next

node reference to point to the node

after current

but first we'll set found

to true

and then we'll switch out the references

so

previous.nextnode

equal

current.nextnode

so far we haven't written any code to

keep track of the previous node

we'll do that in our else case here

so if we hit the else case it means that

the current node we're evaluating

doesn't contain the data that matches

the key so in this case we'll make

previous point to the current node and

then set current to the next node so

previous equal current

and current equal current.nextnode

and that's it for the implementation of

remove

now we're not doing anything at the

moment with the node we're removing but

it's common for remove operations to

return the value being removed so at the

bottom

outside the while loop

let's return

current

and with that we have a minimal

implementation of a linked list and your

first custom data structure how cool is

that

there's quite a bit we can do here to

improve our data structure particularly

in making it easy to use but this is a

good place to stop

before we move on to the next topic

let's document our method so at the top

another docstring

and here we'll say removes node

containing data that matches the key

also it returns the node or none

if the key doesn't exist

and finally this takes

linear time because in the worst case

scenario we need to search the entire

list
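Collected into one place, the remove method described above might look like this, again as a minimal sketch on the same assumed Node and LinkedList classes:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next_node = None

class LinkedList:
    def __init__(self):
        self.head = None

    def add(self, data):
        """Prepends a new node at the head in O(1) time."""
        new_node = Node(data)
        new_node.next_node = self.head
        self.head = new_node

    def remove(self, key):
        """Removes the node containing data that matches key.
        Returns the node or None if the key doesn't exist.
        Takes O(n) time in the worst case."""
        current = self.head
        previous = None
        found = False
        while current and not found:
            if current.data == key and current is self.head:
                # special case: removing the head
                found = True
                self.head = current.next_node
            elif current.data == key:
                # skip over current by relinking previous
                found = True
                previous.next_node = current.next_node
            else:
                # keep traversing, tracking the previous node
                previous = current
                current = current.next_node
        return current
```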

if you'd like to get in some additional

practice implementing functionality for

linked lists two methods you can work on

are remove at index and node at index to

allow you to easily delete or read

values in a list at a given index
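As a hint for the suggested exercise, one possible (certainly not the only) implementation of node_at_index might look like this, reusing the counting idea from insert on the same assumed classes:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next_node = None

class LinkedList:
    def __init__(self):
        self.head = None

    def add(self, data):
        new_node = Node(data)
        new_node.next_node = self.head
        self.head = new_node

    def node_at_index(self, index):
        """Returns the node at the given index position, or None
        if the index is past the end of the list. Takes O(n) time
        since we count next_node hops from the head."""
        current = self.head
        position = 0
        while current and position < index:
            current = current.next_node
            position += 1
        return current
```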

now that we have a linked list let's

talk about where you can use them the

honest answer is not a lot of places

linked lists are really useful

structures to build for learning

purposes because they're relatively

simple and are a good place to start to

introduce the kinds of operations we

need to implement for various data

structures it is quite rare however that

you will need to implement a linked list

on your own

there are typically much better and by

that i mean much more efficient data

structures that you can use

in addition many languages like java for

example provide an implementation of a

linked list already

now that we have a custom data structure

let's do something with it let's combine

the knowledge we have and look at how a

sorting algorithm can be implemented

across two different data structures


now that we've seen two different data

structures let's circle back and apply

what we know about algorithms to these

new concepts

one of the first algorithms you learned

about was binary search and we learned

that with binary search there was one

precondition the data collection needs

to be sorted

over the next few videos let's implement

the merge sort algorithm which is one of

many sorting algorithms on both arrays

or python lists and the singly linked

list we just created

this way we can learn a new sorting

algorithm that has real world use cases

and see how a single algorithm can be

implemented on different data structures

before we get into code let's take a

look at how merge sort works

conceptually and we'll use an array to

work through this

we start with an unsorted array of

integers and our goal is to end up with

an array sorted in ascending order

merge sort works like binary search by

splitting up the problem into sub

problems but it takes the process one

step further

on the first pass we're going to split

the array into two smaller arrays now in

binary search one of these subarrays

would be discarded but that's not what

happens here

on the second pass we're going to split

each of those subarrays into further

smaller evenly sized arrays and we're

going to keep doing this until we're

down to single element arrays

after that the merge sort algorithm

works backwards repeatedly merging the

single element arrays and sorting them

at the same time

since we start at the bottom by merging

to single element arrays we only need to

make a single comparison to sort the

resulting merge array by starting with

smaller arrays that are sorted as they

grow

merge sort has to execute fewer sort

operations than if it sorted the entire

array at once

solving a problem like this by

recursively breaking down the problem

into subparts until it is easily solved

is an algorithmic strategy known as

divide and conquer but instead of

talking about all of this in the

abstract let's dive into the code this

way we can analyze the runtime as we

implement it

for our first implementation of merge

sort we're going to use an array or a

python list

while the implementation won't be

different conceptually for a linked list

we will have to write more code because

of list traversal and how nodes are

arranged so once we have these concepts

squared away we'll come back to that

let's add a new file here

we'll call this merge underscore sort

dot pi

in our file let's create a new function

named merge sort that takes a list and

remember when i say list unless i

specify linked list i mean a python list

which is the equivalent of an array so

we'll say def

merge underscore sort

and takes a list

in the introduction to algorithms course

we started our study of each algorithm

by defining the specific steps that

comprise the algorithm

let's write that out as a docstring

in here the steps of the algorithm so

that we can refer to it right in our

code

this algorithm is going to sort the

given list in an ascending order so

we'll start by putting that in here as a

simple definition

sorts a list in ascending

order

there are many variations of merge sort

and in the one we're going to implement

we'll create and return a new sorted

list other implementations will sort the

list we pass in which is an operation

known as a sort in place

but i think that returning a new list

makes it easier to understand the code

now these choices do have implications

though and we'll talk about them as we

write this code

for our next bit of the docstring

let's write down the output of this

algorithm so returns

a new

sorted list

merge sort has three main steps

the first is the divide step where we

find the midpoint of the list so i'll

say divide

find the mid point of the list and

divide

into sub-lists

the second step is the conquer step

where we sort the sub-list that we

created in the divide step so we'll say

recursively

sort the sub-lists created in previous

step

and finally the combine

step where we merge these recursively

sorted sub-lists back into a single list

so merge the sorted sub-lists

created in previous

step

when we learned about algorithms we

learned that a recursive function has a

basic pattern first we start with a base

case that includes a stopping condition

after that we have some logic that

breaks down the problem and recursively

calls itself

our stopping condition is our end goal a

sorted array

now to come up with a stopping condition

or a base case we need to come up with

the simplest condition that satisfies

this end result

so there are two possible values that

fit a single element list or an empty

list

now in both of these situations we don't

have any work to do

if we give the merge sort function an

empty list or a list with one element

it's technically already sorted we call

this naively sorted so let's add that

as our stopping condition we'll say if

len list if the length of the list is

less than or equal to one

then we can return the list

okay so this is a stopping condition

and now that we have a stopping

condition we can proceed with the list

of steps

first we need to divide the list into

sub lists

to make our functions easier to

understand we're going to put our logic

in a couple different functions instead

of one large one so i'll say left

half

comma right half

equal

split

list so here we're calling a split

function that splits the list we pass in

and returns two lists split at the

midpoint because we're returning two

lists we can capture them in two

variables

now you should know that this split

function is not something that comes

built into python this is a global

function that we're about to write

next is the conquer step where we sort

each sub-list and return a new sorted

sub-list

so we'll say left equal

merge sort

left half

and right equal merge sort

right half

this is the recursive portion of our

function so here we're calling merge

sort on this divided sub list so we

divide the list into two here and then

we call merge sort on it again

this further splits that sublist into

two in the next pass through of merge

sort this is going to be called again

and again and again until we reach our

stopping condition where we have single

element lists or empty lists

when we've subdivided until we cannot

divide any more then we'll end up with a

left and a right half

and we can

start merging backwards so we'll say

return

merge

left and right

that brings us to the combined step

once two sub-lists are sorted and

combined we can return it

now obviously none of these functions

merge merge sort well merge sort is

written but merge and split haven't been

written so all we're going to do here if

we run it is raise an error so in the

next video let's implement the split

operation

the first bit of logic we're going to

write is the divide step of the

algorithm this step is fairly

straightforward and only requires a few

lines of code but is essential to get

the sorting process going

all right so as we saw earlier we're

going to call the function for the

divide step split so we'll say def split

and split is going to take as an

argument a list to split up

let's document how this function works

so we'll say

divide the unsorted list at midpoint

into sub lists and it's always good to

say what we're returning as well so

we'll say returns two sub-lists

left and right

all right so the first step is to

determine the midpoint of this list of

this array

we're going to use the floor division

operator for this

floor division carries out a division

operation and if we get a non-integer

value like 2.5 back it just gets rounded

down to two we'll define the midpoint to

be the length of the list divided by two

and then rounded down

so

len list

and using the

two forward slashes for the floor

division operator we'll put number two

after it

okay once we have the midpoint we can

use the slicing notation in python to

extract portions of the list we want to

return

for instance we can define left

as the left sub-list that goes all the

way from the start of the list

all the way up to the midpoint without

including the midpoint

now over here we're using the slicing

syntax where it's like using the you

know subscript notation to access a

value from a list but instead we give

two index values as a start and stop

if we don't include a start value as

i've done here python interprets that as

starting from the zeroth index or the

start of the list now similarly we can

define right


to be values on the right of the

midpoint so starting at the midpoint and

going all the way up to the end of the

list

so a couple things to note as i said

earlier when you don't include the

starting index it interprets it as to

start at the very beginning of the list

the index you give as the stopping

condition that value is not included in

the slice so over here we're starting at

the very beginning of list and we go all

the way up to midpoint but not including

midpoint and then right starts at

midpoint so it includes that value and

then goes all the way to the end of the

list

now once we have these two sub-lists we

can return them

so we'll return left and right notice

that we're returning two values here and

then in the merge sort function when we

call that split function

we're declaring two variables left half

and right half to assign so that we can

assign these two sub lists to them

okay and that's all there is to the

split function in the next video let's

implement the crucial portion of the

merge sort logic
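The split function just described, as a runnable sketch. The parameter is named values here rather than the course's list, only to avoid shadowing the Python built-in:

```python
def split(values):
    """Divide the unsorted list at the midpoint into sublists.
    Returns two sublists: left and right."""
    mid = len(values) // 2   # floor division: 5 // 2 == 2
    left = values[:mid]      # from the start up to, not including, mid
    right = values[mid:]     # from mid (inclusive) through the end
    return left, right
```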

once we run the split function

recursively over the array we should end

up with several single member or empty

arrays

at this point we need to merge them all

back and sort them in the process which

is what our merge function is for

the merge function is going to take two

arrays or lists as arguments and to

match the naming conventions we used in

the split function we'll call this left

and right as well so we'll say def merge

takes a left and a right list

now like before let's add some

documentation to our function so this

function merges two lists or arrays

sorting them in the process

and then it returns a new merged list

since our function is going to return a

new list let's start by creating one

now in the process of merging we need to

sort the values in both lists

to sort we need to compare values from

each array or each list so next let's

create two local variables to keep track

of index values that we're using for

each list

so the convention here is i and j so

we'll stick to it

so i equals 0 j equals 0.

as we inspect each value in either list

we'll use the variables to keep track of

the indexes of those values so we'll use

i to keep track of indexes in the left

list and j for indexes in the right list

when merging we want to keep sorting the

values until we've worked through both

lists so for our loop let's set up two

conditions with an and operator so we'll

say while

let's just stay up here while i is less

than

while i is less than the length of the

left list

and j

is less than the length

of the right list then we'll keep

executing our loop so here we're

ensuring that as long as i is less than

the length of the left list

and the and is important and j is less

than the length of the right list we're

going to keep executing the code now i

and j are both set to zero initially

which means that our first comparison

operation will be on the first element

of each list respectively so we'll say

if

left i so i zero so this is going to get

the first value out of the left list

is less than right

j

and again here

j is zero so we're going to get the

first value out of the right list now if

the value at index i in the left list is

less than the value at index j in the

right list what do we do well that means

the value being compared in left is less

than the value in the right and can be

placed at position 0 in the new array l

that we created earlier so here we'll

say l dot append

left

i

since we've read and done something with

the value at position i let's increment

that value so we move forward to

evaluate the next item in the left list

i plus one or we can say i plus equal

one okay next is an else statement

and here we don't have to write out the

actual condition because it's implied

the if clause covers the case where the

value at index i in the left list is

less than the value at index j in the

right list so when we hit the else

clause it means that the value in the

left is either greater than or equal to

the value in the right in that case we

place the value at index j from the

right list into the new list l

and similarly increment j so here we'll

say l dot append

right

j and then j equal j plus one

doing this doesn't necessarily mean that

in one step we'll have a completely

sorted array but remember that because

we start with single element arrays and

combine with each merge step we will

eventually sort all the values more than

one time and by the time the entire

process is done all the values are

correctly sorted

now this isn't all we need to do in the

merge step however there are two

situations we can run into one where the

left array is larger than the right and

vice versa so this can occur when an

array containing an odd number of

elements needs to be split so how do you

split a three element array or list well

the left can have two elements and the

right can have one or the other way

around

in either case our while loop uses an

and condition

where the variables used to store the

indexes need to be less than the length

of the lists if the left list is shorter

than the right then the first condition

evaluates to false and the entire loop

condition evaluates to false because

it's an and condition

the while loop terminates not all the

values in the right list will have been

moved over to the new combined list

so to account for this let's add two

more while loops

the first while loop is going to account

for a situation where the right list is

shorter than the left and the previous

loop terminated because we reached the

end of the right list first

so in this case what we're going to do

is simply add the remaining elements in

the left to the new list

we're not going to compare elements

because we're going to assume that

within a list the elements are already

sorted

so while i

is less than length of left

then

it's the same logic l dot append left

i

and i

plus equal one

so the while loop is going to have the

similar condition keep the loop going

until it's at the last index

inside the body we're incrementing the

index with every iteration of the loop

our final loop accounts for the opposite

scenario where the left was shorter than

the right

the only difference here is that we're

going to use the variable j along with

the right list so we'll say while j

is less than length of right

l dot append

right

j

and j plus equal one okay let's stop

here in the next video let's test out

merge sort make sure our code is running

correctly and everything is written well

and then we'll wrap up this stage by

documenting our code and evaluating the

run time of our algorithm in the last

video we completed our implementation

for the merge sort algorithm but we

didn't test it in any way let's define a

new list at the bottom that contains

several numbers

you can put whatever you want in there

but make sure that the numbers are not

in order i'll call mine a list

and in here

we'll say 54

26 or 62 doesn't matter 93 17

77

31

just add enough so that you can make out

that it's sorted okay next we're going

to call the merge sort algorithm

and pass in our list let's assign this

to a variable so we'll say l equal

merge

underscore sort

a list

and then if it works correctly we should

be able to print this list and see what

it looks like so i'm going to hit save

down here in the console we'll type out

python

merge sort dot pi

and before i hit enter i actually

noticed i made an error in the last

video but i'll hit enter anyway and you

should see the error pop up okay so what

i forgot to do which is a pretty crucial

part of our algorithm is in the merge

function i forgot to return the list

containing the sorted numbers after

carrying out all this logic

so here at the bottom

we'll say return

l

all right we'll save again

and now we'll clear this out and try

that one more time

and there we go

you should see a sorted list printed out
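Gathered into a single file, the complete algorithm including the return statement fixed above might look like this. Parameter names are changed from list to values only to avoid shadowing the built-in:

```python
def merge_sort(values):
    """Sorts a list in ascending order. Returns a new sorted list.

    Divide: find the midpoint of the list and divide into sublists.
    Conquer: recursively sort the sublists created in previous step.
    Combine: merge the sorted sublists created in previous step.
    """
    if len(values) <= 1:
        return values  # stopping condition: naively sorted
    left_half, right_half = split(values)
    left = merge_sort(left_half)
    right = merge_sort(right_half)
    return merge(left, right)

def split(values):
    """Divide the unsorted list at the midpoint into sublists."""
    mid = len(values) // 2
    return values[:mid], values[mid:]

def merge(left, right):
    """Merges two lists, sorting them in the process.
    Returns a new merged list."""
    l = []
    i = 0  # index into left
    j = 0  # index into right
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            l.append(left[i])
            i += 1
        else:
            l.append(right[j])
            j += 1
    while i < len(left):    # right ran out first; copy the rest of left
        l.append(left[i])
        i += 1
    while j < len(right):   # left ran out first; copy the rest of right
        l.append(right[j])
        j += 1
    return l  # the fix noted above: return the merged list
```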

we can write out a more robust function

to test this because with bigger arrays

visually evaluating that printed list

won't always be feasible so bring this

back down

let's get rid of this

and we'll call our

function verify sorted

and this will take a list

first inside the

body of the function we'll assign the

length of the list to a variable n

if the list is a single element list or

an empty list we don't need to do any

unnecessary work because remember it is

naively sorted so we'll say if n

equals 0 or

if n equals

1

then we'll return true we've verified

that it's sorted

now to conclude our function we're going

to write out one line of code that will

actually do quite a bit of work

so first we'll say return

list

zero so we'll take the first element out

of the list and we'll compare and see if

that's less than the second element in

the list okay so first we'll check that

the first element in the list is less

than the second element in the list

this returns either true or false so we

can return that directly

but this isn't sufficient if it were we

could trick the verify function by only

sorting the first two elements in the

list

so to this return statement we're going

to use an and operator to add on one

more condition for this condition we're

going to make a recursive function call

back to verify

sorted

and for the argument we're going to pass

in the list

going from the second element all the

way

to the end let's visualize how this

would work

we'll use a five element list as an

example so we'll call verify sorted and

pass in the entire list

this list is not one or zero elements

long so we skip that first if statement

there's only one line of code left in

the function and first we check that the

element at index 0 is less than the

element at index 1. if this is false the

function returns immediately with a

false value

an and operator requires both conditions

to be true for the entire line of code

to return true

since the first condition evaluates to

false we don't need to bother evaluating

the second the second condition is a

recursive call with a sub-list

containing elements from the original

list starting at position 1 and going to

the end

so on the second call again we can skip

that first if statement and proceed to

check whether the value at element 0 is

less than the value at element 1.

remember that because this list is a

sub-list of the original starting at the

element that was the second element in

the original list

by comparing the elements at position 0

and 1 in the sub list we're effectively

comparing the elements at position 1 and

2 in the original list with each

recursive call as we create new sub

lists that start at index position 1

we're able to check the entire list

without having to specify any checks

other than the first two elements

since this is a recursive function it

means we need a stopping condition and

we have it already it's that first if

condition

as we keep making sub lists once we

reach a single element list that element

is already sorted by definition so we

can return true since this recursive

function call is part of an and

condition

it means that every single recursive

call has to return true all the way back

to the beginning for our top level

function to return true and for the

function to say yes this is sorted

now we could have easily done this using

an iterative solution and a for loop but

this way you get another example of

recursion to work through and understand
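the recursive structure described above can be sketched like this (a minimal illustration, not necessarily the course's exact code; note the strict less-than comparison matches what was described, so duplicates would need a less-than-or-equal instead):

```python
def verify_sorted(lst):
    # stopping condition: a list of 0 or 1 elements is sorted by definition
    if len(lst) <= 1:
        return True
    # short-circuiting `and`: if the first pair is out of order we return
    # False immediately without making the recursive call on the sub-list
    return lst[0] < lst[1] and verify_sorted(lst[1:])
```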

so let's use this function

at the bottom we'll say print

verify sorted and first we'll pass in a

list

oops we got rid of that didn't we

okay let me write it out again so a list

equal

and i think i have those original

numbers here somewhere so we'll say 54

26 93

okay and then we assigned to l the

result of calling merge

sort

on a list

okay so now here we're going to use the

verify sorted function

and we'll check first that a list is

sorted that should return false and then

we'll check the same call on we'll pass

an l and this should return true

okay so now at the bottom here in the

console

we'll call python merge sort dot pi and

there we go it returned false for a list

meaning it's not sorted but l is sorted

cool so our merge sort function works in

the next video let's talk about the cost

of this algorithm

if we go back to the top level the merge

sort function what is the run time of

this function look like and what about

space complexity how does memory usage

grow as the algorithm runs

to answer those questions let's look at

the individual steps starting with the

split function in the split function all

we're doing is finding the midpoint of

the list and splitting the list at the

midpoint

this seems like a constant time

operation but remember that the split

function isn't called once it's called

as many times as we need it to to go

from the initial list down to a single

element list

now this is a pattern we've seen a

couple times now and we know that

overall this runs in logarithmic time so

let's add that as a comment so here i'll

say

takes

overall

big o of log

n time now there's a caveat here but

we'll come back to that

so next up is the merge step in the

merge step we've broken the original

list down into single element lists and

now we need to make comparison

operations and merge them back in the

reverse order

for a list of size n we will always need

to make an n number of merge operations

to get back from single element lists to

a merge list

this makes our overall runtime big o of

n times log n because that's an n number

of merge steps multiplied by log n

number of splits of the original list

so to our merge step here let's add a

comment we'll say it runs

in overall

oops there we go runs an overall linear

time

right it takes an n number of steps

number of merge steps but now that we

have these two so linear here and

logarithmic here we can multiply these

and say that the merge sort function the

top level function we can conclude that

the runtime of the overall sorting

process is big o of n times log n

now what about that caveat i mentioned

earlier

so if we go back to our split function

here

right here there we go

let's take a look at the way we're

actually splitting the list so we're

using python's list slicing operation

here

and passing in two indexes where the

split occurs

now if you go and poke around the python

documentation which i've done it says

that a slicing operation is not a

constant time operation and in fact has

a runtime of big o of k where k

represents the slice size

this means that in reality our

implementation of split this

implementation of split does not run in

logarithmic time but k times logarithmic

time because there is a slice operation

for each split

this means that our implementation is

much more expensive so

overall

that makes our overall top level merge

sort function not n times log n but k n

times log n which is much more expensive

now let's get rid of all that

to fix this we would need to remove this

slicing operation now we can do that by

using a technique we learned in a

previous course

in the introduction to algorithms course

we looked at two versions of binary

search in python a recursive and an

iterative version

in the recursive one we use list slicing

with every recursion call but we achieve

the same end result using an iterative

approach without using list slicing

over there we declared two variables to

keep track of the starting and ending

positions in the list

we could rewrite merge sort to do the

same but i'll leave that as an exercise

for you if you want some hints if you

want any direction i've included a link

in the notes with an implementation so

that is time complexity now just so we

know before moving on for python here

our

overall run time is not what i've listed

here but this is what the actual run

time of the merge sort algorithm looks

like so the merge step runs in linear

time and the split step takes

logarithmic time for an overall n times

log n and that is how merge sort

actually works
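one way to carry out the exercise suggested above — avoiding the slice cost by tracking index bounds instead of creating sub-lists — is sketched here (an illustrative in-place version with a single reusable buffer, not the course's linked implementation):

```python
def merge_sort(values):
    # sort in place by passing index bounds around instead of slicing,
    # so the split step costs O(1) per call
    aux = list(values)  # one O(n) auxiliary buffer, reused by every merge

    def sort(low, high):
        # the range [low, high) holds 0 or 1 elements: already sorted
        if high - low <= 1:
            return
        mid = (low + high) // 2
        sort(low, mid)    # sort left half
        sort(mid, high)   # sort right half
        for k in range(low, high):  # copy the current range once
            aux[k] = values[k]
        i, j = low, mid
        for k in range(low, high):  # merge back in ascending order
            if j >= high or (i < mid and aux[i] <= aux[j]):
                values[k] = aux[i]
                i += 1
            else:
                values[k] = aux[j]
                j += 1

    sort(0, len(values))
    return values
```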

okay so what about space complexity

the merge sort algorithm takes linear

space and this is weird to think about

it first but as always a visualization

helps

so if we start at the top again with our

full list and carry out the split method

until we have single element lists each

of these new lists take up a certain

amount of space

so the second level here we have two

lists where each take up an n by two

amount of space

now this makes it seem that the sum of

all this space is the additional space

needed for merge sort but that's not

actually the case in reality there are

two factors that make a difference

first not every single one of these sub

lists are created simultaneously

at step two we create two n by two size

sub lists

when we move to the next step however we

don't hold on to the n by two sub lists

and then create four n by four size sub

lists for the next split

instead after the four n by four size

sub lists are created the n by two ones

are deleted from memory there's no

reason to hold on to them any longer now

the second point is that our code

doesn't execute every path

simultaneously

think of it this way when we pass our

list to the top level merge sort

function

our implementation calls split

which returns a left half and a right

half

the next line of code then calls merge

sort on the left half again

this runs the function the merge sort

function again with a new list

in that second run of the function split

is called again we get a second left and

right half and then again like before we

call merge sort on this left half as

well what this means is that the code

walks down the left path all the way

down until that initial left half is

sorted and merged back into one array

then it's going to walk all the way down

the right path and sort that until we're

back to that first split with two n by

two sized sublists

essentially we don't run all these paths

of code at once so the algorithm doesn't

need additional space for every sub-list

in fact it is the very last step that

matters

in the last step the two sub-lists are

merged back into the new sorted list and

returned

that sorted list has an equal number of

items as the original unsorted list

and since this is a new list it means

that at most the additional space the

algorithm will require at a given time

is n

yes at different points in the algorithm

we require log n amount of space but log

n is smaller than n and so we consider

the space complexity of merge sort to be

linear because that is the overall

factor

okay that was a lot so let's stop here

don't worry if you've got questions

about merge sort because we're not done

yet

over the next few videos let's wrap up

this course by implementing merge sort

on a linked list


over the last few videos we implemented

the merge sort algorithm on the array or

list type in python merge sort is

probably the most complicated algorithm

we've written so far but in doing so we

learned about an important concept

divide and conquer

we also concluded the last video by

figuring out the run time of merge sort

based on our implementation

over the next few videos we're going to

implement merge sort again this time on

the linked list type

in doing so we're going to get a chance

to see how the implementation differs

based on the data structure while still

keeping the fundamentals of the

algorithm the same and we'll also see

how the run time may be affected by the

kinds of operations we need to implement

let's create a new file to put our

second implementation of merge sort in

so file over here new file

and it's going to have a rather long

name we'll call this linked

list

merge sort with underscores everywhere

dot pi

we're going to need the linked list

class that we created earlier so we'll

start at the top by importing the linked

list class from the linkedlist.py file

the way we do that is we'll say from

linked

list

import linked list

right so that imports the class

uh let's test if this works really quick

we'll just do something like l equal

linked

list

l.add

ten

or one doesn't matter print l

okay and if i hit save

and then down here we'll say python

linked list

merge sort dot pi

okay it works so this is how we get some

of the code how we reuse the code that

we've written in other files into this

current file

and get rid of this now

okay like we did with the first

implementation of merge sort we're going

to split the code up across three

functions

the main function merge sort

a split function and a merge function

now if you were to look up a merge sort

implementation in python both for a

regular list an array or a linked list

you would find much more concise

versions out there but they're kind of

hard to explain so splitting it up into

three will sort of help it you know be

easier to understand so we'll call this

merge sort at the top level and this

time it's going to take a linked list

let's add a docstring to document the

function

so say that this function sorts a linked

list in ascending order

and like before we'll add the steps in

here so we'll say you first recursively

divide the linked list into sub lists

containing

a single

node

then

we repeatedly

merge these sub-lists

to produce sorted sub-lists

until one remains

and then finally this function returns a

sorted

linked list

the implementation of this top level

merge function is nearly identical to

the array or list version we wrote

earlier so first we'll provide a

stopping condition or two if the size of

the list is one or it's an empty list

we'll return the linked list since it's

naively sorted so if linked

list dot size remember that function we

run equal one

then we'll return

linked

list

else if

linked list dot head

is none meaning it's an empty list then

we'll return linked list as well okay

next let's split the linked list into a

left and right half

conceptually this is no different but in

practice we need to actually traverse

the list we'll implement a helper method

to make this easier

but we'll say left half

comma right half

equal split

linked list

now once we have two sub lists like

before we can call merge sort the top

level function on each

so left equal merge sort

left half

and right equal merge sort

on the right half

finally we'll merge these two top-level

sub-lists and return it so merge left

and right

okay nothing new here but in the next

video let's implement the split logic

the next step in the merge sort

algorithm is the divide step or rather

an implementation of the split function

so down here

we'll call this split like before and

this is going to take a linked

list

documenting things is good and we've

been doing it so far so let's add a

docstring

divide the unsorted list at midpoint

into sub-lists

now of course when i say sub-lists here

i mean sub-linked lists but that's a

long word to say

now here's where things start to deviate

from the previous version

with the list type we could rely on the

fact that finding the midpoint using an

index and list slicing to split into two

lists would work even if an empty list

was passed in

since we have no automatic behavior like

that we need to account for this when

using a linked list so our first

condition is if the linked list is none

or if it's empty that is if head is

equal to none

so we'll say if linked list

equal none

or you can write is there it doesn't

matter or linked list dot head is none

well linked list can be none for example

if we call split on a linked list

containing a single node a split on such

a list would mean left would contain the

single node while right would be none

now in either case we're going to assign

the entire list to the left half and

assign none to the right so we'll say

left half

equal linked list

and then right half

equal none

you could also assign the single element

list or none to left and then create a

new empty linked list assigned to the

right half but that's unnecessary work

so now that we've done this we can

return

left half

and right half

so that's our first condition let's add

an else clause to account for non-empty

linked lists first we'll calculate the

size of the list now this is easy

because we've done the work already and

we can just call the size method that

we've defined we'll say size equal

linked underscore list dot size

using this size we can determine the

midpoint so mid equal size and here

we'll use that floor division operator

to divide it by two once we have the

midpoint we need to get the node at that

midpoint

now make sure you hit command s to save

here and we're going to navigate back to

linkedlist.py

in here we're going to add a convenience

method at the very bottom right before

the wrapper function right here

and this convenience method is going to

return a node at a given index so i'll

call this node

at

index and it's going to take an index

value

this way instead of having to traverse

the list inside of our split function we

can simply call node at index and pass

it the midpoint index we calculated to

give us the node right there so we can

perform the split

okay so this method accepts as an

argument the index we want to get the

node for if this index is zero then

we'll return the head of the list so if

index double equals zero return

self.head

the rest of the implementation involves

traversing the linked list and counting

up to the index as we visit each node

so

i'll add an else clause here

and we'll start at the head so we'll say

current equal self.head

let's also declare a variable called

position

to indicate where we are in the list

we can use a while loop to walk down the

list our condition here is as long as

the position is less than the index

value

so i'll say while position

is less than index

inside the loop we'll assign the next

node to current and increment the value

of position by one

so current equal current dot next node

position

plus equal one

once the position value equals the index

value current refers to the node we're

looking for and

we can return it we'll say return

current
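the node at index method just described can be sketched as a standalone function on bare nodes (an illustration; the course defines it as a method on the LinkedList class, and the Node class here is a minimal stand-in):

```python
class Node:
    # minimal stand-in for the course's Node class
    def __init__(self, data):
        self.data = data
        self.next_node = None

def node_at_index(head, index):
    # if the index is zero, the head is the node we want
    if index == 0:
        return head
    # otherwise walk the list, counting up to the index as we visit each node
    current = head
    position = 0
    while position < index:
        current = current.next_node
        position += 1
    return current
```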

let's get rid of all this empty space

there we go now back in linked list

merge sort

dot pi

we can use this method to get at the

node after we've calculated the midpoint

to get the node at the midpoint of the

list

so we'll say mid node

equal

linked

list

dot node at index

and here i'm going to do something

slightly confusing i'm going to do mid

minus 1. remember we're subtracting 1

here

because we used size

to calculate the midpoint and like the

len function size will always return a

value greater than the maximum index

value

so think of a linked list with two nodes

size would return two

the midpoint though and the way we're

calculating the index we always start at

zero which means size is going to be one

greater than that so we're going to

deduct one from it to get the value we

want but we're using the floor division

operator so it's going to round that

down even more no big deal with the node

at the midpoint now that we have this

mid node we can actually split the list

so first we're going to assign the

entire linked list to a variable named

left half so left half

equal linked list

this seems counterintuitive but makes

sense in a second

for the right half we're going to assign

a new instance of linked list so right

half equal

linked list

this newly created list is empty but we

can fix that by assigning the node that

comes after the midpoint so after the

midpoint of the original linked list we

can assign the node that comes after

that midpoint node as the head of this

newly created

right linked list

so here we'll say right half

dot head

equal mid node dot

next node

once we do that we can assign none to

the next node property on mid node to

effectively sever that connection and

make what was the mid node now the tail

node of the left linked list so i'll say

mid node

dot next node

equal none

if that's confusing here's a quick

visualization of what just happened

we start off with a single linked list

and find the midpoint the node that

comes after the node at midpoint is

assigned to the head of a newly created

linked list and the connection between

the midpoint node and the one after is

removed

we now have two distinct linked lists

split at the midpoint

and with that we can return the two sub

lists so we'll return left half

and right half
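the split just visualized can be sketched on bare nodes like this (illustrative names; the course's version works on a LinkedList object and uses its size and node at index methods):

```python
class Node:
    # minimal stand-in for the course's Node class
    def __init__(self, data):
        self.data = data
        self.next_node = None

def split(head, size):
    # split a node chain at the midpoint, severing the connection so
    # two distinct lists result
    if head is None or head.next_node is None:
        return head, None  # empty or single-node list: right half is None
    mid = size // 2
    current = head
    for _ in range(mid - 1):  # walk to the node at index mid - 1
        current = current.next_node
    right_head = current.next_node  # node after the midpoint heads the right list
    current.next_node = None        # sever the connection: new tail of the left list
    return head, right_head
```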

in the next video let's tackle our merge

function in the last video we defined an

implementation for the version of the

split function that works on linked

lists it contained a tiny bit more code

than the array or list version that was

expected the merge function is no

different because like with the split

function after we carry out a comparison

operation we also need to swap

references to corresponding nodes all

right let's add our merge function over

here at the bottom below

the split functions we'll call this

merge and it's going to take a left

and right

now because this can get complicated

we're going to document this function

extensively and as always we're going to

start with a doc string

so we'll say that this function merges

two linked lists

sorting

by data in the nodes

and it returns a new

merged list

remember that in the merge step we're

going to compare values across two

linked lists and then return a new

linked list with nodes where the data is

sorted

so first we need to create that new

linked list let's add a comment in here

we'll say create

a new linked list

that contains nodes from

let's add a new line merging left and

right

okay and then create the list so merged

equal

new linked list

to this list we're going to do something

unusual we're going to add a fake head

this is so that when adding sorted nodes

we can reduce the amount of code we have

to write by not worrying about whether

we're at the head of the list once we're

done we can assign the first sorted node

as the head and discard the fake head

now this might not make sense at first

but not having to worry about whether

the new linked list already contains a

head or not makes the code simpler we'll

add another comment

add a fake head that is discarded

later

we'll say merged dot add zero

like we've been doing so far we'll

declare a variable named current to

point to the head of the list

set current to the head of the linked

list and then current equal

merged dot head

next we'll get a reference to the head

on each of the linked lists left and

right

so we'll say obtain

head nodes

for left and right linked lists

and here's call this left head

equal

left dot head

and right head equal right dot head

okay so with that setup out of the way

let's start iterating over both lists

so another comment

iterate over left and right

as long

or we'll say until

the

until we reach the tail node

of either

and we'll do that by saying while

left head

or right head

so this is a pattern that we've been

following all along we're going to

iterate until we hit the tail nodes of

both lists and we'll move this pointer

forward every time so that we traverse

the list with every iteration if you

remember the logic behind this from the

earlier version once we hit the tail

note of one list if there are nodes left

over in the other linked list we don't

need to carry out a comparison operation

anymore and we can simply add those

nodes to the merged list

the first scenario we'll consider is if

the head of the left linked list is none

this means we're already past the tail

of left and we can add all the nodes

from the right linked list to the final

merge list so here i'll say if

the head node of left is none

we're past the tail

add the node

from right to merged

linked

list so here we'll say if

left head

is none

current dot next node

remember current points to the head of

the merge list that we're going to

return so here we're setting its next

node reference to the head node on the

right link list so we'll say right head

then when we do that we'll move the

right head forward to the next node so

let's say right

head

equal right head

dot next node

this terminates the loop on the next

iteration let's look at a visualization

to understand why

let's say we start off with a linked

list containing four nodes so we keep

calling split on it until we have lists

with just a single head single node

linked lists essentially

so let's focus on these two down here

that we'll call left and right

we haven't implemented the logic for

this part yet but here we would compare

the data values and see which one is

less than the other

so we'll assume that left's head is

lesser than right's head so we'll set

this as the next node in the final merge

list

left is now an empty linked list so left

dot head equals none on the next pass

through the loop left head is none which

is the situation we just implemented

here we can go ahead and now assign

right head as the next node in the merged

linked list we know that right is also a

single node linked list

here's the crucial bit when we move the

pointer forward by calling next node on

the right node there is no node and the

right linked list is also

empty now which means that both left

head and right head are none and either

one of these would cause our loop

condition to terminate

so what we've done here is encoded a

stopping condition for the loop so we

need to document this because it can get

fuzzy so right above that line of code

i'll say call next

on right

to

set loop condition

to false

okay there's another way we can arrive

at this stopping condition and that's in

the opposite direction if we start with

the right head being none so here we'll

say i'm going to add another comment

if oops not there there

if

the head node of right

is none

we're past the tail

then we'll say

add the tail node from left

to merged linked list

and then we'll add that condition we'll

say else if right head

is none

now remember we can enter these even if

left head is none we can still go into

this condition we can still enter this

if statement and execute this logic

because the while loop the loop

condition here is an or statement so

even if left head is false if this

returns true because there's a value

there there's a node there the loop will

keep going

okay now in this case we want to set the

head of the left linked list as the next

node on the merge list so this is simply

the opposite of what we did over here

we'll set current dot next node

equal to left head

and then we'll move so after doing that

we can move the variable pointing to

left head forwards which as we saw

earlier is past the tail node and then

results in the loop terminating so we'll

say left head

equal

left

head dot next node and we'll add that

comment here as well so we'll say call

next on left to set loop condition

to false

because here right head is none and now

we make left head none these two

conditions we looked at where either the

left head or right head

were at the tail nodes of our respective

lists those are conditions that we run

into when we've reached the bottom of

our split where we have single element

linked lists or empty linked lists let's

account for our final condition where

we're evaluating a node that is neither

the head nor the tail of the list

and this condition we need to reach into

the nodes and actually compare the data

values to one another before we can

decide which node to add first to the

merged list

so here this is an else because we've

arrived at our third condition third and

final

and here we'll say not at either tail

node

obtain

node data to perform

comparison operations so let's get each

of those data values out of the

respective nodes so that we can compare

it so we'll say left

data equal left head dot data

and right data

equal right head dot data

okay what do we do next well we compare

but first let's add a comment so we'll

say if data

on left

is less than right

set current to left node and then

move

actually we'll add this in a second so

here we'll say if left data

is less than right data

then current dot next node

equal left head

and then

we'll add a comment and we'll say move

left head to next node

on that list so we'll say left head

equal left head

dot next node

just as our comment says we'll check if

the left data is less than the right

data if it is since we want a list in

ascending order we'll assign the left

node to be the next node in the merged

list we'll also move the left head

forward to traverse down to the next

node in that particular list now if left

is larger than right then we want to do

the opposite so we'll go back two spaces

another comment

if data on left is greater than right

set current

to

right node

okay so else

here we assign the right head to be the

next node in the merge list so

current.nextnode

equal right

head

and then comment

move right head to next node

so right

head equal

right head

dot next

node okay after doing that we move the

right head pointer to reference the next

node in the right list

and finally at the end of each iteration

of the while loop so not here but two

spaces back right make sure we're

indented at the same level as the while

so we got to go yep or not the same

level as the while but the same outer

scope

and then there we're going to say

move current to next node

so current equal current dot next node

okay don't worry if this is confusing as

always we'll look at a visualization in

just a bit so we'll wrap up this

function by discarding that fake head we

set earlier setting the correct node as

head and returning the linked list so

we'll add a comment

discard fake head

and set

first

merged node as head so here we'll say

head equal

merged dot head dot next

node and then merged dot head equal head

and finally return

merged
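the whole merge function just written can be sketched on bare nodes like this (an illustration mirroring the three cases above; the course's version takes LinkedList objects, and the Node class here is a minimal stand-in):

```python
class Node:
    # minimal stand-in for the course's Node class
    def __init__(self, data):
        self.data = data
        self.next_node = None

def merge(left_head, right_head):
    # merge two sorted node chains using a fake head that is
    # discarded at the end
    fake_head = Node(None)
    current = fake_head
    while left_head or right_head:
        if left_head is None:           # past left's tail: take from right
            current.next_node = right_head
            right_head = right_head.next_node
        elif right_head is None:        # past right's tail: take from left
            current.next_node = left_head
            left_head = left_head.next_node
        elif left_head.data < right_head.data:
            current.next_node = left_head
            left_head = left_head.next_node
        else:
            current.next_node = right_head
            right_head = right_head.next_node
        current = current.next_node     # move current to the node just added
    return fake_head.next_node          # discard the fake head
```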

okay we wrote a lot of code here a lot

of it was comments but still it's a

bunch let's take a quick break in the

next video we'll test this out evaluate

our results and determine the runtime of

our algorithm

okay first things first let's test out

our code now we'll keep it simple

because writing a robust verify function

would actually take up this entire video

instead i'll leave that up to you to try

as homework

okay so

at the very end

let's create a new linked list

let's add a few nodes to this so l add

i'm going to copy paste this

so it makes it easier for me

not to have to retype a bunch so i'll

add 10

uh then 2

44

15

and something like 200. okay then we'll

go ahead and print l so that we can

inspect this list

next

let's create a declare variable here so

we'll call this sorted linked list

and to this we're going to assign the

result of calling merge sort on l

and then we'll print this

so sorted

linked

list

okay since we've taken care of all the

logic we know that this gets added in as

nodes and then

let's see what this looks like all right

so hit save

and then bring up the console we're

going to type out python

linked

list underscore merge sort dot pi and

then enter okay so we see that linked

list we first created remember that what

we add first right that eventually

becomes a tail or right yeah so 10 is

the tail 200 is the last one so 200 is

the head because i'm calling add it

simply adds each one to the head of the

list so here we have 10 2 44 15 and 200

in the order we added and then the

sorted linked list sorts it out so it's

2 10 15 44 and 200. look at that a

sorted linked list
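the flow we just tested can be put together in one compact sketch on bare nodes (illustrative only — the course composes the LinkedList class's split and merge pieces instead, and counts the size by traversal just as its size method does):

```python
class Node:
    # minimal stand-in for the course's Node class
    def __init__(self, data):
        self.data = data
        self.next_node = None

def merge_sort(head):
    # recursively split at the midpoint, then merge back in sorted order
    if head is None or head.next_node is None:
        return head  # stopping condition: 0 or 1 nodes
    # count the size by walking the list
    size, node = 0, head
    while node:
        size += 1
        node = node.next_node
    # walk to the node at index (size // 2) - 1 and sever the connection
    mid_node = head
    for _ in range(size // 2 - 1):
        mid_node = mid_node.next_node
    right = mid_node.next_node
    mid_node.next_node = None
    return merge(merge_sort(head), merge_sort(right))

def merge(left, right):
    # fake-head merge, condensed from the merge function above
    fake = Node(None)
    current = fake
    while left or right:
        if left is None or (right is not None and right.data < left.data):
            current.next_node, right = right, right.next_node
        else:
            current.next_node, left = left, left.next_node
        current = current.next_node
    return fake.next_node
```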

okay so let's visualize this from the

top

we have a linked list containing five

nodes with integers 10 2 44 15 and 200 as

data respectively our merge sort

function calls split on this list the

split function calls size on the list

and gets back 5 which makes our midpoint

2.

using this midpoint we can split the

list using the node at index method

remember that when doing this we deduct

1 from the value of mid so we're going

to split here using an index value of 1.

effectively this is the same since we're

starting with an index value of 0 1

means we split after node 2. we assign

the entire list to left half

then create a new list and assign that

to right half

we can assign node 3 at index value 2 as

the head of the right list and remove

the references between node two and node

three so far so good right

okay so now we're back in the merge sort

function after having called split and

we have two linked lists

let's focus on just the left half

because if you go back and look at our

code we're going to call merge sort on

the left linked list again

this means the next thing we'll do is

run through that split process since

this is a linked list containing two

nodes this means that split is going to

return a new left and right list each

with one node again we're back in the

merge sort function which means that we

call merge sort on this left list again

since this is a single node linked list

on calling merge sort on it we

immediately return before we split since

we hit that stopping condition so we go

to the next line of code which is

calling merge sort on the right list as

well but again we'll get back

immediately because we hit that stopping

condition now that we have a left and

right that we get back from calling

merge sort we can call merge on them

inside the merge function we start by

creating a new linked list and attaching

a fake head

then we evaluate whether either the left

or the right head is none

since neither condition is true we go to

the final step where we evaluate the

data in each node

in this case the data in the right node

is less than the left node so we assign

the right node as the next node in the

merged linked list and move the right head

pointer forward

in the merged linked list we move our

current pointer forward to this new node

we've added and that completes one

iteration of the loop

on the next iteration right head now

points to none since that was a single

node list and we can assign the rest of

the left linked list which is

effectively the single node over to the

merged linked list here we discard the fake

head move the next node up to be the

correct head and return the newly merged

sorted linked list remember that at this

point because right head and left head

pointed to none our while loop

terminated

so in this way we recursively split and

repeatedly merge sub-lists until we're

back with one sorted linked list

the merge sort algorithm is a powerful

sorting algorithm but ultimately it

doesn't really do anything complicated

it just breaks the problem down until

it's really simple to solve

remember the technique here which we've

talked about before is called divide and

conquer so i like to think of merge sort

in this way there's a teacher at the

front of the room and she has a bunch of

books that she needs to sort into

alphabetical order instead of doing all

that work herself she splits that pile

into two and hands it to two students at

the front

each of those students split it into two

more and hand it to the four students

seated behind them as each student does

this eventually each individual

student has two books to compare and

they can sort it very easily and hand it

back to the student who gave it to them

in front of them who repeats the process

backwards so ultimately it's really

simple work is just efficiently

delegated

now back to our implementation here

let's talk about runtime so far other

than the node swapping we had to do it

seems like most of our implementation is

the same right in fact it is including

the problems that we ran into in the

list version as well so in the first

implementation of merge sort we thought

we had an algorithm that ran in big o of

n log n but turns out we didn't why well

the python list slicing operation if you

remember actually takes up some amount

of time amounting to big o of k

a true implementation of merge sort runs

in quasi-linear or log linear time that

is n times log n so we almost got there

but we didn't now in our implementation

of merge sort on a linked list we

introduce the same problem so if we go

back up to

the merge or rather the split function

this is where it happens now swapping

node references that's a constant time

operation no big deal comparing values

also constant time

the bottleneck here like list slicing is

in splitting a linked list at the midpoint

if we go back to our implementation you

can see here that we use the node at

index method which finds the node we

want by traversing the list

this means that every split operation

incurs a big o of k cost where k here is

the midpoint of the list effectively n

by 2 because we have to walk down the

list counting up the index until we get

to that node

given that overall splits take

logarithmic time our split function just

like the one we wrote earlier

incurs a cost of

big o of

k log n so here we'll say it takes

big o of k times log n

now the merge function also like the one

we wrote earlier takes linear time so

that one is good that one runs in the

expected amount of time so here we'll

say runs in linear

time

and that would bring our overall run

time so up at the merge sort function we

can say this runs in

big o of k n times log

n

it's okay though this is a good start

and one day when we talk about constant

factors and look at ways we can reduce

the cost of these operations using

different strategies we can come back

and re-evaluate our code to improve our

implementation for now as long as you

understand how merge sort works

conceptually what the run time and space

complexities look like and where the

bottlenecks are in your code that's

plenty of stuff

if you're interested in learning more

about how we would solve this problem

check out the teacher's notes in the

video in the next video let's wrap this

course up

and with that let's wrap up this course

in the prerequisite to this course

introduction to algorithms we learned

about basic algorithms along with some

concepts like recursion and big o that

set the foundation for learning about

implementing and evaluating algorithms

in this course we learned what a data

structure is and how data structures go

hand in hand with algorithms

we started off by exploring a data

structure that many of us use in our

day-to-day programming arrays or lists

as they are known in python

we took a peek under the hood at how

arrays are created and stored and

examined some of the common operations

carried out on arrays

these are operations that we write and

execute all the time but here we took a

step back and evaluated the run times of

these operations and how they affect the

performance of our code

after that we jumped into an entirely

new world where we wrote our own data

structure a singly linked list

admittedly linked lists aren't used much

in day-to-day problem solving but it is

a good data structure to start off with

because it is fairly straightforward to

understand and not that much different

from an array

we carried out the same exercise as we

did on arrays in that we looked at

common data operations but since this

was a type we defined on our own we

implemented these operations ourselves

and got to examine with a fine-tooth

comb how our code and the structure of

the type affected the runtime of these

operations

the next topic we tackled was

essentially worlds colliding we

implemented a sorting algorithm to sort

two different data structures

here we got to see how all of the

concepts we've learned so far

algorithmic thinking time and space

complexity and data structures all come

together to tackle the problem of

sorting data

this kind of exercise is one we're going

to focus on moving forward as we try to

solve more real-world programming

problems using different data structures

and algorithms

if you've stuck with this content so far

keep up the great work this can be a

complex topic but a really interesting

one and if you take your time with it

you will get a deeper understanding of

programming and problem solving as

always check the notes for more

resources and happy coding


you may have heard that algorithms and

computer science are boring or

frustrating they certainly can be hard

to figure out especially the way some

textbooks explain them but once you

understand what's going on algorithms

can seem fascinating clever or even

magical

to help further your understanding of

algorithms this course is going to look

at two categories sorting algorithms and

searching algorithms you could argue

that these are the easiest kinds of

algorithms to learn but in learning how

these algorithms are designed we'll

cover useful concepts like recursion and

divide and conquer that are used in many

other sorts of algorithms and can even

be used to create brand new ones

by the way all the code samples i'm

going to show in the videos will be in

python because it's a popular language

that's relatively easy to read but you

don't need to know python to benefit

from this course you can see the

teacher's notes for each video for info

on implementing these algorithms in your

own favorite language

our goal with this course is to give you

an overview of how sorting and searching

algorithms work but many algorithms have

details that can be handled in different

ways some of these details may distract

from the big picture so we've put them

in the teacher's notes instead you don't

need to worry about these when

completing the course for the first time

but if you're going back and referring

to it later be sure to check the

teacher's notes for additional info

suppose we have a list of names it's a

pretty big list a hundred thousand names

long this list could be part of an

address book or social media app

and we need to find the locations of

individual names within the list

possibly to look up additional data

that's connected to the name

let's assume there's no existing

function in our programming language to

do this or that the existing function

doesn't suit our purpose in some way

for an unsorted list our only option may

be to use linear search also known as

sequential search

linear search is covered in more detail

elsewhere on our site check the

teacher's notes for a link if you want

more details

you start at the first element you

compare it to the value you're searching

for if it's a match you return it if not

you go to the next element

you compare that to your target if it's

a match you return it if not you go to

the next element and so on through the

whole list

the problem with this is that you have

to search the entire list every single

time we're not doing anything to narrow

down the search each time we have to

search all of it
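as a rough sketch in python (a hedged example, not code from the course):

```python
def linear_search(items, target):
    """scan each element in turn; worst case looks at all n items: O(n)"""
    for index, item in enumerate(items):
        if item == target:
            return index  # found it: return its location
    return None  # searched the entire list without a match
```

on an unsorted list of a hundred thousand names, a single lookup may have to compare every one of them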

if you're searching a big list or

searching it repeatedly this amount of

time can slow your whole app down to the

point that people may not want to use it

anymore that's why it's much more common

to use a different algorithm for

searching lists binary search

binary search is also covered in more

detail elsewhere on our site check the

teacher's notes for a link

binary search does narrow the search

down for us specifically it lets us get

rid of half the remaining items we need

to search through each time

it does this by requiring that the list

of values be sorted

it looks at the value in the middle of

the list

if the value it finds is greater than

the target value it ignores all values

after the value it's looking at

if the value it finds is less than the

target value it ignores all values

before the value it's looking at

then it takes the set of values that

remain and looks at the value in the

middle of that list again if the value

it finds is greater than the target

value it ignores all values after the

value it's looking at if the value it

finds is less than the target value it

ignores all values before the value it's

looking at
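that halving process can be sketched like this (an illustrative version, assuming a list sorted in ascending order):

```python
def binary_search(sorted_items, target):
    """repeatedly halve the search range; requires sorted input: O(log n)"""
    first = 0
    last = len(sorted_items) - 1
    while first <= last:
        midpoint = (first + last) // 2
        if sorted_items[midpoint] == target:
            return midpoint
        elif sorted_items[midpoint] > target:
            # middle value too big: ignore everything after it
            last = midpoint - 1
        else:
            # middle value too small: ignore everything before it
            first = midpoint + 1
    return None  # the range shrank to nothing: target isn't in the list
```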

but as we mentioned binary search

requires the list of values you're

searching through to be sorted

if the list weren't sorted you would

have no idea which half of the values to

ignore because either half could contain

the value you're looking for you'd have

no choice but to use linear search

so before we can use binary search on a

list we need to be able to sort that

list we'll look at how to do that next

our end goal is to sort a list of names

but comparing numbers is a little easier

than comparing strings so we're going to

start by sorting a list of numbers

i'll show you how to modify our examples

to sort strings at the end of the course

to help make clear the importance of

choosing a good sorting algorithm we're

going to start with a bad one it's

called bogosort basically bogosort just

randomizes the order of the list

repeatedly until it's sorted

here's a python code file where we're

going to implement bogosort

it's not important to understand this

code here at the top although we'll have

info on it in the teacher's notes if you

really want it all you need to know is

that it takes the name of a file that we

pass on the command line loads it and

returns a python list which is just like

an array in other languages containing

all the numbers that it read from the

file

let me have the program print out the

list of numbers it loads so you can see

it we'll call the print method and we'll

pass it the list of numbers

save that let's run it real quick

with python bogosort.py

oh whoops and we need to

provide it

the name of the file here on the command

line that we're going to load so it's in

the numbers directory a slash separates

the directory name from the file name

5.txt

and there's our list of numbers that was

loaded from the file

okay let me delete that print statement

and then we'll move on

bogosort just randomly rearranges the

list of values over and over so the

first thing we're going to need is a

function to detect when the list is

sorted

we'll write an is sorted function that

takes a list of values as a parameter

it'll return true if the list passed in

is sorted or false if it isn't

we'll loop through the numeric index of

each value in the list from 0 to 1 less

than the length of the list like many

languages python list indexes begin at 0

so a list with a length of 5 has indexes

going from 0 through 4.

if the list is sorted then every value

in it will be less than the one that

comes after it so we test to see whether

the current item is greater than the one

that follows it

if it is it means the whole list is not

sorted so we can return false

if we get down here it means the loop

completed without finding any unsorted

values python uses white space to mark

code blocks so unindenting the code like

this marks the end of the loop

since all the values are sorted we can

return true

now we need to write the function that

will actually do the so-called sorting

the bogosort function will also take the

list of values it's working with as a

parameter

we'll call our is sorted function to

test whether the list is sorted we'll

keep looping until is sorted returns

true

python has a ready-made function that

randomizes the order of elements in the

list

since the list isn't sorted we'll call

that function here

and since this is inside the loop it'll

be randomized over and over until our is

sorted function returns true

if the loop exits it means is sorted

returned true and the list is sorted so

we can now return the sorted list

finally we need to call our bogosort

function pass it the list we loaded from

the file and print the sorted list it

returns
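put together, the sorting logic just described looks roughly like this (a sketch; the file-loading code is omitted):

```python
import random

def is_sorted(values):
    """return True if every value is no greater than the one after it"""
    for i in range(len(values) - 1):
        if values[i] > values[i + 1]:
            return False  # found an out-of-order pair
    return True  # the loop finished without finding unsorted values

def bogosort(values):
    """randomize the list over and over until it happens to be sorted"""
    while not is_sorted(values):
        random.shuffle(values)  # python's ready-made randomizer
    return values
```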

okay let's save this and try running it

we do so with python the name of the

script bogosort.py

and the name of the file we're going to

run it on

numbers/5.txt

it looks like it's sorting our list

successfully

but how efficient is this let's add some

code to track the number of times it

attempts to sort the list

up here at the top of the bogosort

function we'll add a variable to track

the number of attempts it's made we'll

name it attempts and we'll set its

initial value to zero since we haven't

made any attempts yet

with each pass through the loop we'll

print the current number of attempts

and then here at the end of the loop

after attempting to shuffle the values

we'll add one to the count of attempts

let's save this and let's try running it

again a couple times

in the console i can just press the up

arrow to bring up the previous command

and re-run it

so it looks like this first run to sort

this five element list took 363 attempts

let's try it again

this time it only took 91 attempts

we're simply randomizing the list with

each attempt so each

run of the program takes a random number

of attempts

now let's try this same program with a

larger number of items

python bogosort.py

numbers/8.txt

i have a list of eight items set up here

in this other file

this time it takes 11,000 attempts

only 487 this time

and this time thirteen thousand you can

see that the trend is increasing

steadily

the problem with bogosort is that it

doesn't make any progress toward a

solution with each pass

it could generate a list where just one

value is out of order but then on the

next attempt it could generate a list

where all the elements are out of order

again

stumbling on a solution is literally a

matter of luck and for lists with more

than a few items it might never happen

up next we'll look at selection sort

it's a sorting algorithm that's still

slow but it's better than bogosort

previously we showed you bogosort a

terrible sorting algorithm that

basically randomizes the order of a list

and then checks to see if it happens to

be sorted

the problem with bogosort is that it

doesn't get any closer to a solution

with each operation and so with lists

that have more than a few items it'll

probably never finish sorting them

now we're going to look at an algorithm

named selection sort it's still slow but

at least each pass through the list

brings it a little closer to completion

our implementation of selection sort is

going to use two arrays an unsorted

array and a sorted one some versions

move values around within just one array

but we're using two arrays to keep the

code simpler the sorted list starts out

empty but we'll be moving values from

the unsorted list to the sorted list one

at a time

with each pass we'll look through each

of the values in the unsorted array find

the smallest one and move that to the

end of the sorted array

we'll start with the first value in the

unsorted array and say that's the

minimum or smallest value we've seen so

far

then we'll look at the next value and

see if that's smaller than the current

minimum if it is we'll mark that as the

new minimum

then we'll move to the next value and

compare that to the minimum again if

it's smaller that becomes the new

minimum we continue that way until we

reach the end of the list

at that point we know whatever value we

have marked as the minimum is the

smallest value in the whole list

now here's the part that makes selection

sort better than bogosort we then move

that minimum value from the unsorted

list to the end of the sorted list

the minimum value isn't part of the

unsorted list anymore so we don't have

to waste time looking at it anymore all

our remaining comparisons will be on the

remaining values in the unsorted list

then we start the process over at this

point our list consists of the numbers 8

5 4 and 7. our first minimum is 8.

we start by comparing the minimum to

five five is smaller than eight so five

becomes the new minimum then we compare

five to four and four becomes the new

minimum seven is not smaller than four

though so four remains the minimum four

gets moved to the end of the sorted

array becoming its second element

the process repeats again eight is the

first minimum but five is smaller so

that becomes the minimum seven is larger

so five stays as the minimum and five is

what gets moved over to the sorted

array and so on until there are no more

items left in the unsorted array and all

we have left is the sorted array

so that's how selection sort works in

general now let's do an actual

implementation of it

this code here at the top is the same as

we saw in the bogosort example it just

loads a python list of numbers from a

file

let's implement the function that will

do our selection sort we're going to

pass in our python list containing all

the unsorted numbers we'll create an

empty list that will hold all our sorted

values

we'll loop once for each value in the

list

we call a function named index of min

which we're going to write in just a

minute that finds the minimum value in

the unsorted list and returns its index

then we call the pop method on the list

and pass it the index of the minimum

value pop will remove that item from the

list and return it we then add that

value to the end of the sorted list

going up a level of indentation signals

to python that we're ending the loop

after the loop finishes we return the

sorted list

now we need to write the function that

picks out the minimum value we pass in

the list we're going to search

we mark the first value in the list as

the minimum it may or may not be the

actual minimum but it's the smallest

we've seen on this pass through the list

it's also the only value we've seen on

this pass through the list so far

now we loop through the remaining values

in the list after the first

we test whether the value we're

currently looking at is less than the

previously recorded

minimum if it is then we set the current

index as the new index of the minimum

value

after we've looped through all the

values we return the index of the

smallest value we found

lastly we need to actually run our

selection sort method and print the

sorted list it returns
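here's roughly what those two functions look like together (a sketch; the naming follows the description above but may not match the course file exactly):

```python
def index_of_min(values):
    """return the index of the smallest value in the list"""
    min_index = 0  # assume the first value is the minimum so far
    for i in range(1, len(values)):
        if values[i] < values[min_index]:
            min_index = i  # found a new minimum
    return min_index

def selection_sort(values):
    """repeatedly move the smallest unsorted value to the sorted list"""
    sorted_values = []
    for _ in range(len(values)):
        # pop removes the minimum from the unsorted list and returns it
        smallest = values.pop(index_of_min(values))
        sorted_values.append(smallest)
    return sorted_values
```

with duplicate values, index of min keeps the first minimum it finds because the comparison uses a strict less-than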

let's save this and now let's try

running it we run the python command and

pass it the name of our script

selection_sort.py

in the numbers directory i've saved

several data files filled with random

numbers one on each line 5.txt

has five lines 8.txt has eight

lines and to help us really measure the

speed of our algorithms 10000.txt

has ten thousand lines i've even

created a file with a million numbers

our script takes the path of a file to

load as an argument so i'll give it the

path of our file with five numbers

numbers/5.txt

the script runs reads the numbers in the

file into a list calls our selection

sort method with that list and then

prints the sorted list

let me add a couple print statements

within the selection sort function so

you can watch the sort happening

don't worry about figuring out the

python formatting string that i use it's

just there to keep the two lists neatly

aligned

i'll add the first print statement

before the loop runs at all

i'll have it print out the unsorted list

and the sorted list

i'll add an identical print statement

within the loop so we can watch values

moving from the unsorted list to the

sorted list

let's save this

and we'll try running the same command

again the output looks like this you can

see the unsorted list on the left and

the sorted list on the right

initially the sorted list is empty on

the first pass it selects the lowest

number 1 and moves it to the sorted list

then it moves the next lowest number

over four

this repeats until all the numbers have

been moved to the sorted list

i have another file with eight different

numbers in it let's try our program with

that

python selection_sort.py

numbers/8.txt

you can see the same process at work

here notice that this file had some

duplicate values too that's okay though

because the index of min function only

updates the minimum index if the current

value is less than the previous minimum

if they're equal it just keeps the first

minimum value it found and waits to move

the duplicate value over until the next

pass through the list

so now we know that the selection sort

algorithm works but the data sets we've

been giving it to sort are tiny in the real

world algorithms need to work with data

sets of tens of thousands or even

millions of items and do it fast i have

another file with ten thousand random

numbers in it

let's see if selection sort can handle

that

if i run this as it is now though it'll

print out a lot of debug info as it

sorts the list so first i'm going to go

into the program and remove the two

print statements in the selection sort

function

now let's run the program again on the

10000.txt file and see how long it takes

python selection_sort.py

numbers/10000.txt

one one-thousand two one-thousand three

one-thousand four one-thousand five

one-thousand six one-thousand seven

one-thousand eight one-thousand nine

one-thousand ten one-thousand eleven

one-thousand twelve one-thousand

thirteen one-thousand and it prints

out all ten thousand of those numbers

neatly sorted it took a little bit

though how long well counting the time

off vocally isn't very precise and other

programs running on the system can skew

the amount of time your program takes to

complete

let me show you a unix command that's

available here in workspaces which can

help you type time followed by a space

and then the command you want to run

so this command by itself will print the

contents of our 5.txt file cat as in

concatenate numbers/5.txt

and this command will do the same thing

but it'll also keep track of how long it

takes the cat program to complete and

report the result time

cat

numbers/5.txt

the real row in the results is the

actual amount of time from when the

program started running to when it

completed we can see it finished in a

fraction of a second but as we said

other programs running on the system can

take cpu resources in which case your

program will seem slower than it is so

we generally want to ignore the real

result

the user result is the amount of time

the cpu actually spent running the

program code so this is the total amount

of time the code inside the cat program

took to run

the sys result is the amount of time the

cpu spent running linux kernel calls

that your code made the linux kernel is

responsible for things like network

communications and reading files so

loading the 5.txt file is probably

included in this result

in evaluating code's performance we're

generally going to want to add together

the user and sys results but cat is a

very simple program let's try running

the time command on our code and see if

we get a more interesting result

time python

selection_sort.py

numbers/10000.txt

this takes much longer to complete

nearly 12 seconds according to the real

time measurement but as we said the real

result is often skewed so let's

disregard that

if we add the user and sys runtimes

together we get about 6 seconds

the time for the program to complete

will vary a little bit each time you run

it but if it's doing the same operations

it usually won't change more than a

fraction of a second if i run our

selection sort script on the same file

you can see it completes in roughly the

same time

now let's try it on another file with 1

million numbers time python

selection_sort.py

numbers/1000000.txt

how long does this one take i don't even

know while designing this course i tried

running this command and my workspace

connection timed out before it completed

so we'll just say that selection sort

takes a very very long time to sort a

million numbers

if we're going to sort a list that big

we're going to need a faster algorithm

we'll look into alternative sorting

algorithms shortly

the next two sorting algorithms we look

at will rely on recursion which is the

ability of a function to call itself so

before we move on we need to show you

how recursion works

recursive functions can be very tricky

to understand imagine a row of dominoes

stood on end where one domino falling

over causes the next domino to fall over

which causes the next domino to fall

over causing a chain reaction it's kind

of like that

let's suppose we need to write a

function that adds together all the

numbers in an array or in the case of

python a list

normally we'd probably use a loop for

this sort of operation

the function takes a list of the numbers

we want to add

the total starts at zero

we loop over every number contained in

the list and we add the current number

to the total

once we're done looping we return the

accumulated total
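that loop version looks something like this (called sum_list here rather than sum so it doesn't shadow python's built-in):

```python
def sum_list(numbers):
    """add together all the numbers in a list with an ordinary loop"""
    total = 0
    for number in numbers:
        total += number  # add the current number to the running total
    return total
```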

if we call this sum function with a list

of numbers it'll return the total when

we run this program it'll print out that

return value 19. let's try it real quick

python

recursion.py oh whoops

mustn't forget to save my work here

and run it and we see the result 19.

to demonstrate how recursion works let's

revise the sum function to use recursion

instead note that recursion is not the

most efficient way to add a list of

numbers together but this is a good

problem to use to demonstrate recursion

because it's so simple one thing before

i show you the recursive version though

this example is going to use the python

slice syntax so i need to take a moment

to explain that for those not familiar

with it

a slice is a way to get a series of

values from a list

let's load up the python repl or read

evaluate print loop so i can demonstrate

we'll start by creating a list of

numbers to work with numbers equals

a list

with 0 1 2 3 and 4

containing those numbers

like arrays in most other languages

python list indexes start at 0 so

numbers

1

will actually get the second item from

the list

with slice notation i can actually get

several items back

it looks just like accessing an

individual index of a list

but then i type a colon

followed by the list index that i want

up to but not including

so numbers 1 colon 4 would get us the

second up to but not including the fifth

items from the list that is it'll get us

the second through the fourth items

now i know what you're thinking and

you're right that up to but not

including rule is a little

counterintuitive but you can just forget

all about it for now because we won't be

using a second index with any of our

python slice operations in this course

here's what we will be using when you

leave the second index off of a python

slice it gives you the items from the

first index up through the end of the

list wherever that is so numbers 1 colon

with no index following it will give us

items from the second index up through

the end of the list

numbers 2 colon will give us items from

the third index up to the end of the

list

you can also leave the first index off

to get everything from the beginning of

the list numbers

colon 3 will get us everything from the

beginning of the list up to but not

including the third index

it's also worth noting that if you take

a list with only one item and you try to

get everything from the non-existent

second item onwards the result will be

an empty list

so if i create a list with just one item

in it

and i try

to access from the second element

onwards the second element doesn't exist

so the result will be an empty list
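the repl session above boils down to a few lines you can try yourself:

```python
numbers = [0, 1, 2, 3, 4]

second_item = numbers[1]    # indexes start at 0, so this is the second item
middle = numbers[1:4]       # index 1 up to but not including index 4
from_second = numbers[1:]   # second item through the end of the list
from_third = numbers[2:]    # third item through the end of the list
up_to_third = numbers[:3]   # beginning up to but not including index 3

single = [42]
empty = single[1:]          # slicing past the end gives an empty list
```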

don't worry too much about remembering

python slice syntax it's not an

essential part of sorting algorithms or

recursion i'm only explaining it to help

you read the code you're about to see

so i'm going to exit the python repl

now that we've covered recursion we can

convert our sum function to a recursive

function

it'll take the list of numbers to add

just like before

now here's the recursive part we'll have

the sum function call itself we use

slice notation to pass the entire list

of numbers except the first one

then we add the first number in the list

to the result of the recursive function

call and return the result

so if we call sum with four numbers

first it'll call itself with the

remaining three numbers that call to sum

will then call itself with the remaining

two numbers and so on

but if we save this and try to run it

python recursion.py

well first we get a syntax error it

looks like i accidentally indented

something i shouldn't have so let me go

fix that real quick

there we go that suggested to python

that there was a loop or something there

when there wasn't

so let's go back to the terminal and try

running this again

there we go now we're getting the error

i was expecting recursion error maximum

recursion depth exceeded

this happens because sum gets into an

infinite loop it keeps calling itself

over and over the reason is that when we

get down to a list of just one element

and we take a slice from the

non-existent second element to the end

the result is an empty list that empty

list gets passed to the recursive call

to sum which passes an empty list in its

recursive call to sum and so on until

the python interpreter detects too many

recursive calls and shuts the program

down what we need is to add a base case

to this recursive function a condition

where the recursion stops this will keep

it from getting into an infinite loop

with the sum function the base case is

when there are no elements left in the

list in that case there is nothing left

to add and the recursion can stop

a base case is the alternative to a

recursive case a condition where

recursion should occur for the sum

function the recursive case is when

there are still elements in the list to

add together

let's add a base case at the top of the

function

python treats a list that contains one

or more values as a true value and it

treats a list containing no values as a

false value

so we'll add an if statement that says

if there are no numbers in the list we

should return a sum of zero that way the

function will exit immediately without

making any further recursive calls to

itself

we'll leave the code for the recursive

case unchanged

if there are still numbers in the list

the function will call itself with any

numbers after the first then add the

return value to the first number in the

list
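with the base case added the recursive version looks like this (again named sum_list to avoid shadowing python's built-in sum):

```python
def sum_list(numbers):
    """recursively add together all the numbers in a list"""
    if not numbers:
        return 0  # base case: an empty list adds up to zero
    # recursive case: the first number plus the sum of the rest
    return numbers[0] + sum_list(numbers[1:])
```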

let's save this and try running it again

python recursion.py

output the sum of the numbers in the

list 19 but it's still not really clear

how this worked let's add a couple print

statements that will show us what it's

doing

we'll show the recursive call to sum and

what it's being called with

we'll also add a call to print right

before we return showing which of the

calls the sum is returning and what it's

returning

let me save this and resize the console

a bit

and let's try running it again

python recursion.py

since the print calls are inside the sum

function the first call to sum 1 2 7 9

isn't shown only the recursive calls are

this first call to sum ignores the first

item in the list 1 and calls itself

recursively it passes the remaining

items from the list 2 7 and 9.

that call to sum again ignores the first

item in the list it receives 2 and again

calls itself recursively it passes the

remaining items in the list 7 and 9.

that call ignores the 7 and calls itself

with a 9

and the last call shown here ignores the

9

and calls itself with an empty list

at this point none of our recursive

calls to sum have returned yet each of

them is waiting on the recursive call it

made to sum to complete

python and other programming languages

use something called a call stack to

keep track of this series of function

calls each function call is added to the

stack along with the place in the code

that it needs to return when it

completes

but now the empty list triggers the base

case causing the recursion to end and

the sum function to return zero

that zero value is returned to its

caller the caller adds the zero to the

first and only value in its list nine

the result is nine

that nine value gets returned to the

caller which adds it to the first value

in the list it received seven

the result is sixteen

that sixteen value is returned to the

caller which adds it to the first value

in the list it received two the result

is 18.

that 18 value is returned to the caller

which adds it to the first value in the

list it received one the result is 19.

that 19 value is returned to the caller

which is not the sum function

recursively calling itself but our main

program this is our final result which

gets printed

it's the same result we got from the

loop-based version of our program the

end

we don't want the print statements in

our final version of the program so let

me just delete those real quick

and there you have it a very simple

recursive function well the function is

simple but as you can see the flow of

control is very complex don't worry if

you didn't understand every detail here

because we won't be using this

particular example again

there are two fundamental mechanisms you

need to remember a recursive function

needs a recursive case that causes it to

call itself

and it also needs to eventually reach a

base case that causes the recursion to

stop

you've seen bogo sort which doesn't make

any progress towards sorting a list with

each pass either it's entirely sorted or

it isn't

you've seen selection sort which moves

one value over to a sorted list with

each pass so that it has fewer items to

compare each time

now let's look at an algorithm that

speeds up the process further by further

reducing the number of comparisons it

makes it's called quick sort

here's some python code where we'll

implement quick sort again you can

ignore these lines at the top we're just

using them to load a file full of

numbers into a list

the quick sort algorithm relies on

recursion to implement it we'll write a

recursive function we'll accept the list

of numbers to sort as a parameter

quicksort is recursive because it keeps

calling itself with smaller and smaller

subsets of the list you're trying to

sort we're going to need a base case

where the recursion stops so it doesn't

enter an infinite loop

lists that are empty don't need to be

sorted and lists with just one element

don't need to be sorted either in both

cases there's nothing to flip around so

we'll make that our base case if there

are zero or one elements in the list

passed to the quick sort function we'll

return the unaltered list to the caller

lastly we need to call our quick sort

function with our list of numbers and

print the list it returns

that takes care of our base case now we

need a recursive case

we're going to rely on a technique

that's common in algorithm design called

divide and conquer basically we're going

to take our problem and split it into

smaller and smaller problems until

they're easy to solve

in this case that means taking our list

and splitting it into smaller lists

viewers a suggestion the process i'm

about to describe is complex there's

just no way around it if you're having

trouble following along remember the

video playback controls feel free to

slow the playback down rewind or pause

the video as needed after you watch this

the first time you may also find it

helpful to rewind and make your own

diagram of the process as we go

okay ready here goes

suppose we load the numbers from our

8.txt file into a list how do we divide

it

it would probably be smart to have our

quicksort function divide the list in a

way that brings it closer to being

sorted let's pick an item from the list

we'll just pick the first item for now

four

we'll call this value we've picked the

pivot like the center of a seesaw on a

playground

we'll break the list into two sublists

the first sub-list will contain all the

items in the original list that are

smaller than the pivot the second

sub-list will contain all the items in

the original list that are greater than

the pivot

the sub list of values less than and

greater than the pivot aren't sorted

but what if they were you could just

join the sub lists and the pivot all

together into one list and the whole

thing would be sorted

so how do we sort the sublist we call

our quick sort function recursively on

them this may seem like magic but it's

not it's the divide and conquer

algorithm design technique at work

if our quick sort function works on the

big list then it will work on the

smaller list too

for our first sub list we take the first

item it's the pivot again

that's three

we break the sub list into two sub lists

one with everything less than the pivot

and one with everything greater than the

pivot

notice that there's a value equal to the

pivot that gets put into the less than

sub-list our finished quicksort function

is actually going to put everything

that's less than or equal to the pivot

in the first sub-list

but i don't want to say less than or

equal to over and over so i'm just

referring to it as the less than pivot

sub-list

also notice that there are no values

greater than the pivot that's okay when

we join the sub-lists back together that

just means nothing will be in the return

list after the pivot

we still have one sub list that's more

than one element long so we call our

quick sort function on that too you and

i can see that it's already sorted but

the computer doesn't know that so it'll

call it anyway just in case

it picks the first element 2 as a pivot

there are no elements less than the

pivot and only one element greater than

the pivot

that's it for the recursive case we've

finally hit the base case for our quick

sort function it'll be called on both

the empty list of elements less than the

pivot and the one item list of elements

greater than the pivot but both of these

lists will be returned as they are

because there's nothing to sort

so now at the level of the call stack

above this the return sorted lists are

used in place of the unsorted sub-list

that's less than the pivot and the

unsorted sub-list that's greater than

the pivot

these are joined together into one

sorted list remember that any empty

lists get discarded

then at the level of the call stack

above that the return sorted lists are

used in place of the unsorted sub-lists

there again they were already sorted but

the quick sort method was called on them

anyway just in case

the sub-lists are joined together into

one sorted list at the level of the call

stack above that the return sorted list

is used in place of the unsorted

sub-list that's less than the pivot so

now everything that's less than or equal

to the pivot is sorted

now we call quick sort on the unsorted

sub-list that's greater than the pivot

and the process repeats for that

sub-list

we pick the first element six is the

pivot we split the sub-list into

sub-lists of elements that are less than

and greater than this pivot and we

recursively call the quicksort function

until those sub-lists are sorted

eventually a sorted sub-list is returned

to our first quick sort function call

we combine the sub-list that's less than

or equal to the pivot the pivot itself

and the sub-list that's greater than the

pivot into a single list and because we

recursively sorted the sub lists the

whole list is sorted

so that's how the quick sort function is

going to work in the next video we'll

show you the actual code

quicksort works by picking a pivot value

then splitting the full list into two

sub-lists the first sub-list has all the

values less than or equal to the pivot

and the second sub-list has all the

values greater than the pivot the quick

sort function recursively calls itself

to sort these sub-lists and then to sort

the sub-lists of those sub-lists until

the full list is sorted

now it's time to actually implement this

in code

we already have the base case written

any list passed in that consists of 0 or

1 values will be returned as is because

there's nothing to sort

now we need to create a list that will

hold all the values less than the pivot

that list will be empty at first we do

the same for values greater than the

pivot

next we need to choose the pivot value

for now we just grab the first item from

the list

then we loop through all the items in

the list following the pivot

we check to see whether the current

value is less than or equal to the pivot

if it is we copy it to the sub-list of

values less than the pivot

otherwise the current value must be

greater than the pivot

so we copy it to the other list

this last line is where the recursive

magic happens we call quick sort

recursively on the sub-list that's less

than the pivot we do the same for the

sub-list that's greater than the pivot

those two calls will return sorted lists

so we combine the sorted values less

than the pivot the pivot itself and the

sorted values greater than the pivot

that gives us a complete sorted list

which we return
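
assembling the steps just described gives a sketch like the following (the variable names are assumptions and the course's actual code may differ slightly):

```python
def quicksort(values):
    # base case: lists with zero or one values are already sorted
    if len(values) <= 1:
        return values

    less_than_pivot = []     # values less than or equal to the pivot
    greater_than_pivot = []  # values greater than the pivot

    # for now just grab the first item as the pivot
    pivot = values[0]
    for value in values[1:]:
        if value <= pivot:
            less_than_pivot.append(value)
        else:
            greater_than_pivot.append(value)

    # recursively sort each sub-list, then join the sorted
    # values <= pivot, the pivot itself, and the sorted
    # values > pivot into one complete sorted list
    return quicksort(less_than_pivot) + [pivot] + quicksort(greater_than_pivot)
```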

this took a lot of prep work are you

ready let's try running it python

quick sort

dot py

numbers 8.txt

it outputs our sorted list

i don't know about you but this whole

thing still seems a little too magical

to me let's add a couple print

statements to the program so we can see

what it's doing

first we'll add a print statement right

before the first call to the quick sort

function so we can see the unsorted list

we'll also add a print right within the

quick sort function right before the

recursive calls again this string

formatting code is just to keep the info

aligned in columns

let's try running this again

and now you can see our new debug output

each time quicksort goes to call itself

recursively it prints out the pivot as

well as the sub list of items less than

or equal to the pivot if any and the sub

list of items greater than the pivot if

any you can see that first it sorts the

sub list of items less than the pivot at

the top level

it goes through a couple levels of

recursion to do that

there are actually additional levels of

recursion but they're from calls to

quick sort with a list of 0 or 1

elements and those calls return before

the print statement is reached

then it starts sorting the second sub

list from the top level with items

greater than the original pivot

you can see a couple levels of recursion

for that sort as well

finally when both sublists are

recursively sorted the original call to

the quicksort function returns and we

get the sorted list back

so we know that it works the next

question is how well does it work let's

go back to our file of ten thousand

numbers and see if it can sort those

first though i'm going to remove our two

debug calls to print so it doesn't

produce unreadable output

a quick note if you try running this on

a file with a lot of repeated values

it's possible you'll get a runtime error

maximum recursion depth exceeded

if you do see the teacher's notes for a

possible solution

now let's try running our quick sort

program against the ten thousand dot

txt file python

quick sort dot py

numbers 10 000 dot txt

there we go and it seems pretty fast but

how fast exactly let's run it with the

time command to see how long it takes

time python

quick sort dot py

numbers 10 000.txt

remember we need to ignore the real

result and add the user and sys results

it took less than a second of cpu time

to sort 10 000 numbers with quicksort

remember that selection sort took about

13 seconds

that's a pretty substantial improvement

and with a million numbers selection

sort took so long that it never even

finished successfully let's see if

quicksort performs any better

time python quick sort dot py

numbers

1 million dot txt

not only did quicksort sort a million

numbers successfully it only took about

11 seconds of cpu time

quicksort is clearly much much faster

than selection sort how much faster

that's something we'll discuss in a

later video

what we've shown you here is just one

way to implement quicksort

although the basic algorithm is always

the same the details can vary like how

you pick the pivot see the teacher's

notes for more details

let's review another sorting algorithm

merge sort so that we can compare it

with quick sort merge sort is already

covered elsewhere on the site so we

won't go into as much detail about it

but we'll have more info in the

teacher's notes if you want it

both quicksort and merge sort are

recursive the difference between them is

in the sorting mechanism itself whereas

quicksort sorts a list into two

sub-lists that are less than or greater

than a pivot value

merge sort simply splits the list in

half recursively and then sorts the

halves as it merges them back together

that's why it's called merge sort

you may recognize this code at the top

by now it just loads a file full of

numbers into a list

let's define a recursive merge sort

function as usual it'll take the list or

sub-list that we want it to sort

our base case is the same as with

quicksort if the list has zero or one

values there's nothing to sort so we

return it as is

if we didn't return it means we're in

the recursive case so first we need to

split the list in half we need to know

the index we should split on so we get

the length of the list and divide it by

two so for example if there are eight

items in the list we'll want an index of

four

but what if there were an odd number of

items in the list like seven we can't

have an index of 3.5 so we'll need to

round down in that case since we're

working in python currently we can take

advantage of a special python operator

that divides and rounds the result down

the floor division operator it consists

of a double slash
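
as a quick illustration of the floor division operator (not part of the course's script itself):

```python
# regular division always produces a float in python 3
print(8 / 2)   # 4.0

# floor division divides and rounds the result down to an integer
print(8 // 2)  # 4
print(7 // 2)  # 3, not 3.5
```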

now we'll use the python slice syntax to

get the left half of the list

we'll pass that list to a recursive call

to the merge sort function

we'll also use slice syntax to get the

right half of the list and pass that to

merge sort as well

now we need to merge the two halves

together and sort them as we do it we'll

create a list to hold the sorted values

and now we get to the complicated part

merging the two halves together and

sorting them as we do it

we'll be moving from left to right

through the left half of the list

copying values over to the sorted values

list as we go this left index variable

will help us keep track of our position

at the same time we'll also be moving

from left to right through the right

half of the list and copying values over

so we need a separate right index

variable to track that position as well

we'll keep looping until we've processed

all of the values in both halves of the

list

we're looking to copy over the lowest

values first so first we test whether

the current value on the left side is

less than the value on the right side

if the left side value is less that's

what we'll copy over to the sorted list

and then we'll move to the next value in

the left half of the list

otherwise the current value from the

right half must have been lower

so we'll copy that value to the sorted

list instead

and then we'll move to the next value in

the right half of the list

that ends the loop at this point one of

the two unsorted halves still has a

value remaining and the other is empty

we won't waste time checking which is

which we'll just copy the remainder of

both lists over to the sorted list the

one with the value left will add that

value and the empty one will add nothing

all the numbers from both halves should

now be copied to the sorted list so we

can return it

finally we need to kick the whole

process off we'll call the merge sort

function with the list of numbers we

loaded and print the result
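
the merge sort function just described can be sketched like this (the names are assumptions and the course's actual code may differ slightly):

```python
def merge_sort(values):
    # base case: lists with zero or one values are already sorted
    if len(values) <= 1:
        return values

    # split the list in half, rounding the index down with
    # floor division in case the length is odd
    middle_index = len(values) // 2
    left_values = merge_sort(values[:middle_index])
    right_values = merge_sort(values[middle_index:])

    # merge the two sorted halves, copying the lowest values first
    sorted_values = []
    left_index = 0
    right_index = 0
    while left_index < len(left_values) and right_index < len(right_values):
        if left_values[left_index] < right_values[right_index]:
            sorted_values.append(left_values[left_index])
            left_index += 1
        else:
            sorted_values.append(right_values[right_index])
            right_index += 1

    # one half still has values remaining and the other is empty;
    # copying the remainder of both adds whatever is left
    sorted_values += left_values[left_index:]
    sorted_values += right_values[right_index:]
    return sorted_values
```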

let's save this

and we'll try it out on our file with

eight numbers

python merge sort dot py

numbers

eight dot txt

and it prints out the sorted list

but again this seems pretty magical

let's add some print statements to get

some insight into what it's doing

first we'll print the unsorted list so

we can refer to it we'll add a print

statement right before we call the merge

sort function for the first time

then we'll add another print statement

within the merge sort function right

after the recursive calls this will show

us the sorted left half and right half

that it's returning again don't worry

about the fancy python formatting string

it just keeps the values neatly aligned

let me resize my console

clear the screen

and then we'll try running this again

what we're seeing are the values being

returned from the recursive merge sort

function calls not the original calls to

merge sort so what you see here is after

we reach the base case with a list

that's only one item in length and the

recursive calls start returning

the original list gets split into two

unsorted halves four six three and two

and nine seven three and five

the first half gets split in half again

four and six and three and two

and each of those halves is halved again

into single element lists

there's nothing to sort in the single

element list so they're returned from

the merge sort function as is

those single element lists get merged

into two sub lists and sorted as they do

so the four and six sub-list looks the

same after sorting as it did before

sorting but the three and the two get

sorted as they're combined into a

sub-list the new order is two three

the order is shifted again when those

two sub-lists get combined back into a

single list two three four six

then we recursively sort the right half

of the original list

nine seven three five

it gets split in half again nine seven

and three five

and each of those halves get broken into

single element lists

there's nothing to sort there so the

single element lists are returned as is

the first two are sorted as they're

merged seven nine and so are the second

three five

and then those two sub lists get sorted

as they're combined into another sub

list three five seven nine

and finally everything is sorted as it's

merged back into the full sorted list

two three three four five six seven nine

that's how merge sort works on a list of

eight numbers let's see if it works on a

bigger list

first i'll remove the two print

statements so we don't get an

overwhelming amount of debug output

then i'll run it on a list of ten

thousand items python merge sort dot py

numbers ten thousand dot

txt

not only did it work it was pretty fast

but which is faster merge sort or quick

sort we'll look at that next

i've removed the call to print that

displays the sorted list at the end of

our selection sort quick sort and merge

sort scripts

that way it'll still run the sort but

the output won't get in the way of our

comparing runtimes

let's try running each of these scripts

and see how long it takes

time python

selection sort we'll do that one first

numbers

10 000 dot txt

we combine the user and sys results and

that gives us about six seconds

now let's try quick sort time python

quick sort

dot py numbers

ten thousand dot txt

much faster less than a second and

finally time python

merge sort dot py numbers ten thousand

dot txt

a little longer but far less than a

second so even on a list with just 10

000 numbers selection sort takes many

times as long as quicksort and merge

sort

and remember i ran the selection sort

script on a file with a million numbers

and it took so long that my workspace

timed out before it completed

it looks like selection sort is out of

the running as a viable sorting

algorithm it may be easy to understand

and implement but it's just too slow to

handle the huge data sets that are out

in the real world

now let's try quicksort and merge sort

on our file with a million numbers and

see how they compare there time python

quicksort dot py

numbers

million

dot txt

looks like it took about 11 seconds of

cpu time

now let's try merge sort time python

merge sort dot py

numbers

1 million

dot txt

that took about 15 seconds of cpu time

it looks like quicksort is marginally

faster than merge sort on this sample

data

we had to learn a lot of details for

each algorithm we've covered in this

course developers who need to implement

their own algorithms often need to

choose an algorithm for each and every

problem they need to solve and they

often need to discuss their decisions

with other developers can you imagine

needing to describe all the algorithms

in this same level of detail all the

time you'd spend all your time in

meetings rather than programming

that's why big o notation was created as

a way of quickly describing how an

algorithm performs as the data set it's

working on increases in size

big o notation lets you quickly compare

several algorithms to choose the best

one for your problem

the algorithms we've discussed in this

course are very well known some job

interviewers are going to expect you to

know their big o run times so let's look

at them

remember that the n in big o notation

refers to the number of elements you're

operating on with selection sort you

need to check each item in the list to

see if it's the lowest so you can move

it over to the sorted list so that's n

operations

suppose you're doing selection sort on a

list of five items and n in this case

would be five so that's five operations

before you can move an item to the

sorted list

but with selection sort you have to loop

over the entire list for each item you

want to move there are five items in the

list and you have to do five comparisons

to move each one so it's more like 5

times 5 operations or if we replace 5

with n it's n times n or n squared

but wait you might say half of that 5 by

5 grid of operations is missing because

we're testing one fewer item in the

unsorted list with each pass so isn't it

more like one half times n times n

and this is true we're not doing a full

n squared operations

but remember in big o notation as the

value of n gets really big constants

like one half become insignificant and

so we discard them

the big o runtime of selection sort is

widely recognized as being o n squared

quicksort requires one operation for

each element of the list it's sorting

it needs to select a pivot first and

then it needs to sort elements into

lists that are less than or greater than

the pivot

so that's n operations to put that

another way if you have a list of eight

items then n is eight so it will take

eight operations to split the list

around the pivot

but of course the list isn't sorted

after splitting it around the pivot just

once you have to repeat those eight

operations several times in the best

case you'll pick a pivot that's right in

the middle of the list so that you're

dividing the list exactly in half

then you keep dividing the list in half

until you have a list with a length of

one

the number of times you need to divide n

in half until you reach one is expressed

as log n

so you need to repeat n sorting

operations log n times that leaves us

with the best case run time for quick

sort of o n log n

but that's the best case what about the

worst case well if you pick the wrong

pivot you won't be dividing the list

exactly in half if you pick a really bad

pivot the next recursive call to

quicksort will only reduce the list

length by one

since our quicksort function simply

picks the first item to use as a pivot

we can make it pick the worst possible

pivot repeatedly simply by giving it a

list that's sorted in reverse order

if we pick the worst possible pivot

every time we'll have to split the list

once for every item it contains and then

do n sorting operations on it

you already know another sorting

algorithm that only manages to reduce

the list by one element with each pass

selection sort

selection sort has a runtime of o n

squared and in the worst case that's the

run time for quicksort as well

so which do we consider when trying to

decide whether to use quicksort the best

case or the worst case

well as long as your implementation

doesn't just pick the first item as a

pivot which we did so we could

demonstrate this issue

it turns out that on average quicksort

performs closer to the best case

many quicksort implementations

accomplish this simply by picking a

pivot at random on each recursive loop

here we are sorting our reverse sorted

data again but this time we pick pivots

at random which reduces the number of

recursive operations needed

sure random pivots sometimes give you

the best case and sometimes you'll

randomly get the worst case but it all

averages out over multiple calls to the

quick sort function
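
as a rough sketch of the random-pivot idea (this is not the course's actual code — it also groups values equal to the pivot into their own list, which sidesteps the repeated-values recursion error mentioned earlier):

```python
import random


def quicksort(values):
    # base case: nothing to sort
    if len(values) <= 1:
        return values

    # pick the pivot at random instead of always taking the first
    # item, so reverse-sorted input no longer triggers the worst case
    pivot = random.choice(values)

    # three-way partition: keeping values equal to the pivot in their
    # own list guarantees each recursive call gets a strictly smaller
    # list, even when the data contains many repeated values
    less = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    greater = [v for v in values if v > pivot]

    return quicksort(less) + equal + quicksort(greater)
```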

now with merge sort there's no pivot to

pick your list of n items always gets

divided in half log n times

that means merge sort always has a big

o runtime of o n log n

contrast that with quicksort which only

has a runtime of o n log n in the best

case in the worst case quick sorts

runtime is o n squared

and yet out in the real world quicksort

is more commonly used than merge sort

now why is that if quicksort's big o

runtime can sometimes be worse than

merge sorts

this is one of those situations where

big o notation doesn't tell you the

whole story all big o can tell you is

the number of times an operation is

performed it doesn't describe how long

that operation takes

and the operation merge sort performs

repeatedly takes longer than the

operation quicksort performs repeatedly

big-o is a useful tool for quickly

describing how the runtime of an

algorithm increases as the data set it's

operating on gets really really big

but you can't always choose between two

algorithms based just on their big o

runtimes sometimes there's additional

info you need to know about an algorithm

to make a good decision

now that we can sort a list of items

we're well on our way to being able to

search a list efficiently as well we'll

look at how to do that in the next stage

now that we've covered sorting

algorithms the groundwork has been laid

to talk about searching algorithms

if you need to search through an

unsorted list of items binary search

isn't an option because you have no idea

which half of the list contains the item

you're looking for your only real option

is to start at the beginning and compare

each item in the list to your target

value one at a time until you find the

value you're looking for

this algorithm is called linear search

or sequential search because the search

proceeds in a straight line or sequence

even though linear search is inefficient

searching for just one name will happen

so fast that we won't be able to tell

anything useful about the algorithm's

runtime so let's suppose we had a

hundred different names and that we

needed to know where they appear in a

list of unsorted names

here's some code that demonstrates

as usual this code at the top isn't

relevant to the search algorithm it's

just like the code that loaded a list of

numbers from a file in the previous

stage but this code calls a different

function load strings that loads a list

of strings in

if you want the load strings python code

we'll have it for you in the teacher's

notes

here's a separate hard-coded list

containing the 100 names we're going to

search for we'll loop through each name

in this list and pass it to our search

function to get the index within the

full list where it appears

now let's implement the search function

compared to the sorting algorithms this

is going to be short the index of item

function takes the python list you want

to search through and a single target

value you want to search for

now we need to loop over each item in

the list the range function gives us a

range of numbers from its first argument

up to but not including its second

argument so if our list had a length of

5 this would loop over the indexes 0

through 4.

we test whether the list item at the

current index matches our target

if it does then we return the index of

the current item this will exit the

index of item function without looping

over the remaining items in the list

if we reach the end of the loop without

finding the target value that means it

wasn't in the list so instead of

returning an index we return the special

python value none which indicates the

absence of a value

other languages have similar values like

nil or null but if yours doesn't you

might have to return a value that would

otherwise be impossible like an index of

negative 1.
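
the linear search function described here can be sketched as follows (the name index of item comes from the narration; the exact code is an assumption):

```python
def index_of_item(collection, target):
    # check each index in order, from 0 up to len(collection) - 1
    for index in range(0, len(collection)):
        if collection[index] == target:
            # found the target; return its index immediately
            # without looping over the remaining items
            return index
    # reached the end of the loop without finding the target,
    # so it wasn't in the list at all
    return None
```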

now let's call our new search function

we start by looping over the list of 100

values we're looking for we're using the

values themselves this time not their

indexes within the list so there's no

need to mess with python's range

function here's the actual call to the

index of item function we pass it the

full list of names that we loaded from

the file plus the name we want to search

for within that list then we store the

index it returns in a variable

and lastly we print the index we get

back from the index of item function

let's save this and go to our console

and see if it works

python linear search dot py

names

unsorted dot txt

and it'll print out the list of indexes

for each name

i actually set it up so that the last

two items in the list of names we're

going to search for corresponded to the

first and last name within the file

so if we open up our unsorted.txt file

we'll see mary rosenberger is the first

name and alonso viviano is the last name

and those are the last two values in our

list of names we're searching for

so it returned an index of zero for that

second to last name and you can see that

name here on line one of the file

the line numbering starts at one and the

python list indexes start at zero so

that makes sense

and for the last name it returned an

index of 109873

and you can see that name here on line

109 874 so we can see that it's

returning the correct indexes

but right now we're just searching for a

hundred different names in a list of one

hundred thousand names in the real world

we're going to be looking for many more

names than that within much bigger lists

than that can we do this any faster yes

but we'll need to use the binary search

algorithm and for that to work we need

to sort our list of strings we'll do

that in the next video

before we can use the binary search

algorithm on our list of names we need

to sort it let's do that now we need to

load our unsorted list of names from a

file, sort it, and write the sorted names

back out to a new file

again this code at the top just loads a

file full of strings into a list

we'll use our quick sort method to sort

the list of names its code is completely

unchanged from when you saw it in the

previous stage

we just call our quick sort function on

the list of names loaded from the file

and save the list to a variable

then we loop through each name in the

sorted list

and we print that name
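the course's quicksort code isn't repeated in this section, so here's a minimal stand-in sketch of the whole script — load names, sort them with a simple (not in-place) quicksort, and print each one; the sample names are hypothetical:

```python
def quicksort(values):
    """A simple, non-in-place quicksort for a list of strings."""
    if len(values) <= 1:
        return values
    pivot = values[0]
    less = [v for v in values[1:] if v < pivot]
    greater = [v for v in values[1:] if v >= pivot]
    return quicksort(less) + [pivot] + quicksort(greater)

# hypothetical stand-in for load_strings("names/unsorted.txt")
names = ["mary rosenberger", "alonso viviano", "stephen daras"]

sorted_names = quicksort(names)
for name in sorted_names:
    print(name)
```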

that's all there is to it let's save

this script and try running it

python

quicksort strings dot pi

and we'll pass it the

names unsorted.text file

let me resize the console window here a

little bit

that prints the sorted list of names out

to the terminal but we need it in a file

so we'll do what's called a redirect of

the program's output we'll run the same

command as before but at the end we'll

put a greater than sign followed by the

path to a file that we want the program

output written to names

sorted dot text

redirecting works not only on linux

based systems like workspaces but also

on macs and even on windows machines you

just need to be careful because if you

redirect to an existing file its

contents will be overwritten without

asking you

let me refresh the list of files in the

sidebar

and you'll see that we now have a new

sorted dot text file in the names

directory

it's the same number of lines as the

unsorted dot text file but all the names

are sorted now

now we can load this file of sorted

names into a list and we'll be able to

use that list with the binary search

algorithm we'll see how to do that next

now that we have our list of names

sorted we can use the binary search

algorithm on it let's see if we can use

it to speed up our search for the

indexes of 100 names

binary search keeps narrowing down the

list until it has the value it's looking

for it's faster than linear search

because it discards half the potential

matches each time

our code here at the top of our binary

search script is unchanged from the

previous scripts we just call the load

strings function to load our 100 000

sorted names from a file

here we've hard coded the list of 100

names we're going to search for again

it's identical to the list from the

linear search script except that i've

again changed the last two names to

correspond to the names on the first and

last lines of the file we'll be loading

now let's write the function that will

implement our binary search algorithm

like the linear search function before

it'll take two arguments the first is

the list we're going to search through

and the second is the target value we'll

be searching for again the binary search

function will return the index it found

the value at or the special value none

if it wasn't found

binary search is faster than a linear

search because it discards half the

values it has to search through each

time to do this it needs to keep track

of a range that it still needs to search

through

to start that range is going to include

the full list

the first variable will track the lowest

index in the range we're searching to

start it's going to be 0 the first index

in the full list

likewise the last variable will track

the highest index in the range we're

searching to start we'll set it to the

highest index in the full list

if the first variable ever moves past the

last variable then it means the size of the

search range has shrunk to zero and

there is no match until that happens

though we'll keep looping to continue

the search we want to divide the list of

potential matches in half each time to

do that we need to check the value

that's in the middle of the range we're

searching in

we add the indexes in the first and last

variables then divide by two to get

their average we might get a fractional

number which can't be used as a list

index so we also round down using

python's double slash floor division

operator

all this will give us the index of the

list element that's the midpoint of the

range we're searching we store that in

the midpoint variable
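the midpoint computation just described comes down to one line of python — floor division keeps the result usable as a list index:

```python
first, last = 0, 7
midpoint = (first + last) // 2   # plain division would give 3.5; // floors it
print(midpoint)                  # prints 3
```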

whoops looks like my indentation got

mixed up there let me fix that real

quick there we go now we test whether

the list element at the midpoint matches

the target value

if it does we return the midpoint index

without looping any further our search

is complete

otherwise if the midpoint element's

value is less than the target value

then we know that our target value can't

be at the midpoint or any index prior to

that so we move the new start of our

search range to just after the old

midpoint

otherwise the midpoint element's value

must have been greater than the target

value

we know that our target value can't be

at the midpoint or any index after that

so we move the new end of our search

range to just before the old midpoint

by unindenting here we mark the end of

the loop if the loop completes it means

the search range shrank to nothing

without our finding a match and that

means there's no matching value in the

list so we return the special python

value none to indicate this

lastly just as we did in our linear

search script we need to search for each

of the 100 names we loop over each name

in our hard-coded list

and we call the binary search function

with the sorted list of names we're

going to load from the file and the

current name we're searching for

we store the returned list index in the

index variable

and finally we print that variable
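putting the steps above together, here's a minimal sketch of the binary search function and the calling loop, using first <= last as the loop condition; the exact course code isn't shown in the transcript, and the three sample names stand in for the sorted file of 100,000:

```python
def binary_search(names, target):
    """Return the index of target in the sorted list names, or None."""
    first = 0              # lowest index still in the search range
    last = len(names) - 1  # highest index still in the search range
    while first <= last:   # loop until the range shrinks to nothing
        midpoint = (first + last) // 2
        if names[midpoint] == target:
            return midpoint
        elif names[midpoint] < target:
            first = midpoint + 1   # target must come after the midpoint
        else:
            last = midpoint - 1    # target must come before the midpoint
    return None                    # range is empty: no match

# hypothetical stand-in for the names loaded from names/sorted.txt
sorted_names = ["alonso viviano", "mary rosenberger", "stephen daras"]

for name in ["stephen daras", "alonso viviano"]:
    index = binary_search(sorted_names, name)
    print(index)  # prints 2, then 0
```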

let's save this and go to our console

and try running it

python

binary search dot pi

and it's important to give it the name

of the sorted file if it loads the

unsorted file the binary search won't

work so names

sorted dot text

again it prints out the list of indexes

for each name

i once again set it up so the last two

items in the list of names we're going

to search for corresponded to the first

and last name in the file

so it returned an index of zero for the

second to last name

and you can see that name

here's the second to last name aaron

augustine

you can see that name here on line one

of the file

and for the last name it returned an

index of 109873

and you can see that name here on line

109874

let's check the third to last name for

good measure it looks like an index of

97022 was printed for that name stephen

daras

let's search for stephen daras within

the file

and here it is on line 97023

remember that line numbers start on one

instead of zero so this actually matches

up with the printed list index of 97022

it looks like our binary search script

is working correctly

let's try our linear search and binary

search scripts out with the time command

and see how they compare i've commented

out the lines that print the indexes of

matches in the two scripts

that way they'll still call their

respective search functions with the 100

names we're searching for but they won't

actually print the indexes out so we

won't have a bunch of output obscuring

the results of the time command

first let's try the linear search script

time python

linear search dot pi

names

and we can just use the unsorted list of

names for linear search

remember we want to ignore the real

result and add the user and sys results

together

it looks like it took about .9 seconds

for linear search to find the 100 names

in the list of one hundred thousand

now let's try timing the binary search

script time

python

binary search dot pi

names and for this one we need to use

the sorted list of names

looks like that took around a quarter

second so less than a third as long

bear in mind that part of this time is

spent loading the file of names into a

list the difference between linear

search and binary search will be even

more pronounced as you search through

bigger lists or search for more items
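the shell time command above measures the whole script, file loading included; as a rough in-python comparison (an assumption, not the course's code), python's built-in bisect module does a binary search on a sorted list while list.index is a linear scan, and we can time 100 lookups of each against a synthetic sorted list:

```python
import bisect
import time

# synthetic sorted list standing in for the 100,000 sorted names
names = sorted(f"name{i:06d}" for i in range(100_000))
targets = [names[i] for i in range(0, 100_000, 1000)]  # 100 names to find

start = time.perf_counter()
linear_results = [names.index(t) for t in targets]     # linear: O(n) each
linear_time = time.perf_counter() - start

start = time.perf_counter()
binary_results = [bisect.bisect_left(names, t) for t in targets]  # O(log n) each
binary_time = time.perf_counter() - start

print(f"linear: {linear_time:.4f}s  binary: {binary_time:.4f}s")
```

because the names here are unique, bisect_left returns the same indexes as list.index, just far faster for big lists.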

let's wrap up the course by looking at

the big o runtimes for linear search and

binary search these are going to be much

simpler to calculate than the sorting

algorithms were

for linear search you need to do one

comparison to the target value for each

item in the list again theoretically we

could find the target value before

searching the whole list but big o

notation is only concerned with the

worst case where we have to search the

entire list so for a list of eight items

that means eight operations

the big o runtime for linear search is

O(n) where n is the number of items we're

searching through

this is also known as linear time

because when the number of items and

number of operations are compared on a

graph the result is a straight line

linear search looks pretty good until

you compare it to binary search for

binary search the number of items you

have to search through and therefore the

number of operations is cut in half with

each comparison

remember the number of times you can

divide n by two until you reach one is

expressed as log n so the run time of

binary search in big o notation is

O(log n)

even for very large values of n that is

very large lists you have to search

through the number of operations needed

to search is very small binary search is

a very fast efficient algorithm
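you can see how slowly log n grows with a few lines of python — the worst-case operation counts for linear versus binary search, where the binary count is the number of times n can be halved before reaching one:

```python
import math

# worst-case comparisons: O(n) for linear search vs O(log n) for binary search
for n in (8, 100_000, 1_000_000):
    halvings = math.ceil(math.log2(n))  # times n can be halved down to 1
    print(f"n={n:>9,}  linear={n:>9,}  binary~{halvings}")
```

for our list of 100,000 names, that's roughly 17 comparisons in the worst case instead of 100,000.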

that's our tour of sorting and searching

algorithms be sure to check the

teacher's notes for opportunities to

learn more thanks for watching
