An Introduction To Robotics 🤖 By Teach Kids Robotics (Full Lesson)
By TeachKidsRobotics
Summary
Topics Covered
- Robots Sense, Plan, Act Autonomously
- Active Sensors Illuminate Dark Environments
- SLAM Builds Maps While Localizing
- Path Planning Optimizes Via Cost Maps
Full Transcript
hello I'm Daniel from Teach Kids Robotics and today we'll be going over lesson one what is a robot a brief introduction to robots
so what is a robot is a phone a robot how about a microwave how about R2D2 or BB-8 all three of these objects share the same mechanical traits they're made
out of metal and they may have chips inside of them however we differentiate R2D2 as a robot but not a microwave which is a consumer appliance so what is
the differentiating factor between these to identify what is a robot we have to first come up with a definition that encompasses a robot specifically
so a robot is a goal-oriented machine that can generally sense plan and act autonomously so what do each of these mean sense means it's able to sense the
environment and it knows what's going on around it such as there being a wall uh four feet to the left of me plan means it's able to make decisions
based on the environment such as not running into a wall and act means carrying out the actual decision this is making sure that a car for instance would steer away from hitting
another object now when we say autonomously this refers to the ability for a robot to act without human intervention of any kind
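The sense plan act cycle described above can be sketched as a simple control loop. This is a minimal sketch, not the lesson's actual kit code; `read_distance_cm` and `set_steering` are hypothetical stand-ins for whatever sensor and motor interface a real robot provides.

```python
# A minimal sketch of the sense -> plan -> act cycle.
# read_distance_cm() and set_steering() are hypothetical stand-ins
# for a real robot's sensor and motor interfaces.

def sense(read_distance_cm):
    """Sense: measure how far away the nearest wall is."""
    return read_distance_cm()

def plan(distance_cm, safe_distance_cm=30):
    """Plan: decide to turn if a wall is closer than the safe distance."""
    return "turn" if distance_cm < safe_distance_cm else "forward"

def act(decision, set_steering):
    """Act: carry out the decision with the motors."""
    set_steering(decision)

def control_step(read_distance_cm, set_steering):
    """One full sense -> plan -> act cycle."""
    act(plan(sense(read_distance_cm)), set_steering)
```

A robot's software runs this cycle over and over: each call to `control_step` senses once, plans once, and acts once.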
so exploring this in more detail what does it mean to sense let's consider the electronic car on the right how would this car know if it would run into a wall it needs to know about or sense its
environment so the car has eyes in the form of ultrasonic sensors which let it see what is in front of it by sending sound waves and measuring how long it takes for the
sound waves to return the quicker the return the closer it is to the wall what does it mean to plan consider the car on the right again it may have a goal to drive around without hitting a
wall it could then plan to only drive forward if there's no wall in front of it software running on an onboard computer such as an Arduino is used for decision
making to control how the car drives based on what it sees or senses a plan such as turn when you see a wall helps achieve the goal of don't drive into
a wall and is executed by the onboard computer finally what does it mean to act so the action a robot takes in the real world is done in the ACT step consider the car
on the right how would it act to change direction if it was driving into a wall so the car would act by steering away turning its Wheels in a different
direction electronically and if there was no wall it could continue steering straight ahead putting all these three objects together
we can identify why we consider this electronic car to be a robot it can have a goal of not hitting a wall and driving around it can sense if there is a wall
using its ultrasonic sensor it can plan to change direction or continue based on whether or not there's a wall in front of it and it can act by turning its Wheels if there is a wall so that it can
continue driving without running into the wall let's apply this framework now to define why film robots such as WALL-E or BB-8
are in fact robots we can see they all share goals whether they be compacting trash or fixing starfighters we can see they have sensors such as the binocular
like lenses at the top of WALL-E or the various eye looking lenses on BB-8 we can see they all have plans whether they be to help the protagonist or to
solve a Rubik's Cube and we can see they're capable of acting in the real world actuating and moving their motors enabling them to either roll around as BB-8 or move their arms and roll
around as WALL-E finally where can we find robots today in the real world robots in real life aim to do dull dirty or dangerous jobs that make
them a better option than humans including agricultural robots for farming warehouse robots to move things around cleaning robots to clean floors or robots for exploration in
environments that would be too harsh for humans such as the Mars rover from NASA talking about how robots experience the world let's first ask ourselves how do
humans experience the world the five senses allow us humans to do everyday tasks by understanding the world around us each of our five senses uses a different part of our body for example
sight comes from our eyes touch from our hands hearing from our ears taste from our mouth and smell from our nose each of these five senses are used in our daily tasks
let's reflect on how each of these senses would help us learn at school for example when we're in the car or driving a car we need to use our sight in
order to see the world around us and make sure we don't hit other cars we need to use our touch in order to know we're moving the steering wheel gas and brake pedals we need to use our hearing
to know when someone is honking and we need to use for example our taste and our smell in case there is a gas leak once we're at the school we use our
hearing to hear what the teacher is saying our sight in order to see what's going on on the Blackboard during lunch when we're eating we use our sense of
smell to know if our food is good or not all of these different senses help us achieve our tasks such as going to school
the same way humans have eyes and ears and nose and mouth which allow us to sense specific things about the world around us robots use mechanical sensors
to sense what is around them now there are equivalents here for example for an eye a robot can use a camera for a mouth a robot could use a gas
sensor for touch a robot can use tactile pressure sensors and for hearing a robot can use a microphone
depending on the goal of the robot specific sensors are used to understand the environment it's operating in in order to achieve that task
a common sensor is a lidar which is a laser that shoots into the environment and the duration it takes for that laser wave to be reflected back to the lidar
is a way to determine how far away a given object is in the environment let's take a specific look at sight consider a digital camera
it processes light the same way our eyes do but instead of saving that image we can take the raw digital ones and zeros that make up the image and actually
process them in a computer using a technique known as computer vision in order to identify what is actually in the image the same way when we're
looking out into the world we see and identify objects around us there are two types of robot sensors passive and active a passive sensor
relies on the Ambient Energy in the world around it to perform a measurement such as a satellite camera that absorbs the Sun's light and looks at the reflections to generate a digital image
of the Earth or consider a thermometer that uses the ambient temperature of the world around it to determine what it reads
on the active side we have sensors that transmit energy into the environment to allow for a measurement such as a satellite that shoots a laser at the Earth and looks for reflected waves in
order to compute the distance between it and the Earth another common sensor again that would be active is like the lidar which shoots lasers into the world around it to
determine the distance between it and the reflected wave let's look at a specific example the NASA Valkyrie robot is a humanoid style
robot that can walk around but how is it capable of walking around without hitting anything it uses a multitude of sensors in order to achieve this
first it uses an inertial measurement unit or an IMU in order to help it stay stabilized so that it knows when it's falling down similar to the
liquid inside of our ears it uses a camera for object detection and to determine if the path in front of it is open it uses a laser scanner to determine the distance from objects around it so that it doesn't actually
run into anything and it uses four sensors in its feet to determine whether or not both of its feet are on the ground considering sensors in more detail at a general level sensors have three main
properties they have noise or the amount of random energy in the environment that the sensor picks up which can affect the reading such as when your microphone is
on and you hear static they have resolution which is the degree of detail that a sensor can provide such as distance on the order of meters or centimeters
and we can think of that like a microscope giving different levels of resolution you can see more with a greater degree of resolution finally we also have precision or the
reproducibility of measurements if you were to sample the same environment multiple times with the same sensor the sensor reading may actually change and to limit that change we would like a
high precision sensor now we consider all three of these attributes because the cost of the sensor itself can change depending on how precise how high resolution and how noise-free we would like the sensor to be
finally sensors often require calibration which is needed when the sensor is reading a value that differs from the
actual value in the real world consider for example a weight scale that reads two pounds when nothing is on it we would need to calibrate this weight scale by subtracting two pounds from all
of its readings so that it would correctly show zero pounds when nothing is on it consider a digital thermostat that was reading 72 degrees when in fact we know
it's 75 degrees outside how would we calibrate it to fix this we would simply increase thermostats reading so that it displays
three more degrees since we know that it's representing the wrong value by three degrees so this sensor calibration is often done in robotics since all of our sensors may
be slightly off as they're manufactured with slightly different mechanical components so we calibrate them so that they provide accurate readings based on the environment around them
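The calibration idea above, a constant offset applied to every raw reading, can be sketched as follows, using the scale and thermostat examples from the lesson:

```python
# A minimal sketch of offset calibration, using the two examples above:
# a scale that reads 2 pounds when empty, and a thermostat reading 3 degrees low.

def calibrate(raw_reading, offset):
    """Correct a sensor reading by a known constant offset."""
    return raw_reading + offset

# The scale reads 2 pounds with nothing on it, so subtract 2 from every reading.
scale_offset = -2
print(calibrate(2, scale_offset))    # empty scale now reads 0

# The thermostat reads 72 when it is really 75, so add 3 to every reading.
thermo_offset = 75 - 72
print(calibrate(72, thermo_offset))  # now reads 75
```

Real calibration can be more involved (the error may also scale with the reading), but a constant offset like this covers both examples in the lesson.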
so humanoid style robots often have eyes that allow them to see but how do they work these robots often have two lens-like
objects but how do they really work to know what's in the environment
robot to take a picture of the environment to identify what's going on around it cameras are found in Many Robots today
from Cars to robotic arms to robotic laundry machines but how do cameras actually let robots see digital cameras enable computer
vision by which a computer can process an image and understand what is inside using math so consider the image on the
right we can separate this Digital Image into a grid and then we can perform mathematical operations on that Grid in
order to classify or identify what is in the image and we can see on the right side we can identify the dog or the bike or the car in the background all using
math on the original image but how does this work really so let's first consider what is a camera image computers just understand the world digitally in
binary with ones and zeros this means that for a camera image the image must be represented digitally as well so to
represent the image we can convert that into a grid and that grid can be represented as a matrix of numbers and in this example below we have a heart
which can be represented as a heart on a grid and then that grid can be transformed into a grid of numbers known as a matrix where that set of numbers
reflects what is actually going on in the original image what is a matrix exactly a matrix is simply what we call a set of rows and
columns if we consider a screen resolution like 1920 by 1080 that corresponds to a matrix with
1080 rows and 1920 columns we can also represent an image as a matrix by converting it to a grid of 1920 by 1080 blocks filling in each
block with a pixel now a pixel is the term for the individual dot in the image and this contains information about color that helps us build up the
original image so what can we do once we have this matrix this mathematical representation of the real world
allows us to perform operations known as convolutions on the matrix to identify different features of the image such as edges in the picture to the left we can
see the edges of the original image are clearly defined but these were identified using simply a single matrix operation if we think to ourselves how do
we identify an edge in an image we can determine it to be an area in our original Matrix in which the set of numbers differ greatly from
the numbers around them because this would form an edge consider the line with the corresponding Matrix on the right finally what is convolution and how does
it work we can visualize convolution as this Matrix operation by which we multiply one matrix by another in order
to get a resulting Matrix that has information encoded inside of it such as the boundary edges of the original image
this sub-matrix is known as a kernel and different kernels allow us to perform different operations on the image such as performing image blurring or border
detection or sharpening putting it all together taking digital camera images translating them to matrices and
Performing mathematical operations on those matrices through convolutions allow us to classify using mathematical
models what actually is in an image now these images and post-processed matrices full of data can be fed to special models which are kind
of pre-trained pieces of artificial intelligence which can look at these numbers and identify key features that make up either a person or a car or a traffic light
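The image-to-matrix-to-convolution pipeline described above can be sketched with NumPy on a tiny made-up grayscale image. This is only an illustrative sketch; real robots would use a library such as OpenCV or SciPy rather than the hand-written loop below.

```python
import numpy as np

# A minimal sketch of convolution with an edge-detecting kernel
# on a tiny grayscale image (a matrix of pixel values).

def convolve(image, kernel):
    """Slide the kernel over the image ('valid' positions only)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# 6x6 image: a bright square on a dark background.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0

# Laplacian kernel: responds where a pixel differs from its neighbors,
# which is exactly what an edge is.
kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]])

edges = convolve(image, kernel)
# Nonzero entries appear only around the square's boundary;
# uniform regions (all-dark or all-bright) come out as zero.
```

Swapping in a different kernel gives blurring or sharpening instead of edge detection, just as the lesson describes.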
moving from digital cameras to another sensor the lidar we can go and investigate another way robots are able
to see the world around them so while cameras are great for computer vision they have a key limitation they rely on light in the environment since they're a passive
sensor so this can be problematic if you're operating at night or in low light environments so a popular sensor is known as a lidar or light detection and ranging which is
an active sensor that sends laser beams into the environment and measures the time it takes for the reflection to return so how does lidar work lidar is actually
found in cars today and is used in the Safety Systems to enable for example the automatic braking and it shoots lasers into the environment and is able to calculate
based on the time it takes the light pulses to reflect to see how far away a given object is now what does this actually look like
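The time-of-flight arithmetic just described can be sketched in a few lines: the laser travels out and back, so the distance is half the round trip.

```python
# A minimal sketch of lidar time-of-flight distance measurement.
# The pulse travels to the object and back, so distance is half the round trip.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def lidar_distance_m(round_trip_time_s):
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# A pulse that returns after 100 nanoseconds hit something about 15 m away.
print(lidar_distance_m(100e-9))  # ~14.99 m
```

The ultrasonic sensor from the first part of the lesson works the same way, just with the speed of sound (about 343 m/s) in place of the speed of light.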
so this data this laser scan information on the returned rays can also be stored in a matrix and this matrix is known as a point
cloud since every point or laser that is returned gives us information about how far that laser traveled so the scene on the left if you were to
visualize how does a lidar see it we can see on the right there's no sense of color there's only depth information because we only know how far each
individual ray of light traveled before it returned to our sensor now depth information is really useful for applications such as self-driving
cars because we're not interested in what something looks like we're only interested in how far away it is and whether or not we're going to hit it so how do robots know where they are first
let's consider how humans know where they are which is using maps consider a map of the United States we can often describe where we are on a map using
coordinates known as latitude and longitude with these two numbers we'll know exactly where we are in the United States or consider on your smartphone you can use an app like
Google Maps to show you exactly where you are relative to streets around you with an exact address on the map now in the same way we humans have maps
that show us where we are relative to other features in our surroundings robots also have maps but what do robot maps look like
for example for mobile robots 3D space is often translated into a 2D representation of the environment with obstacles shown and free space
highlighted as two separate colors we can see in this image here that the robot highlighted in blue maps directly
to the robot on the map equivalent that shows free space and where obstacles are a map showing free space and where obstacles are is known as an
occupancy grid and it reflects not only where the robot is but also where the robot can move and this grid is often stored in a matrix or basically a large set of rows
and columns representing space equivalent to the grid image on the left now what do coordinates look like in a 2d robot map
common in mobile robots so just as we have latitude and longitude we also have some coordinate system in our robot map as well and this coordinate is
often referred to as pose now the pose for a mobile robot could be something like the angle or direction that it's facing relative to some starting point
as well as its offset or X and Y position how far it's moved from that starting point in its map now how do these robot Maps actually get
created so a common technique to create a robot map is known as slam or simultaneous localization and mapping we can
visualize slam on the right notice as the robot moves around the space it uses its sensors to both avoid obstacles and remember key features that it's seen
such as walls and hallways using these features it's able to determine where it is in its map that it's created now the term we use to refer to how a
robot knows where it is in the map that it's created is localization now localization is going to give us basically
the information about where a robot is what its pose is inside of the map now you can think of this as a problem where if we were to blindfold someone and put them in a room and then take the
blindfold off it would take them a few seconds to look around the room and understand where exactly they are based on things they see such as signs or
Windows to figure out where they are in a building in the same way a robot also when it initially begins trying to identify where it is within its map
will look around using its sensors such as a lidar like the blue laser scans in the example GIF above and as it moves around it sees what kind of
features are in its environment such as open hallways or a closed set of rooms with potentially chairs or tables and as it moves around it can have a better
idea or a greater probability of where it is based on what it sees around it because if there are walls nearby you know you can't be in an open hallway
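This process of ruling out places that don't match what the sensor sees can be sketched on a toy one-dimensional hallway map; the map and the reading here are made up purely for illustration.

```python
# A minimal sketch of localization by ruling out candidate positions
# on a toy 1-D hallway map (1 = wall, 0 = free space).

hallway = [0, 0, 0, 1, 0, 0, 1]

def distance_to_wall(position):
    """Predicted sensor reading for a robot at this cell, facing right."""
    for d, cell in enumerate(hallway[position + 1:], start=1):
        if cell == 1:
            return d
    return None  # no wall ahead of this cell

# The robot's sensor actually reads "wall 2 cells ahead".
measured = 2
candidates = [p for p in range(len(hallway))
              if hallway[p] == 0 and distance_to_wall(p) == measured]
print(candidates)  # two cells still match the reading
```

After this one reading, two positions remain possible; moving and measuring again would narrow it down further, which is the same idea a particle filter carries out with probabilities instead of a hard yes/no.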
now this method by which the robot can look at its environment and determine what it sees to rule out where it possibly could be is a high level description of a technique known as
particle filtering which is kind of visualized on the right with the red particles reflecting probabilities of where the robot could be and as the robot moves around it can hone in and
improve the probabilities to really get a high degree of accuracy as to where it is within its map so how does a robot decide what to do from our first lesson we determined a
robot is a goal-oriented machine and with mobile robots the goal often revolves around moving between points in a map but there are often multiple ways
to move between two places so how do we pick which path to take consider for example if you've used Google Maps and you can see there are
multiple routes getting you from a starting location to your destination how do you determine which one of these routes is best so we can choose to take either the
shortest path or the fastest path or the path that avoids toll roads but for all of these there is a metric we are trying to optimize for the shortest path we want to
minimize the length of the path for fastest time we want to minimize the duration of a path this allows us to mathematically compare
best possible paths and choose what is best for that metric whether it be distance or time or cost now
in Mobile robotics this field is known as path planning path planning answers the question of how does a robot travel between two points in its map optimally
and it does so with the help of something known as a heuristic the heuristic is the property that helps guide the search to get the best path
and that can encode information such as cost or time or a preference in this example here getting from the start to the goal point we can see two different
paths but one has significantly fewer turns while the other decides to stay closer to walls for longer periods of time
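A common heuristic of the kind described above is the straight-line distance to the goal, which can be sketched as follows (the grid coordinates here are illustrative, not from the lesson):

```python
import math

# A minimal sketch of a heuristic: straight-line distance to the goal.
# Because it never overestimates the remaining travel, it can guide a
# search toward the goal without ruling out the best path.

def heuristic(cell, goal):
    (x1, y1), (x2, y2) = cell, goal
    return math.hypot(x2 - x1, y2 - y1)

# A search would prefer to expand cells with smaller heuristic values.
print(heuristic((0, 0), (3, 4)))  # 5.0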
now in path planning we have something known as a cost function which can help determine the best path from a start to a goal Point by encoding costs at
different parts of our map and using this cost function we can help encode information about where we would or would not like to travel for example
an obstacle can have a high cost associated with it so that as we are determining how to travel through the map if we are attempting to reduce the
cost of a path we would avoid any kind of obstacle so this cost function allows us really to determine the path planning Behavior and the path we're going to take
and a map with a cost function applied to each point inside is known as a cost map and we can use this cost map to find the least cost path between two points in
the map that we wish to travel between and we have a
visualization here of what a map looks like with a cost map equivalent and you can see for example we have additional costs indicated by the darker Purple
colors associated with being near an obstacle and the free space is at lower cost indicating we prefer to move in this area now consider we have this cost function
we want to be able to actually determine how do we get from point A to point B in the map and to do so we use something known as a
search function which attempts to find a path between the two points on the map and this path is found
in an exploratory manner and there are different kinds of search functions that have different behavior which result in trade-offs in optimality which is whether or not the path found is
actually the shortest or the best depending on what your metric is and we can see these two path approaches visualized one always attempting to
minimize the distance to the path on the left and the other simply testing every possible path until it finds the goal in order to determine what the shortest
path actually is now returning to our cost function by adding costs when we're doing this calculation to determine the shortest
path we actually find the lowest cost path between points because we can consider
distance also like a weight or a cost and by finding the shortest path between two points or the shortest distance we also find the lowest cost between
those two points the next step once a robot knows where it is in the map has decided it has a start and a goal point and has found what
it considers the best path from that start to goal point in its map it actually needs to figure out how it can realize that planned path in the real
world using the motors available to it this final piece is known as trajectory planning or motion planning and reconciles how a robot is actually able
to move with what the plan was for the robot to move and this allows us to close the gap between the robot's perception of the world and its movement in the real world
using another real world example we return to the NASA Valkyrie robot and every possible combination of ways that the robot could move is captured in what we call the
configuration space and path planning or trajectory planning determines how to actually move the robot in this configuration space
consider that this robot here has different axes of rotation highlighted in red what this means is the robot is limited in how it can move perhaps its arm can
only move up or down as a result if the plan dictates that the robot move in a curve-like manner there has to be some reconciliation that
takes place as the robot needs to essentially move maybe its torso as it moves its arm up and down in order to achieve this curve-like direction that
was planned in the configuration space so that concludes pretty much the explanation of how does a robot decide what to do normally the robot has some sort of
heuristic or some sort of cost that it's trying to reduce and it has this cost map that it's moving or operating in and it would like to find the optimal way to move between two points in that cost
map it uses a search function to find that path in its cost map and then translates that planned path into the real world using motion planning or trajectory planning
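This recap, a cost map plus a search function that finds the least-cost path, can be sketched with Dijkstra's algorithm on a small grid. The lesson doesn't name a specific search algorithm, so Dijkstra is used here as one standard choice, and the cost values are made up for illustration.

```python
import heapq

# A minimal sketch of searching a cost map for the least-cost route.
# High-cost cells (9) stand in for space near an obstacle.

cost_map = [
    [1, 1, 1, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]

def least_cost(start, goal):
    """Dijkstra's algorithm: total cost of the cheapest path to the goal."""
    rows, cols = len(cost_map), len(cost_map[0])
    frontier = [(0, start)]        # (cost so far, cell)
    best = {start: 0}
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + cost_map[nr][nc]  # pay to enter the cell
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc)))
    return None

print(least_cost((0, 0), (2, 3)))  # routes around the high-cost cells
```

The path with the lowest total cost naturally skirts the high-cost cells, which is exactly the behavior the cost map was designed to produce.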
once again this is more of a specific use case for mobile robots and different kinds of robots have different systems in place but for mobile robots you'll often see
these cost maps dictating how they move and determine how to move between two points on the map so let me know what else you would like to learn about robotics feel free to
leave a comment below and check out the links in the description for other robotics kit information thanks for watching if you have any questions feel free to leave a comment below
this video has been brought to you by Teach Kids Robotics you can visit us at teachkidsrobotics.com to check out other information and blog posts regarding robotics
additionally we offer curated lists of stem kits in order for you to try robotics at home check out the link in the description