GitLab CI CD Tutorial for Beginners [Crash Course]
By TechWorld with Nana
Summary
Topics Covered
- GitLab: Zero-Setup DevOps Platform
- Pipeline-as-Code in .gitlab-ci.yml
- Docker-in-Docker Enables Image Builds
- Stages Enforce Pipeline Execution Order
- SSH Secrets Deploy to Remote Servers
Full Transcript
hello and welcome to the gitlab crash course where i will teach you everything you need to know to get started with gitlab cicd in one hour i am nana and i have taught hundreds of thousands of
people how to advance their devops skills through my youtube channel and online courses as well as the complete devops educational program if you're new to my channel be sure to subscribe
because i upload new videos about different devops technologies and devops concepts all the time now in this crash course you will learn how to build a
basic ci cd pipeline that will run tests build a docker image and push to docker hub's private repository and then finally deploy the newly built image to
a remote ubuntu server and while building the pipeline you will learn the core concepts of how gitlab cicd works and the main building blocks such as
jobs stages runners variables etc now of course you can learn only so much in one hour right and this will help you build foundational knowledge and get started
with the gitlab cicd platform but if you want to dive deeper and start building real life devops pipelines for releasing your applications i actually have a
complete gitlab cicd course on it which i have linked in the video description so if you're interested you can check it out there so with that let's get started first of all what is gitlab cicd and why
should you even care now gitlab platform generally is striving to become the devops platform or a one-stop shop for building devops
processes for your applications so they have exactly this roadmap and they're working towards that which means they're actually integrating and creating new features to basically give you
everything in one platform to build complete devops processes and big part of those processes is a ci cd pipeline so first of all what is cicd in simple
words cicd stands for continuous integration and continuous deployment or continuous delivery and what it basically means is
automatically and continuously testing building and releasing code changes to the deployment environment
so that means when a developer commits a new code into the gitlab repository gitlab will automatically execute a cicd pipeline that you have configured for
your project to release those code changes to the end environment where the end users can access them but this ci cd concept is a topic of its own so if you
want to understand it on a deeper level then you can check out my other video about devops and ci cd pipelines where i explain this in more detail but as i
said in simple terms cicd is to continuously release your code changes to the end environment and in this crash course we will be building a simplified
version of a ci cd pipeline using gitlab ci cd and of course there are many ci cd tools one of the most used ones in the industry still being jenkins and gitlab
cicd is just one of those other ci cd tools and all of them have their advantages and disadvantages but a big advantage of using gitlab to build ci cd
pipelines for your applications is that you already have your code on gitlab so this is an extension of your software development processes in your team where
you can also build ci cd pipelines on the same platform so your team already works with gitlab you have your code there so this is basically an additional feature that you can extend your
workflows on gitlab with and you don't need a separate tool for that apart from that gitlab makes that extension very seamless by allowing you to get started
without any setup effort and also having your pipeline as part of your application code compared to jenkins for example where you have to set up and configure the
jenkins server create a pipeline and then connect it to the git project with gitlab you can start without any of this configuration effort and we will see that in the demo part
now if you don't have to set up anything and configure any servers to run the pipelines how does it actually work and this leads us to the topic of gitlab
architecture and how it works you have a gitlab instance or gitlab server that hosts your application code and your pipelines and basically the whole
configuration so it knows what needs to be done and connected to that gitlab instance you have multiple gitlab runners which are separate machines
connected to the gitlab server machine which are actually the ones executing the pipelines so gitlab server knows what needs to be done and gitlab runner actually does that
and gitlab.com is actually a managed gitlab instance that offers multiple managed runners already out of the box so these are all maintained by gitlab
and that's why you can start running your pipelines without any setup and configuration effort using this managed setup and this is already enough for starting out but of
course for your own organization you may want to manage the runners or the whole setup yourself so you can create partially or completely self-managed
gitlab setup as well in this crash course we will use gitlab's managed infrastructure and free features to build our release pipeline so we will
not need to configure anything ourselves in my gitlab ci cd complete course however you actually learn to create and connect your own runners to the
gitlab.com instance so now we know what gitlab ci cd is how it works how the architecture works so with that let's get started with the demo project and
learn how to build a ci cd pipeline and for the demo project we're going to use a python application so we're going to be building a simple ci cd pipeline
for a python app where we execute the tests we build a docker image from the python application and then we deploy and run it so it's
a web application and the code is from this repository on github i will link this as well in the video description so that's basically the original application it's a demo app that
provides some system information and that's how the ui looks like you don't have to understand the code behind we are just going to concentrate on how to take this application and deploy it
using a ci cd pipeline i have made just a couple of adjustments here and i have created the repository on gitlab because that's where we're going to be working with and i'm going to highlight any part
of this application code that is relevant for us in order to build the cicd pipeline the rest of them you don't need to worry about you don't even need to understand how python applications are written so what i'm going to do
first is i'm going to clone this repository locally so that i can show you how to run this application locally and how to execute the tests because we're going to be running tests on the
pipeline so we need to know how that's done so i'm going to grab the clone url so switching back to the terminal i'm going to do
git clone and the project url and we called it gitlab cicd crash course and that's basically our application now
i'm going to open this in visual studio code so that we have a better visualization make it bigger and let's see exactly what parts of this application code are relevant for
us so first as i said we're going to be running tests so in this application we have a couple of tests which we will run during the pipeline which is there to
make sure that application code is correct so any changes that we make didn't break the application and the tests can validate that and in the source folder right here in app
we have a folder called tests and inside there we have these two test files that include the test again you don't need to understand how these tests are written
what they actually do we're using it as an example to demonstrate how to run tests for any technology any application inside the
pipeline so you can apply this knowledge for any other application written in any other language so we have these tests i think there are four of them and they're using this configuration file in the
test folder um to execute those tests so as a first step let's actually see how the tests of this specific project can be executed and i'm going to open a terminal for
that and again in this specific project to execute the tests we have this makefile that includes a couple of commands and one of them is test so using make test
we can execute the tests basically so let's run it and there you go so first of all it's a general concept in any application with
any programming language um applications have dependencies so these are third-party libraries that we included in our applications so we don't have to code them from scratch so we include code
that other people wrote that we can use in our applications and these dependencies have to be downloaded from internet right because they live in some code repository so they need to be
downloaded and made available locally so that we can actually run our application right and we have dependencies for tests as well so in python specifically those dependencies are
defined in a file called requirements.txt again different languages have different files for that and at the beginning all the dependencies are downloaded that are
needed for the test and then tests are executed you see here internally a pytest command gets executed and then we have the result of test execution we have four
tests and all of them passed so that's how it looks like and this is also a general principle when you run an application whether it's java or node.js or python application you always need
that technology the tool installed locally right so if i run python application i need obviously python to be available on my machine if i run java application i need java to be
installed and available locally so i have python i also need to have pip installed which is a package manager for python which is the tool that actually
downloads and fetches those dependencies from the internet again other languages have their own tools for that and finally because we have a make file here to execute those commands we need
make command as well to be available on the machine and this means that we need these three tools to also be available inside our pipeline when the tests are
executed and we're going to see that a bit later now i'm going to do one more thing which is actually start the application locally and see how it looks in the browser and for that we have another
command called make run and this will start the application on port 5000 if i want to change the port and i actually do want to change the port because
i have already something running on port 5000 so i'm going to set port to let's do 5004 and make run
and this will start the application locally and then i can access that localhost port 5004.
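to sum up the local workflow so far, here is a rough sketch of the commands (the PORT variable name is taken from how we just overrode the port, and the repo url is a placeholder):

```bash
# clone the project and run the test suite locally
git clone <your-gitlab-repo-url>
cd gitlab-cicd-crash-course
make test            # installs the requirements with pip and runs pytest

# start the app, overriding the default port 5000 since it's already taken here
PORT=5004 make run
```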
let's see that and there you go that's how the application looks like we have the monitoring dashboard info
and some other stuff so that's the application that we want to release using a ci cd pipeline on a deployment server so now we know how the application looks
like how to run the tests so let's actually go ahead and build a simple ci cd pipeline for this demo application so i'm going to stop this let's actually
remove the terminal and now the question is how do we create a gitlab cicd pipeline well following the concept of
configuration as code the whole pipeline will be written in code and hosted in the application's git repository itself
in a simple yaml file and the file has to be called dot gitlab dash ci dot yml so that gitlab can automatically detect that pipeline code
and execute it without any extra configuration effort from our site so in the root of the project's repository we're going to create this yaml file and we're going to write all the pipeline
configuration inside and we can actually do that directly in the gitlab ui as well so we don't have to switch back and forth from the editor to gitlab so i'm going to do that directly here
create new file and as i said it has to be called dot gitlab dash ci dot yml and as soon as i type that in you
see that it automatically fills out the rest because it detected that we're creating the pipeline code and now let's write our pipeline in a simple yaml format
so the tasks in the cicd pipeline such as running tests building an image deploying to a server etc are configured as what's called jobs
so let's create jobs for all those tasks so first let's create a job that will run the tests and we have a super simple syntax for that we simply write the name of the job
so let's call it run test and underscore to separate the words is a standard syntax in the pipeline configuration code on gitlab so
that's what we're going to use so that's the job name and inside the job we have a couple of parameters or a couple of attributes or things that we want to configure for the
job and again note the syntax of yaml where we have the indentation for each attribute so the first attribute and the required attribute of a job is what's
called a script and script is basically where we list any commands that should be executed for that job so for example run tests right
make test is a command that needs to be executed in order to run the test so we're going to write that command right here so that's actually a job
configuration to run the tests simple as that we have the name of the job and inside that we have a script that tells what should this job actually do and
we're saying that it should run command called make test which will execute the tests just like we did locally but in order for this to run successfully we need to do a couple of things remember i
told you that the make test command to be successful needs first of all make command to be available wherever this job runs but also in the background it will execute
pip to install the dependencies and it will also execute python because tests are written in python so we need that to be available as well so these three things need to be available
on the machine where this will run and now we come to the question of where will this job actually be executed on which machine on which environment
is it a linux machine is it windows what is it as i mentioned in the beginning pipeline jobs are executed on gitlab runners and gitlab runners can be
installed on different environments it could be different operating system like windows linux different distribution of linux etc and that's what's called a shell executor it's a
simplest type of executor simplest environment what we know from jenkins for example where you have a simple server on linux machine and you execute shell commands on them as part of your
job directly on the operating system but another common execution environment on gitlab is a docker container so instead of executing the jobs directly on the
operating system like on a linux machine for example we execute the jobs inside containers so the gitlab runner is installed on some
linux machine and on that machine gitlab runner creates docker containers to run the jobs and the managed runners from gitlab that we get available out of the
box actually use docker container as the execution environment so all our jobs that we write here will be executed inside docker containers
but as you know containers run based on a certain image right you can't run a container without an image and depending on which image you use you're going to have different tools available inside
that container so if i use a mysql image to start a container i'm going to have mysql inside plus some other tools that the image has if i have a node.js image then
i'm going to have nodejs and npm available inside the container if i use a basic alpine image then i'm going to have basic linux commands and tools
available inside so which image are gitlab's managed runners actually using to start those containers well by default currently at this moment
gitlab's managed runners are actually using ruby image to start the containers that will run the jobs now in our case this is not going to work because
for our test execution we actually need a container or an image that has python pip and make available inside them
right so instead of ruby image we want to use a python image to execute this job and the good thing is that we can actually overwrite which image is used
by the runner for each specific job so for each job we can say you know what forget ruby use this image instead and we can do that using an
attribute on the job called image and then specifying whichever image we want and as i said in our case we're going to take python image
from the docker hub so let's actually find the official python image so it's going to be called python and we
can specify a tag here and it's actually a best practice to specify a tag instead of leaving it to the latest because you want your pipeline to be consistent right so instead of always fetching the
latest image and since the latest image gets updated all the time you may get some unexpected behavior at some point when the latest image has some changes
so we want to always pin the version of the image and you have the list of those tags to select from here in our case we actually need a specific version
of 3.9 because that's what our application actually expects so i'm actually going to go ahead and select this image tag
and make sure that you do the same so that's the version of the python image our application needs so now when the gitlab runner gets this job in order to execute it will see that we have
overwritten the image so instead of taking ruby it will fetch python image with this image tag and it will start the container with that image in order to execute
the script logic inside that container now this image makes python and peep tools available inside the container however we still don't have the make
command so we need that also inside the python container to be able to execute this command so how can we do that a simple way to do that is to install the
make command inside that python container before the script gets executed and we can do that using another
attribute on the job called before underscore script so this is basically the same as a script with the same syntax you can execute the same commands here as in
script but gitlab runner knows that whatever is defined here should run before the script so this is usually used in jobs to prepare the environment like set environment
variables create any temporary files maybe fetch the data from somewhere or in this case install something that the main script actually needs for execution
so that's what we're going to do here and we're going to install make just like you would install it on any linux server with a package manager apt-get so we're going to do apt-get update
and apt-get install make so this will take care of installing make inside that container so
we will have all three tools that we need available in order to execute this command so now with this configuration we already have
a working pipeline with just one job which runs tests but still we have a valid working pipeline that we can execute for our application so how do we actually execute the
pipeline the only thing we need to do is simply commit this file and as soon as we commit the changes gitlab will actually detect
that we have added a pipeline configuration and it will automatically execute the pipeline in the background so the pipeline is actually already running
now where do we see the pipeline execution well right here we are in the repository section which lets you manage the files
commits branches and so on for the ci cd there is a separate section that lets you manage the cicd part of the project and the good thing is you
don't have to switch to some other view you stay inside the project you have all the configuration in one place and the first tab here is pipelines and this
is a list view so this is going to be a list of all the pipeline executions for your project and we see that our first pipeline was successful it has passed
state and if i click inside that's the detail view of the pipeline we actually see all the jobs that ran for the pipeline in our case we just have one job we also have a separate job
section where you can see the list of the jobs and if i click inside the job you will see the logs which is obviously useful if the job fails for example for
troubleshooting and debugging so let's actually take a look at our logs to highlight some interesting and important information first of all right
here you see that the job is being executed using docker right here you see using docker executor with image and this is the image that we specified so by default it would have
used ruby but since we overwrote that it is using image that we defined and it's pulling the image and starting the container from it once the container is created now it of
course has to fetch the whole git repository so this project's code basically in that docker container where we have the test files dependencies file
and everything else to be able to execute tests right but before the main script the before
script gets executed which installs make and once that's installed then make test command gets executed then we have pip install the dependencies and
at the end we have our four tests that all ran and they are in the passed state so the first step or first job of our pipeline is configured and working.
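putting together everything we configured, the first job in .gitlab-ci.yml looks roughly like this (the exact 3.9 image tag below is just an example, use whichever 3.9 tag you selected):

```yaml
run_tests:
  image: python:3.9-slim-buster       # overrides the runner's default ruby image
  before_script:
    - apt-get update && apt-get install -y make   # make isn't included in the python image
  script:
    - make test                       # installs dependencies with pip and runs pytest
```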
before moving on i want to give a shout out to twingate who made this video possible twingate is a remote access service that allows you to easily
access private services within any cloud or in your home lab for ci cd workflows that execute outside of your environment twingate includes a service
account feature that allows secure and automated access to your private resources twingate is actually a modern alternative to a vpn that is easier to
set up offers better performance and is more secure if you need to create a ci cd workflow to deploy into your private kubernetes clusters or if you have
internal databases you need to access then it is an excellent choice with twingate you don't need to open any ports on your firewall or manage any
ip whitelists you will also find providers for terraform and pulumi so you can fully automate it into your existing stack if you want to try it you can use their
free tier and get started for free or twingate has actually provided a special offer for my viewers you can use the code nana to get three
months free of the business tier in addition to their 14-day free trial so now let's go back to the pipeline
and add the next job that will build and push a docker image of our python application and to switch back to our pipeline code
we can go to the repository and basically in the gitlab ci file or we actually have a shortcut here in the ci cd section
with the editor tab that takes you directly to your gitlab ci yml code in the edit mode so you can continue working here on
your configuration now let's create the next job where we build a docker image and push it to docker hub's private repository so first of all to have a private
repository on docker hub you can simply sign up and create an account by default in the free version basically you get uh one private repository so that's what we're gonna use and i have some other
apps here but that's what i'm gonna use for the demo so that's the address of my private image registry on the docker hub and i'm going to be pushing an image
that the pipeline builds into this repository now in order to push an image to a private image repository we need to log
in into that repository first because of course we don't want to allow anyone to pull and push images from a private repository it has its credentials and
you have to log in before you actually push an image or pull an image from it so we would need repository credentials for this repository to be available on
gitlab so that gitlab runner can actually log into the registry before it pushes the image and the credentials are actually username and password of your docker hub
account so this is what we're going to use to log in to the private repository and of course we don't want to hard code those credentials that username and password inside the pipeline code
because again this is part of the repository so anyone that has access to the repository we'll be able to see that plus it's going to be in plain text and so on so it's not secure
so we want to create what's called secret type of variables that will be available in our pipeline and the way it works on gitlab is in the
projects settings so if you scroll all the way down you have this settings tab here where you can actually configure some of the administration settings
for different parts of your project so you have settings for the repository you have settings for ci cd and so on and if you're wondering why it is actually separated so why don't i have the
settings here directly in the ci cd or the repository settings here directly in the repository is that this would actually be two different roles so for your project you may have
project administrators to actually administer and manage the settings of the project and you will have the project users these are your developers junior
developers interns senior developers whoever needs access to the project to work in it right so you may want to separate those
permissions and have just a dedicated people who have access to the settings responsible for that part and then your developers or most of the developers do not see the settings at all so that's
basically the idea of having this cicd setting separately and the pipeline configuration separately so in the settings of the ci cd if i click inside
and we can leave the edit mode of the pipeline that's fine right here we have again different settings for our csd pipelines for the project which an
administrator of the project can configure so obviously this would be someone more experienced and knowledgeable about how to manage all these that's why you want
to give maybe a limited set of people permission to see and modify settings of the project so right here we have
multiple things we can configure and one of them is project variables and this is where we can create custom variables this could be secret variables like
password username but also just normal variables right and this will be then available inside the pipeline code so we can actually
reference those variables in our pipeline configuration so if i expand this we can create variables here click on add variable and we're gonna
define docker user or this could also be registry user and here we're gonna have
the value so i'm actually gonna copy that and this is the docker id so your username on dockerhub and
the type is variable and we're gonna check mask variable so what this will do is whenever the variable is referenced and used inside a job for example job
execution of the pipeline the value will be masked so it's not going to be visible in the job logs which is of course more secure so for secret or
sensitive data we're gonna check that so that's it that's gonna create our first variable and now let's also create registry password
let's call it registry pass and again mask the variable and for that you're gonna grab the password and there you go we have those two
variables and now we can actually reference them inside our pipeline configuration so i'm going to go back to the editor and right here we're going to create a
job called build image obviously you can call it whatever but i'm going to call it build image and let's define the main script so what is the main logic
that we want to do in this job well first we want to build a docker image of our python application from dockerfile and we actually have a
dockerfile as well in the project so right here in the root of the project we have this file that defines the python base image which is by the way the same as we used right
here to run tests and the rest of the configuration is simply copying the relevant files inside the image installing all the dependencies from the
requirements file and then finally starting the application on port 5000 so that's what's happening.
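as a rough sketch of what such a dockerfile contains (the copied paths and the start command here are illustrative assumptions, not the exact contents of the demo repo):

```dockerfile
FROM python:3.9-slim-buster            # same python base image we used for the test job
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # install all dependencies from the requirements file
COPY src/ ./src/                       # copy the relevant application files into the image
EXPOSE 5000
CMD ["python", "src/app/main.py"]      # start the app on port 5000 (entry point assumed)
```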
we already have the dockerfile so we're going to use that to build the image and as i said it is in the root of the project so we're going to do docker build and since our .gitlab-ci.yml file is also in the root this is going to be the current location of the dockerfile which is the current directory now in order to build an image that we
can later push into our repository we have to tag that image with the repository name so i'm gonna copy that
and i'm gonna add minus t so that's for tagging the image with the repository name so in docker the name of the image includes the repository name so that
docker knows where to push that image when you execute docker push command and you also need to have a tag just like here right and in this case let's hard code a value
here let's do python app 1.0 and this will be the complete name of our image and then
once we build that image we're gonna do docker push because we need to push that image and i'm gonna copy the name so that's going to be the push image and as i said in
docker the way docker knows where you want to push that image or the address of the repository is inside the image name itself so it knows on this docker
registry you have a repository called this and you want to push the image with this specific tag but before we can do docker push obviously we need to authenticate
with the repository otherwise it's not going to work so right here before push we need to execute docker login command which takes user name and password as
parameters so we have dash u for username and now instead of hard coding the value here we can reference the variable that
we created in the settings called registry user and we can reference it super simply using the dollar sign and then name of the variable registry
underscore user and we also have the password and that's going to be registry pass that's what we called it
and for docker hub so if we're using docker hub we don't have to specify the docker registry because docker hub is the default but if we're logging into some other docker registry like aws ecr
for example then we would have to specify the registry right here but as i said docker hub is a default so we don't have to do that and since the docker build and push
are our main commands and docker login needs to be executed before we can push to the repository we can actually set it in the before script section
like this so we have our main commands and supporting commands so to say separated like this so that's our job configuration
and by the way when we're repeating the values inside our pipeline we can also extract them into custom variables so for example the
repository name and the image tag could be variables inside the pipeline then we can then reference here instead of hard coding them and then repeating the value
multiple times and for that you don't have to go to settings and create these global variables you can simply define them here inside the pipeline configuration either on a job level like
this so we would have something like image name and this will be the image name and image tag
like this and then we just reference both variable values using the same syntax we used to reference the variables from the ci cd settings so with dollar sign variable name and
image tag and the same here and note that the uppercase letters with underscores are just a convention you can obviously call it whatever you want it
could be also image name it doesn't matter and as i said you can define the variables on a job level or if you have other jobs that also reference this
image name and tag for example in a deploy stage for example if we also need the same variables we can extract that and make it available
on a pipeline level for all the jobs like this so with that our job logic is complete.
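so far the build job, together with the pipeline-level variables, looks something like this (<your-dockerhub-user>/demo-app is a placeholder for your own private repository name):

```yaml
variables:                     # pipeline level, so a later deploy job can reuse them
  IMAGE_NAME: <your-dockerhub-user>/demo-app
  IMAGE_TAG: python-app-1.0

build_image:
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS   # masked variables from the ci/cd settings
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG
```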
even though the job logic is complete we still have to make sure that the job execution environment of this specific job has all the needed tools to actually execute these commands so as we know on managed gitlab runners
all our jobs will run in the docker container and in that docker container we need to have the commands whatever we're executing inside the job available
otherwise it's going to say it cannot find the command right so in this case we have a bit of a special situation where we need docker to be available
inside a docker container and that is what's called docker in docker concept so within a docker container you have docker daemon and docker client both
available in order to execute docker command and again the default is a ruby image but we need an image that will have docker available inside the docker container and there is
actually an official image of docker right here that is meant exactly for that scenario where you have docker in docker so you build
the docker container with a docker image which makes again docker commands or docker client and daemon available inside your container in order to execute docker commands so
that's what gitlab cicd official documentation also references whenever you have this kind of use case and the configuration is actually pretty simple the first thing we need to do is
obviously set the image to this docker image and we can actually take one of the latest tags so right here we're gonna do docker and the image tag so this will take care
of starting a docker container from the docker image so we have the docker client available inside however we also need the docker daemon available inside
right so the docker daemon is the process that lets the client actually execute those commands connect to the docker registry pull the image push the image etc so in order to make that available
we actually need to configure another attribute which is called services on gitlab so let's actually define that and i'm going to explain what that means so service is basically an additional
container that will start at the same time as the job container and the job container can use that service during the build time so for example if
your test execution needs some kind of service like a database service for example mysql then you can actually start a mysql service or mysql container
during your job execution in addition to this python container and the service attribute will make sure that these containers are linked together so they run in the same network
and they can talk to each other directly so this way python container will actually have access and can use the mysql service from the other container
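as a generic illustration of that services concept (the images and variables here are just an example, not part of our pipeline), a job that needs a database during its tests could look like this:

```yaml
integration_tests:
  image: python:3.9-slim-buster
  services:
    - mysql:8.0                      # started alongside the job container, on the same network
  variables:
    MYSQL_ROOT_PASSWORD: example     # the mysql image needs a root password to start
  script:
    - make test                      # the tests can now reach the linked mysql service
```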
so that's basically what the concept of services means in gitlab cicd so right here what we're doing is we have this docker container with docker client
inside and we want to start another docker container with docker daemon inside so the client can actually connect to that daemon the docker server and then execute
these commands and the image for the docker daemon is actually this tag right here so that's going to be docker and
the same version so that's the client that's the daemon with dind which is docker in docker so these two
will basically give us the complete set of docker client and server in the same job execution environment so these two containers will be linked to each other and
they will be able to communicate with each other there is one final thing we need to do here which is to make sure these two can communicate with each other using docker
certificates so these two need to have the same certificates so they can authenticate with each other and talk to each other and for that we want those two containers that will start to share
that certificates directory so they can read from the same certificates folder and we can do that by defining a variable or environment
variable called docker tls certdir and setting that to slash certs so what this will do again is this will tell docker to create the certificates in this location and then
this certificate will be shared between the service container and the job container so with this configuration we have a full docker setup of client and a server or docker daemon
and they can talk to each other authenticate with each other and so on and this way we're going to have docker commands available inside our job.
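the docker-in-docker part of the build job then looks roughly like this (the pinned docker tag is an example, pick one of the recent tags yourself):

```yaml
build_image:
  image: docker:20.10.16             # the docker client runs in the job container
  services:
    - docker:20.10.16-dind           # same version, dind variant: provides the docker daemon
  variables:
    DOCKER_TLS_CERTDIR: "/certs"     # shared certs directory so client and daemon can authenticate
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG
```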
great so now let's actually commit our changes and gitlab will automatically trigger a pipeline for us you see that pipeline execution here if i do view pipeline
now you see that two jobs are being executed inside the pipeline both running at the same time so let's wait for them to finish and there you go both jobs were
successful which means we should actually already see an image in our docker hub registry so i'm going to go to my repository and
right here pushed a few seconds ago we have python app 1.0 so we have successfully built and pushed a docker image
to the docker registry using the gitlab ci cd pipeline however there is one issue here which is that both jobs get executed at the same
time and if we add deploy job it will also run in parallel to these two jobs and this makes no sense for us because we want to
run these jobs in order right so we want to run the tests and only if the tests are successful we want to build and push the image and only if build and push job
was successful then we want to deploy that image because if build fails we have nothing to deploy right so how can we
force this order in which the jobs will execute in the pipeline well we can do that using what's called stages so the stages can be used to organize
the pipeline by grouping related jobs that can run in parallel together so for example if you have different tasks that you run for your applications like unit
tests lint tests whatever they can all run in parallel by being in the same stage so let's see how that works going back to the editor
we want to put the run test in one stage and then have build image as the next stage so build image will basically wait for the run test to execute and only if
the run test was successful it will execute the build image job and it's super easy we have the stages attribute where we can create
a list of stages and we can call them whatever we want i'm going to create test stage and build stage and then from those stages that you have defined
here you can reference them within your jobs so right here we can say run tests in test stage and build image in
build stage that's it that's the whole configuration.
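in the file, the whole stages configuration is just this:

```yaml
stages:
  - test
  - build

run_tests:
  stage: test          # runs first
  # ...image, before_script, script as before

build_image:
  stage: build         # only runs if the test stage succeeded
  # ...
```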
so i'm going to commit that and let's see the result so the pipeline got triggered if i go to pipelines the first difference right away that you see here is we have two stages now so we have test stage and build stage whereas
here by default when you don't specify any stages gitlab gives you a test stage so everything gets executed in that test stage and if we go in the detail view of
the pipeline you also see a visualization of those stages so instead of the jobs being just listed here within one stage you now have
the stages listed next to each other and the jobs in the build stage will wait for any jobs in the test stage to complete before they run so that's how the stages looks like and when we add
more stages they will basically appear next to each other so these two stages are configured now let's add the final job to our pipeline
to deploy that newly built docker image to an ubuntu server and run that docker application there for that we first need a deployment
server right we need a server that we're gonna deploy to and run our application to make this easy we will create a simple ubuntu server on a digitalocean
platform it's way easier to set up and configure than aws if you don't have an aws account so that's the platform we're going to use
now if you have a ubuntu server or other server you're welcome to use that as well so this is not specific to any cloud platform we just need an ubuntu server that is connected to internet
that we can deploy to and on digitalocean actually you can sign up a new account with a hundred dollar credit so make sure to take advantage of that if you register here so that you can
work through the demo for free i already have an account so i'm gonna log in and i'm starting with a completely clear state i don't have anything configured yet so we're going to do all this together
first of all when we create the server we will access that remote server on digitalocean from our local machine using an ssh
command so to ssh into a server obviously we need an ssh key and on digitalocean we can add our own ssh key in the settings here so in
settings security you have a section here that says ssh keys and you can add one right so you can generate an ssh key pair
you can upload the public key of that key pair here to digitalocean and then all the servers that you create on digitalocean will be accessible using
the public key that you uploaded here so that's how it's going to work so what we're going to do is actually create a new ssh key locally
which is very simple i'm going to leave the project and right here i'm going to execute a simple ssh key gen command and this will basically generate an ssh
key pair for us so enter and this is the default location where your ssh keys are and this is the default key pair that is
always used when you ssh so we're gonna use a different name inside the dot ssh folder in your users directory and let's call this digital
ocean key we're gonna leave the passphrase empty repeat and there you go that was it.
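sketched as commands (the file name is the one we typed at the prompt; ssh-keygen will ask for it interactively, or you can pass it with -f):

```bash
# generate a key pair named digital-ocean-key in the default ~/.ssh directory
ssh-keygen -f ~/.ssh/digital-ocean-key
# press enter twice at the passphrase prompts to leave it empty
```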
and now inside that .ssh folder we should actually see our digitalocean key pair so we should have two keys one of them is public and one is
private the public key is the one we need to upload to the platform and then we're gonna be able to connect to any server created on the platform using the private key and
to upload that we're simply gonna cat this and obviously i need the whole path like this
and that's our public key so i'm gonna copy that to new ssh key copy the contents let's give it a name
let's call it a server or deployment server key and add ssh key and that's it so now
i'm going to go back to the droplets and create a deployment server so click on create droplet i'm going to choose ubuntu with the
latest version basic plan regular with ssd and i'm going to select the smallest server with the smallest resources because we're just going to be
deploying one application container which is our python app so we don't need much resources and finally we're gonna choose the region
which is closest to our location and that's it all the other stuff will stay the same right here in the authentication you already see that ssh keys is
selected so that's going to be how we connect to the server and that's the key that we added so we leave everything else with defaults
we are creating one droplet and create and let's wait for our droplet to be fully initialized
there you go and once it is initialized we're gonna need to configure one thing on that server before we are able to deploy
and run our docker image to that server which is we need to install docker because we're going to be running docker container on it so we need docker available and for
that we're going to grab the public ip address of our droplet of our server which is this one right here let's copy and we're going to connect to that
machine locally and we're going to ssh into the server using an ssh command and the ip address of the server which is a public ip address
but of course when we ssh into a remote server we need to authenticate ourselves right so we need to provide credentials in our case username
or linux username and private key so the linux user on droplet servers is root so that's the user we're connecting with and here we're
going to specify the private key that we want the ssh command to use to connect to the server using dash i option
and we need the location so that's the home directory of my user dot ssh and digitalocean key
so with this command we should now be able to connect to the server so let's execute and this is a confirmation to save that
remote server locally as a known host so let's confirm and there you go we are inside our droplet connected as a root user and as
i said we need to install docker on this machine so if i do docker obviously right now it's not available and i'm going to use this suggested command to install docker but before that we're going to do
apt update and now we can do apt install docker there you go and now we should have docker commands available.
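summing up the server preparation (the ip is your droplet's public ip; the exact docker package name can differ by ubuntu version, so use whatever install command your server suggests):

```bash
# connect as root, authenticating with the digitalocean private key
ssh -i ~/.ssh/digital-ocean-key root@<droplet-public-ip>

# on the droplet: install docker (package name is an assumption, e.g. docker.io on ubuntu)
apt update
apt install docker.io

docker ps    # verify docker works; no containers are running yet
```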
let's do docker ps to check and there you go currently nothing is running but we're going to deploy to the server
from our pipeline so we can exit from the server our work is done here and we're going to go back to our pipeline configuration
and add a third job that will deploy to that server so how is gitlab going to deploy a docker image to the droplet server or how is it going to connect to that
server in order to deploy the docker image well actually the same way as we did locally using ssh command so again gitlab runner will start a container for
the deploy job and inside the container we're going to execute ssh command to connect to the remote server and for that ssh command we're going to need
exactly the same information right so we're going to have the root user and we're going to need that private key to connect to the server which means we need to make this private key also
available to gitlab and just like we created variables secret variables for docker hub private repository we're going to create a secret variable for
the private key in the cicd settings so going to settings cicd in the variables section we're going to
add a third variable and we're going to call this ssh key and the value of that key will actually be the contents of
that key file right the private key file so i'm simply going to cat that print it out to the console
like this and copy this whole private key and paste it in here so these are the file contents of the
ssh key and right here in the type field if i click inside this drop down you see that we have two options we have variable which is a simple text variable
and we have a file in this case we actually want to have a file variable because we're going to reference this as an ssh key file so let's select that one
and there's one kind of a workaround so that gitlab can create a temporary file from this contents with a correct format and it's kind of a weird workaround
to fix the issue so it could be a bug in gitlab i'm not really sure so what we're going to do is we're going to add a new line here at the end of the contents so
that a correct private key format is created by gitlab so what gitlab will do in the job execution environment it will take actually these contents of the variable and it will create a temporary
file from this because we have a file type specified so it will create a temporary file with the contents that we have here
we're going to add the variable so now this ssh key file will be available inside the pipeline so let's go to editor and
first i'm going to add a new stage here let's call this deploy and here i'm going to create a new job called deploy you can call it deploy to
development whatever i'm going to keep it simple this is going to run in deploy stage and now we have our script
and i can actually copy this whole command and modify the respective values here so that's command we're going to be
executing because we want to ssh from the job environment to the remote server so the public ip address obviously will
be the same the root user and this is the file that we have as a reference using a variable so we're going to reference that variable dollar sign ssh
key and remember this confirmation we had to make here to save that remote server in the list of known hosts and that's
actually an interactive step right so it needs some manual input and if we don't provide a manual input this will not work so want to actually skip this step because in the pipeline execution we
don't have a manual step so we want to tell the ssh command you know what forget about that check just connect to the server and that's it and
for that we're going to add an option here called strict host key checking equals no so it will skip that so no manual input
will be required here so that's an ssh command to connect to the server but once we connect to the server obviously we need to do something right we need to start a container using this image that
we just built right so we need to execute docker run or some kind of similar command and we can do that passing that docker
command to the ssh command so we're saying once you ssh please execute whatever is defined here we add a line break here and let's write our docker run command so
first of all our application is running on port 5000 so we're gonna expose that port on the host like this
then we have the image and since we have parameterized this we can just copy those values so that's the image now there are three more things that we need to configure in order to make this
deploy job successful first of all this command will be executed on the droplet server and it will actually pull the image that we specify here from the
private registry and that means we need to authenticate with that registry to be able to pull the image so just like we had to do docker login here in order to push an image we need
to do docker login here in order to pull the image so we're going to copy this thing and edit here before we run the image and since these
two are going to execute inside the ssh we need to add these ampersands here so we are kind of chaining multiple commands together so that's one thing which will take care of authenticating
with the docker registry and pulling the image from the repository so this is the first one the second one is that when we execute this pipeline the first
time it will actually succeed right it will create and start the docker container on the server and that's it but on the second execution or any following execution it will try to
create and start a new container on the same host port and obviously this is going to fail because you can't run multiple processes on the same port right so before each docker run command
we also need to make sure to stop and remove any existing containers or at least stop any currently running containers on port 5000 so that a new
one can be created so that's the second thing we need to do and because we know that this server will only host and run our application we know that there is going to be one container and we just
want to stop that and we're going to do that using the following command we're going to do docker ps dash a so this will list all the containers whether they're running or stopped
doesn't matter and it's going to list them by id so with the dash aq option we're going to have a list of all containers again in our case it's just going to be one
and then we're going to pipe that command and we're going to stop that container using docker stop command so we're going to take whatever
this displays as an argument for the docker stop command like this and we're also gonna take that list as an argument
for docker remove command so this will stop any containers using their ids and it will also remove those containers and then we're going to
create a new one with a new image and let's not forget the ampersands here to chain the commands and the third thing we need to do is
relate it to the ssh key file so when gitlab creates a temporary file from this ssh key variable that we created it will actually set access permissions on
that ssh private key file and those access permissions by default are too open so anyone can read and write to that file and when we try to connect with a
private key file which has open access permissions so anyone can read and write to it we're gonna get an error that it's insecure that the file
is not restricted and it has too loose or open access permissions so we're gonna need to fix that and to shortly explain to you what that means
so if i go to ssh and our digital ocean key and if i do ls with option l
so the long listing format this will actually show me who has access to this file right so these are permissions for the owner which is my user the group
and anyone else so for this file the owner has read and write permissions but no one else can actually read or write or access this file in any way.
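in commands, the difference looks like this (the numeric mode is read as owner, group, others):

```bash
ls -l ~/.ssh/digital-ocean-key
# -rw------- ...   the owner can read and write; group and others have no access at all

chmod 400 <keyfile>   # 4 = read-only for the owner, 0 0 = no access for group and others
```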
so that's a restricted access and that's what we need to do here on gitlab as well and as i said by default gitlab actually gives everyone a read write permission when it creates
a temporary file from the file type of variable and we can fix that very easily by setting the permission to a stricter access
before the script will run inside a before script section so right here we're going to do change mode of the file
the temporary file to access 400 so the 4 will actually be read only so without write for the owner and then 0 0 means no
access for anything else and that's actually strict enough so that's the third thing we need to do for this job to run successfully and we also need
to run the docker run command in a background or in a detached mode with minus d because otherwise it will block our terminal or the jobs terminal and it
will basically just endlessly wait and with minus d or detached mode it will just send the command to the background and complete the job.
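putting the whole deploy job together, it looks roughly like this (<droplet-ip> is a placeholder for your server's public ip, the stages list at the top gains a third deploy entry, and the -r flag on xargs is a small addition so the stop and remove steps don't fail on the very first run when no container exists yet):

```yaml
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY             # restrict the temporary key file that gitlab creates
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@<droplet-ip> "
        docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
        docker ps -aq | xargs -r docker stop &&
        docker ps -aq | xargs -r docker rm &&
        docker run -d -p 5000:5000 $IMAGE_NAME:$IMAGE_TAG"
```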
okay so that's the whole configuration now let's actually commit that and trigger a pipeline which will deploy our application to a deployment server so let's go to
the pipelines and let's wait for the execution awesome so our pipeline went through we have a successful deploy job let's actually check the logs
and first of all i want to mention that because we didn't overwrite the image the default ruby image was used to create the job container
then we have our chmod before script command and here is the result of the docker login command and then the container has also started
and we also have three stages now test build and deploy stage with each one having its respective job so we can now validate that our application is running first by going to
the remote server let's ssh into it and checking that the docker container is running
and there you go we have our python app that's the image running on port 5000 and the second way to validate of course we want to
access this web application from a browser right so on the droplets public ip address on port 5000 because that's where the
application is exposed there you go we have our application accessible from the browser so with this we have actually successfully built a
basic cicd pipeline for our python application and as i said at the beginning this actually applies to any application written in any programming
language these two jobs are anyways very generic and for this one you would just have to find a respective image and command and that's basically it also when you're done with the demo make sure
to go to your digitalocean account and delete your droplet so you don't get charged for the server resources so just wanted to remind you of that and
with that our demo project is complete and we have learned a few core concepts of gitlab cicd now of course you can configure much more in your gitlab
pipeline code and as i said this is a basic pipeline and you can already work with this knowledge because you basically learn the core concepts of gitlab cicd and you can extend on it
however there is much more to gitlab cicd platform much more features with more advanced use cases of using artifacts caching deploying with docker
compose deploying to kubernetes cluster creating pipelines for microservices applications using job templates to avoid code duplication
and much much more so if you want to really dive into gitlab ci cd and learn all these features and concepts and become an expert in that as
mentioned i have a complete gitlab ci cd course so be sure to check out the video description to see exactly what's in the course and to enroll there i hope you
enjoyed the video and you learned a lot let me know in the comments what is your experience and your use case with gitlab cicd and whether you're using it at work or any other projects and with that
thank you for watching and see you in the next video