Every Networking Concept Explained In 20 Minutes

By TechWorld with Nana

Summary

Topics Covered

  • DNS Hides IP Complexity
  • Ports Route Inside Servers
  • NAT Enables Private Outbound
  • Cloud Reuses Core Concepts
  • Services Stabilize Ephemeral Pods

Full Transcript

In this video, I'm going to show you the essential networking concepts that every software engineer needs to understand. We will follow a simple approach: basically, watch how one application grows from a single server to a complex cloud system, and learn each networking concept exactly when it becomes necessary. So, meet TravelBuddy. This is going to be our imaginary travel booking website, and we will see how its networking needs evolve over time, so you will understand why each networking piece exists and how it solves real problems. We also created a detailed handout that breaks down everything that I'm going to cover today, so make sure to grab it from the link below. It's completely free.

So, let's start at the beginning. When we first launched TravelBuddy, we had one server running our entire application. Simple, right? But immediately we faced our first networking question: how do customers actually find our server on the internet? Every device connected to a

network needs an identifier so that other devices can send data to it. This identifier is called an IP address. Think of it like a house address for mail delivery: without it, no one knows where to send anything. So our TravelBuddy server got a public IP address: 203.0.113.10. This means any device on the internet can send a request to this specific number and reach our server. Now you may be thinking: do I need to remember numbers like 203.0.113.10 to reach a website? No. Just like you don't memorize phone numbers anymore, we don't memorize IP addresses either. And this is where DNS comes in. DNS translates easy-to-remember names into IP addresses. So when someone types travelbuddy.com into their browser, DNS automatically looks up the corresponding IP address and connects them to our server. DNS works like the contacts in your phone. You usually don't type in the actual phone number; you just tap on a name like

mom, and your phone finds the actual phone number in the background, just like you type google.com and DNS finds the actual IP address behind that name. So now customers can find our server. Good. But here is the next problem. Our single server is now running three different things: the website that customers see, a database storing all the booking information, and a payment processing service. All three share the same IP address. So when a request arrives at our server, how will the server know which application should receive it? This is where ports solve our problem. Ports are numbered channels on a server, ranging from 1 all the way to 65,535, and each application listens on a different port number. So let's say this is how we set up our application: the website listens on port 80, the standard port for web traffic, or on port 443, the standard port for secure (HTTPS) connections. Then we have a

MySQL database listening on port 3306, the standard MySQL port. And then we have a custom payment service that we decided to run on port 9090. Now when a customer visits travelbuddy.com, their browser automatically connects to port 80 or 443, and the server knows to send the traffic to the web application and not to some other program on the server. So think of it like an apartment building. The building has one street address, that's the IP address, but inside the building there are different apartment numbers, which are the ports. Great. So that's taken care of. But now we're growing and a new problem appears. TravelBuddy is now handling customer credit cards and some personal information, and having everything on one server creates a big security risk for us and for our users. Because if a hacker broke into our server that runs all these applications, they would get access to everything: the database, the payment service, everything. So we

need to separate things. This is called network segmentation, and subnets let us divide our network into separate sections. Think of it like a hospital that has different floors and wings for different types of patients, like a maternity ward on one floor and surgery on another, to keep things cleanly separated. We do the same with our network. Let's say our front-end servers, which are public facing, go in subnet A with one IP address range, application servers go in subnet B with another range, and database servers go in a third subnet. So now our network is divided. But wait: if the website is in one subnet and the database is in another subnet, how does the website talk to the database now? This is where routing becomes necessary. Routing directs traffic between different network segments. When the website needs data from the database, a router determines the path. So it's basically like a GPS for network data. It figures out how to get

from point A to point B. But now we have a new problem. We've separated things into different areas, but what stops everything from talking to everything else? We've created separate rooms, but all the doors are wide open and unlocked. Just because we can route traffic between subnets does not mean we should allow all traffic in all directions. And that's where firewalls become necessary. A firewall is like a security guard that checks every piece of traffic and decides whether to allow it, based on rules that we set. We have host firewalls that protect individual servers. So we put a firewall on the database server and create a rule that says: only accept connections on port 3306, and only from IP addresses in the front-end subnet. Anything else gets blocked. But we also have network firewalls that sit between subnets and filter traffic. So we may place one between the internet and our front-end subnet with a rule saying allow incoming traffic on port 80 and port 443 but

block everything else. So if we have another program or application running in the front-end subnet on a different port, it will be blocked right away by this firewall rule. This layered approach means that an attacker has to get through multiple security checkpoints, the network firewall and then the host firewall, to actually do any damage, because security is always layered. With firewalls in place, we have now created secure zones in our network. But another problem is about to appear.
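The layered firewall rules just described can be sketched as data plus a default-deny check function. This is a toy model using only Python's standard library; the 10.0.1.0/24 front-end range and the internet address 198.51.100.9 are hypothetical stand-ins, and a real firewall inspects packets rather than function calls.

```python
import ipaddress

# Toy firewall: each rule allows traffic to one port from one source range.
RULES = [
    # database server: MySQL, but only from the (hypothetical) front-end subnet
    {"port": 3306, "source": ipaddress.ip_network("10.0.1.0/24")},
    # front-end servers: web traffic from anywhere
    {"port": 80,  "source": ipaddress.ip_network("0.0.0.0/0")},
    {"port": 443, "source": ipaddress.ip_network("0.0.0.0/0")},
]

def allowed(src_ip: str, dst_port: int) -> bool:
    """Default deny: traffic passes only if some rule explicitly matches."""
    src = ipaddress.ip_address(src_ip)
    return any(r["port"] == dst_port and src in r["source"] for r in RULES)

print(allowed("10.0.1.7", 3306))      # True:  front-end subnet reaching MySQL
print(allowed("198.51.100.9", 3306))  # False: internet host blocked from the DB
print(allowed("198.51.100.9", 443))   # True:  HTTPS is open to everyone
```

Note the default-deny design: anything not explicitly allowed is blocked, which is the same posture the transcript recommends.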

TravelBuddy is growing fast. We now have 50 backend servers in a private subnet for security, and these servers have private IP addresses like 10.0.2.5, 10.0.2.6, 10.0.2.7, and so on. Private IP addresses work inside your own network, but they cannot communicate directly with the internet. It's like having an internal extension number at a company: you can call other extensions inside the building, but you can't dial an extension number from your home phone outside the building. But our backend servers do need to reach the internet sometimes: to download software updates, to connect to external payment APIs, or to send data to third-party services. So how do we solve this problem? We have secured them so that nobody can directly reach them from the internet, but now they cannot reach the internet either. And we can't give each server a public IP address, because public IP addresses cost money and

we need to manage them, and we would now need 50 of them. Well, that's where NAT, or network address translation, comes in. NAT basically allows multiple devices with private IP addresses to share one public address when accessing the internet. So here's how it works. When backend server 10.0.2.5 wants to reach an external website, let's say to download updates for a database, it sends a request to the NAT device. The NAT device replaces the private source address with its own public IP address and sends the request out to the internet. When the response comes back, the NAT device remembers: oh, this response is meant for server 10.0.2.5, and sends it to the right place. So, think of it like a receptionist at an office. When an employee needs to make an external call, they go through the company's main phone line. The receptionist places the call using the company number and then routes the response back to the correct employee's desk. So now all 50 of our backend

servers can reach the internet through one public IP address. They remain hidden and protected, with no direct access to them from the internet, but they can still get what they need from outside. At this point we've built a solid networking foundation, but maintaining all these physical servers is becoming expensive and slow. We're spending too much time managing hardware. We have to predict capacity months in advance. We have to buy servers, install them, maintain

them. When we need more capacity, it takes weeks to set them up. So, we decide to move TravelBuddy to the cloud. The cloud means we're renting computing resources instead of owning them: someone else manages the hardware, and we can increase or decrease capacity in minutes instead of weeks. But here is the important part: the networking concepts we learned do not change. We still need IP addresses. We still have ports on servers. We still have subnets,

routing, firewalls, and the cloud just provides these as managed services. So in the cloud, we create what's called a virtual private cloud, or VPC. This is our own isolated section of the cloud provider's network. Think of it like renting an office floor in a large office building. Other companies are on different floors, but your area is yours; nobody can enter your office even though they're in the same building. Inside our VPC, we create subnets. Just like before, we still have public subnets for things that need internet access and private subnets for things that should be protected. We use an internet gateway to connect our public subnets to the internet; it's basically like the main entrance to our building. We have route tables, which are like signposts that tell the data where to go in our network, and each subnet has a route table that directs its traffic. For private subnets, we use a NAT gateway. Remember NAT from earlier? It's the same concept, just managed by the cloud

provider. We place a NAT gateway in a public subnet and configure the private subnets to route their outbound internet traffic through it. So now we have the same secure network architecture we built before, but running in the cloud with all the benefits of cloud flexibility. But our application is about to change again. Before we move on, I want to give a huge shout out to Pulumi for making this video possible. You just saw all the infrastructure we built in the cloud,

the VPC, the subnets, the security groups, and so on. But here's a question: how do you actually define and manage all of this infrastructure as code? This is where Pulumi really helps, because unlike other infrastructure-as-code tools that use their own domain-specific languages, Pulumi lets you use the programming languages that you already know, like TypeScript, Python, Go, Java, whatever you're already using in your tech stack. This means you can use your favorite IDE to write infrastructure code with all the features like error checking, autocomplete, refactoring, and debugging tools that you're already comfortable with. And you can write real programming logic instead of being limited by a configuration syntax. And here's what's really cool: Pulumi just launched Pulumi Neo, which is their new agentic AI built specifically for infrastructure. Neo understands your entire infrastructure setup, respects your policies, and it can handle complex tasks end to end. So you

can describe what you need in natural language, and Neo will generate the code, review your pull requests, and even help you debug deployments with full understanding of your organization's infrastructure. Pulumi is open source, so you can use it for free. But if you want the enterprise features, you can use my code NA 500 and you're going to get $500 worth of credits to test out those enterprise features. I will leave the link in the description. Now let's continue with the video.
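Before continuing, the NAT translation from a few sections back, one shared public address fronting many private ones, can be sketched in a few lines of Python. This is a conceptual model only; real NAT happens in the kernel or on a dedicated gateway, and all addresses and ports here are illustrative.

```python
import itertools

# A toy model of the NAT gateway's bookkeeping.
PUBLIC_IP = "203.0.113.10"         # the one shared public address
_ports = itertools.count(40000)    # public-side ports the NAT hands out
table = {}                         # public port -> (private ip, private port)

def translate_outbound(private_ip, private_port):
    """Rewrite an outgoing packet's source to the shared public address."""
    public_port = next(_ports)
    table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def translate_inbound(public_port):
    """Route a response back to whichever private host opened the connection."""
    return table[public_port]

# Backend server 10.0.2.5 reaches out; the reply finds its way home:
src = translate_outbound("10.0.2.5", 51515)
print(src)                         # ('203.0.113.10', 40000)
print(translate_inbound(src[1]))   # ('10.0.2.5', 51515)
```

The table is the whole trick: the NAT device remembers which public-side port belongs to which private host, exactly like the receptionist remembering whose call is on which line.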

As TravelBuddy grows, our application becomes more complex. We have more services, more dependencies, more things to install and configure. We also move to a microservices architecture to make our application more scalable, and managing our deployments is becoming more and more complex. We're increasingly running into "it works on my development laptop but not on the production server" issues. And this is where containers solve our problem. A container packages everything an application needs, the code, the runtime, all the libraries and settings, into one portable package. So think of it like the difference between a food truck and a restaurant. With a food truck, everything is already inside: just drive it somewhere and start cooking and serving your food. Whereas with a restaurant, you have to find a place, you have to set everything up, and it's really difficult to move location, because you have everything tied up in that one restaurant. So we use Docker to

containerize all of TravelBuddy's services, and now we can run the same container on a developer laptop, on our test servers, and in production, and it works almost identically every time. But containers introduce new networking concepts that you need to understand. When you run containers on a server, they need to talk to each other. Docker creates something called a bridge network, a private network that exists only on that server. All containers connected to the same bridge network can communicate with each other using just the container names. But containers have their own private networking inside them. When our payment service container runs, it might listen on port 9090, for example, inside the container. So the question is: how do external requests then reach that application running inside the container on port 9090? Well, we need to map the container's internal port to a port on the host server. The docker run command has a parameter that lets you bind, or

map, a port inside the container to a port on the host server. So this tells Docker: take all the traffic that arrives on this host server on port 9090 and forward it to port 9090 inside the container called payment. Notice this is similar to the NAT concept that we learned earlier: we're translating addresses and ports to bridge between two different networks. Now we grow even more, and we run containers on multiple servers, because one server is not enough to run all our containers. And all our microservices need to communicate with each other across servers, not just on a single server. So Docker's overlay network creates a virtual network that spans multiple hosts, making containers on different servers appear as if they were on the same network. So now with Docker, we've made our application portable and consistent. But we're facing a new challenge: managing hundreds of containers, which happens quickly when you have a microservices

application that is running multiple replicas, or copies, of the same service for scalability and performance. And TravelBuddy is successful, so we're running hundreds of containers across dozens of servers, and managing this manually is becoming impossible. Like, which server should a new container run on? What happens when a container crashes? How do you troubleshoot when something goes wrong when you have hundreds of containers? How do you even know that a container

crashed when you have hundreds of them? How do containers find each other when they keep moving around from one server to another? And this is where Kubernetes comes into the picture. Kubernetes automates container management. Think of it like an automated building manager that assigns apartments, ensures everything is running, and handles maintenance. In Kubernetes, the basic unit is a pod. A pod is a group of one or more containers that work closely together; usually, it's just one container per pod. And each pod gets its own IP address. So, think of a pod like an apartment unit: the apartment has one address, and everyone living in that apartment shares that address. So all the containers inside a pod share that same IP address. But here's the problem. Pods are temporary. Kubernetes can create a pod, destroy it, move it around, or create a new one at any time: maybe a pod crashed, maybe you are updating a version of the application. And each

time Kubernetes creates a new pod, that pod gets a new IP address. So if our website pod is trying to connect to our database pod, and the database pod gets recreated with a new IP address, the website pod's connection breaks. Pods are ephemeral, so we shouldn't rely on them being around for a long time. So how do we solve this problem? Well, this is where Kubernetes services help us. A Kubernetes service provides a stable IP address and DNS name that never changes, even as pods behind it come and go. So we create a service for each group of pods, like our database pods, and the service gets a permanent IP address and a DNS name like database-service. Now when our website pod needs to connect to the database, it connects to database-service instead of connecting to a pod directly, and the service automatically forwards the connection to one of the healthy, active database pods. If a database pod dies in the background and gets replaced with a new one, the

service automatically updates, and the website pod does not notice anything. It's still connecting to the same database-service; it doesn't even know what happened in the background. So, think of it like a department phone number at a company. You call the sales department number and it rings at someone's desk. The person might change, but the department number stays the same, and there will always be someone who will answer the phone. This is crucial in Kubernetes, because pods are constantly being created and destroyed, and services provide the stability that we need. Now, we need to expose our application to the internet. We have multiple services running inside our cluster, so we need to think about how external users actually reach our applications running inside the cluster. Well, we use something called ingress. A Kubernetes ingress is like a reception desk that routes visitors to the right department based on what they're asking for. So a single ingress

can handle all incoming traffic into the cluster and route it to the correct service inside the cluster, based on the rules that we configure. For example, you can say: all requests coming to travelbuddy.com go to the website service, all requests coming to travelbuddy.com/api/booking go to the booking service, and requests to travelbuddy.com/api/payment go to the payment service. So let's recap what we learned by following TravelBuddy's journey. We learned five foundational concepts of networking. First, every device needs a unique identifier, an IP address, so others can find it and talk to it; DNS then translates human-friendly names into these IP addresses. Second, multiple applications on the same server need different doors to listen on so that traffic goes to the right application; that's the concept of ports. Third, we need network segmentation, where we divide networks into separate sections, or subnets, for security and

organization, and routing then connects these sections. Fourth, we need security to control what traffic is allowed between different network segments and to different ports; that's the firewall concept. And fifth, when we secure our backend applications within private networks, we still need them to reach the internet, so NAT acts as a gateway, translating their private addresses to a shared public IP address to allow communication with the internet.
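The first two concepts in this recap, names and ports, can be seen directly with Python's standard socket module. A minimal sketch: "localhost" stands in for a real public name because it resolves without network access, and binding to port 0 asks the operating system for any free port so the example runs anywhere.

```python
import socket

# Concept 1: DNS translates a name into an IP address. A real lookup
# would take a domain like travelbuddy.com; "localhost" works offline.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1

# Concept 2: ports are numbered doors on one address. Two "applications"
# listen on the same IP but on distinct ports.
web = socket.socket()
web.bind((ip, 0))
web.listen()

payments = socket.socket()
payments.bind((ip, 0))
payments.listen()

web_port = web.getsockname()[1]
pay_port = payments.getsockname()[1]
print(web_port, pay_port)  # same IP, two distinct port numbers

web.close()
payments.close()
```

In the transcript's setup these would be the well-known ports 80, 443, and 3306 rather than OS-assigned ones, but the principle is identical: one address, many doors.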

So these five concepts are the foundation of networking, and whether you are working with physical servers, cloud infrastructure, Docker containers, or Kubernetes pods, these principles remain the same. The tools change: we went from physical routers to VPCs, from physical firewalls to security groups, but the concepts never change. So if you master these fundamentals and understand these basic networking concepts, you will understand any networked system, and you'll be able to troubleshoot and optimize applications at any scale. Now, if this was helpful, then share it with one friend or colleague that you know will benefit from this video. Thank you for watching, and I'll see you in the next video.
