
NGINX is Dead? // Angie Web Server Migration Guide

By Christian Lempa

Summary

Topics Covered

  • NGINX Creator Quits Over Loss of Open Source Values
  • Angie Offers NGINX Features F5 Put Behind Paywall
  • ACME Integration Built Into Core Library
  • I Still Prefer Traefik Over NGINX/Angie
  • Best Time to Migrate from NGINX to Angie

Full Transcript

What is actually going on with NGINX?

Over the last few years, there's been a lot of bad news around it. The first one was that the original creator of NGINX left F5, the company behind it, back in 2022. Then in 2024, one of the last core developers quit and literally forked the entire project because, and I'm quoting here, he no longer saw NGINX as "a free and open source project developed and maintained for the public good."

Then last year, IngressNightmare hit: five critical vulnerabilities found, including unauthenticated remote code execution, and it exposed the fact that the most popular ingress controller, Ingress NGINX, was basically maintained by just one or two people in their spare time. And now, as of March 2026, Ingress NGINX is officially end of life. What a surprise.

Now honestly, and I want to make this clear, it probably sounds a little more dramatic than it actually is. This end of life for Ingress NGINX doesn't mean that the free version of NGINX itself is disappearing. It still works. It's still used nearly everywhere. But all of this got me thinking: if we're still running NGINX today, maybe it is time to move on to something else.

Luckily, I found a project called Angie, built by former NGINX developers as a drop-in replacement that is compatible with all your existing NGINX configs, and it even ships with features the free NGINX server never had. That all sounded super interesting to me. So today I'm going to show you how to migrate from NGINX to Angie, and I'll walk you through some of the new features and how to configure them.

Don't worry, we'll cover exactly how to do this in a minute. I just quickly want to remind you about another very important topic when we're talking about authentication and security, and that is remote access, because most of my home lab services are behind my OPNsense firewall, secured in a local network. But when I need to access my network from outside, my first choice is Twingate, the sponsor of today's video.

Twingate is a secure, fast, and super simple solution. It's what we call ZTNA in tech, a zero trust network access platform that establishes secure connections between all of your devices. Every connection has to be verified and authorized, otherwise Twingate won't let the request go through. And the cool thing is that this technology even works through NAT devices and firewalls, without any port forwarding or firewall exceptions required. So you can just securely log into your devices no matter where you are or where you're connecting from.

If you would like to try it out, then check out my tutorials on how to install Twingate in your network and how to integrate it into Docker, Kubernetes, or your DevOps environments using Terraform. It is really amazing and, by the way, completely free with up to five users and connections to ten different remote networks. So start making your network more secure and accessible with Twingate. Of course, you will find a link to it in the description box down below.

All right, guys. So now let's get started with Angie. As I said before, it is a drop-in replacement for NGINX: same syntax, same module layout. All of your existing configs will work with no or just minimal changes. Of course, we will later go through what exactly these minimal changes are.

But it is a little more than just a simple drop-in replacement, because it includes all of the features of NGINX 1.29.3, plus a bunch of new features that the NGINX community had been asking about for years, and that F5 just never really wanted to implement, or wanted to put behind a paywall for their paying NGINX Plus customers. Angie comes with HTTP/3 support, metrics and statistics, a Prometheus statistics exporter, automation for containerized services (something like Traefik does with Docker labels), and automatic HTTPS configuration that supports obtaining TLS certificates with the ACME protocol. However, there are a few important limitations to that; of course, we will later go through them.

But as you can already see, this is a pretty promising project, created by Web Server LLC, a Russian IT company founded by former developers of NGINX. If you're interested in all the details behind that, you can read it on the official website.

Let's now cover some of the technical parts. So, how do you actually install this? Similar to NGINX, you can install it with binary packages; depending on which Linux distribution you are on, you can just install it through that distribution's package manager. You can also build it from source, or, as probably all of you (and also me) prefer, use one of the officially supported Docker images, so we can easily hook it into our Docker or Kubernetes container orchestration stack. There are also minimal Angie images around that really only include the Angie packages and are based on the Alpine Docker images.
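As a minimal sketch of the Docker route: the image path below is an assumption based on my reading of Angie's install documentation, so verify it against the official instructions before using it.

```yaml
# Minimal Angie test deployment (compose.yaml)
# NOTE: the image path is an assumption — check Angie's docs for the current registry
services:
  angie:
    image: docker.angie.software/angie:latest
    ports:
      - "8080:80"   # serve the default test page on localhost:8080
```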

Here you also just need to follow the instructions to deploy it or to spin up a simple test container. But I thought, let's just run through a few practical use cases together.

This is a very typical NGINX Docker Compose setup: a simple web server that's just serving a static website located in the data directory, just a very simple demo site so we know everything is up and running. Here you can see I'm using the nginx:1.25 container image and exposing port 80 on port 8085. I'm also exposing the Prometheus metrics port; we will use that later for the new Angie features. Also, just ignore all of the comments here, we will use them later. I'm mounting the default.conf configuration file to /etc/nginx/conf.d inside the container, and there's also a volume mount that puts the index.html into the /var/www directory of the container's file system. The config is a simple server listening on port 80 that just serves everything located in the root directory, where I'm placing the index.html file. So nothing really special here. Again, as Angie is a full drop-in replacement, the configuration could be a lot more complicated; as long as it worked in NGINX 1.25, it will work the same way in the latest version of Angie.

Okay, so how would you normally migrate? Of course, you should take a backup of your data directory and the configuration files, take the containers down, then go back to your Compose stack and simply replace the NGINX container image with the Angie container image. The only thing I need to change is the directory where we're putting the configuration file. This is also described in Angie's configuration files documentation: the main config is named angie.conf and lives in the /etc/angie directory, and to simplify configuration management it uses include directives that read additional configuration files from subdirectories. So what we need to change is the mount path: replace /etc/nginx/conf.d with /etc/angie/http.d. The configuration file itself stays completely the same. Now we are starting the Angie web server, version 1.11.3.
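To make the swap concrete, here is a hedged before/after sketch of the Compose service. File names and host paths follow the example above, and the Angie image path is an assumption to verify against the docs:

```yaml
# Before: NGINX
services:
  web:
    image: nginx:1.25
    ports:
      - "8085:80"
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
      - ./data:/var/www

# After: Angie — only the image and the config mount path change
services:
  web:
    image: docker.angie.software/angie:latest
    ports:
      - "8085:80"
    volumes:
      - ./default.conf:/etc/angie/http.d/default.conf
      - ./data:/var/www
```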

It's starting the worker process, and now when we try to open the website, you can see the website is already there, with all the existing configuration that you had. It's very simple. And yeah, if your goal was just to do a very simple migration from NGINX to Angie, you're already done; you can close the video now. But I think you're all going to be interested in some of the new features that the Angie developers promise on their website. Let's start by covering the ACME integration.

This feature is probably what I'm most excited about, because with NGINX you always had to load certificate files from your disk and then use other tools like certbot or a cron job to automatically obtain or renew these certificates and reload the NGINX configuration. Now, with Angie, ACME is built right into the core library, so there's nothing extra to install and no cron jobs to set up.

I'm going to show you a simple example using HTTP validation. When you want to obtain a TLS certificate from Let's Encrypt, completely for free, Angie will automatically generate a small request, and Let's Encrypt then needs to verify that you're really the owner of the domain you are requesting the certificate for. It therefore connects to port 80 of your web server and sends a little confirmation request. Angie automatically intercepts these Let's Encrypt challenge requests on port 80; there's no extra configuration needed.

There are just two requirements for the HTTP challenge to work. First, your server needs a public domain name pointing to it. Second, it needs to listen on ports 80 and 443, and they need to be open to the public internet, because, as I just explained, Let's Encrypt will try to connect back to your public IP address, and these ports need to be terminated on your Angie web server, otherwise it won't work. So if Angie is running behind your firewall, or it's purely an internal service for your home lab, you would have to use a different way, namely the DNS challenge.

But more about that later. Let's first start with another example project, on one of my remote servers. Here we also have a Docker Compose stack running the latest versions, and I'm exposing ports 80 and 443 on the public IP address of this web server. I'm basically just serving the same website, with a slightly different but similar configuration: listening on port 80, with the server name set to a demo subdomain on my cloud service. That domain points to the public IP address where the Angie web server is listening. When we make a simple test request, you can see that we get a warning, because we're not using HTTPS right now and Chrome doesn't like plain HTTP over the public internet. So let's take a look at how to actually do this.
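The starting point for this demo looks roughly like the following server block; the domain is a placeholder for my real test subdomain:

```nginx
server {
    listen 80;
    server_name demo.example.com;   # placeholder — points to the server's public IP

    location / {
        root /var/www;
        index index.html;
    }
}
```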

First of all, we need to uncomment these lines here. That enables the ACME client in the Angie configuration, set to Let's Encrypt. I'm using the real production URL; there's also a second one for staging, for when you just want to test how the certificate renewal works and whether everything is fine, since the production endpoint has more conservative rate limits. Usually the process is: you test with the staging URL, and if everything works in your setup, you move to the production URL and get a real certificate that is trusted by your browser. You also need to set your email address, of course.

Then we need to add an HTTPS configuration. That's done by listening on port 443 with SSL enabled, so I will just uncomment the secondary server configuration here: the same server name, but using the SSL certificate obtained via ACME from Let's Encrypt.

By the way, if you want all of these example configurations, I will upload them to my website and put a link in the description of this video, so you can just copy and paste the same configuration. I'm also thinking about adding some of these templates to my boilerplates library. If you don't know about my boilerplates project, check it out on GitHub; the link is also in the description box down below. There I might add an Angie Docker Compose template with some of the configurations you could enable or not. But I'm not going to talk too much about that project right now, because there will be some huge updates in the next weeks. I will let you know on my Discord server and also on YouTube, so keep an eye on the boilerplates. For now, just use my website to get these configuration settings.

Then you also need to make sure that port 443 is exposed. Apart from that, everything should work. Oh, there's one more thing I want to enable, and that is a redirect from HTTP to HTTPS. If for whatever reason some web client still tries to access the insecure address of my web server, then of course I want it to be redirected to the actual, secure HTTPS connection.

All right, that's basically everything. I just need to connect to my server and... let's see, I think I didn't start the project yet. Okay, let's run this. Ah, I probably forgot something. Yeah, it's the first line here: we need to add a resolver for the ACME certificates. That's quite important; of course, I will also add that to the documentation. Then everything should work.

You can see that Angie automatically starts connecting to the Let's Encrypt server. Don't worry about some of the errors here; these warnings only appear because it first tried an IPv6 connection, which didn't work, and then fell back to IPv4. Here at the end is the important line: the certificate was renewed, and the next renewal date is May 24. So everything is working; we now have a real, trusted TLS certificate. And here, that's the important part: the connection is secure (my browser shows it in German).
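Putting the whole walkthrough together, the configuration looks roughly like this. The directive and variable names follow my reading of Angie's ACME documentation, the domain and email are placeholders, and Angie is assumed to intercept the HTTP-01 challenge before the redirect applies — treat this as a sketch to check against the docs, not a definitive config:

```nginx
resolver 1.1.1.1;   # required so the ACME client can resolve the directory URL

# Production Let's Encrypt endpoint; for testing, swap in the staging URL
# https://acme-staging-v02.api.letsencrypt.org/directory to avoid rate limits
acme_client letsencrypt https://acme-v02.api.letsencrypt.org/directory
            email=admin@example.com;              # placeholder contact address

server {
    listen 80;
    server_name demo.example.com;                 # placeholder domain
    return 301 https://$host$request_uri;         # redirect plain HTTP to HTTPS
}

server {
    listen 443 ssl;
    server_name demo.example.com;                 # placeholder domain
    acme letsencrypt;                             # attach this server to the ACME client
    ssl_certificate     $acme_cert_letsencrypt;   # variables populated by Angie
    ssl_certificate_key $acme_cert_key_letsencrypt;

    location / {
        root /var/www;
        index index.html;
    }
}
```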

And here you can also see the certificate dates; this certificate was really just obtained by Angie's ACME integration. Pretty cool. So with these few simple lines, you get automatic TLS certificate retrieval and renewal. I really like this a lot.

The only downside is that currently only the HTTP challenge works fully automatically. The DNS challenge is also kind of supported; it's described under configuring ACME DNS validation. The important part, and this is really a bit annoying: to process a certificate request, the ACME server performs a special DNS query on the ACME challenge subdomain of the domain being verified. That means Let's Encrypt will do a DNS request, Angie's web server needs to receive that DNS request, and that in turn means you need to open port 53. So it's basically the same situation as with the HTTP challenge: your Angie web server needs to be somehow accessible from the public internet, and therefore it just won't work if you're trying to get a certificate for an internal domain, or if the server is behind a firewall or anything like that. If you compare this to other web servers and reverse proxies like Traefik or Caddy, where you just pass in a Cloudflare API token and you're done, this is unfortunately not yet integrated. However, I think it shouldn't be that complicated to add: there are already tools around like acme.sh or Lego that support hundreds of DNS providers out of the box. So either use the hook validation if you really need the DNS challenge, or somehow make your Angie web server accessible from the public internet.

Let me quickly show you the Prometheus metrics that are already built into Angie. This is super cool; they also have a Grafana dashboard for it, by the way, which is also super interesting. What you actually need to do to enable this: go to the Docker Compose file and make sure you expose the port. Then, in the configuration file, you enable another server on whatever port you want, set the location to /metrics, and enable the prometheus directive there. You also need to enable the include statement for the prometheus_all.conf configuration.
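Assembled, the metrics setup from this section looks roughly like this. The port is arbitrary, and the directive names follow my reading of Angie's Prometheus documentation:

```nginx
# http context
include prometheus_all.conf;   # ships with Angie and defines the "all" metrics template

server {
    listen 8765;               # arbitrary metrics port — expose it in the compose file too
    location /metrics {
        prometheus all;        # render the built-in "all" template for scraping
    }
}
```

A request like `curl http://localhost:8765/metrics` should then print the metrics in Prometheus exposition format.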

So when you then start up your container and make a simple request to localhost on that port at /metrics, you should automatically see the metrics being exposed. You can then create a Prometheus scrape job, or a Grafana Alloy scrape job that pushes this to your Prometheus instance, and get all of the important metrics about accepted connections, dropped connections, active connections, and so on. And of course, you can customize many, many more settings. I'm not going to go through all of the configuration settings here, but if that's interesting for you, you can take a look at the documentation. There should also be a Grafana dashboard; here it's called a Prometheus dashboard, which actually is a Grafana dashboard, but I'm not going to be pedantic about the documentation. With it you can visualize the HTTP requests and connections and get more detailed metrics and statistics about your web server, right out of the box, without adding any third-party modules, just with these few configuration lines, and then you're done.

What is also certainly interesting for all the Docker fanboys out there, such as me and hopefully many of you as well: there's also a Docker upstream module, documented here, that provides dynamic configuration of proxied servers based on container labels. Some of you who are familiar with Traefik might already think, oh, this is something similar. There are one or two extra settings required in the configuration file. For example, you need to enable the Docker endpoint, the socket that will be mounted into the container's file system at unix:/var/run/docker.sock. Then you need to define an upstream service; this upstream, for example, will be the whoami simple test container. I'm using a different location here: if I want everything under /whoami to be forwarded to this container, I just add a location with a proxy_pass to http://whoami. Make sure that this container is running in the same Docker network as the Angie web server. You also need to mount the Docker socket into the Angie container, because Angie will listen for events from containers that start with the matching labels and port; this way it knows where to forward the request to.
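As a sketch of what this section describes: the label syntax and directive names below follow my reading of Angie's Docker module documentation, and `whoami` is just the example container, so verify the details against the docs before relying on them.

```nginx
# http context of the Angie container; /var/run/docker.sock must be mounted inside
docker_endpoint unix:/var/run/docker.sock;

upstream whoami {
    zone whoami 1m;            # shared-memory zone so peers can be added at runtime
}

server {
    listen 80;
    location /whoami/ {
        proxy_pass http://whoami/;   # peers arrive dynamically from container labels
    }
}
```

The proxied container then only needs labels, something like:

```yaml
# Same Docker network as Angie; discovered via labels (names are assumptions)
services:
  whoami:
    image: traefik/whoami
    labels:
      - "angie.network=web"                     # network Angie should route over
      - "angie.http.upstreams.whoami.port=80"   # add this container to upstream "whoami"
```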

If we add a simple container that has the upstream whoami defined on port 80, Angie looks in its configuration: is there an upstream service with that name defined? And if a server is using that upstream, it automatically forwards incoming requests to that container. This way you can easily expose any other container that runs a web service, using automatically obtained and trusted TLS certificates. For example, when I start up the container, you can already see in the logs that a Docker peer was added to the HTTP upstream whoami; that container was automatically discovered. And if we open a new window on that port, this is my website, but when I open /whoami, the request is forwarded to the whoami container running on the same Docker network, automatically proxied by the Angie web server. That of course works with as many other containers as you want.

And just as flexible as Angie's configuration is, you can add as many location routes as you want, or add more servers listening on different domains and server names, forwarding to different applications, and so on, similar to how you do it with Traefik. Maybe a little more complicated, which is why I still prefer Traefik most of the time, but it's great that we have a similar kind of capability in Angie as well.

One thing I also want to say about reverse proxying: we haven't yet covered Ingress NGINX, the ingress controller running in Kubernetes that is now end of life. If you're still using that, you need a real migration plan. I'm not going to do a full Kubernetes tutorial here, because I think that would deserve its own video if you're interested. But my opinion in Docker and Kubernetes is, and has always been: although Angie is great, I would still prefer Traefik. It is also a real alternative to Ingress NGINX, because it is a well-known and often used reverse proxy in Docker and Kubernetes environments with a strong community. I also have dedicated tutorials on Traefik as a reverse proxy, so if you prefer that, you will find links in the description box down below.

Nevertheless, if you are an Ingress NGINX fanboy and you don't want to migrate to a different application, there is also something from Angie called ANIC. This is kind of the Ingress NGINX equivalent: a drop-in replacement for that end-of-life ingress controller in Kubernetes. You can see it almost has feature parity with that project, but not entirely, and here you can also see a comparison matrix against Traefik, HAProxy, and so on. If you're interested in a full deep-dive tutorial on this, leave me a comment. I might do one, but it's a very niche topic, so I don't really know. Maybe in upcoming videos about Kubernetes networking we might take another look at it, for example; something like that might be a little more interesting.

But yeah, these are my final thoughts on this topic. I think there's no better time than now to migrate to something else, and what could be better than Angie, which is literally kind of the same project? It has all of the features that NGINX had, it works with all your existing configurations, and it adds more capabilities on top. Some of them might be more useful than others, as we've seen, but I think it is really great that we have this project, created by the former developers of NGINX. Of course, as I said before, I will link everything in the description of this video. Thank you so much for watching. If you found this useful, give it a like, subscribe to the channel if you want to see more, and of course, I'm going to catch you in the next one. Take care. Bye-bye.
