added nginx

parent af9a204b30
commit 7b5265805c
@@ -16,7 +16,7 @@ This repository contains guides and notes written in markdown.
 - [x] SSH Server
 - [x] Postgresql
 - [ ] Gitea
-- [ ] Nginx
+- [x] Nginx
 - [ ] Solidworks
 - [ ] Docker
86  nginx/README.md  Normal file
@@ -0,0 +1,86 @@
# Nginx Server

Walkthrough of installing the Nginx web server.

## Table of Contents

- [Step One - Installing Nginx](#step-one---installing-nginx)
- [Step Two - Configuring Our Server](#step-2---configuring-our-server)
- [Step Three - Troubleshooting Your Server](#step-3---troubleshooting-your-server)
- [Step Four - OPTIONAL - Setting up HTTPS with Certbot](#step-4---setting-up-https-with-certbot)
- [Resources](#resources)

### Step One - Installing Nginx

First we need to install Nginx.<br>
`sudo apt update`<br>
`sudo apt install nginx`<br>

If you have UFW enabled, you should allow HTTP and HTTPS traffic with:<br>
`sudo ufw allow 'Nginx Full'`

Nginx should now be fully operational. To check:<br>
`sudo systemctl enable nginx` - Sets Nginx to start on reboot.<br>
`sudo systemctl start nginx`<br>
`sudo systemctl status nginx` - Gives you the status of the server.<br>

Go to your server's IP address to see the Nginx default page. Next, we'll configure our server.

### Step 2 - Configuring Our Server

Now that Nginx is installed and running, we should configure our server. Here are the files we normally edit.<br>
`/var/www/html`: Typically you'll remove the default and create a folder like `/var/www/mysite.com`<br>
`/etc/nginx`: The Nginx configuration directory. All of the Nginx configuration files reside here.<br>
`/etc/nginx/nginx.conf`: The main Nginx configuration file. This can be modified to make changes to the Nginx global configuration.<br>
`/etc/nginx/snippets`: This directory contains configuration fragments that can be included elsewhere in the Nginx configuration. Potentially repeatable configuration segments are good candidates for refactoring into snippets.<br>
`/etc/nginx/sites-available/mysite.com`: The directory where per-site server blocks can be stored. Nginx will not use the configuration files found in this directory unless they are linked to the **sites-enabled** directory. Typically, all server block configuration is done in this directory, and then enabled by linking to the other directory.<br>
`/etc/nginx/sites-enabled/mysite.com`: The directory where enabled per-site server blocks are stored. Typically, these are created by linking to configuration files found in the **sites-available** directory.<br>

You can link those by:<br>
`sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/`<br>
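
Before linking, the site file itself needs a server block. A minimal sketch for a hypothetical `/etc/nginx/sites-available/mysite.com`, following the Digital Ocean server block guide linked below:

```
server {
    listen 80;
    listen [::]:80;

    # serve files for this site from its own web root
    root /var/www/mysite.com/html;
    index index.html index.htm index.nginx-debian.html;

    # respond to requests for the bare domain and the www alias
    server_name mysite.com www.mysite.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
```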

Here are some samples of files you may encounter, and settings you may set.<br>
[/etc/nginx/nginx.conf](./defaults/nginx.conf)<br>
[/etc/nginx/sites-available](./defaults/nginx.conf)<br>
[Digital Ocean - Server Blocks](./resources/DigitalOcean_ServerBlocks.md)<br>
[Rate Limiting With Nginx](./resources/Rate_Limiting_With_Nginx.md)<br>

### Step 3 - Troubleshooting Your Server

`/var/log/nginx/access.log`: Every request to your web server is recorded in this log file unless Nginx is configured to do otherwise.<br>
`/var/log/nginx/error.log`: Any Nginx errors will be recorded in this log.<br>
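
A few commands that help when something isn't working (a sketch; the log path is the default one above):

```
sudo nginx -t                          # check configuration syntax
sudo systemctl status nginx            # confirm the service is running
sudo tail -f /var/log/nginx/error.log  # watch the error log while reproducing the problem
```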

### Step 4 - Setting up HTTPS with Certbot

Now with Nginx configured and running, we should set up HTTPS with Certbot.<br>
`sudo apt update`<br>
`sudo apt install snapd`<br>
`sudo snap install core`<br>
`sudo snap refresh core`<br>
`sudo snap install --classic certbot`<br>
`sudo ln -s /snap/bin/certbot /usr/bin/certbot`<br>

`sudo certbot --nginx`<br>
Answer the prompts about your domain name, email address, etc.<br>

`sudo nginx -t`<br>
`sudo systemctl reload nginx`<br>

Now you should have a fully functional HTTPS server.
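
Certbot also schedules automatic renewal; you can confirm it works with a dry run (covered in the Let's Encrypt resource below):

```
sudo certbot renew --dry-run
```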

### Resources

I found these resources helpful.

[Digital Ocean - Nginx](./resources/DigitalOcean_Nginx.md)<br>
[Digital Ocean - Let's Encrypt](./resources/DigitalOcean_LetsEncrypt.md)<br>
[Digital Ocean - Server Blocks](./resources/DigitalOcean_ServerBlocks.md)<br>
[Rate Limiting With Nginx](./resources/Rate_Limiting_With_Nginx.md)<br>
[![YouTube](https://i.ytimg.com/vi/HaY8QB5kkGw/hqdefault.jpg)](https://youtu.be/HaY8QB5kkGw?si=9k44i9hon35KsaYp)<br>
[![YouTube](https://i.ytimg.com/vi/-lrSPJTeGhQ/hqdefault.jpg)](https://www.youtube.com/watch?v=-lrSPJTeGhQ)
140  nginx/resources/DigitalOcean_LetsEncrypt.md  Normal file
@@ -0,0 +1,140 @@
|
||||
### [Introduction](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#introduction)
|
||||
|
||||
Let’s Encrypt is a Certificate Authority (CA) that provides an accessible way to obtain and install free [TLS/SSL certificates](https://www.digitalocean.com/community/tutorials/openssl-essentials-working-with-ssl-certificates-private-keys-and-csrs), thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.
|
||||
|
||||
In this tutorial, you will use Certbot to obtain a free SSL certificate for Nginx on Ubuntu 22.04 and set up your certificate to renew automatically.
|
||||
|
||||
This tutorial will use a separate Nginx server configuration file instead of the default file. [We recommend](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-22-04#step-5-%E2%80%93-setting-up-server-blocks-(recommended)) creating new Nginx server block files for each domain because it helps to avoid common mistakes and maintains the default files as a fallback configuration.
|
||||
|
||||
## [Prerequisites](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#prerequisites)
|
||||
|
||||
To follow this tutorial, you will need:
|
||||
|
||||
- One Ubuntu 22.04 server set up by following this [initial server setup for Ubuntu 22.04](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-22-04) tutorial, including a sudo-enabled non-**root** user and a firewall.
|
||||
|
||||
- A registered domain name. This tutorial will use `example.com` throughout. You can purchase a domain name from [Namecheap](https://namecheap.com/), get one for free with [Freenom](https://www.freenom.com/), or use the domain registrar of your choice.
|
||||
|
||||
- Both of the following DNS records set up for your server. If you are using DigitalOcean, please see our [DNS documentation](https://www.digitalocean.com/docs/networking/dns/) for details on how to add them.
|
||||
|
||||
- An A record with `example.com` pointing to your server’s public IP address.
|
||||
- An A record with `www.example.com` pointing to your server’s public IP address.
|
||||
- Nginx installed by following [How To Install Nginx on Ubuntu 22.04](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-22-04). Be sure that you have a [server block](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-22-04#step-5-%E2%80%93-setting-up-server-blocks-(recommended)) for your domain. This tutorial will use `/etc/nginx/sites-available/example.com` as an example.
|
||||
|
||||
|
||||
## [Step 1 — Installing Certbot](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#step-1-installing-certbot)
|
||||
|
||||
Certbot recommends using their _snap_ package for installation. Snap packages work on nearly all Linux distributions, but they require that you’ve installed snapd first in order to manage snap packages. Ubuntu 22.04 comes with support for snaps out of the box, so you can start by making sure your snapd core is up to date:
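
The refresh commands for this step are presumably the same ones listed in the README above:

```
sudo snap install core
sudo snap refresh core
```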
|
||||
|
||||
If you’re working on a server that previously had an older version of certbot installed, you should remove it before going any further:
|
||||
|
||||
After that, you can install the `certbot` package:
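
As in the README above, the snap install is presumably:

```
sudo snap install --classic certbot
```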
|
||||
|
||||
Finally, you can link the `certbot` command from the snap install directory to your path, so you’ll be able to run it by just typing `certbot`. This isn’t necessary with all packages, but snaps tend to be less intrusive by default, so they don’t conflict with any other system packages by accident:
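
Again matching the README above, the link command is presumably:

```
sudo ln -s /snap/bin/certbot /usr/bin/certbot
```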
|
||||
|
||||
Now that we have Certbot installed, let’s run it to get our certificate.
|
||||
|
||||
## [Step 2 — Confirming Nginx’s Configuration](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#step-2-confirming-nginx-s-configuration)
|
||||
|
||||
Certbot needs to be able to find the correct `server` block in your Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a `server_name` directive that matches the domain you request a certificate for.
|
||||
|
||||
If you followed the [server block set up step in the Nginx installation tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-22-04#step-5-%E2%80%93-setting-up-server-blocks-(recommended)), you should have a server block for your domain at `/etc/nginx/sites-available/example.com` with the `server_name` directive already set appropriately.
|
||||
|
||||
To check, open the configuration file for your domain using `nano` or your favorite text editor:
|
||||
|
||||
Find the existing `server_name` line. It should look like this:
|
||||
|
||||
/etc/nginx/sites-available/example.com
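
Assuming the `example.com` domain used throughout this tutorial, the line should read roughly:

```
server_name example.com www.example.com;
```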
|
||||
|
||||
If it does, exit your editor and move on to the next step.
|
||||
|
||||
If it doesn’t, update it to match. Then save the file, quit your editor, and verify the syntax of your configuration edits:
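
The syntax check is presumably the usual one:

```
sudo nginx -t
```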
|
||||
|
||||
If you get an error, reopen the server block file and check for any typos or missing characters. Once your configuration file’s syntax is correct, reload Nginx to load the new configuration:
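
Presumably with:

```
sudo systemctl reload nginx
```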
|
||||
|
||||
Certbot can now find the correct `server` block and update it automatically.
|
||||
|
||||
Next, let’s update the firewall to allow HTTPS traffic.
|
||||
|
||||
## [Step 3 — Allowing HTTPS Through the Firewall](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#step-3-allowing-https-through-the-firewall)
|
||||
|
||||
If you have the `ufw` firewall enabled, as recommended by the prerequisite guides, you’ll need to adjust the settings to allow for HTTPS traffic. Luckily, Nginx registers a few profiles with `ufw` upon installation.
|
||||
|
||||
You can see the current setting by typing:
You can see the current setting by typing `sudo ufw status`:
|
||||
|
||||
It will probably look like this, meaning that only HTTP traffic is allowed to the web server:
|
||||
|
||||
```
|
||||
Output
Status: active
|
||||
|
||||
To Action From
|
||||
-- ------ ----
|
||||
OpenSSH ALLOW Anywhere
|
||||
Nginx HTTP ALLOW Anywhere
|
||||
OpenSSH (v6) ALLOW Anywhere (v6)
|
||||
Nginx HTTP (v6) ALLOW Anywhere (v6)
|
||||
```
|
||||
|
||||
To additionally let in HTTPS traffic, allow the Nginx Full profile and delete the redundant Nginx HTTP profile allowance:
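
Most likely the following, using `ufw`'s standard `delete allow` form for the second command:

```
sudo ufw allow 'Nginx Full'
sudo ufw delete allow 'Nginx HTTP'
```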
|
||||
|
||||
Your status should now look like this:
|
||||
|
||||
```
|
||||
Output
Status: active
|
||||
|
||||
To Action From
|
||||
-- ------ ----
|
||||
OpenSSH ALLOW Anywhere
|
||||
Nginx Full ALLOW Anywhere
|
||||
OpenSSH (v6) ALLOW Anywhere (v6)
|
||||
Nginx Full (v6) ALLOW Anywhere (v6)
|
||||
```
|
||||
|
||||
Next, let’s run Certbot and fetch our certificates.
|
||||
|
||||
## [Step 4 — Obtaining an SSL Certificate](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#step-4-obtaining-an-ssl-certificate)
|
||||
|
||||
Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary. To use this plugin, type the following:
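
Given the description in the next sentence, the command is presumably of the form:

```
sudo certbot --nginx -d example.com -d www.example.com
```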
|
||||
|
||||
This runs `certbot` with the `--nginx` plugin, using `-d` to specify the domain names we’d like the certificate to be valid for.
|
||||
|
||||
When running the command, you will be prompted to enter an email address and agree to the terms of service. After doing so, you should see a message telling you the process was successful and where your certificates are stored:
|
||||
|
||||
```
|
||||
Output
IMPORTANT NOTES:
|
||||
Successfully received certificate.
|
||||
Certificate is saved at: /etc/letsencrypt/live/your_domain/fullchain.pem
Key is saved at: /etc/letsencrypt/live/your_domain/privkey.pem
|
||||
This certificate expires on 2022-06-01.
|
||||
These files will be updated when the certificate renews.
|
||||
Certbot has set up a scheduled task to automatically renew this certificate in the background.
|
||||
|
||||
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|
||||
If you like Certbot, please consider supporting our work by:
|
||||
* Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
|
||||
* Donating to EFF: https://eff.org/donate-le
|
||||
```
|
||||
|
||||
Your certificates are downloaded, installed, and loaded, and your Nginx configuration will now automatically redirect all web requests to `https://`. Try reloading your website and notice your browser’s security indicator. It should indicate that the site is properly secured, usually with a lock icon. If you test your server using the [SSL Labs Server Test](https://www.ssllabs.com/ssltest/), it will get an **A** grade.
|
||||
|
||||
Let’s finish by testing the renewal process.
|
||||
|
||||
## [Step 5 — Verifying Certbot Auto-Renewal](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#step-5-verifying-certbot-auto-renewal)
|
||||
|
||||
Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The `certbot` package we installed takes care of this for us by adding a systemd timer that will run twice a day and automatically renew any certificate that’s within thirty days of expiration.
|
||||
|
||||
You can query the status of the timer with `systemctl`:
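
Judging from the output below, the unit to query is `snap.certbot.renew.service`:

```
sudo systemctl status snap.certbot.renew.service
```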
|
||||
|
||||
```
|
||||
Output
○ snap.certbot.renew.service - Service for snap application certbot.renew
|
||||
Loaded: loaded (/etc/systemd/system/snap.certbot.renew.service; static)
|
||||
Active: inactive (dead)
|
||||
TriggeredBy: ● snap.certbot.renew.timer
|
||||
```
|
||||
|
||||
To test the renewal process, you can do a dry run with `certbot`:
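
Presumably the standard dry-run invocation:

```
sudo certbot renew --dry-run
```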
|
||||
|
||||
If you see no errors, you’re all set. When necessary, Certbot will renew your certificates and reload Nginx to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.
|
||||
|
||||
## [Conclusion](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04#conclusion)
|
||||
|
||||
In this tutorial, you installed the Let’s Encrypt client `certbot`, downloaded SSL certificates for your domain, configured Nginx to use these certificates, and set up automatic certificate renewal. If you have further questions about using Certbot, [the official documentation](https://certbot.eff.org/docs/) is a good place to start.
|
211  nginx/resources/DigitalOcean_Nginx.md  Normal file
@@ -0,0 +1,211 @@
|
||||
### [Introduction](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#introduction)
|
||||
|
||||
[Nginx](https://www.nginx.com/) is one of the most popular web servers in the world and is responsible for hosting some of the largest and highest-traffic sites on the internet. It is a lightweight choice that can be used as either a web server or reverse proxy.
|
||||
|
||||
In this guide, we’ll discuss how to install Nginx on your Ubuntu 20.04 server, adjust the firewall, manage the Nginx process, and set up server blocks for hosting more than one domain from a single server.
|
||||
|
||||
Simplify deploying applications with [DigitalOcean App Platform](https://www.digitalocean.com/products/app-platform). Deploy directly from GitHub in minutes.
|
||||
|
||||
## [Prerequisites](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#prerequisites)
|
||||
|
||||
Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server. You can learn how to configure a regular user account by following our [Initial server setup guide for Ubuntu 20.04](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-20-04).
|
||||
|
||||
You will also optionally want to have registered a domain name before completing the last steps of this tutorial. To learn more about setting up a domain name with DigitalOcean, please refer to our [Introduction to DigitalOcean DNS](https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-dns).
|
||||
|
||||
When you have an account available, log in as your non-root user to begin.
|
||||
|
||||
## [Step 1 – Installing Nginx](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#step-1-installing-nginx)
|
||||
|
||||
Because Nginx is available in Ubuntu’s default repositories, it is possible to install it from these repositories using the `apt` packaging system.
|
||||
|
||||
Since this is our first interaction with the `apt` packaging system in this session, we will update our local package index so that we have access to the most recent package listings. Afterwards, we can install `nginx`:
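
Presumably the same two commands listed in the README above:

```
sudo apt update
sudo apt install nginx
```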
|
||||
|
||||
After accepting the procedure, `apt` will install Nginx and any required dependencies to your server.
|
||||
|
||||
## [Step 2 – Adjusting the Firewall](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#step-2-adjusting-the-firewall)
|
||||
|
||||
Before testing Nginx, the firewall software needs to be adjusted to allow access to the service. Nginx registers itself as a service with `ufw` upon installation, making it straightforward to allow Nginx access.
|
||||
|
||||
List the application configurations that `ufw` knows how to work with by typing:
|
||||
|
||||
You should get a listing of the application profiles:
|
||||
|
||||
```
|
||||
Output
Available applications:
|
||||
Nginx Full
|
||||
Nginx HTTP
|
||||
Nginx HTTPS
|
||||
OpenSSH
|
||||
```
|
||||
|
||||
As demonstrated by the output, there are three profiles available for Nginx:
|
||||
|
||||
- **Nginx Full**: This profile opens both port 80 (normal, unencrypted web traffic) and port 443 (TLS/SSL encrypted traffic)
|
||||
- **Nginx HTTP**: This profile opens only port 80 (normal, unencrypted web traffic)
|
||||
- **Nginx HTTPS**: This profile opens only port 443 (TLS/SSL encrypted traffic)
|
||||
|
||||
It is recommended that you enable the most restrictive profile that will still allow the traffic you’ve configured. Right now, we will only need to allow traffic on port 80.
|
||||
|
||||
You can enable this by typing:
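
Most likely by allowing the HTTP-only profile described above:

```
sudo ufw allow 'Nginx HTTP'
```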
|
||||
|
||||
You can verify the change by typing:
|
||||
|
||||
The output will indicate that HTTP traffic is allowed:
|
||||
|
||||
```
|
||||
Output
Status: active
|
||||
|
||||
To Action From
|
||||
-- ------ ----
|
||||
OpenSSH ALLOW Anywhere
|
||||
Nginx HTTP ALLOW Anywhere
|
||||
OpenSSH (v6) ALLOW Anywhere (v6)
|
||||
Nginx HTTP (v6) ALLOW Anywhere (v6)
|
||||
```
|
||||
|
||||
## [Step 3 – Checking your Web Server](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#step-3-checking-your-web-server)
|
||||
|
||||
At the end of the installation process, Ubuntu 20.04 starts Nginx. The web server should already be up and running.
|
||||
|
||||
We can check with the `systemd` init system to make sure the service is running by typing:
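
Presumably the status command already used in the README above:

```
systemctl status nginx
```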
|
||||
|
||||
```
|
||||
Output
● nginx.service - A high performance web server and a reverse proxy server
|
||||
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
|
||||
Active: active (running) since Fri 2020-04-20 16:08:19 UTC; 3 days ago
|
||||
Docs: man:nginx(8)
|
||||
Main PID: 2369 (nginx)
|
||||
Tasks: 2 (limit: 1153)
|
||||
Memory: 3.5M
|
||||
CGroup: /system.slice/nginx.service
|
||||
├─2369 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
|
||||
└─2380 nginx: worker process
|
||||
```
|
||||
|
||||
As confirmed by this output, the service has started successfully. However, the best way to test this is to actually request a page from Nginx.
|
||||
|
||||
You can access the default Nginx landing page to confirm that the software is running properly by navigating to your server’s IP address. If you do not know your server’s IP address, you can find it by using the [icanhazip.com](http://icanhazip.com/) tool, which will give you your public IP address as received from another location on the internet:
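
For example, from the server itself (a sketch; any similar lookup service works):

```
curl -4 icanhazip.com
```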
|
||||
|
||||
When you have your server’s IP address, enter it into your browser’s address bar:
|
||||
|
||||
```
|
||||
http://your_server_ip
|
||||
```
|
||||
|
||||
You should receive the default Nginx landing page:
|
||||
|
||||
![Nginx default page](https://assets.digitalocean.com/articles/nginx_1604/default_page.png)
|
||||
|
||||
If you are on this page, your server is running correctly and is ready to be managed.
|
||||
|
||||
## [Step 4 – Managing the Nginx Process](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#step-4-managing-the-nginx-process)
|
||||
|
||||
Now that you have your web server up and running, let’s review some basic management commands.
|
||||
|
||||
To stop your web server, type:
|
||||
|
||||
To start the web server when it is stopped, type:
|
||||
|
||||
To stop and then start the service again, type:
|
||||
|
||||
If you are only making configuration changes, Nginx can often reload without dropping connections. To do this, type:
|
||||
|
||||
By default, Nginx is configured to start automatically when the server boots. If this is not what you want, you can disable this behavior by typing:
|
||||
|
||||
To re-enable the service to start up at boot, you can type:
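
Taken together, the management commands referenced in this step are presumably the standard `systemctl` set:

```
sudo systemctl stop nginx      # stop the web server
sudo systemctl start nginx     # start it when stopped
sudo systemctl restart nginx   # stop, then start again
sudo systemctl reload nginx    # re-read configuration without dropping connections
sudo systemctl disable nginx   # do not start at boot
sudo systemctl enable nginx    # start at boot (the default)
```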
|
||||
|
||||
You have now learned basic management commands and should be ready to configure the site to host more than one domain.
|
||||
|
||||
## [Step 5 – Setting Up Server Blocks (Recommended)](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#step-5-setting-up-server-blocks-recommended)
|
||||
|
||||
When using the Nginx web server, _server blocks_ (similar to virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called **your\_domain**, but you should **replace this with your own domain name**.
|
||||
|
||||
Nginx on Ubuntu 20.04 has one server block enabled by default that is configured to serve documents out of a directory at `/var/www/html`. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying `/var/www/html`, let’s create a directory structure within `/var/www` for our **your\_domain** site, leaving `/var/www/html` in place as the default directory to be served if a client request doesn’t match any other sites.
|
||||
|
||||
Create the directory for **your\_domain** as follows, using the `-p` flag to create any necessary parent directories:
|
||||
|
||||
Next, assign ownership of the directory with the `$USER` environment variable:
|
||||
|
||||
The permissions of your web roots should be correct if you haven’t modified your `umask` value, which sets default file permissions. To ensure that your permissions are correct and allow the owner to read, write, and execute the files while granting only read and execute permissions to groups and others, you can input the following command:
|
||||
|
||||
Next, create a sample `index.html` page using `nano` or your favorite editor:
|
||||
|
||||
Inside, add the following sample HTML:
|
||||
|
||||
/var/www/your\_domain/html/index.html
|
||||
|
||||
Save and close the file by pressing `Ctrl+X` to exit, then when prompted to save, `Y` and then `Enter`.
|
||||
|
||||
In order for Nginx to serve this content, it’s necessary to create a server block with the correct directives. Instead of modifying the default configuration file directly, let’s make a new one at `/etc/nginx/sites-available/your_domain`:
|
||||
|
||||
Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:
|
||||
|
||||
/etc/nginx/sites-available/your\_domain
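
The block itself is not shown here; based on the server block example later in this repository, it presumably looks like:

```
server {
        listen 80;
        listen [::]:80;

        root /var/www/your_domain/html;
        index index.html index.htm index.nginx-debian.html;

        server_name your_domain www.your_domain;

        location / {
                try_files $uri $uri/ =404;
        }
}
```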
|
||||
|
||||
Notice that we’ve updated the `root` configuration to our new directory, and the `server_name` to our domain name.
|
||||
|
||||
Next, let’s enable the file by creating a link from it to the `sites-enabled` directory, which Nginx reads from during startup:
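
Presumably, mirroring the link command in the README above:

```
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
```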
|
||||
|
||||
**Note:** Nginx uses a common practice called symbolic links, or symlinks, to track which of your server blocks are enabled. Creating a symlink is like creating a shortcut on disk, so that you could later delete the shortcut from the `sites-enabled` directory while keeping the server block in `sites-available` if you wanted to enable it.
|
||||
|
||||
Two server blocks are now enabled and configured to respond to requests based on their `listen` and `server_name` directives (you can read more about how Nginx processes these directives [here](https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms)):
|
||||
|
||||
- `your_domain`: Will respond to requests for `your_domain` and `www.your_domain`.
|
||||
- `default`: Will respond to any requests on port 80 that do not match the other two blocks.
|
||||
|
||||
To avoid a possible hash bucket memory problem that can arise from adding additional server names, it is necessary to adjust a single value in the `/etc/nginx/nginx.conf` file. Open the file:
|
||||
|
||||
Find the `server_names_hash_bucket_size` directive and remove the `#` symbol to uncomment the line. If you are using nano, you can quickly search for words in the file by pressing `CTRL` and `w`.
|
||||
|
||||
**Note:** Commenting out lines of code – usually by putting `#` at the start of a line – is another way of disabling them without needing to actually delete them. Many configuration files ship with multiple options commented out so that they can be enabled or disabled, by toggling them between active code and documentation.
|
||||
|
||||
/etc/nginx/nginx.conf
|
||||
|
||||
```
|
||||
...
|
||||
http {
|
||||
...
|
||||
server_names_hash_bucket_size 64;
|
||||
...
|
||||
}
|
||||
...
|
||||
```
|
||||
|
||||
Save and close the file when you are finished.
|
||||
|
||||
Next, test to make sure that there are no syntax errors in any of your Nginx files:
|
||||
|
||||
If there aren’t any problems, restart Nginx to enable your changes:
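
A sketch of this check-and-restart sequence, using the same commands as elsewhere in this repository:

```
sudo nginx -t                  # test configuration syntax
sudo systemctl restart nginx   # restart to pick up the changes
```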
|
||||
|
||||
Nginx should now be serving your domain name. You can test this by navigating to `http://your_domain`, where you should see something like this:
|
||||
|
||||
![Nginx first server block](https://assets.digitalocean.com/articles/how-to-install-nginx-u18.04/your-domain-server-block-nginx.PNG)
|
||||
|
||||
## [Step 6 – Getting Familiar with Important Nginx Files and Directories](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#step-6-getting-familiar-with-important-nginx-files-and-directories)
|
||||
|
||||
Now that you know how to manage the Nginx service itself, you should take a few minutes to familiarize yourself with a few important directories and files.
|
||||
|
||||
### [Content](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#content)
|
||||
|
||||
- `/var/www/html`: The actual web content, which by default only consists of the default Nginx page you saw earlier, is served out of the `/var/www/html` directory. This can be changed by altering Nginx configuration files.
|
||||
|
||||
### [Server Configuration](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#server-configuration)
|
||||
|
||||
- `/etc/nginx`: The Nginx configuration directory. All of the Nginx configuration files reside here.
|
||||
- `/etc/nginx/nginx.conf`: The main Nginx configuration file. This can be modified to make changes to the Nginx global configuration.
|
||||
- `/etc/nginx/sites-available/`: The directory where per-site server blocks can be stored. Nginx will not use the configuration files found in this directory unless they are linked to the `sites-enabled` directory. Typically, all server block configuration is done in this directory, and then enabled by linking to the other directory.
|
||||
- `/etc/nginx/sites-enabled/`: The directory where enabled per-site server blocks are stored. Typically, these are created by linking to configuration files found in the `sites-available` directory.
|
||||
- `/etc/nginx/snippets`: This directory contains configuration fragments that can be included elsewhere in the Nginx configuration. Potentially repeatable configuration segments are good candidates for refactoring into snippets.
|
||||
|
||||
### [Server Logs](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#server-logs)
|
||||
|
||||
- `/var/log/nginx/access.log`: Every request to your web server is recorded in this log file unless Nginx is configured to do otherwise.
|
||||
- `/var/log/nginx/error.log`: Any Nginx errors will be recorded in this log.
|
||||
|
||||
## [Conclusion](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04#conclusion)
|
||||
|
||||
Now that you have your web server installed, you have many options for the type of content to serve and the technologies you want to use to create a richer experience.
|
||||
|
||||
If you’d like to build out a more complete application stack, check out the article [How To Install Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 20.04](https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-ubuntu-20-04).
|
||||
|
||||
In order to set up HTTPS for your domain name with a free SSL certificate using _Let’s Encrypt_, you should move on to [How To Secure Nginx with Let’s Encrypt on Ubuntu 20.04](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04).
|
303  nginx/resources/DigitalOcean_ServerBlocks.md  Normal file
@@ -0,0 +1,303 @@
|
||||
### [Introduction](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#introduction)
|
||||
|
||||
When using the Nginx web server, **server blocks** (similar to virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain on a single server.
|
||||
|
||||
In this guide, we’ll discuss how to configure server blocks in Nginx on an Ubuntu 16.04 server.
|
||||
|
||||
Deploy your applications from GitHub using [DigitalOcean App Platform](https://www.digitalocean.com/products/app-platform). Let DigitalOcean focus on scaling your app.
|
||||
|
||||
## [Prerequisites](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#prerequisites)
|
||||
|
||||
We’re going to be using a non-root user with `sudo` privileges throughout this tutorial. If you do not have a user like this configured, you can create one by following our [Ubuntu 16.04 initial server setup](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-16-04) guide.
|
||||
|
||||
You will also need to have Nginx installed on your server. The following guides cover this procedure:
|
||||
|
||||
- [How To Install Nginx on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-16-04): Use this guide to set up Nginx on its own.
|
||||
- [How To Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-in-ubuntu-16-04): Use this guide if you will be using Nginx in conjunction with MySQL and PHP.
|
||||
|
||||
When you have fulfilled these requirements, you can continue on with this guide.
|
||||
|
||||
## [Example Configuration](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#example-configuration)
|
||||
|
||||
For demonstration purposes, we’re going to set up two domains with our Nginx server. The domain names we’ll use in this guide are **[example.com](http://example.com/)** and **[test.com](http://test.com/)**.
|
||||
|
||||
If you do not have two spare domain names to play with, use placeholder names for now and we’ll show you later how to configure your local computer to test your configuration.
|
||||
|
||||
## [Step 1 — Setting Up New Document Root Directories](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#step-1-setting-up-new-document-root-directories)
|
||||
|
||||
By default, Nginx on Ubuntu 16.04 has one server block enabled. It is configured to serve documents out of a directory at `/var/www/html`.
|
||||
|
||||
While this works well for a single site, we need additional directories if we’re going to serve multiple sites. We can consider the `/var/www/html` directory the default directory that will be served if the client request doesn’t match any of our other sites.
|
||||
|
||||
We will create a directory structure within `/var/www` for each of our sites. The actual web content will be placed in an `html` directory within these site-specific directories. This gives us some additional flexibility to create other directories associated with our sites as siblings to the `html` directory if necessary.
|
||||
|
||||
We need to create these directories for each of our sites. The `-p` flag tells `mkdir` to create any necessary parent directories along the way:
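
For the two example domains used in this guide, presumably:

```
sudo mkdir -p /var/www/example.com/html
sudo mkdir -p /var/www/test.com/html
```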
|
||||
|
||||
Now that we have our directories, we will reassign ownership of the web directories to our normal user account. This will let us write to them without `sudo`.
|
||||
|
||||
**Note:** Depending on your needs, you might need to adjust the permissions or ownership of the folders again to allow certain access to the `www-data` user. For instance, dynamic sites will often need this. The specific permissions and ownership requirements entirely depend on your configuration. Follow the recommendations for the specific technology you’re using.
|
||||
|
||||
We can use the `$USER` environmental variable to assign ownership to the account that we are currently signed in on (make sure you’re not logged in as **root**). This will allow us to easily create or edit the content in this directory:
|
||||
|
||||
The permissions of our web roots should be correct already if you have not modified your `umask` value, but we can make sure by typing:
|
||||
|
||||
Our directory structure is now configured and we can move on.
|
||||
|
||||
## [Step 2 — Creating Sample Pages for Each Site](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#step-2-creating-sample-pages-for-each-site)
|
||||
|
||||
Now that we have our directory structure set up, let’s create a default page for each of our sites so that we will have something to display.
|
||||
|
||||
Create an `index.html` file in your first domain:
|
||||
|
||||
Inside the file, we’ll create a really basic file that indicates what site we are currently accessing. It will look like this:
|
||||
|
||||
/var/www/example.com/html/index.html
|
||||
|
||||
```
|
||||
<html>
|
||||
<head>
|
||||
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com server block is working!</h1>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
Save and close the file when you are finished. To do this in `nano`, press `CTRL+o` to write the file out, then `CTRL+x` to exit.
|
||||
|
||||
Since the file for our second site is basically going to be the same, we can copy it over to our second document root like this:
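
Presumably with a straightforward copy between the two document roots:

```
cp /var/www/example.com/html/index.html /var/www/test.com/html/index.html
```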
|
||||
|
||||
Now, we can open the new file in our editor:
|
||||
|
||||
Modify it so that it refers to our second domain:
|
||||
|
||||
/var/www/test.com/html/index.html
|
||||
|
||||
```
|
||||
<html>
|
||||
<head>
|
||||
<title>Welcome to Test.com!</title>
</head>
<body>
<h1>Success! The test.com server block is working!</h1>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
Save and close this file when you are finished. We now have some pages to display to visitors of our two domains.
|
||||
|
||||
## [Step 3 — Creating Server Block Files for Each Domain](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#step-3-creating-server-block-files-for-each-domain)
|
||||
|
||||
Now that we have the content we wish to serve, we need to create the server blocks that will tell Nginx how to do this.
|
||||
|
||||
By default, Nginx contains one server block called `default` which we can use as a template for our own configurations. We will begin by designing our first domain’s server block, which we will then copy over for our second domain and make the necessary modifications.
|
||||
|
||||
### [Creating the First Server Block File](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#creating-the-first-server-block-file)
|
||||
|
||||
As mentioned above, we will create our first server block config file by copying over the default file:
|
||||
|
||||
Now, open the new file you created in your text editor with `sudo` privileges:
|
||||
|
||||
Ignoring the commented lines, the file will look similar to this:
|
||||
|
||||
/etc/nginx/sites-available/example.com
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80 default_server;
|
||||
listen [::]:80 default_server;
|
||||
|
||||
root /var/www/html;
|
||||
index index.html index.htm index.nginx-debian.html;
|
||||
|
||||
server_name _;
|
||||
|
||||
location / {
|
||||
try_files $uri $uri/ =404;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
First, we need to look at the listen directives. **Only one of our server blocks on the server can have the `default_server` option enabled.** This specifies which block should serve a request if the `server_name` requested does not match any of the available server blocks. This shouldn’t happen very frequently in real world scenarios since visitors will be accessing your site through your domain name.
|
||||
|
||||
You can choose to designate one of your sites as the “default” by including the `default_server` option in the `listen` directive, or you can leave the default server block enabled, which will serve the content of the `/var/www/html` directory if the requested host cannot be found.
|
||||
|
||||
In this guide, we’ll leave the default server block in place to serve non-matching requests, so we’ll remove the `default_server` from this and the next server block. You can choose to add the option to whichever of your server blocks makes sense to you.
|
||||
|
||||
/etc/nginx/sites-available/example.com
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
listen [::]:80;
|
||||
|
||||
. . .
|
||||
}
|
||||
```
|
||||
|
||||
**Note:** You can check that the `default_server` option is only enabled in a single active file by typing:
|
||||
|
||||
If matches are found uncommented in more than one file (shown in the leftmost column), Nginx will complain about an invalid configuration.
|
||||
|
||||
The next thing we’re going to have to adjust is the document root, specified by the `root` directive. Point it to the site’s document root that you created:
|
||||
|
||||
/etc/nginx/sites-available/example.com
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
listen [::]:80;
|
||||
|
||||
root /var/www/example.com/html;
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Next, we need to modify the `server_name` to match requests for our first domain. We can additionally add any aliases that we want to match. We will add a `www.example.com` alias to demonstrate.
|
||||
|
||||
When you are finished, your file will look something like this:
|
||||
|
||||
/etc/nginx/sites-available/example.com
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
listen [::]:80;
|
||||
|
||||
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;

server_name example.com www.example.com;
|
||||
|
||||
location / {
|
||||
try_files $uri $uri/ =404;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
That is all we need for a basic configuration. Save and close the file to exit.
|
||||
|
||||
### [Creating the Second Server Block File](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#creating-the-second-server-block-file)
|
||||
|
||||
Now that we have our initial server block configuration, we can use that as a basis for our second file. Copy it over to create a new file:
|
||||
|
||||
Open the new file with `sudo` privileges in your editor:
|
||||
|
||||
Again, make sure that you do not use the `default_server` option for the `listen` directive in this file if you’ve already used it elsewhere. Adjust the `root` directive to point to your second domain’s document root and adjust the `server_name` to match your second site’s domain name (make sure to include any aliases).
|
||||
|
||||
When you are finished, your file will likely look something like this:
|
||||
|
||||
/etc/nginx/sites-available/test.com
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
listen [::]:80;
|
||||
|
||||
root /var/www/test.com/html;
index index.html index.htm index.nginx-debian.html;

server_name test.com www.test.com;
|
||||
|
||||
location / {
|
||||
try_files $uri $uri/ =404;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
When you are finished, save and close the file.
|
||||
|
||||
## [Step 4 — Enabling your Server Blocks and Restart Nginx](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#step-4-enabling-your-server-blocks-and-restart-nginx)
|
||||
|
||||
Now that we have our server block files, we need to enable them. We can do this by creating symbolic links from these files to the `sites-enabled` directory, which Nginx reads from during startup.
|
||||
|
||||
We can create these links by typing:
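
Presumably, matching the link command shown in the README above:

```
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/test.com /etc/nginx/sites-enabled/
```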
|
||||
|
||||
These files are now linked into the enabled directory. We now have three server blocks enabled, which are configured to respond based on their `listen` directive and the `server_name` (you can read more about how Nginx processes these directives [here](https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms)):
|
||||
|
||||
- `example.com`: Will respond to requests for `example.com` and `www.example.com`
|
||||
- `test.com`: Will respond to requests for `test.com` and `www.test.com`
|
||||
- `default`: Will respond to any requests on port 80 that do not match the other two blocks.
|
||||
|
||||
In order to avoid a possible hash bucket memory problem that can arise from adding additional server names, we will also adjust a single value within our `/etc/nginx/nginx.conf` file. Open the file now:
|
||||
|
||||
Within the file, find the `server_names_hash_bucket_size` directive. Remove the `#` symbol to uncomment the line:
|
||||
|
||||
/etc/nginx/nginx.conf
|
||||
|
||||
```
|
||||
http {
|
||||
. . .
|
||||
|
||||
server_names_hash_bucket_size 64;
|
||||
|
||||
. . .
|
||||
}
|
||||
```
|
||||
|
||||
Save and close the file when you are finished.
|
||||
|
||||
Next, test to make sure that there are no syntax errors in any of your Nginx files:
|
||||
|
||||
If no problems were found, restart Nginx to enable your changes:
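
As before, presumably:

```
sudo nginx -t                  # test configuration syntax
sudo systemctl restart nginx   # restart to enable the changes
```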
|
||||
|
||||
Nginx should now be serving both of your domain names.
|
||||
|
||||
## [Step 5 — Modifying Your Local Hosts File for Testing (Optional)](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#step-5-modifying-your-local-hosts-file-for-testing-optional)
|
||||
|
||||
If you have not been using domain names that you own and instead have been using placeholder values, you can modify your local computer's configuration to let you temporarily test your Nginx server block configuration.
|
||||
|
||||
This will not allow other visitors to view your site correctly, but it will give you the ability to reach each site independently and test your configuration. This works by intercepting requests that would usually go to DNS to resolve domain names. Instead, we can set the IP addresses we want our local computer to go to when we request the domain names.
|
||||
|
||||
**Note:** Make sure you are operating on your local computer during these steps and not a remote server. You will need to have root access, be a member of the administrative group, or otherwise be able to edit system files to do this.
|
||||
|
||||
If you are on a Mac or Linux computer at home, you can edit the file needed by typing:
|
||||
|
||||
If you are on Windows, you can [find instructions for altering your hosts file](https://www.thewindowsclub.com/hosts-file-in-windows) here.
|
||||
|
||||
You need to know your server’s public IP address and the domains you want to route to the server. Assuming that my server’s public IP address is `203.0.113.5`, the lines I would add to my file would look something like this:
|
||||
|
||||
/etc/hosts
|
||||
|
||||
```
|
||||
127.0.0.1 localhost
|
||||
. . .
|
||||
|
||||
203.0.113.5 example.com www.example.com
203.0.113.5 test.com www.test.com
|
||||
```
|
||||
|
||||
This will intercept any requests for `example.com` and `test.com` and send them to your server, which is what we want if we don’t actually own the domains that we are using.
|
||||
|
||||
Save and close the file when you are finished.
|
||||
|
||||
## [Step 6 — Testing Your Results](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#step-6-testing-your-results)
|
||||
|
||||
Now that you are all set up, you should test that your server blocks are functioning correctly. You can do that by visiting the domains in your web browser:
|
||||
|
||||
```
|
||||
http://example.com
|
||||
```
|
||||
|
||||
You should see a page that looks like this:
|
||||
|
||||
![Nginx first server block](https://assets.digitalocean.com/articles/nginx_server_block_1404/first_block.png)
|
||||
|
||||
If you visit your second domain name, you should see a slightly different site:
|
||||
|
||||
```
|
||||
http://test.com
|
||||
```
|
||||
|
||||
![Nginx second server block](https://assets.digitalocean.com/articles/nginx_server_block_1404/second_block.png)
|
||||
|
||||
If both of these sites work, you have successfully configured two independent server blocks with Nginx.
|
||||
|
||||
At this point, if you adjusted your `hosts` file on your local computer in order to test, you’ll probably want to remove the lines you added.
|
||||
|
||||
If you need domain name access to your server for a public-facing site, you will probably want to purchase a domain name for each of your sites.
|
||||
|
||||
## [Conclusion](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04#conclusion)
|
||||
|
||||
You should now have the ability to create server blocks for each domain you wish to host from the same server. There aren’t any real limits on the number of server blocks you can create, so long as your hardware can handle the traffic.
|
232  nginx/resources/Rate_Limiting_With_Nginx.md  Normal file
@@ -0,0 +1,232 @@
|
||||
One of the most useful, but often misunderstood and misconfigured, features of NGINX is rate limiting. It allows you to limit the amount of HTTP requests a user can make in a given period of time. A request can be as simple as a `GET` request for the homepage of a website or a `POST` request on a log‑in form.
|
||||
|
||||
Rate limiting can be used for security purposes, for example to slow down brute‑force password‑guessing attacks. It can help [protect against DDoS attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/) by limiting the incoming request rate to a value typical for real users, and (with logging) identify the targeted URLs. More generally, it is used to protect upstream application servers from being overwhelmed by too many user requests at the same time.
|
||||
|
||||
In this blog we will cover the basics of rate limiting with NGINX as well as more advanced configurations. Rate limiting works the same way in NGINX Plus.
|
||||
|
||||
To learn more about rate limiting with NGINX, watch our [on-demand webinar](https://www.nginx.com/resources/webinars/rate-limiting-nginx/).
|
||||
|
||||
## How NGINX Rate Limiting Works
|
||||
|
||||
NGINX rate limiting uses the “leaky bucket algorithm”, which is widely used in telecommunications and packet‑switched computer networks to deal with burstiness when bandwidth is limited. The analogy is with a bucket where water is poured in at the top and leaks from the bottom; if the rate at which water is poured in exceeds the rate at which it leaks, the bucket overflows. In terms of request processing, the water represents requests from clients, and the bucket represents a queue where requests wait to be processed according to a first‑in‑first‑out (FIFO) scheduling algorithm. The leaking water represents requests exiting the buffer for processing by the server, and the overflow represents requests that are discarded and never serviced.
|
||||
|
||||
## Configuring Basic Rate Limiting
|
||||
|
||||
Rate limiting is configured with two main directives, `limit_req_zone` and `limit_req`, as in this example:
|
||||
|
||||
```
|
||||
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
|
||||
|
||||
server {
|
||||
location /login/ {
|
||||
limit_req zone=mylimit;
|
||||
|
||||
proxy_pass http://my_upstream;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The [`limit_req_zone`](http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone) directive defines the parameters for rate limiting while [`limit_req`](http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req) enables rate limiting within the context where it appears (in the example, for all requests to **/login/**).
|
||||
|
||||
The `limit_req_zone` directive is typically defined in the `http` block, making it available for use in multiple contexts. It takes the following three parameters:
|
||||
|
||||
- **Key** – Defines the request characteristic against which the limit is applied. In the example it is the NGINX variable `$binary_remote_addr`, which holds a binary representation of a client’s IP address. This means we are limiting each unique IP address to the request rate defined by the third parameter. (We’re using this variable because it takes up less space than the string representation of a client IP address, `$remote_addr`).
|
||||
|
||||
- **Zone** – Defines the shared memory zone used to store the state of each IP address and how often it has accessed a request‑limited URL. Keeping the information in shared memory means it can be shared among the NGINX worker processes. The definition has two parts: the zone name identified by the `zone=` keyword, and the size following the colon. State information for about 16,000 IP addresses takes 1 megabyte, so our zone can store about 160,000 addresses.
|
||||
- If storage is exhausted when NGINX needs to add a new entry, it removes the oldest entry. If the space freed is still not enough to accommodate the new record, NGINX returns status code `503 (Service Temporarily Unavailable)`. Additionally, to prevent memory from being exhausted, every time NGINX creates a new entry it removes up to two entries that have not been used in the previous 60 seconds.
|
||||
|
||||
- **Rate** – Sets the maximum request rate. In the example, the rate cannot exceed 10 requests per second. NGINX actually tracks requests at millisecond granularity, so this limit corresponds to 1 request every 100 milliseconds (ms). Because we are not allowing for bursts (see the [next section](https://blog.nginx.org/blog/rate-limiting-nginx#bursts)), this means that a request is rejected if it arrives less than 100ms after the previous permitted one.
|
||||
|
||||
The `limit_req_zone` directive sets the parameters for rate limiting and the shared memory zone, but it does not actually limit the request rate. For that you need to apply the limit to a specific `location` or `server` block by including a `limit_req` directive there. In the example, we are rate limiting requests to **/login/**.
|
||||
|
||||
So now each unique IP address is limited to 10 requests per second for **/login/** – or more precisely, cannot make a request for that URL within 100ms of its previous one.
|
||||
|
||||
## Handling Bursts
|
||||
|
||||
What if we get 2 requests within 100ms of each other? For the second request NGINX returns status code `503` to the client. This is probably not what we want, because applications tend to be bursty in nature. Instead we want to buffer any excess requests and service them in a timely manner. This is where we use the `burst` parameter to `limit_req`, as in this updated configuration:
|
||||
|
||||
```
|
||||
location /login/ {
|
||||
limit_req zone=mylimit burst=20;
|
||||
|
||||
proxy_pass http://my_upstream;
|
||||
}
|
||||
```
|
||||
|
||||
The `burst` parameter defines how many requests a client can make in excess of the rate specified by the zone (with our sample **mylimit** zone, the rate limit is 10 requests per second, or 1 every 100ms). A request that arrives sooner than 100ms after the previous one is put in a queue, and here we are setting the queue size to 20.
|
||||
|
||||
That means if 21 requests arrive from a given IP address simultaneously, NGINX forwards the first one to the upstream server group immediately and puts the remaining 20 in the queue. It then forwards a queued request every 100ms, and returns `503` to the client only if an incoming request makes the number of queued requests go over 20.
|
||||
|
||||
## Queueing with No Delay
|
||||
|
||||
A configuration with `burst` results in a smooth flow of traffic, but is not very practical because it can make your site appear slow. In our example, the 20th packet in the queue waits 2 seconds to be forwarded, at which point a response to it might no longer be useful to the client. To address this situation, add the `nodelay` parameter along with the `burst` parameter:
|
||||
|
||||
```
|
||||
location /login/ {
|
||||
limit_req zone=mylimit burst=20 nodelay;
|
||||
|
||||
proxy_pass http://my_upstream;
|
||||
}
|
||||
```
With the `nodelay` parameter, NGINX still allocates slots in the queue according to the `burst` parameter and imposes the configured rate limit, but not by spacing out the forwarding of queued requests. Instead, when a request arrives “too soon”, NGINX forwards it immediately as long as there is a slot available for it in the queue. It marks that slot as “taken” and does not free it for use by another request until the appropriate time has passed (in our example, after 100ms).

Suppose, as before, that the 20‑slot queue is empty and 21 requests arrive simultaneously from a given IP address. NGINX forwards all 21 requests immediately and marks the 20 slots in the queue as taken, then frees 1 slot every 100ms. (If there were 25 requests instead, NGINX would immediately forward 21 of them, mark 20 slots as taken, and reject 4 requests with status `503`.)

Now suppose that 101ms after the first set of requests was forwarded another 20 requests arrive simultaneously. Only 1 slot in the queue has been freed, so NGINX forwards 1 request and rejects the other 19 with status `503`. If instead 501ms have passed before the 20 new requests arrive, 5 slots are free so NGINX forwards 5 requests immediately and rejects 15.

The effect is equivalent to a rate limit of 10 requests per second. The `nodelay` option is useful if you want to impose a rate limit without constraining the allowed spacing between requests.

**Note:** For most deployments, we recommend including the `burst` and `nodelay` parameters in the `limit_req` directive.
## Two-Stage Rate Limiting
With NGINX Open Source 1.15.7 and later, you can configure NGINX to allow a burst of requests to accommodate the typical web browser request pattern, and then to throttle additional excessive requests up to a point, beyond which they are rejected. Two-stage rate limiting is enabled with the [`delay`](https://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_delay) parameter to the `limit_req` directive.

To illustrate two‑stage rate limiting, here we configure NGINX to protect a website by imposing a rate limit of 5 requests per second (r/s). The website typically has 4–6 resources per page, and never more than 12 resources. The configuration allows bursts of up to 12 requests, the first 8 of which are processed without delay. A delay is added after 8 excessive requests to enforce the 5 r/s limit. After 12 excessive requests, any further requests are rejected.
```
limit_req_zone $binary_remote_addr zone=ip:10m rate=5r/s;

server {
    listen 80;
    location / {
        limit_req zone=ip burst=12 delay=8;
        proxy_pass http://website;
    }
}
```
The `delay` parameter defines the point at which, within the burst size, excessive requests are throttled (delayed) to comply with the defined rate limit. With this configuration in place, a client that makes a continuous stream of requests at 8 r/s experiences the following behavior.

![Illustration of rate‑limiting behavior with rate=5r/s burst=12 delay=8](https://nginxblog-8de1046ff5a84f2c-endpoint.azureedge.net/blobnginxbloga72cde487e/wp-content/uploads/2024/06/two-stage-rate-limiting-example.png)

Illustration of rate‑limiting behavior with `rate=5r/s` `burst=12` `delay=8`

The first 8 requests (the value of `delay`) are proxied by NGINX without delay. The next 4 requests (`burst - delay`) are delayed so that the defined rate of 5 r/s is not exceeded. The next 3 requests are rejected because the total burst size has been exceeded. Subsequent requests are delayed.
## Advanced Configuration Examples
By combining basic rate limiting with other NGINX features, you can implement more nuanced traffic limiting.

### Allowlisting

This example shows how to impose a rate limit on requests from anyone who is not on an “allowlist”.
```
geo $limit {
    default         1;
    10.0.0.0/8      0;
    192.168.0.0/24  0;
}

map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;

server {
    location / {
        limit_req zone=req_zone burst=10 nodelay;

        # ...
    }
}
```
This example makes use of both the [`geo`](http://nginx.org/en/docs/http/ngx_http_geo_module.html#geo) and [`map`](http://nginx.org/en/docs/http/ngx_http_map_module.html#map) directives. The `geo` block assigns a value of `0` to `$limit` for IP addresses in the allowlist and `1` for all others. We then use a map to translate those values into a key, such that:

- If `$limit` is `0`, `$limit_key` is set to the empty string
- If `$limit` is `1`, `$limit_key` is set to the client’s IP address in binary format

Putting the two together, `$limit_key` is set to an empty string for allowlisted IP addresses, and to the client’s IP address otherwise. When the first parameter to the `limit_req_zone` directive (the key) is an empty string, the limit is not applied, so allowlisted IP addresses (in the 10.0.0.0/8 and 192.168.0.0/24 subnets) are not limited. All other IP addresses are limited to 5 requests per second.

The `limit_req` directive applies the limit to the **/** location and allows bursts of up to 10 requests over the configured limit, with no delay on forwarding.
### Including Multiple `limit_req` Directives in a Location
You can include multiple `limit_req` directives in a single location. All limits that match a given request are applied, meaning the most restrictive one is used. For example, if more than one directive imposes a delay, the longest delay is used. Similarly, requests are rejected if that is the effect of any directive, even if other directives allow them through.

Extending the previous example, we can apply a rate limit to IP addresses on the allowlist:
```
http {
    # ...

    limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;
    limit_req_zone $binary_remote_addr zone=req_zone_wl:10m rate=15r/s;

    server {
        # ...
        location / {
            limit_req zone=req_zone burst=10 nodelay;
            limit_req zone=req_zone_wl burst=20 nodelay;
            # ...
        }
    }
}
```
IP addresses on the allowlist do not match the first rate limit (**req\_zone**) but do match the second (**req\_zone\_wl**) and so are limited to 15 requests per second. IP addresses not on the allowlist match both rate limits so the more restrictive one applies: 5 requests per second.

## Configuring Related Features

### Logging

By default, NGINX logs requests that are delayed or dropped due to rate limiting, as in this example:
```
2015/06/13 04:20:00 [error] 120315#0: *32086 limiting requests, excess: 1.000 by zone "mylimit", client: 192.168.1.2, server: nginx.com, request: "GET / HTTP/1.0", host: "nginx.com"
```
Fields in the log entry include:

- `2015/06/13 04:20:00` – Date and time the log entry was written
- `[error]` – Severity level
- `120315#0` – Process ID and thread ID of the NGINX worker, separated by the `#` sign
- `*32086` – ID for the proxied connection that was rate‑limited
- `limiting requests` – Indicator that the log entry records a rate limit
- `excess` – Number of requests per millisecond over the configured rate that this request represents
- `zone` – Zone that defines the imposed rate limit
- `client` – IP address of the client making the request
- `server` – IP address or hostname of the server
- `request` – Actual HTTP request made by the client
- `host` – Value of the `Host` HTTP header

By default, NGINX logs refused requests at the `error` level, as shown by `[error]` in the example above. (It logs delayed requests at one level lower, so `warn` by default.) To change the logging level, use the [`limit_req_log_level`](http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_log_level) directive. Here we set refused requests to log at the `warn` level:
```
location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_log_level warn;

    proxy_pass http://my_upstream;
}
```
### Error Code Sent to Client

By default NGINX responds with status code `503 (Service Temporarily Unavailable)` when a client exceeds its rate limit. Use the [`limit_req_status`](http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_status) directive to set a different status code (`444` in this example):
```
location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_status 444;
}
```
### Denying All Requests to a Specific Location

If you want to deny all requests for a specific URL, rather than just limiting them, configure a [`location`](http://nginx.org/en/docs/http/ngx_http_core_module.html#location) block for it and include the [`deny`](http://nginx.org/en/docs/http/ngx_http_access_module.html#deny) `all` directive:
```
location /foo.php {
    deny all;
}
```
## Conclusion

We have covered many of the rate-limiting features that NGINX offers, including setting request rates for different locations, and configuring additional parameters such as `burst` and `nodelay`. We have also covered advanced configurations that apply different limits to allowlisted and other client IP addresses, and explained how to log rejected and delayed requests.
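As a quick reference, the sketch below pulls the directives discussed in this article into one configuration. The zone name `mylimit`, the **/login/** location, and the `my_upstream` upstream follow the article's examples; the specific rate, burst size, status code, and log level are illustrative choices, not requirements.

```
# Illustrative summary of the rate-limiting directives discussed above.
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    listen 80;

    location /login/ {
        # Allow short bursts without delaying them, per the recommendation above.
        limit_req zone=mylimit burst=20 nodelay;

        # Log refused requests at "warn" instead of the default "error".
        limit_req_log_level warn;

        # Return 444 instead of the default 503, as in the example above.
        limit_req_status 444;

        proxy_pass http://my_upstream;
    }
}
```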
83
nginx/resources/nginx.conf.default
Normal file
@ -0,0 +1,83 @@
user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}


#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}
91
nginx/resources/sites_available.default
Normal file
@ -0,0 +1,91 @@
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}


# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#    listen 80;
#    listen [::]:80;
#
#    server_name example.com;
#
#    root /var/www/example.com;
#    index index.html;
#
#    location / {
#        try_files $uri $uri/ =404;
#    }
#}