
How to boost performance and technical SEO just by tuning NGINX

Blog post

After implementing some of these suggestions you should easily get a perfect score on Google's PageSpeed test, and it shouldn't take you too much time either!


I’ll try to explain some basics of NGINX and how to leverage its features to get the maximum performance out of your apps and websites. Most web developers don't use NGINX to optimize their website or application. I get it: in most companies that should be DevOps responsibility. The problem is that most of the NGINX features and optimizations I’ll mention in this article are closely related to your website’s code.

These features may even harm performance if your website is not built to support them. That’s why I think it’s important for developers to at least understand these technologies and terms, so they can implement them, or change the app to work better on existing server configuration.


I won’t go too deep into some of NGINX’s features since it’s not possible to fit all of that into one blog post. I will try to explain some basics and show you some of the things I use to boost performance. I will link some additional resources at the end of the article if you need a reminder, or if you want to learn more. To get the most out of this article you should have at least some understanding of Linux and web servers.
Things that I’ll cover in order:


1. NGINX — Basics and setup
2. SSL Certificate & HTTPS
3. HTTP/2
4. HTTP/2 Push
5. Client-Side Caching
6. Compression
7. PageSpeed Module
8. Other modules
9. NJS

1. NGINX — Basics and setup

Before implementing any of these suggestions you should know how to install NGINX and at least some of its main principles. I’ll cover these parts briefly because there are many resources out there that go into more detail, and NGINX has pretty good documentation too.

Also, all installation tips are strictly for Ubuntu 18.04, but they will probably work on some other versions of Ubuntu. For most people, these steps should be enough. Sometimes you will need additional steps and setup depending on your server. If you are a beginner, please do not do this on existing servers with apps that are in production. There is a big probability that something will go wrong when trying these things for the first time.


Firstly install NGINX on your Linux machine using commands below:

$ sudo apt update
$ sudo apt install nginx


After the install process is finished, NGINX will start handling all HTTP traffic. You can test this by typing the server IP into the browser URL bar; you should see the default NGINX page. If everything is ok, we are ready to add a new server block (virtual host) for our app. “Virtual host” is an Apache term; NGINX does not have virtual hosts, it has “server blocks”.
The recommended approach is to have a separate server block configuration file for each of your apps. NGINX server blocks are located in two folders:
/etc/nginx/sites-available which contains all configurations
/etc/nginx/sites-enabled which contains enabled configurations
First we will create a config file inside sites-available and then symlink it to the sites-enabled folder.


Go to /etc/nginx/sites-available and create a new config file:
$ vim example.conf
 

Now I’ll explain all the directives in the basic configuration file, and then we will build upon it.

Simple NGINX server block configuration:
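Something along these lines, where example.com and /var/www/example are placeholder values you should replace with your own domain and root folder:

server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example;

    location / {
        try_files $uri /index.html;
    }
}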

The listen and server_name directives will be used by NGINX to determine which of the server blocks will serve the request. listen defines the port, which in our case is port 80. By default, HTTP goes to port 80 and HTTPS goes to port 443. server_name defines the host name or IP that it should match.

If you open your URL in the browser e.g. http://example.com, NGINX will match port 80 since it’s HTTP, and it will check for “Host” request header, which should be example.com. Based on that NGINX will assume that this server block configuration should handle the request.

The root directive specifies the root directory that will be used to search for a file. To obtain the path of a requested file, NGINX appends the request URI (the portion of the URL after the domain) to the path specified by the root directive. The directive can be placed on any level within the http, server, or location contexts. In the example above, the root directive is defined for a server context. It applies to all location blocks where the root directive is not included.

location blocks will be checked against the request URI. Optional modifiers and regular expressions or literal strings can be used to define a location match. In the above case, the location will match all requests. For now I will use the simplest location match; a more complex example would be if you want to serve images from a folder that is not within the server root. In that case, you can write another location block with a regular expression that matches image file extensions:
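A sketch of such a block could look like this (the /var/www/images folder is just an assumed example):

location ~* \.(jpg|jpeg|png|gif|ico)$ {
    root /var/www/images;
}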

When you add a root directive under location, it will override the server root directive for all requests that match the regex. I’ll cover a few more location use cases later in the article.

try_files within location basically tells NGINX which file should be served after it matches the location. In our case, we first try to load the file as it is in the requested URI, and if that file doesn't exist we serve index.html from the root directory.

This basic configuration should be enough to get you started. Just remember to change the domain and root folder to your own.

Now we need to symlink the saved configuration to the sites-enabled folder.
We do so with the command below:

$ sudo ln -s /etc/nginx/sites-available/example.conf /etc/nginx/sites-enabled
This allows you to selectively disable/enable server block configurations by adding/removing the symlink. NGINX will only load config files that are located within the sites-enabled folder.


Now we can test configuration with the simple command:
$ sudo nginx -t


NGINX will tell you if something is not right. If you get a “test is successful” message, then you need to reload NGINX so it picks up the new config file:


$ sudo service nginx reload


Remember to always use service nginx reload instead of service nginx restart. This is important because reload will keep the old processes running if it encounters a syntax error inside the configuration. Restart, on the other hand, will kill all NGINX processes and your server blocks will stop working until you fix all of the errors, so all of your apps running on that instance of NGINX will go offline. Restart is necessary in some rare cases, e.g. when you change the listen directive, so keep that in mind as well.


If you did everything right you should see your website when you type your URL in the browser: http://example.com


Remember to replace example.com with your domain name, and also check that your domain is pointing to the correct server. NGINX will look for an index.html file within the root folder, so make sure you have one. Now let’s jump into the more interesting part.

2. SSL Certificate & HTTPS

One of the things that every website needs these days is an SSL certificate and HTTPS. Today SSL certificates are free thanks to Let’s Encrypt, a nonprofit Certificate Authority currently providing TLS certificates to 200 million websites. They simplify the process by providing Certbot, a software client that automates most of the required steps for NGINX.


So let’s first install Certbot for NGINX on Linux:


$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx


After installing Certbot, just run the command below, for a domain that is pointing to your server.


$ sudo certbot --nginx -d example.com
Follow the instructions and when asked “Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.” select 2: Redirect.
After that Certbot will alter your server block config and redirect all traffic to HTTPS. 

Open your config file now and you should see this:
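It should look roughly like this (the exact certificate paths and extra options are generated by Certbot and may differ):

server {
    listen 443 ssl;
    server_name example.com www.example.com;
    root /var/www/example;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        try_files $uri /index.html;
    }
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}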

You can see that your initial server block has changed, and that a new server block was created. Your initial server block now listens on port 443, which is HTTPS, and the newly created block only redirects HTTP traffic to HTTPS.
Having an SSL certificate and HTTPS is also important for the next section because HTTP/2 doesn’t work on insecure HTTP.


Certbot also takes care of renewing certificates. By default, they are valid only for 90 days, and Certbot will renew them when they are within thirty days of expiration.


We can verify if renewal works with the next command:
$ sudo certbot renew --dry-run
If you see no errors, automatic renewal should work.

3. HTTP/2

HTTP/2 is currently supported by nearly 96% of all web browsers in use. If you don’t have an older project that was fully optimized to get the best out of HTTP/1, and you don’t need to support older versions of Internet Explorer and Opera Mini, there is no reason not to use HTTP/2.
A few HTTP/1 optimizations are considered anti-patterns in HTTP/2 environments. Most of these are related to concatenation.

Concatenation is the process of combining files in order to reduce the number of HTTP requests, which does not benefit HTTP/2. Concatenation examples are sprite images, CSS/JavaScript bundling, and inlining. Sprite images and inlining can also reduce the benefits of client caching: if you end up updating just one of your images, the whole sprite has to be requested again, instead of requesting only the updated image and keeping the others in the cache.
Enabling basic HTTP/2 functionality in NGINX is pretty straightforward; you just need to modify the listen directive to:
listen 443 ssl http2;


I tried a few performance tests with many images/static resources, and HTTP/2 was 2x faster on 4G and up to 3x faster on 3G networks. Just by adding the http2 label to your listen directive you can get an amazing performance upgrade, but be careful and test every change you make, since it all depends on your website or application.

4. HTTP/2 Push

HTTP/2 Server Push is another useful feature that is a bit harder to implement. The server push allows a server to pre-load static resources by anticipating the user requests. In NGINX we can do this again with location block:
location = /index.html {
    http2_push /style.css;
    http2_push /image.jpg;
}


This instructs NGINX to send style.css and image.jpg whenever it gets a request for index.html. The http2_push directive takes one parameter, the full URI path of the file to push to the client.


HTTP/2 Push is not really useful if you need to have these settings and files hardcoded on the server, and it’s not something I would recommend. There are two workarounds for this. The first one is to set Link headers within your Node.js or other application. NGINX will automatically push resources to clients if the proxied application includes the HTTP response header Link. This header instructs NGINX to preload the specified resources.

 To enable this feature, add http2_push_preload on; to the NGINX configuration:
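A minimal sketch, assuming the app is a proxied Node.js service on port 3000 (both the port and the Link header value are just examples):

location / {
    proxy_pass http://localhost:3000;
    http2_push_preload on;
}

The application then only needs to include a preload Link header in its response, for example:

Link: </style.css>; as=style; rel=preload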


Another approach is to have your NGINX configuration inside Git and Docker. I may cover that part in the next blog article.

5. Client-Side Caching

With NGINX we can set response headers to tell the client (browser) that it can cache specific content for a specific amount of time. This can improve performance a lot since the browser won’t make requests to the server for files that are cached. This is how we do it in NGINX:


location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 30d;
    add_header Cache-Control "public";
}
 

In this example, we have location regex that should match all images, JavaScript, and CSS files. The expires directive tells the client that their cached resource will no longer be valid after 30 days. The add_header directive adds the HTTP header Cache-Control to the response, with a value of public, which allows any caching server along the way to cache the resource. If we set it to private, only the browser is allowed to cache the value.

6. Compression

Compression is a simple, effective way to save bandwidth and speed up your site. Compressing responses often significantly reduces the size of the transmitted data. However, since compression happens at runtime it can also add considerable processing overhead which can negatively affect performance. NGINX performs compression before sending responses to clients, but does not “double compress” responses that are already compressed (for example, by a proxied server). 


I’ll show you how to enable the two most popular compression algorithms: Gzip and Brotli. Gzip comes as a default module for NGINX, while Brotli doesn't. You can follow the install instructions on the official ngx_brotli module GitHub page. If you don’t want to bother, just remove that part from the configuration.


Brotli has impressive gains of up to 25% over Gzip compression. It’s developed by Google and most of the browsers these days support it. It’s also good practice to enable both (GZIP and Brotli) so that older browsers still get compressed content. This is a configuration with Gzip and Brotli enabled:
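A sketch of that configuration; the MIME type list and compression levels are just reasonable starting values, not definitive settings:

gzip on;
gzip_comp_level 5;
gzip_vary on;
gzip_types text/plain text/css text/xml application/json application/javascript application/xml image/svg+xml;

brotli on;
brotli_comp_level 5;
brotli_types text/plain text/css text/xml application/json application/javascript application/xml image/svg+xml;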


The brotli_types & gzip_types directives enable on-the-fly compression of responses for the specified MIME types, in addition to text/html, which is always compressed. It’s not recommended to enable compression for image files, videos, PDFs, and other binary formats. Using compression on them won’t provide any additional benefit, and can actually make them larger.

In the next section, I’ll show you how to compress and optimize images by using the PageSpeed module.
The brotli_comp_level & gzip_comp_level directives specify the level of compression. They simply determine how compressed the data is, on a scale from 1 to 9 for Gzip and from 0 to 11 for Brotli, where 9 and 11 are the most compressed. The recommended level is from 4 to 6, but it all depends on your website.


gzip_vary on basically just adds the header Vary: Accept-Encoding. It’s there to prevent client cache mixups between compressed and uncompressed files.

If for some reason the client has an uncompressed version of the file in its cache, it will know not to request a compressed version again and will just use the uncompressed file from the cache. You are also preventing serving the uncompressed version to a client that supports gzip, and vice versa.

7. PageSpeed Module

PageSpeed is an open-source NGINX module that optimizes your site automatically, and not many people know about it. It’s made by Google and implements most of Google’s page speed recommendations. It can boost performance and SEO dramatically. I’ve abused this module many times, and I will keep doing so! When you need to quickly improve the performance of a website, just set it up in a few hours and you should get great improvements. It’s hard to believe how easy it can be to turn super slow websites into super performant ones just by using this module.


Some of the performance optimizations it can do for you: image recompression and format conversion, CSS and JavaScript minification, and extending the cache lifetime of static resources. I’ll cover a few of these in more detail below.


The setup process is not that simple, but it’s worth it when you see how much value it adds. You can follow these instructions on how to install it, and I’ll give you a few suggestions on how to configure it. 


This is a server block config with some pagespeed configuration options:
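A sketch along these lines, assuming the module is compiled in and /var/ngx_pagespeed_cache is a writable folder; the filter selection is just an example and should be tuned per website:

server {
    listen 443 ssl http2;
    server_name example.com;
    root /var/www/example;

    pagespeed on;
    pagespeed FileCachePath /var/ngx_pagespeed_cache;
    pagespeed RewriteLevel PassThrough;
    pagespeed EnableFilters recompress_images;
    pagespeed EnableFilters extend_cache;

    location / {
        try_files $uri /index.html;
    }
}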


It has a lot of configuration options and you should tune these for each website separately. Your frontend stack may already be doing some of these optimizations, and it can be counterproductive if you don’t sync it all together. If you are using HTTP/2 you should turn off all options for concatenation, or at least break files into smaller chunks. You should do that both in NGINX and in, for example, Webpack.


I’ll try to explain a few PageSpeed directives I used above, but you should really check the official documentation and see which options will work for your website.

The pagespeed RewriteLevel PassThrough directive disables the default RewriteLevel CoreFilters. This means you have to manually enable each filter that you want. PageSpeed offers three rewrite levels to simplify configuration: PassThrough, CoreFilters, and OptimizeForBandwidth.
The CoreFilters set contains filters that the PageSpeed team believes are safe for most web sites. By using the CoreFilters set, as PageSpeed is updated with new filters, your site will get faster. 

The OptimizeForBandwidth setting provides a stronger guarantee of safety and is suitable as a default setting for use with sites that are not aware of PageSpeed. I personally prefer to manually set each filter and test it afterward. You can also try to use CoreFilters and OptimizeForBandwidth and see if it works better for you.


pagespeed EnableFilters recompress_images is a filter group consisting of the convert_gif_to_png, convert_jpeg_to_progressive, convert_jpeg_to_webp, convert_png_to_jpeg, jpeg_subsampling, recompress_jpeg, recompress_png, recompress_webp, strip_image_color_profile, and strip_image_meta_data filters.

You can check the documentation for each of these directives. Basically it will optimize all hosted images and serve the best possible format depending on browser support. For example, you can upload an unoptimized image in jpeg format, and if the client requests that image using Google Chrome, Pagespeed will serve it optimized in webp format since Chrome supports it. It will probably be 10x smaller than the original. 


pagespeed FileCachePath tells Pagespeed which folder should be used for cache that it generates. Pagespeed will cache all of the static resources if you set it to do so.


The extend_cache filter rewrites the URL references in the HTML page to include a hash of the resource content (if rewrite_css is enabled then image URLs in CSS will also be rewritten). Thus if the site owners change the resource content, then the URL for the rewritten resource will also change. The old content in the user's browser cache will not be referenced again, because it will not match the new name. If the site owners change the logo, then PageSpeed will notice within 5 minutes and begin serving a different URL to users. But if the content does not change, then the hash will not change, and the copy in each user’s browser will still be valid and reachable.


It’s important to note that every time you change a Pagespeed setting you have to reload your website a few times to see changes in HTML code. Pagespeed needs some time to rewrite all resources and to save these in the cache.


What I find most useful with Pagespeed are image compression and caching. I mainly use these filters, but this depends a lot on your app and other Nginx configuration.


You can also enable PageSpeed’s statistics and logging admin UI. Just add the directives below and you should be able to access it via the URI you set in GlobalAdminPath.


Just be careful and disable this on production, since you don’t want people or bots to access this Admin panel. That is why I didn’t include it in the configuration above.


## ADMIN
pagespeed Statistics on;
pagespeed StatisticsLogging on;
pagespeed GlobalAdminPath /pagespeed_admin;


Here is the official documentation where you can see all other filters & configuration options. At the bottom of documentation for each option, there is a Risk section, and it can be low, moderate, or high. 

Check this for every config and avoid all high-risk options. Try and test all configuration options to see which one works well with your website. I can't stress how important it is to test every change you do inside Pagespeed module configuration. Not every setting will work well for every app and environment, so be careful and test, test, test.

9. NJS scripting language

This part isn’t really well known even within the DevOps community, and it will be most interesting to JavaScript developers. The NGINX team decided to add additional scripting options for their configuration files. They did so by writing their own JavaScript interpreter that provides objects, methods, and properties for extending NGINX functionality.


Some of the reasons why they have chosen JavaScript instead of other languages:

  • JavaScript has a C-like syntax which is very similar to NGINX config files
  • JavaScript uses curly braces for different blocks, which is also the case with NGINX
  • JavaScript is event-driven, same as NGINX


Some of the use cases:

  • Complex access control and security checks in NJS before a request reaches an upstream server
  • Manipulating response headers
  • Writing flexible asynchronous content handlers and filters


Simple Hello World example with NJS:
# nginx.conf example
events {}
http {
  js_import http.js;
  server {
    listen 80;
    location / {
      js_content http.hello;
    }
  }
}

# http.js example
function hello(r) {
  r.return(200, "Hello world!");
}
export default {hello};


You can find more examples and use cases in the official NJS documentation. One use case that I’m planning to work on is dynamic HTTP/2 Server Push. I’ll write another blog article just for that when I properly test the solution.


It’s important to note that NJS doesn’t support most of the ES6 specification or JavaScript modules/3rd-party code. The NGINX team says that this is mainly because of performance, e.g. the ECMAScript spec requires UTF-16 encoding, while NJS uses UTF-8, which can need 2x fewer bytes for a data chunk. That shows how much they actually care about performance. If you ever notice that you lack some NGINX functionality, it’s worth checking NJS.

To sum up

The goal of this article was to show an alternative approach to website optimization that’s not as widely used. The key with NGINX is that most of the suggested optimizations can be implemented without changing a single line of website code. That can be really useful if you need a centralized solution, or if you don’t have access to the app code. Some people may prefer other tools and technologies for implementing the same optimizations, and that’s perfectly ok.


I suggest you read the official documentation for all terms that you don’t understand perfectly, and, one more time, it’s really important to test every change you make.
