Apache vs Nginx

Introduction

Apache and Nginx are the two most common open source web servers in the world. Together, they are responsible for serving over 50% of traffic on the internet. Both solutions are capable of handling diverse workloads and working with other software to provide a complete web stack.

While Apache and Nginx share many qualities, they should not be thought of as entirely interchangeable. Each excels in its own way, and it is important to understand the situations where you may need to re-evaluate your web server of choice. This article is dedicated to a discussion of how each server stacks up in various areas.

Overview

Before we dive into the differences between Apache and Nginx, let's take a quick look at the background of these two projects and their general characteristics.

Apache

The Apache HTTP Server was created by Robert McCool in 1995 and has been developed under the direction of the Apache Software Foundation since 1999. Since the HTTP web server is the foundation's original project and is by far their most popular piece of software, it is often referred to simply as "Apache".

The Apache web server has been the most popular server on the internet since 1996. Because of this popularity, Apache benefits from great documentation and integrated support from other software projects.

Apache is often preferred by administrators for its flexibility, power, and widespread support. It is extensible through a dynamically loadable module system and can process a large number of interpreted languages without handing requests off to separate software.

Nginx

In 2002, Igor Sysoev began work on Nginx as an answer to the C10K problem, a challenge for web servers to handle ten thousand concurrent connections as a requirement of the modern web. The initial public release of Nginx came in 2004, meeting this goal by relying on an asynchronous, event-driven architecture.

Nginx has grown in popularity since its release due to its lightweight resource utilization and its ability to scale easily on minimal hardware. Nginx excels at serving static content quickly and is designed to pass dynamic requests off to other software better suited for those purposes.

Nginx is often selected by administrators for its resource efficiency and responsiveness under load. Advocates welcome Nginx's focus on core web server and proxy features.

Connection Handling Architecture

One big difference between Apache and Nginx is the way that they handle connections and traffic. This is perhaps the most significant difference in the way that they respond to different traffic conditions.

Apache

Apache provides a variety of multi-processing modules (Apache calls these MPMs) that dictate how client requests are handled. Basically, this allows administrators to swap out its connection handling architecture easily. These are:

mpm_prefork: This processing module spawns processes with a single thread each to handle requests. Each child can handle a single connection at a time. As long as the number of requests stays lower than the number of processes, this MPM is very fast. However, performance degrades quickly once requests exceed the number of processes, so this is not a good choice in many scenarios. Each process has a significant impact on RAM consumption, so this MPM is difficult to scale effectively. It may still be a good choice if used in conjunction with other components that are not built with threads in mind. For instance, PHP is not thread-safe, so this MPM is recommended as the only safe way of working with mod_php, the Apache module for processing these files.

mpm_worker: This module spawns processes that can each manage multiple threads. Each of these threads can handle a single connection. Threads are much more efficient than processes, which means that this MPM scales better than the prefork MPM. Since there are more threads than processes, new connections can immediately take a free thread instead of having to wait for a free process.

mpm_event: This module is similar to the worker module in most respects, but it is optimized to handle keep-alive connections. When using the worker MPM, a connection will hold a thread regardless of whether a request is actively being made, for as long as the connection is kept alive. The event MPM handles keep-alive connections by setting aside dedicated threads for managing keep-alive connections and passing active requests off to other threads. This keeps the module from getting bogged down by keep-alive requests, allowing for faster execution. It was marked stable with the release of Apache 2.4.

As you can see, Apache provides a flexible architecture for choosing different connection and request handling algorithms. The choices provided are mainly a function of the server's evolution and the increasing need for concurrency as the internet landscape has changed.
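
As an illustration, the sketch below shows how the event MPM might be selected and tuned on a Debian-style installation. The helper commands, file path, and tuning values are assumptions that should be adapted to your own system:

    # Switch to the event MPM (Debian/Ubuntu helper commands)
    sudo a2dismod mpm_prefork
    sudo a2enmod mpm_event

    # /etc/apache2/mods-available/mpm_event.conf -- values are illustrative only
    <IfModule mpm_event_module>
        StartServers             2
        MinSpareThreads         25
        MaxSpareThreads         75
        ThreadsPerChild         25
        MaxRequestWorkers      150
        MaxConnectionsPerChild   0
    </IfModule>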

Nginx

Nginx came onto the scene after Apache, with more awareness of the concurrency problems that would face sites at scale. Leveraging this knowledge, Nginx was designed from the ground up to use an asynchronous, non-blocking, event-driven connection handling algorithm.

Nginx spawns worker processes, each of which can handle thousands of connections. The worker processes accomplish this by implementing a fast looping mechanism that constantly checks for and processes events. Decoupling actual work from connections allows each worker to concern itself with a connection only when a new event has been triggered.

Each of the connections handled by the worker is placed within the event loop, where it exists alongside other connections. Within the loop, events are processed asynchronously, allowing work to be handled in a non-blocking manner. When the connection closes, it is removed from the loop.

This style of connection processing allows Nginx to scale incredibly far with limited resources. Since the server is single-threaded and processes are not spawned to handle each new connection, memory and CPU usage tend to stay relatively consistent, even under heavy load.
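
A minimal sketch of this model in the main Nginx configuration might look like the following; the numbers are illustrative, not recommendations:

    # /etc/nginx/nginx.conf -- illustrative values
    worker_processes auto;          # typically one worker per CPU core

    events {
        worker_connections 1024;    # connections each worker's event loop can hold
        multi_accept on;            # accept as many new connections as possible per event
    }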

Static vs Dynamic Content

In terms of real-world use cases, one of the most common comparisons between Apache and Nginx is the way in which each server handles requests for static and dynamic content.

Apache

Apache servers handle static content using their conventional file-based method. The performance of these operations is mainly a function of the MPM methods described above.

Apache can also process dynamic content by embedding a processor of the language in question into each of its worker instances. This allows it to execute dynamic content within the web server itself, without having to rely on external components. These dynamic processors can be enabled through the use of dynamically loadable modules.

Apache's ability to handle dynamic content internally means that configuration of dynamic processing tends to be simpler. Communication does not need to be coordinated with an additional piece of software, and modules can simply be swapped out if the content requirements change.
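
For example, embedding PHP might look like the following sketch; the module file name and path vary by distribution and PHP version, so treat them as assumptions:

    # Load the PHP interpreter into every Apache worker (file name is version-dependent)
    LoadModule php_module modules/libphp.so

    # Hand any .php file to the embedded interpreter
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>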

Nginx

Nginx does not have any ability to process dynamic content natively. To handle PHP and other requests for dynamic content, Nginx must pass the request to an external processor for execution and wait for the rendered content to be sent back. The results can then be relayed to the client.

For administrators, this means that communication must be configured between Nginx and the processor over one of the protocols Nginx knows how to speak (http, FastCGI, SCGI, uWSGI, memcache). This can complicate things slightly, especially when trying to anticipate the number of connections to allow, since an additional connection will be used for each call to the processor.

However, this method has some advantages as well. Since the dynamic interpreter is not embedded in the worker process, its overhead is only incurred for dynamic content. Static content can be served in a straightforward manner, and the interpreter is only contacted when needed. Apache can also function in this manner, but doing so removes the benefits described in the previous section.
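
As a rough sketch, handing PHP requests to an external PHP-FPM backend could look like this; the socket path is an assumption:

    # Inside an Nginx server block -- socket path is an assumption
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # forward to the external PHP-FPM processor
    }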

Distributed vs Centralized Configuration

For administrators, one of the most readily apparent differences between these two pieces of software is whether directory-level configuration is permitted within the content directories.

Apache

Apache includes an option to allow additional configuration on a per-directory basis by inspecting and interpreting directives in hidden files within the content directories themselves. These files are known as .htaccess files.

Since these files reside within the content directories themselves, when handling a request, Apache checks each component of the path to the requested file for an .htaccess file and applies the directives found within. This effectively allows decentralized configuration of the web server, which is often used for implementing URL rewrites, access restrictions, authorization and authentication, and even caching policies.


While the above examples can all be configured in the main Apache configuration file, .htaccess files have some important advantages. First, since they are interpreted each time they are found along a request path, changes take effect immediately without reloading the server. Second, they make it possible to allow non-privileged users to control certain aspects of their own web content without giving them control over the entire configuration file.

This provides an easy way for certain software, like content management systems, to configure their environment without allowing access to the central configuration file. It is also used by shared hosting providers to retain control of the main configuration while giving clients control over their specific directories.
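
As a small illustration, a per-directory .htaccess file might contain directives like these; the paths and rules are hypothetical, and the relevant override types must be enabled:

    # .htaccess placed inside a content directory
    RewriteEngine On
    RewriteRule ^old-page$ /new-page [R=301,L]   # URL redirection

    <Files "config.ini">
        Require all denied                       # access restriction
    </Files>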

Nginx

Nginx does not interpret .htaccess files, nor does it provide any mechanism for evaluating per-directory configuration outside of the main configuration file. This may be less flexible than the Apache model, but it does have its own advantages.

The most notable improvement over the .htaccess system of directory-level configuration is performance. For a typical Apache setup that allows .htaccess in any directory, the server checks for these files in each of the parent directories leading up to the requested file, for every request. If one or more .htaccess files are found during this search, they must be read and interpreted. By not allowing directory overrides, Nginx can serve requests faster by doing a single directory lookup and file read for each request (assuming that the file is found in the conventional directory structure).

Another advantage is security related. Distributing directory-level configuration also distributes the responsibility for security to individual users, who may not be trusted to handle it well. Ensuring that the administrator maintains control over the entire web server can prevent some security missteps that may occur when access is given to other parties.

Keep in mind that it is possible to turn off .htaccess interpretation in Apache if these concerns resonate with you.
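
For instance, disabling .htaccess processing in Apache's main configuration might look like this sketch; the directory path is an assumption:

    # In the main Apache configuration, ignore .htaccess files entirely
    <Directory /var/www/html>
        AllowOverride None
    </Directory>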

File vs URI-Based Interpretation

How the web server interprets requests and maps them to actual resources on the system is another area where the two servers differ.

Apache

Apache provides the ability to interpret a request as a physical resource on the filesystem or as a URI location that may need a more abstract evaluation. In general, for the former Apache uses <Directory> or <Files> blocks, while it uses <Location> blocks for more abstract resources.

Because Apache was designed from the ground up as a web server, its default is to interpret requests as filesystem resources. It begins by taking the document root and appending the portion of the request following the host and port number to try to find an actual file. Basically, the filesystem hierarchy is represented on the web as the available document tree.

Apache provides a number of alternatives for when the request does not match the underlying filesystem. For instance, an Alias directive can be used to map to an alternative location. Using <Location> blocks is a way of working with the URI itself instead of the filesystem. There are also regular expression variants which can be used to apply configuration more flexibly throughout the filesystem.

While Apache has the ability to operate on both the underlying filesystem and the webspace, it leans heavily towards filesystem methods. This can be seen in some of its design decisions, including the use of .htaccess files for per-directory configuration. The Apache docs themselves warn against using URI-based blocks to restrict access when the request mirrors the underlying filesystem.
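
The following sketch contrasts the two styles; the paths are hypothetical, and the status handler assumes mod_status is enabled:

    # Filesystem-oriented configuration
    Alias /static /srv/static-files
    <Directory "/srv/static-files">
        Require all granted
    </Directory>

    # URI-oriented configuration for an abstract resource
    <Location "/server-status">
        SetHandler server-status
        Require local
    </Location>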

Nginx

Nginx was created to be both a web server and a proxy server. Due to the architecture required for these two roles, it works primarily with URIs, translating to the filesystem only when necessary.

This can be seen in the way that Nginx configuration files are constructed and interpreted. Nginx does not provide a mechanism for specifying configuration for a filesystem directory and instead parses the URI itself.

For example, the primary configuration blocks for Nginx are server and location blocks. The server block interprets the host being requested, while the location blocks are responsible for matching portions of the URI that come after the host and port. At this point, the request is being interpreted as a URI, not as a location on the filesystem.

For static files, all requests eventually have to be mapped to a location on the filesystem. First, Nginx selects the server and location blocks that will handle the request and then combines the document root with the URI, adapting anything necessary according to the configuration specified.
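
In configuration terms, that mapping might look like the sketch below; the hostname and paths are assumptions:

    server {
        listen 80;
        server_name example.com;        # which host this block answers for
        root /var/www/example;          # document root used when mapping URIs to files

        location / {
            try_files $uri $uri/ =404;  # only now does Nginx touch the filesystem
        }

        location /images/ {
            expires 30d;                # configuration keyed to the URI, not a directory
        }
    }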

This may seem similar, but parsing requests primarily as URIs instead of filesystem locations allows Nginx to more easily function in web, mail, and proxy server roles. Nginx is configured simply by laying out how to respond to different request patterns. Nginx does not check the filesystem until it is ready to serve the request, which explains why it does not implement a form of .htaccess files.

Modules

Both Nginx and Apache are extensible through module systems, but the way that they work differs considerably.

Apache

Apache's module system allows you to dynamically load or unload modules to satisfy your needs during the course of running the server. The Apache core is always present, while modules can be turned on or off, adding or removing additional functionality and hooking into the main server.

Apache uses this functionality for a wide variety of tasks. Due to the maturity of the platform, there is an extensive library of modules available. These can be used to alter some of the core functionality of the server, such as mod_php, which embeds a PHP interpreter into each running worker.

Modules are not limited to processing dynamic content, however. Among other functions, they can be used for rewriting URLs, authenticating clients, hardening the server, logging, caching, compression, proxying, rate limiting, and encryption. Dynamic modules can extend the core functionality significantly without much additional work.
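
On a Debian-style system, toggling modules might look like the sketch below; the module names are just examples, and paths vary by distribution:

    # Enable URL rewriting and response-header manipulation, then reload
    sudo a2enmod rewrite headers
    sudo systemctl reload apache2

    # Equivalent LoadModule lines in the main configuration (paths vary by distribution)
    LoadModule rewrite_module modules/mod_rewrite.so
    LoadModule headers_module modules/mod_headers.so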

Nginx

Nginx also implements a module system, but it is quite different from the Apache system. In Nginx, modules are not dynamically loadable, so they must be selected and compiled into the core software.

For many users, this makes Nginx much less flexible. This is especially true for users who are not comfortable maintaining their own compiled software outside of their distribution's conventional packaging system. While distributions' packages tend to include the most commonly used modules, if you require a non-standard module, you will have to build the server from source yourself.
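
Building from source with a specific set of modules might look like the following sketch; the install prefix and module selection are assumptions:

    # Compile Nginx with only the functionality you need
    ./configure --prefix=/usr/local/nginx \
                --with-http_ssl_module \
                --with-http_stub_status_module \
                --without-http_autoindex_module
    make
    sudo make install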

Nginx modules are still very useful though, and they allow you to dictate exactly what you want out of your server by including only the functionality you intend to use. Some users may also consider this more secure, since arbitrary components cannot be hooked into the server. However, if your server is ever put in a position where this is possible, it is likely compromised already.

Nginx modules provide many of the same capabilities as Apache modules. For instance, Nginx modules can provide proxying support, compression, rate limiting, logging, rewriting, geolocation, authentication, encryption, streaming, and mail functionality.

Support, Compatibility, Ecosystem, and Documentation

A major point to consider is what the actual process of getting up and running will be like, given the landscape of available help and support among other software.

Apache

Because Apache has been popular for so long, support for the server is ubiquitous. There is a large library of first- and third-party documentation available for the core server and for task-based scenarios involving hooking Apache up with other software.

Along with documentation, many tools and web projects include tooling to bootstrap themselves within an Apache environment. This may be included in the projects themselves, or in the packages maintained by your distribution's packaging team.

Apache, in general, will have more support from third-party projects simply because of its market share and the length of time it has been available. Administrators are also more likely to have experience working with Apache, not only due to its popularity, but also because many people start out in shared-hosting scenarios, which almost exclusively rely on Apache because of its .htaccess distributed-management capabilities.

Nginx

Nginx is experiencing increased support as more users adopt it for its performance profile, but it still has some catching up to do in a few key areas.

In the past, it was difficult to find comprehensive English-language documentation for Nginx because most of the early development and documentation were in Russian. As interest in the project has grown, the documentation has been filled out, and there are now plenty of administration resources on the Nginx site and through third parties.

In regard to third-party applications, support and documentation are becoming more readily available, and package maintainers are beginning, in some cases, to offer a choice between auto-configuring for Apache or Nginx. Even without packaged support, configuring Nginx to work with alternative software is usually straightforward, so long as the project itself documents its requirements (permissions, headers, etc.).

Using Apache and Nginx Together

After going over the benefits and limitations of both Apache and Nginx, you may have a better idea of which server is better suited to your needs. However, many users find that it is possible to leverage each server's strengths by using them together.

The conventional configuration for this partnership is to place Nginx in front of Apache as a reverse proxy. This allows Nginx to handle all requests from clients, taking advantage of Nginx's fast processing speed and ability to handle large numbers of connections concurrently.

For static content, which Nginx excels at, the files will be served quickly and directly to the client. For dynamic content, such as PHP files, Nginx will proxy the request to Apache, which can process the request and return the rendered page. Nginx can then pass the content back to the client.
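
A minimal sketch of this arrangement in Nginx might look like the following, assuming Apache is listening locally on port 8080; the hostname, paths, and port are all assumptions:

    server {
        listen 80;
        server_name example.com;
        root /var/www/example;

        # Serve static files directly; fall back to Apache for anything else
        location / {
            try_files $uri $uri/ @apache;
        }

        location @apache {
            proxy_pass http://127.0.0.1:8080;     # Apache backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }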

This setup works well for many people because it allows Nginx to function as a sorting machine. It handles all of the requests it can and passes on the ones that it has no native ability to serve. By reducing the number of requests the Apache server is asked to handle, we can alleviate some of the blocking that occurs when an Apache process or thread is occupied.

This configuration also allows you to scale out by adding additional backend servers as necessary. Nginx can easily be configured to pass requests to a pool of servers, increasing this configuration's resilience to failure and its performance.

Conclusion

As you can see, both Apache and Nginx are powerful, flexible, and capable. Deciding which server is best for you is largely a function of evaluating your specific requirements and testing with the patterns that you expect to see.

There are differences between these projects that have a very real impact on raw performance, capabilities, and the implementation time needed to get each solution up and running. However, these often result from a series of tradeoffs that should not be casually dismissed. In the end, there is no one-size-fits-all web server, so use the solution that best aligns with your objectives.
