Nginx Vs Apache in AWS – Updated


Following your comments, we are publishing the data from tests conducted with "workers = 2" in Nginx. Other optimizations were excluded because the point of this benchmark was to find the correct sizing of the AWS resources used in this HA structure: the type of EC2 instances, the size of the RDS databases and the need for PIOPS on EBS volumes. With that in mind, we kept the test fair by using only the production-ready versions of the web servers and by avoiding external "poisoning" such as PHP caching or excessive tuning. There are plenty of posts on the net explaining how to use APC, iozone or Nginx's micro-caching; there is no need for another one.

We also want to thank you for your valuable comments, which help us improve the blog's content.

Yet another Nginx Vs Apache comparison? Yes, but you should keep reading, because we ran all the benchmarks in the AWS Cloud using all the resources suited to this test: ELB, EC2, EBS, RDS and ElastiCache. Moreover… Nginx wasn't always the winner.

Benchmark Scenario:

We wanted to measure the performance differences between Nginx (v. 1.2.6) and Apache (v. 2.2.23) serving a vanilla WordPress 3.5.1, running on 2 EC2 instances sharing one GlusterFS volume (EBS-based), one RDS (small) instance and one ElastiCache small instance, all behind an Elastic Load Balancer. All tests were conducted using php-fpm, Apache's mpm_worker and Nginx configured to use only one worker, with no particular configuration or tuning.

Nginx Vs Apache in AWS: infrastructure schema
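
For reference, here is a sketch of how the shared GlusterFS volume could have been created. The brick hostnames match the volume info posted in the comments below; the volume name is an assumption, since the post doesn't state it:

# Hypothetical creation of the shared volume (volume name assumed).
gluster volume create wp-vol replica 2 transport tcp bmng-1:/export bmng-2:/export
# Options as reported in the comments.
gluster volume set wp-vol performance.cache-size 256MB
gluster volume set wp-vol auth.allow '*'
gluster volume start wp-vol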

These are the relevant parts of Apache and Nginx configurations:

httpd.conf

<IfModule worker.c>
# mpm_worker settings used for the benchmark
StartServers          4     # child processes created at startup
MaxClients          400     # maximum simultaneous requests served
MinSpareThreads      25     # minimum idle threads kept ready
MaxSpareThreads      75     # maximum idle threads before reaping
ThreadsPerChild      25     # threads per child process
MaxRequestsPerChild 4000    # requests served before a child is recycled
</IfModule>
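
One detail the post doesn't show is how Apache was connected to php-fpm (a question also raised in the comments below). A plausible sketch, assuming the common mod_fastcgi wiring of the time; the handler name and paths here are illustrative, not taken from the original setup:

<IfModule mod_fastcgi.c>
# Hypothetical wiring: hand .php requests to an external php-fpm pool.
# Requires mod_actions and mod_alias; paths and handler name are assumptions.
AddHandler php-fcgi .php
Action php-fcgi /php-fcgi-handler
Alias /php-fcgi-handler /var/www/cgi-bin/php-fcgi-handler
FastCgiExternalServer /var/www/cgi-bin/php-fcgi-handler -host 127.0.0.1:9000
</IfModule>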

nginx.conf

user              nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log  info;
pid        /var/run/nginx.pid;

# Events Module 
events {
    worker_connections  1024;
}

[...]

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
[...]
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    keepalive_requests 100;
    keepalive_timeout  65 20;
[...]
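
The fastcgi section is omitted from the excerpt above (another point raised in the comments). As a minimal sketch, WordPress is typically handed to php-fpm with a server block along these lines; the document root and the 127.0.0.1:9000 backend are assumptions, not taken from the original configuration:

server {
    listen 80;
    server_name www.mywordpressbenchmark.com;
    root /var/www/html;    # assumed document root
    index index.php;

    location / {
        # Standard WordPress permalink handling.
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        # Hand PHP requests to php-fpm (assumed TCP backend).
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}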

We used ApacheBench (version 2.3) to run the tests:

ab -n $(($concurrency*1000)) -c $concurrency http://www.mywordpressbenchmark.com/

Tests with a concurrency of 50 and 100 were conducted from different sites (25 concurrent users per site), both to force the ELB to distribute the load across the instances and to avoid triggering Amazon's anti-DDoS checks.
The web servers were Nginx 1.2.6 and Apache 2.2.23, running on the latest x86_64 Amazon Linux AMI, release 2012.09.
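
As a rough sketch, the whole series of runs can be scripted like this (the list of concurrency levels is an assumption; the post explicitly names only 50 and 100):

#!/bin/sh
# One ApacheBench run per concurrency level: 1000 requests per concurrent user.
for concurrency in 1 5 10 25 50 100; do
    ab -n $((concurrency * 1000)) -c "$concurrency" http://www.mywordpressbenchmark.com/
done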

The results:

m1.small EC2 instance: 1.7 GiB memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit)

m1.small: phpinfo()

m1.small: WordPress 3.5

m1.small: WordPress 3.5 and WP Super Cache 1.2

m1.small: WordPress 3.5 + HyperCache with Memcached (AWS ElastiCache)

c1.medium EC2 instance: 1.7 GiB memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each)

c1.medium: phpinfo()

c1.medium: WordPress 3.5

c1.medium: WordPress 3.5 and WP Super Cache 1.2

c1.medium: WordPress 3.5 + HyperCache with Memcached (AWS ElastiCache)

m1.large EC2 instance: 7.5 GiB memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)

m1.large: phpinfo()

m1.large: WordPress 3.5

m1.large: WordPress 3.5 and WP Super Cache 1.2

m1.large: WordPress 3.5 + HyperCache with Memcached (AWS ElastiCache)

Updated tests with Nginx workers set according to CPU cores:
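
The only change from the previous runs is the worker count in nginx.conf, matched to the two virtual cores of the c1.medium and m1.large instances. On Nginx 1.2.5+ / 1.3.8+ the value auto can be used instead, as Eric notes in the comments:

worker_processes  2;
# On Nginx >= 1.2.5 / 1.3.8 the core count can be autodetected:
# worker_processes  auto;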

Testing c1.medium EC2 instance

Testing m1.large EC2 instance

Conclusions:

As we can see from the graphs, on small instances Nginx always prevails, especially at high concurrency, where the event-driven Nginx worker shows its power on a single core with an increment between 49% and 201%. On multi-core instances Nginx sometimes suffers a small gap on "pure" WordPress (due to the workers = 1 setting), but with file caching or memcached the results always favor Nginx, with a maximum increment of 268%. Beyond the Nginx Vs Apache benchmark, we can clearly see that caching WordPress is imperative, and a plugin that correctly uses Memcached is a must-have.

In drawing our conclusions we always consider the highest concurrency (100) when quoting percentages, because you want your site to be that busy… right?

Conclusions update:

As we can see from the data in the last tables, using all CPU cores for Nginx is not always optimal: with pure PHP code and high concurrency, requests/sec shows significant drops.

References:

Nginx web server: http://nginx.org/
Apache HTTP Server: http://httpd.apache.org/
WordPress CMS: http://wordpress.org/
WP Super Cache plugin: http://wordpress.org/extend/plugins/wp-super-cache/
Hyper Cache plugin: http://wordpress.org/extend/plugins/hyper-cache/
GlusterFS: http://www.gluster.org/
Amazon Web Services: EC2, Elastic Block Store, Elastic Load Balancing, RDS, ElastiCache.

29 Responses to “Nginx Vs Apache in AWS – Updated”

  1. skyborne

    I have to ask, how exactly was PHP connected to Apache? I haven’t bothered to study mod_fastcgi since it didn’t compile against v2.4, and mod_fcgid shows some limitations that imply it will always be slower than anything that handles php-fpm effectively. Although it’s not orders of magnitude slower than nginx/php-fpm when managing php-cgi.

    Another good question is, what sort of backend PHP concurrency was actually visible on the servers during the tests? In my tests, 10-32 concurrent clients are handled by only 7-9 simultaneous php-cgi processes on a quad-core server (+ gigabit network + beefier quad-core client) even though the pool’s process limit was 16. This was also independent of php page complexity–from hello-world to a serendipity weblog, backend concurrency was surprisingly limited.

    I’d also be really interested in latency statistics–IME nginx has marginally worse baseline performance but retains much better service as concurrency increases.

  2. Aleksander Hristov

    1. turn off keep alive
    2. increase worker_connections
    3. increase worker_processes
    4. use epoll in events {}
    5. where is your fastcgi configuration?

    What's the point of all that comparison without using an opcode cacher?

    • Aleksander Hristov

      I forgot… ulimit / sysctl dump?
      You really think anyone can assess anything from your benchmarks without full information?

      • Miles S.

        I agree, there are a ton of performance-changing configuration options that weren't included in this. It'd be nice to see this same test completed with a more complete configuration.

        • ruggero

          The point of this benchmark was to find out the limits of this kind of setup in AWS and check the benefits of vertical scaling; performance tuning would have "distorted" the results.

    • Pinky

      “use epoll in events {}”
      Nginx is able to autodetect the most efficient method (see the docs).

  3. Eric

    Granted, we are testing 1 worker for these benchmarks, but if not, and you're running Nginx 1.2.5+ or 1.3.8+, this will spawn a worker for each detected CPU core:

    worker_processes auto;

    • adrian

      Gluster setup:

      Type: Replicate
      Status: Started
      Number of Bricks: 2
      Transport-type: tcp
      Bricks:
      Brick1: bmng-1:/export
      Brick2: bmng-2:/export
      Options Reconfigured:
      auth.allow: *
      performance.cache-size: 256MB

  4. chrisweb

    Interesting tests. I wonder if there is a difference between Apache and Nginx in the time needed to deliver a page to the client. Of course it is important how many pages per second a server can serve, but is the time needed to return the content requested by a browser different?

  5. Ram

    I am interested in seeing your php-fpm config for nginx, because in the case of pure WordPress the request is passed off to php-fpm, so it can only be as fast as php-fpm is and how it is configured.

  6. ken

    Why wasn’t Apache 2.4.x used? Supposedly, that’s the version that is competitive against nginx (nginx 1.3 hasn’t been released unfortunately)…

  7. JAILLET

    As in comment #9, it would be interesting to have results with Apache 2.4.x.
    The new event Multi-Processing Module (MPM) should surpass the worker MPM's performance.

    It is designed to allow more requests to be served simultaneously by passing off some processing work to supporting threads, freeing up the main threads to work on new requests.

  8. Imran Ahmed

    What your article does not show is the load on the machines. If it did, it would show Nginx keeping the server load low; overloaded machines are uncontrollable. It would also be worth showing what happens during a peak, with something like a 10x increase in traffic. Nginx is more responsive during peak and off-peak. A fuller benchmark should include response times too.

    • gerard

      It’s not about power on a single machine, but scalability. Keep in mind that you can have a fleet of instances and scale it up and down according to your needs.

  9. Hayden James

    Thanks! Would be interesting to see benchmark comparison vs DigitalOcean’s nodes at a similar hourly “price point”. DO FTW

    • ruggero

      Hi JR, as far as I know HyperCache does not support memcached; during the test we used the Object Cache plugin.

