Archive

Archive for the ‘nginx’ Category

nginx rate limiting with a combination of IP and user agent

September 7th, 2016 No comments

Here's a quick and dirty way to use IP-based rate limiting (very common) but override it for specific user agents. Basically, this is just a method of chaining geo {} and map {} blocks together - you recycle the variable from each block as the "default" value of the next one.

# whitelisted IP ranges - will not have limits applied
geo $geo_whitelist {
  default 0;
  1.2.3.4 1;
  2.3.4.5/24 1;
}

# whitelisted user agents - will not have limits applied
map $http_user_agent $whitelist {
  default $geo_whitelist;
  ~*(google) 1;
}

# if whitelist is 0, put the binary IP address in $limit so the rate limiting has something to use
map $whitelist $limit {
  0 $binary_remote_addr;
  1 "";
}

limit_req_zone $limit zone=perip:30m rate=1r/s;
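
To actually apply the zone, a limit_req directive still needs to reference it from the server or location you want protected. Here's a minimal sketch (the burst value is just an illustration - tune to taste); whitelisted requests end up with an empty key, and nginx never counts empty keys against the zone:

server {
  listen 80;
  server_name example.com;

  location / {
    # non-whitelisted clients are limited per IP; whitelisted ones pass through untouched
    limit_req zone=perip burst=5;
  }
}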

Categories: nginx

nginx goes "Prime time"

July 18th, 2011 No comments

Today, Igor announced that he is establishing a company behind nginx, saying that he and a few others are going to focus more seriously on development, support, and so on. This is exciting, as it shows how serious the project has become. nginx has been regularly updated for a long time, and support has typically been very reliable, but at the end of the day the internals of nginx development, upcoming features, and such have largely been a mystery. It sounds like now we'll all benefit from increased transparency and much more focus on end users and support for developers, in a more structured and formal way.

I, for one, am excited to see things like a roadmap and an easier way for contributors to help out.

Read Igor's post here.

Categories: nginx

nginx hits 1.0(.0)

April 12th, 2011 No comments

nginx, my webserver du jour, announced its 1.0.0 launch today. Along with the announcement, Igor has also posted a public SVN repository to grab the code from (which seems to include a lot of historical versions, if not the whole history!)

He hasn't set up a web-based SVN option yet (cough, cough - I bet he would have if nginx supported SVN's DAV needs :)), but you can hit it up using a native SVN client: svn://svn.nginx.org - note that I would only grab "/nginx/trunk", as otherwise you'll pull down a few years' worth of code, if not the entire nine years.

At first I thought the 1.0.0 announcement was a joke - it makes the project sound like it just got started. But remember, he's always used pre-1.0 version numbers in the past.

Update:

MTecknology maintains a PPA and has already built a package for 1.0.0. The quick and dirty method to update:

apt-get install python-software-properties
add-apt-repository ppa:nginx/stable
apt-get update
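
After the update, actually pulling the new build down is just the usual install (or upgrade, if nginx is already installed):

apt-get install nginx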

The PPA provides a handful of new packages; by default the "nginx" package is a dummy package that pulls in "nginx-full", and it should be the only one you really need to care about.

nginx-dbg - Debugging symbols for nginx
nginx - small, but very powerful and efficient web server and mail proxy
nginx-common - small, but very powerful and efficient web server (common files)
nginx-doc - small, but very powerful and efficient web server (documentation)
nginx-extras - nginx web server with full set of core modules and extras
nginx-extras-dbg - Debugging symbols for nginx (extras)
nginx-full - nginx web server with full set of core modules
nginx-full-dbg - Debugging symbols for nginx (full)
nginx-light - nginx web server with minimal set of core modules
nginx-light-dbg - Debugging symbols for nginx (light)
Categories: nginx

Setting PHP INI parameters from nginx

February 11th, 2011 No comments

A little-known feature of PHP-FPM 5.3.3+ (since it was integrated into PHP core) is that you can actually define PHP INI parameters inside your nginx configuration. This bridges some of the gap left by not having Apache's "php_value" and "php_admin_value" available. It's not user-overridable, but things like htscanner or newer PHP 5.3 features (such as per-directory .user.ini files) could address some of that.

This is cool - I wasn't sure whether it had actually made it into a build or not; I remembered it being mentioned or discussed, but sure enough, Jérôme coded it and got it into the FPM that ships in core. So it is available to all, and he welcomes feedback - see this thread on the nginx mailing list.

Due to limitations in the FastCGI protocol, you have to pass all the parameters you want as a single string, separated by "\n". It's not the cleanest-looking configuration, but that's how it is for now. I spitballed a different approach, but at the moment I believe it is unlikely to be adopted.

If you're like me, you're probably looking for the code samples almost immediately. Here are your examples. Note that PHP_VALUE and PHP_ADMIN_VALUE are both legitimate keys; the difference is that PHP_ADMIN_VALUE does not allow the value to be overridden using ini_set() - see the manual for more information.

Single value:

fastcgi_param PHP_VALUE sendmail_from=user@example.com;

Multiple values, manually line-broken:

fastcgi_param PHP_VALUE "sendmail_from=user@example.com
precision=42";

Multiple values, with the \n in there to clearly call it out:

fastcgi_param PHP_VALUE "sendmail_from=user@example.com \n precision=42";
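
For a fuller picture, here's a sketch of how these might sit in a typical PHP location block - the backend address and the specific INI settings are just placeholders, not anything from the examples above:

location ~ \.php$ {
   include fastcgi_params;
   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
   # settings the application may still change at runtime via ini_set()
   fastcgi_param PHP_VALUE "memory_limit=128M
date.timezone=UTC";
   # settings that are locked down - ini_set() cannot override these
   fastcgi_param PHP_ADMIN_VALUE "open_basedir=/var/www";
   fastcgi_pass 127.0.0.1:9000;
}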

More information is available in the feature request on the PHP bug tracker.

Categories: nginx, PHP, PHP-FPM

Front Controller Patterns and nginx

August 20th, 2010 No comments

It's no secret I'm a fan of nginx. Probably no secret that I'm a fan of front controller pattern design as well.

Why? I find it to be the most universal and easy to support. I create things from scratch using it, and the most popular software out there uses it - Drupal, Joomla, WordPress - and frameworks including the Zend Framework.

A lot of people cook up crazy nginx configurations for this stuff, when all it really takes for at least a standard setup is a single line of configuration code. However, I just found out something that was sitting in front of me this entire time.

BAD: I used to advise people to use this:

try_files $uri $uri/ /index.php?page=$uri$is_args$args;

BETTER: When actually, even simpler, you can use this:

try_files $uri $uri/ /index.php?page=$request_uri;

So my whole blog post here is slightly wrong. If you use $request_uri, you don't have to remember the $args. It's already there. 🙂

Note: different packages use different parameters. CMS Made Simple uses "page" and Drupal uses "q". Some software, like WordPress, doesn't technically need a parameter at all - it can fall back to reading PATH_INFO or REQUEST_URI instead.

In which case, all you need is:

try_files $uri $uri/ /index.php;

Don't care about directories? Drop the "$uri/" too and save some stat calls - if you don't need them 🙂
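
Putting it all together, a minimal front controller setup might look something like this - the PHP-FPM address, paths, and server name are assumptions, so adjust for your own backend:

server {
   listen 80;
   server_name example.com;
   root /var/www/example.com;
   index index.php;

   location / {
      # anything that isn't a real file or directory goes to the front controller
      try_files $uri $uri/ /index.php;
   }

   location ~ \.php$ {
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_pass 127.0.0.1:9000;
   }
}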

Categories: nginx

The magic that is... nginx

August 18th, 2010 2 comments

I've told this story countless times, but I've never publicly documented it. It happened over a year ago, but I feel obligated to share it. nginx is the main reason for the success that day, and it deserves the bragging rights.

We had an upcoming product release, and another team was handling all the communication, content, etc. I was simply the "managed server" administrator type person: I helped set up nginx and PHP-FPM, configured packages on the server, stuff like that.

The other team managed the web applications and such. I had no clue what we would be in for that day.

There is a phrase, "too many eggs in one basket" - well, this was our first system and we were using it for many things; it hadn't gotten to the point where we needed to split things up or worry about scaling. Until this day, of course.

To me, the planning could have been a bit better, with the ISO image we were providing from the origin being staged on other mirrors, using torrents to offload traffic to P2P, etc. However, that wasn't done proactively, only reactively.

The story

Launch day occurs. I get an email about someone unable to SSH to a specific IP. I check it out - the IP is unreachable, but the server is up. I submit a ticket to the provider, and they let me know why - apparently the server had triggered their DDoS mitigation system once it hit something like ~500mbit; it was automatically flagged as probably being under attack, and the IP was blackholed.

Once we informed them this was all legit, they instantly worked to put us onto a switch that was not DDoS protected, and we resumed taking traffic. This was all done within 15 or 20 minutes, if I recall. I've never seen anything so smoothly handled - no IP changes, completely transparent to us.

I believe the pocket of no traffic (seen in the graph below) was when we were moved off their normal network monitoring and into a Cisco Guard setup. We were definitely caught off guard; like I said, I knew we'd get some traffic, but not enough to fill the entire gigabit port. Not a lot of providers would be so flexible about this and handle it with such grace. There were some reports of the site being slow, and that was simply because the server itself had too much going on: PHP was trying to handle all the Drupal traffic, and during the night the disk I/O sat at 100% for a long period of time. Oh yeah - since this was the origin, the servers mirroring us had to start pulling stuff from us too 🙂

Luckily we're billed by the gigabyte there, not by 95th percentile like most places, or this would have been one heck of a hosting bill. We wound up rerouting the bandwidth fast enough that we weren't charged ANY overage for it!

All in all, without nginx in the mix, I doubt this server would have been able to take the pounding. There was no reverse proxy cache, no Varnish, no Squid, nothing of that nature. I am not sure Drupal was even set up to use any memory caching, and I don't believe memcached was available. There were a LOT of options left to reduce load - the main one was simply cutting bandwidth usage to open up space in the pipe, which was eventually done by moving the ISO file off the origin server and pushing it to a couple of mirror sites. Things calmed down then.

However, it attests to how amazingly efficient nginx is - the entire experience wound up taking only 60-something megabytes of RAM for nginx.

Want the details? See below.

The hardware

  • Intel Xeon 3220 (single CPU, quad-core)
  • 4GB RAM
  • single 7200RPM 250GB SATA disk, no RAID, no LVM, nothing
  • Gigabit networking to the rack, with a 10GbE uplink

The software - to the best of my memory (remember, all on a single box!)

  • nginx (probably 0.7.x at that point)
    • proxying PHP requests to PHP-FPM with PHP 5.2.x (using the patch)
      • Drupal 6.x - as far as I know, no advanced caching, no memcached, *possibly* APC
    • proxying certain hosts for CGI requests to Apache 2.x (not sure if it was 2.2.x or 2.0.x)
    • This server was also a yum repo for the project, serving through nginx
  • PHP-FPM - for Drupal, possibly a couple other apps
  • Postfix - the best MTA out there 🙂
    • I believe amavisd/clamav/etc. was involved for mail scanning
    • Integration with mailman, of course
  • MySQL - 5.0.x, using MyISAM tables by default, I don't believe things were converted to InnoDB
  • rsync - mirrors were pulling using the rsync protocol

The provider

  • SoftLayer - they just rock. Not a paid placement. 🙂

The stats

nginx memory usage
During some of that time... only needed 60 megs of physical RAM. 240 megs including virtual. At 2200+ concurrent connections... eat that, Apache.

root     13023  0.0  0.0  39684  2144 ?        Ss   03:56   0:00 nginx: master process /usr/sbin/nginx
www-data 13024  2.0  0.3  50148 14464 ?        D    03:56   9:30 nginx: worker process
www-data 13025  1.1  0.3  51052 15256 ?        D    03:56   5:38 nginx: worker process
www-data 13026  1.3  0.3  50760 15076 ?        D    03:56   6:13 nginx: worker process
www-data 13027  1.3  0.3  50584 14900 ?        D    03:56   6:22 nginx: worker process

nginx status (taken at some random point)

Active connections: 2258
server accepts handled requests
711389 711389 1483197
Reading: 2 Writing: 2040 Waiting: 216

Bandwidth (taken from the provider's switch)

Exceeded Bits Out: 1001.9 M (Threshold: 500 M)
Auto Manage Method: FCR_BLOCK
Auto Manage Result: SUCCESSFUL

Exceeded Bits Out: 868.1 M (Threshold: 500 M)
Auto Manage Method: FCR_BLOCK
Auto Manage Result: SUCCESSFUL

To give you an idea of the magnitude of growth, this is how many gigabytes the server pushed on a normal day:

  • 2009-05-18 155.01 GB
  • 2009-05-17 127.48 GB
  • 2009-05-16 104.21 GB
  • 2009-05-15 152.42 GB
  • 2009-05-14 160.12 GB
  • 2009-05-13 148.6 GB

On launch day and the spillover into the next day:

  • 2009-05-19 2036.37 GB
  • 2009-05-20 2481.87 GB

The pretty pictures

Click for larger versions!

Hourly traffic graph
nginx rocks!
(Note: I said 600M; apparently the threshold on their router says 500M)

Weekly traffic graph
nginx rocks!

The takeaway

Normally I would never think a server could take that much of a pounding while also having to service so many different workloads. Perhaps if it were JUST a PHP/MySQL server, or JUST a static file server - but no, we had two webservers, a mailing list manager, Drupal (not the most optimized PHP software), etc. The server remained responsive enough to service requests, on purely commodity hardware.

Categories: nginx, PHP, PHP-FPM

nginx and Go Daddy SSL certificates

July 15th, 2010 1 comment
  1. Generate the CSR:
    openssl genrsa 2048 > yourhost.com.key
    openssl req -new -key yourhost.com.key > yourhost.com.csr
    
  2. Enter whatever you want - you NEED the "Common Name"; everything else is not really required for it to work.
    Country Name (2 letter code) [AU]:US
    State or Province Name (full name) [Some-State]:.
    Locality Name (eg, city) []:.
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:Something Here
    Organizational Unit Name (eg, section) []:.
    Common Name (eg, YOUR name) []:yourhost.com
    Email Address []:.
    
    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:
    An optional company name []:
    
  3. Paste the CSR into Go Daddy, get back the .crt file
  4. Combine the cert + Go Daddy chain:
    cat yourhost.com.crt gd_bundle.crt > yourhost.com.pem
  5. Lastly, in nginx.conf:
    ssl_certificate /etc/nginx/certs/yourhost.com.pem;
    ssl_certificate_key /etc/nginx/certs/yourhost.com.key;
    

Additionally, I have these SSL tweaks, which seem to maintain a better SSL experience, pass McAfee Secure's SSL checks, etc.:

ssl on;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers ALL:-ADH:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
ssl_session_cache shared:SSL:10m;
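
For context, here's a rough sketch of how all the pieces fit into a server block - the paths and hostname are placeholders:

server {
   listen 443;
   server_name yourhost.com;

   ssl on;
   ssl_certificate /etc/nginx/certs/yourhost.com.pem;
   ssl_certificate_key /etc/nginx/certs/yourhost.com.key;
   ssl_protocols SSLv3 TLSv1;
   ssl_ciphers ALL:-ADH:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
   ssl_session_cache shared:SSL:10m;

   location / {
      root /var/www/yourhost.com;
   }
}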
Categories: nginx

PHP-FPM and nginx upstart scripts

May 21st, 2010 4 comments

Upstart is becoming the de facto standard for everything in Ubuntu, and I do enjoy its process management and re-spawning capabilities.

(Actually, before PHP-FPM I used to use upstart jobs to su - $user php-cgi -b $port :))

These are VERY simple but effective scripts, and they could be beefed up to be more intelligent (chaining nginx to start after PHP-FPM, for example - however, if you do not need PHP, that's a useless chain, so I kept it simple; I suppose you could just add a short delay before starting nginx instead...).

Note: make sure PHP-FPM is set to daemonize = yes.

Second note: this works for PHP 5.2.x w/ the PHP-FPM patch, on Ubuntu Lucid Lynx (10.04) - anything else, YMMV. I am not using PHP-FPM w/ PHP 5.3 yet, since I have no environments that I know will support 5.3 code. When I finally get one, I'll do the same thing there.

/etc/init/nginx.conf:

description "nginx"

start on (net-device-up and local-filesystems)
stop on runlevel [016]

expect fork
respawn
exec /usr/sbin/nginx

/etc/init/php-fpm.conf:

description "PHP FastCGI Process Manager"

start on (net-device-up and local-filesystems)
stop on runlevel [016]

expect fork
respawn
exec /usr/local/bin/php-cgi --fpm --fpm-config /etc/php-fpm.conf

Once you've done this, you can remove the /etc/init.d/php-fpm and /etc/init.d/nginx scripts along with the /etc/rc?.d/[K,S]??php-fpm and /etc/rc?.d/[K,S]??nginx symlinks - Upstart now takes care of all of that.
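
From that point on, the jobs are controlled with the standard Upstart tools, for example:

# start, check on, and restart the services
start php-fpm
start nginx
status nginx
restart nginx

# or see everything Upstart is tracking
initctl list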

Feel free to comment and leave better tips, tricks and ideas!

Categories: Development, nginx, PHP, PHP-FPM

SPNEGO For nginx - a start, at least

January 17th, 2010 4 comments

I've been posting on the nginx mailing lists for a while that I've had a developer working on a SPNEGO (Kerberos/GSSAPI/etc.) authentication module for nginx. I never produced any code, though, because I kept waiting for the module to mature a bit more before giving it out.

For now though, it might just be best to throw out there what I have, explain my findings, and let the community start testing it, hacking it, improving it, etc.

Download
ngx_http_auth_sso_module-0.3.zip

Opens for the developer still

  • He said he'd like to remove the dependency on the bundled spnegohelp library (apparently it's either not needed or can be satisfied by a system package)
  • Needs some more documentation

My questions, comments

  • Should it be called "mod_auth_sso" or something like "mod_auth_gssapi"? I believe Apache's equivalent has "gssapi" in the name somewhere. It was hard to determine which was the most up-to-date version - mod_auth_kerb, modgssapache, etc.
  • I still cannot verify this against my corporate domain setup. I did have to make a minor tweak in the source to change the principal name it was using (see below), and I still got access denied. I have no clue whether the module is to blame, the machine's domain setup, or something else.

A possible required tweak... around line 474 or so in ngx_http_auth_sso_module.c

- host_name = r->headers_in.host->value;
+ host_name = alcf->realm;

How to compile - yes, it's a bit messy, mainly due to the spnegohelp library dependency 🙂

wget http://sysoev.ru/nginx/nginx-0.8.31.tar.gz
tar xvfz nginx-0.8.31.tar.gz
cd nginx-0.8.31
./configure --conf-path=/etc/nginx/nginx.conf --prefix=/usr --user=www-data --group=www-data --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-http_stub_status_module --with-http_gzip_static_module --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --with-http_flv_module --with-http_ssl_module --with-http_dav_module --with-http_realip_module --with-http_xslt_module --with-debug --add-module=/usr/src/build/ngx_http_auth_sso_module-current --with-ld-opt="-L/usr/src/build/ngx_http_auth_sso_module-current/spnegohelp"
cp -f /usr/src/build/ngx_http_auth_sso_module-0.3/spnegohelp/libspnegohelp.so /usr/lib64/libspnegohelp.so
... make, make install, whatever ...

I copied the libspnegohelp.so into the system so nginx can use it without any special runtime LD_LIBRARY_PATH crap.

How to configure (again, I could not validate this 100%)

location /test {
   auth_gss on;
   auth_gss_realm YOUR.KERBEROS.REALM;
   auth_gss_keytab /etc/krb5.keytab;
   auth_gss_service_name YOURMACHINENAMEIBELIEVE;
}
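
One sanity check worth doing before blaming the module (assuming the MIT Kerberos client tools are installed) is confirming that the keytab actually contains the HTTP service principal and that the KDC will issue tickets for it - the principal name below is obviously a placeholder:

# list the principals stored in the keytab
klist -k /etc/krb5.keytab

# after kinit'ing as a test user, ask the KDC for a service ticket
kvno HTTP/yourmachine.example.com@YOUR.KERBEROS.REALM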

Conclusion
Your mileage will definitely vary, but hopefully some people with more experience in Kerberos, C, nginx modules, or anything else helpful can pick it apart. I will post updates as I get them, and I do want to put this on GitHub or something... unless someone else wants to take ownership of it - someone who will actively maintain it, keep it up to date with nginx internals, and maintain the GitHub (or whatever) repository for it.

This was self-funded personally, with no company sponsorship or anything; part of the terms of the project was also to keep it BSD licensed. Feel free to do whatever you want with it. If you want to send me any money to reimburse the cost, I'll never say no to that - paypal AT mike2k.com. Or pay a developer who can ensure it works end-to-end, if something is still missing.

Sadly, it does require the machine to be on the domain; I was hoping I could get away without that. I confirmed this with Sam Hartman, who was a chief technologist at the MIT Kerberos Consortium - you can't really get more knowledgeable than that. I would have hired Sam for the project, but it would have been too expensive. However, if a company is willing to take this initial code and have Sam add his magic to it, he knows C and was able to give me a quick estimate after looking at how nginx modules work. He could possibly do it more cheaply now that the initial work is done, and he would definitely be able to confirm that all the logic is intact.

Thanks to YoctoPetaBorg at RentACoder.com for the initial work (and for hopefully soon finishing up the last bits of this :p) - it will be exciting to use this at work and to have a fully functional module out there that people find useful.

Categories: nginx

Transifex + Django + nginx on Ubuntu

November 1st, 2009 No comments

You can probably use this for plain Django + nginx on Ubuntu as well; it should work the same for any Django-based package - just change the Transifex-specific bits (like /site_media and such).

Dependencies:

  • python-django package from Ubuntu (currently Jaunty's 1.0.2-1ubuntu0.1)
  • Transifex installed to /home/transifex/web/transifex, from source
  • Transifex set up to answer over FastCGI on port 8080 (one way to do that is sketched after this list)
  • Tested on nginx 0.7.x and 0.8.x compiled from source
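
For reference, the usual way back then to get a Django project answering over FastCGI on a port like that was manage.py's runfcgi command (it needs the flup package installed; the user and exact invocation here are just illustrative):

su - transifex
cd /home/transifex/web/transifex
python manage.py runfcgi method=prefork host=127.0.0.1 port=8080 pidfile=/home/transifex/transifex.pid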

/etc/nginx/nginx.conf:

server {
   listen 80;
   server_name foo.com;
   include /etc/nginx/confs/defaults.conf;
   location / {
      fastcgi_pass 127.0.0.1:8080;
      include /etc/nginx/confs/django.conf;
   }      
   location /site_media {
      root /home/transifex/web/transifex;
      include /etc/nginx/confs/expires.conf;
   }
   location /media {         
      root /usr/share/python-support/python-django/django/contrib;
      include /etc/nginx/confs/expires.conf;
   }
}

/etc/nginx/confs/defaults.conf:

location ~ /\.ht {
   deny all;
}

location ~ \.svn {
   deny all;
}

log_not_found off;
server_name_in_redirect off;

/etc/nginx/confs/django.conf:

# http://www.rkblog.rk.edu.pl/w/p/django-nginx/
# http://code.djangoproject.com/wiki/ServerArrangements
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_pass_header Authorization;
fastcgi_intercept_errors off;

/etc/nginx/confs/expires.conf:

location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
   expires max;
   access_log off;
}
Categories: nginx