Getting set up with nginx, php, and postgresql on ubuntu

So I’ve been working off a BeagleBone Black for the last couple of months as my primary webserver and learning arena. It’s been fun getting to know linux a little better after all these years on a Mac. And while the BBB has served me well in learning to host my own server and site, its ARM processor has been somewhat restrictive. I wasn’t able to set up MongoDB, and I couldn’t find a version of php5.5. There may be solutions out there, and if I were dedicated enough maybe I could even concoct them, but I figure, why not just move the whole thing to the cloud anyway?

My main web project is a wiki-like platform whose goal is a better user experience than the likes of Wikipedia. Interacting with information on the internet could be made far better than it is today, and that’s what I’m striving for with YanlJ. YanlJ is currently built with PHP against a postgresql database on the BBB, served up by nginx. Let’s move the whole stack to the cloud.

I had already set up an account with Amazon, and I have ssh access to my EC2 instance. Great. Let’s grab nginx and build it from source so we can change the server response header.

curl -O http://nginx.org/download/nginx-1.7.1.tar.gz
tar zxvf nginx-1.7.1.tar.gz
cd nginx-1.7.1

There are a couple of files we have to change. First,

vi src/http/ngx_http_header_filter_module.c

Find the line that reads:

static char ngx_http_server_string[] = "Server: nginx" CRLF;

and change the ‘nginx’ to whatever you want:

static char ngx_http_server_string[] = "Server: YanlJ" CRLF;

Next,

vi src/core/nginx.h

find the line that reads

#define NGINX_VER          "nginx/" NGINX_VERSION

and again change the ‘nginx’ to whatever your heart desires. Finally, feel free to change the NGINX_VERSION number to something a tad more magical.
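
Something like this, for example (the version string here is made up; pick whatever you like):

#define NGINX_VERSION      "13.3.7"
#define NGINX_VER          "YanlJ/" NGINX_VERSION
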
Now, we’re nearly ready to build. Nginx needs PCRE (the Perl Compatible Regular Expressions library) for URL rewrites, so let’s grab that first.

sudo apt-get update
sudo apt-get install libpcre3 libpcre3-dev

We’re going to want to build nginx with https support, so we’d better grab the OpenSSL development library as well.

sudo apt-get install libssl-dev
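
If the instance is fresh and doesn’t have a compiler toolchain yet, grab that too:

sudo apt-get install build-essential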

Now, we’re ready to build:

./configure --with-http_ssl_module
make
sudo make install

Since I didn’t specify a prefix, nginx was installed under /usr/local/nginx. Let’s boot up the server:

sudo /usr/local/nginx/sbin/nginx
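
A quick sanity check from the instance itself should show our custom Server header (look for Server: YanlJ in the output):

curl -I http://localhost/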

If we visit the EC2 IP address in the browser now, we should see the nginx welcome page. Of course, this assumes we’ve already opened port 80 to inbound traffic in the instance’s security group in the Amazon EC2 console. We can shut down the server with

sudo /usr/local/nginx/sbin/nginx -s stop

Right. Moving along now to PHP. Getting nginx and PHP to work together can be a little tricky at first, so let’s see what we can do. Start by grabbing the necessary PHP packages:

sudo apt-get install php5-common php5-cli php5-fpm php5-pgsql
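
A quick check that the CLI landed (the binary may also be available simply as php):

php5 -v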

Easy. Note that we need the last package so PHP can talk to the postgresql database. Before we get to the database, though, we have to edit the nginx configuration so it knows how to hand requests off to PHP.

cd /usr/local/nginx/conf
vi nginx.conf

This is where all the server configuration lives. There are some commented-out lines that you can supposedly uncomment to get php functionality, but they won’t work for us, since we went with the FastCGI Process Manager (php5-fpm) rather than plain php-cgi. Here’s the important part of my config, for testing purposes:

    server {
        listen       80;
        server_name  localhost;

        root /home/ubuntu/web_test;
        index index.php index.html;

        location / {
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        try_files $uri $uri.php $uri.html =404;

    }
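
One thing worth double checking: the fastcgi_pass line assumes php5-fpm is listening on a unix socket at that path. Depending on the Ubuntu release it may be listening on a TCP port instead; the pool config (this is the usual Ubuntu location) will tell you, and fastcgi_pass should match it:

grep "^listen" /etc/php5/fpm/pool.d/www.conf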

Of course, this implies that we have a directory at /home/ubuntu/web_test, inside of which is an index.php file. My index.php looks like this:

<?php phpinfo(); ?>

Reload the nginx configuration with:

sudo /usr/local/nginx/sbin/nginx -s reload

And now if we point our browser at the server, we see a pile of information about our php setup. Tada!

Unfortunately, if it did not work, you may have some hunting to do. Go through the steps again carefully and consult other guides on the web. Getting php and nginx playing nicely the first time can be a bit of a pain!
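
Two things worth checking right away are the nginx error log and whether php5-fpm is actually running:

sudo tail /usr/local/nginx/logs/error.log
sudo service php5-fpm status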

Now we’ve got our nginx server serving up php pages. Let’s install postgresql so we can run our data-driven application:

sudo apt-get install postgresql

We’ll use a dedicated user for managing the database (this user was set up during the installation of postgresql).

sudo -u postgres psql postgres
\password postgres

The psql command starts the postgres interactive shell; here we connect as the postgres user to the postgres database, and \password changes that user’s password. Exit the shell with \quit.

Now, we’re ready to clone YanlJ!

git clone git@github.com:ebuchman/YanlJ
cd YanlJ

YanlJ comes with setup scripts for creating the database, but she expects the authentication details in a file `auth.txt`, which stores the username and password used to access and modify the database from PHP. Let’s create that file:

echo usr_name > auth.txt
echo some_pwd >> auth.txt

And run the setup script:

bash setup/setup_db.sh


The setup script should create the database ('wikidb') and set up all the appropriate tables, granting the user in 'auth.txt' permission to modify them. To learn how to play around with the database, see http://www.postgresql.org/docs/9.1/static/tutorial-accessdb.html. You can pull up an interactive database shell and list all the current tables like so:


sudo -u postgres psql -d wikidb
\dt
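
Just to illustrate how the php5-pgsql piece fits in, here’s a minimal sketch of reading auth.txt and connecting to wikidb with pg_connect (this is not YanlJ’s actual code, just an example):

<?php
// Minimal example: read the username and password from auth.txt
// (first line: user, second line: password) and connect to wikidb.
$auth = array_map('trim', file('auth.txt'));
$conn = pg_connect("host=localhost dbname=wikidb user=$auth[0] password=$auth[1]");
if (!$conn) {
    die("could not connect to the database\n");
}
// Quick check that we're connected as the expected user.
$result = pg_query($conn, "SELECT current_user, current_database()");
print_r(pg_fetch_assoc($result));
pg_close($conn);
?>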

And that’s that! The database is set up for YanlJ. The last thing we have to do is some final nginx configuration. Back in the nginx conf folder, the cleanest approach is to create a new folder, sites-enabled, keep a configuration file for each site in there, and include a reference to each from the main nginx.conf.

cd /usr/local/nginx/conf
mkdir sites-enabled
vi sites-enabled/yanlj.conf

Let’s take the server block we used to test our setup before and copy it into yanlj.conf, making sure to change the root path to point at the yanlj directory instead of web_test. Then, in nginx.conf, simply add this line where the server block used to be:

include sites-enabled/yanlj.conf;
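
Before reloading, it doesn’t hurt to make sure the config still parses:

sudo /usr/local/nginx/sbin/nginx -t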

Now reload nginx. When we point the browser at the server, we should see YanlJ! Of course, YanlJ has been written to be accessible only over https. We can comment that restriction out, or add another server block to our yanlj.conf for https connections. For that we’ll need ssl certificates:

mkdir ~/certs
cd ~/certs
sudo openssl genrsa -out server.key 1024
sudo openssl req -new -key server.key -out server.csr
sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
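
You can double check what you just signed with:

openssl x509 -in server.crt -noout -subject -dates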

Now, let’s add the https server block to yanlj.conf:

server {
  listen 443;
  server_name localhost;

  ssl                  on;
  ssl_certificate      /home/ubuntu/certs/server.crt;
  ssl_certificate_key  /home/ubuntu/certs/server.key;

  keepalive_timeout    70;

  ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;

  root /home/ubuntu/programming/YanlJ;
  index index.php index.html;

  location / {
  }

  location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi.conf;
  }

  try_files $uri $uri.php $uri.html =404;
}

Reload nginx one more time

sudo /usr/local/nginx/sbin/nginx -s reload

and we’re done! Of course, since we haven’t registered the certificate with a certificate authority, the browser will warn that the site is not trusted; just add the exception and move along. (As with port 80, make sure port 443 is open in the instance’s security group.)
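
From the command line, curl can skip certificate verification with -k if you want to confirm the https block is answering:

curl -kI https://localhost/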

We did it. We moved the stack over from BBB to AWS EC2. Have fun playing with YanlJ, and please, if you’re a developer, feel free to make pull requests – she needs plenty of work!