javascript – Deploying a production Node.js server

Exception or error:

I’ve written a Node.js app and I’m looking to get it running on one of our production machines. This seems like a pretty common request, yet I can’t find an adequate solution. Are there no established solutions for deploying production Node.js apps?

The app is simple (<100 LOC), but it needs to be very efficient and reliable, and it could run continuously for years without restarting. It’s going to run on a large site, with dozens of connections/second. (The app is not used as a webserver; it only exposes a JSON API.)

Here are the approaches I’ve considered but I’m still not sure about:

Using a framework (eg. Express)

Because the app needs to be high performance and is so simple, adding bloat in the form of a framework is something I want to avoid.

Starting the server with nohup

The main problem here is exception handling: we (obviously) don’t want the entire server to crash because of one exception. From what I understand, wrapping the entire app in a try {} catch {} block won’t help, because the JavaScript interpreter is left in an unpredictable state after an exception. Is that correct?
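The intuition above is correct: a top-level try/catch only catches exceptions thrown on the same tick, so it cannot protect asynchronous callbacks, and after an uncaught exception the process state is suspect. A minimal sketch of both halves (the handler body shown is one common pattern, not the only one):

```javascript
var caught = false;

// A top-level try/catch only sees synchronous throws:
try {
  throw new Error('sync failure'); // same tick, so this IS caught
} catch (e) {
  caught = true;
}

// An exception thrown on a later tick (a timer, an I/O callback) escapes
// any enclosing try/catch. The only last-resort hook is 'uncaughtException';
// the safe pattern is to log and exit, and let a supervisor restart you:
process.on('uncaughtException', function (err) {
  console.error('uncaught exception, shutting down:', err.stack);
  process.exit(1);
});
```

Exiting and letting an external monitor restart the process is exactly why the supervision tools discussed below matter.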

Using something like Forever

I’ve installed Forever on a FreeBSD machine of ours, and it was very buggy. It ended up spawning endless processes that couldn’t be killed from Forever itself; I had to run kill -9 to get my machine back, and I don’t feel too confident about running a production app on Forever. It also seems that Upstart (a similar but more generic tool) won’t run on FreeBSD.

Hosted solutions (eg. Heroku, Rackspace, Amazon EC2, etc.)

This is probably the simplest solution, but we already have serious hardware for the rest of our webservers, so for financial reasons it doesn’t make sense.

Surely there must be some established solution to this? Am I missing something?

How to solve:
  • You should really use a framework (I recommend something like Express, since it’s battle-tested) unless you want to deal with sessions, cookies, middleware, etc. yourself. Express is really light.
  • Starting the server with nohup: you shouldn’t do that; just start it with the regular “node” command. Also, Express wraps its routes in a try-catch, so your server won’t crash inside a route. However, if your server does hit a serious problem, you shouldn’t fear restarting it (besides, if you run at least 2-3 processes, only one will die, so 1-2 will remain and the user won’t feel a thing).
  • For monitoring I personally prefer something more at the OS-level such as Upstart and Monit.
  • Hosting solution: since you already have your own serious hardware, there’s no need to invest money in something else. Just use a load-balancer (maybe nginx or node-http-proxy) to proxy requests.


See Hosting Node Apps.

This tutorial walks you through setting up a server that can host node.js apps. Right now, the node.js hosting options boil down to running node daemon processes that talk to a web server. Most web servers can proxy connections to a different port, so you’ll be able to use Apache or nginx to do this.


There are three questions here, I think.

Question 0: “Should I use a framework for my node app?”

Question 1: “How do I run node servers on production machines?”

Question 2: “How do I deploy node apps to production?”

For Question 1, I really like Cluster (although the latest Node version has something similar built in, so you might check that out). I’ve had good success with something like Monit/Upstart to monitor OS-level events and make sure your servers are in good health. (This was monitoring N clusters of Ruby Thin servers, but it’s the same thing.)

Depending on the traffic, you may want to run cluster on multiple machines and put a load balancer in front of them. It depends on your traffic, how long requests take to complete (i.e. how long you block the event loop), and how many processors/node instances you launch per machine.

A framework gives you better error handling, and catches errors that would exit normal node.js apps. If you do it without a framework, make sure you read up on error handling in node.js.

For Question 2, I don’t think the node community has a good deploy standard yet. You could try using Ruby’s Capistrano tool (and here’s a blog entry talking about deploying cluster with Capistrano).

The bad thing about Capistrano is that it makes some assumptions that might not be true (ie: that you’re deploying a Rails project), so you may end up fighting with the framework a lot.

My goto deployment solution in general is Python’s Fabric tool, which gives you deployment tools and lets you do what you need to do.

Another deployment option is “the cloud”, with things like Nodester: let them take care of it.


Try using pm2. It has a simple and intuitive CLI and is installable via npm. Just start your application with PM2, and it’s ready to handle a ton of traffic.

PM2 Official Link

How to set up a node js application for production using pm2


You might get better answers over on ServerFault, but there’s a description of one user’s experience here using supervisord. You’re going to need some sort of process watcher to keep the node process alive, and another common recommendation is to reverse-proxy connections to the node process somehow. I’d probably vote for nginx (that way nginx can handle logging, authentication, or any other higher-level HTTP features you need, rather than somehow baking them into node), but the aforementioned article mentions haproxy in the comments here and there, which may be more lightweight. Your choice of reverse proxy will probably depend largely on whether or not you need WebSocket support.

I’m not sure any more “standard” workflow exists for node just yet; it’s not quite as mature as something like Rails that has a myriad of ways to keep a webapp running.


The guys at Cloudkick wrote an excellent solution to this. It’s called Cast.

Install Cast on your server and on your workstation. You start the Cast agent on the server and have your workstation sign with the server’s Cast instance. You can then create “bundles”, upload them to the server, and create/upgrade/destroy from them, as well as start/stop your instances. Cast will automatically restart your services when they crash. You can also tail stdout/stderr remotely, get a list of running instances and PIDs, and manage your instances/servers from your workstation (no SSHing required). The docs are slightly out of date, but the results are worth the little bit of extra work. All of the interactions/commands go over HTTPS and a RESTful API.

Prior to this I was doing all the upgrades by hand with SCP/SSH, with supervise keeping things up. We haven’t looked back.
