from: Lin Jen-Shin (godfat)
date: Mon, Apr 8, 2013 at 6:01 PM
subject: Regarding the recommended Ruby app server
I know there won't be a best server in all cases,
but generally speaking, for average people who
might not know HTTP and networking well, shall we
instead recommend something which doesn't have a
terrible worst case, even if it doesn't have the
best performance in the best case?

Here are the points I want to make:
* Unicorn might not be a suitable server for average apps
  (on the Heroku Cedar stack, which doesn't fully buffer requests).
* Is it feasible to have an Nginx addon for the Cedar stack?
* We're using a combination of EventMachine and a thread pool
  strategy for our Ruby application server. Thoughts?
Why Unicorn might not be suitable:
According to this document:
HTTP Routing and the Routing Mesh: Request buffering

the Cedar stack's routers (reverse proxies) won't
buffer the request body, which would cause some bad
worst cases for Unicorn, since it assumes all clients
are fast clients. By fast clients, it usually means
clients on an internal or local network.
That is, to use Unicorn directly, we would want
something like Nginx in front, which would fully buffer
all requests from all our clients around the world.
You can find this information in Unicorn's documents,
PHILOSOPHY and DESIGN:
Instead of attempting to be efficient at serving slow clients,
unicorn relies on a buffering reverse proxy to efficiently deal
with slow clients.
unicorn uses an old-fashioned preforking worker model with
blocking I/O. Our processing model is the antithesis of more
modern (and theoretically more efficient) server processing
models using threads or non-blocking I/O with events.
Like Mongrel, neither keepalive nor pipelining are supported.
These aren’t needed since Unicorn is only designed to serve
fast, low-latency clients directly. Do one thing, do it well;
let nginx handle slow clients.
Nginx addon?

I've been concerned about this all along, and therefore never
really tried Unicorn without Nginx in front. With the Bamboo
stack, we would have Nginx and Varnish in front, which I guess
is fine. However, this is not the case on the Cedar stack. I
wonder if it's possible to have Nginx as an addon on Heroku?
Since Nginx has a very small memory footprint, I think it's
fine to have every dyno get an Nginx in front. Then we wouldn't
have to worry about routing/queuing issues. We don't need
realtime or streaming features at the moment, so I think it's
perfectly fine to have Nginx for us, and probably for most
apps, too.
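For illustration, here's a minimal Nginx config of the kind I have in mind for a dyno (the socket path, port, and buffer sizes are placeholder examples, not anything Heroku actually provides):

```nginx
# Hypothetical per-dyno Nginx sitting in front of a Unicorn.
upstream unicorn {
  # Unicorn listening on a local Unix socket; the path is an example.
  server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
  listen 5000;  # the dyno's HTTP port, for example

  location / {
    # Nginx reads the whole request body from the (possibly slow)
    # client before contacting the upstream, so Unicorn only ever
    # talks to a fast local client: Nginx itself.
    client_body_buffer_size 16k;
    client_max_body_size 10m;

    # Buffer responses too, so a slow reader doesn't tie up a worker.
    proxy_buffering on;

    proxy_set_header Host $http_host;
    proxy_pass http://unicorn;
  }
}
```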
Recently there was a discussion on Unicorn's mailing list
regarding this issue. Here's the thread: Unicorn hangs on POST request

According to Tom Pesman:
I've some new information. Heroku buffers the headers of a HTTP
request but it doesn't buffer the body of POST requests. Because of
that I switched to Rainbows! and the responsiveness of the application
increased dramatically.
That is, they switched to Rainbows! with EventMachine, which
fully buffers the request/response as Nginx would, and the
responsiveness of the application increased dramatically.
The current maintainer of Unicorn and Rainbows! responded with:
Re: Unicorn hangs on POST request
However, Rainbows! with EventMachine would still suffer from
a head-of-line blocking issue. That is, suppose our app does
some heavy computing: since EventMachine is single-threaded,
the whole process is blocked while we're computing, and
therefore cannot keep receiving packets from slow clients
at the same time.
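To make that concrete, here's a tiny pure-Ruby sketch of a single-threaded event loop (a toy stand-in I made up, not EventMachine's actual API): the packet from the slow client can only be handled after the heavy handler finishes.

```ruby
# Toy single-threaded "reactor": events are processed one at a time
# on one thread, so a slow handler delays everything queued behind it.
class ToyReactor
  def initialize
    @queue = []
  end

  def enqueue(name, &handler)
    @queue << [name, handler]
  end

  # Run all queued events on this one thread, in order,
  # returning the order in which they finished.
  def run
    finished = []
    until @queue.empty?
      name, handler = @queue.shift
      handler.call
      finished << name
    end
    finished
  end
end

reactor = ToyReactor.new
reactor.enqueue("heavy") { 100_000.times { |i| i * i } } # CPU-bound app work
reactor.enqueue("slow-client-packet") { }                # waits behind "heavy"
order = reactor.run
# The slow client's packet is only handled after the heavy work finishes.
```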
This could be further addressed by also using threads, like
CoolioThreadPool or CoolioThreadSpawn. But cool.io is not
actively maintained at the moment, and its author Tony
Arcieri has moved on to celluloid (which is the core of
Sidekiq), celluloid-io, and nio4r.

Last time I tried cool.io (probably two years ago), it even
gave me some assertion failures. I don't know if anyone is
using cool.io in production, either. Even though Eric Wong
is willing to patch cool.io if we can provide reproducible
cases, I would rather try to write a celluloid-io based
concurrency model for Rainbows!, since that's the way the
community is heading at the moment.
What we currently run:
All in all, we're using a combination of EventMachine and
a thread pool strategy, which is something like this:
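Roughly, as a simplified pure-Ruby sketch (this shows the shape of the idea, not our actual server code; the class and method names are made up): one event loop thread only accepts and buffers requests, and hands each fully buffered request to a pool of worker threads, so heavy app work never blocks the loop.

```ruby
require "thread"

# Sketch: the event loop thread only buffers requests and dispatches
# them; app processing happens on a fixed pool of worker threads, so a
# heavy request doesn't stop the loop from reading slow clients.
class PooledDispatcher
  def initialize(pool_size)
    @jobs    = Queue.new
    @results = Queue.new
    @workers = pool_size.times.map do
      Thread.new do
        # Each worker pops jobs until it sees the nil shutdown sentinel.
        while (request = @jobs.pop)
          # "App" work runs here, off the event loop thread.
          @results << "response for #{request}"
        end
      end
    end
  end

  # Called from the event loop once a request is fully buffered.
  def dispatch(request)
    @jobs << request
  end

  # Stop the workers and collect whatever responses were produced.
  def shutdown
    @workers.size.times { @jobs << nil }
    @workers.each(&:join)
    responses = []
    responses << @results.pop until @results.empty?
    responses
  end
end

dispatcher = PooledDispatcher.new(4)
%w[req-1 req-2 req-3].each { |r| dispatcher.dispatch(r) }
responses = dispatcher.shutdown
```

In the real server the dispatching side is EventMachine's reactor; the point is just that receiving/buffering and app processing run on different threads.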
I was once working on merging this back into Rainbows!.
The unfinished work is located at:
It's almost done, but I failed to make one test case pass:
"send big pipelined chunked requests"

I believe not many people are doing pipelined requests
combined with big chunked data, and this probably won't
even work with Heroku's Erlang routers, so I think it
might be fine to use on Heroku. However, if I cannot
make that test pass, I don't think it could be merged.
What makes it quite hard is the EventMachine API: there's
no easy way to tell EventMachine to pause a connection,
leaving the data buffered at the kernel level.
If you want to subscribe to the mailing lists, here they are:
Unicorn mailing list and Rainbows! mailing list.
Let me know if you have any thoughts, thanks!