Swiftiply is a clustering proxy server for web applications. What makes it different from other clustering proxies is that it expects the backend processes to connect to it. That is, the backend processes are clients of the Swiftiply server, just as the browsers out in userland are. The advantage of this is that it permits the back ends to maintain a persistent connection with the proxy server, which eliminates socket setup/teardown costs. More importantly, it permits backend processes to be started up or shut down without requiring any notification or reconfiguration of the proxy. So, if more capacity is needed, all one needs to do is start more processes; they are immediately available and begin to be utilized.
The drawback to this sort of architecture is that it is exactly the opposite of what most web frameworks expect. Most Ruby frameworks, however, utilize Mongrel as one of their primary deployment tools, which allows this drawback to be neutralized. Swiftiply provides a class that overrides pieces of Mongrel (version 1.0.1), allowing any Mongrel handler to be used with Swiftiply, transparently.
Because Mongrel is the preferred deployment method for most Ruby frameworks, Swiftiply includes a version of Mongrel (found in
swiftcore/swiftiplied_mongrel.rb) that has been modified to work as a swiftiply client. This should be transparent to any existing Mongrel handlers, allowing them all to work with Swiftiply.
In addition, as an offshoot of the swiftiplied_mongrel, a second version is available, found in
swiftcore/evented_mongrel.rb; it is a version of Mongrel that has its network traffic handled by EventMachine, creating a Mongrel that runs in an event-based mode instead of a threaded mode. For many applications, running in an event-based mode will give better throughput than running in a threaded mode, especially when there are concurrent requests coming in.
This is because the event-based operation handles requests efficiently, on a first come, first served basis, without the overhead of threads. For the typical Rails application, this means that request handling may be slightly faster than the threaded Mongrel for single, non-concurrent requests. When there are concurrent requests, though, the differential increases quickly.
To install Swiftiply:

ruby setup.rb

The Swiftiply executable will be installed into the Ruby installation's bindir, along with associated scripts, and the libraries into the site lib. RDoc documentation will also be generated.
Run

ruby setup.rb --help

to see a full list of options.
After installation, the test suite (full test suite coming soon) can be run with:
ruby setup.rb test
To start a Swiftiply instance, first create a configuration file (YAML format):
- incoming: iowa.swiftcore.org
- incoming: analogger.swiftcore.org
Then start Swiftiply:
swiftiply -c config_file
An instance of Swiftiply is now running, listening on a socket for connections from browsers.
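The incoming entries above show only part of a map. A fuller configuration might look like the following sketch; the addresses, ports, and the daemonize value are illustrative, the directives themselves are described in the configuration section below, and the cluster_port key name is assumed to parallel cluster_address.

```yaml
cluster_address: 127.0.0.1      # address Swiftiply listens on for browser traffic
cluster_port: 80                # port for browser traffic
daemonize: true                 # put the server in the background
map:
  - incoming: iowa.swiftcore.org
    outgoing: 127.0.0.1:3000    # where this site's backends connect
    default: true               # unmatched requests are sent here
  - incoming: analogger.swiftcore.org
    outgoing: 127.0.0.1:6766
```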
IOWA has built-in support for running in evented and clustered modes.
Swiftiply provides a _REPLACEMENT_ for mongrel_rails that, through the use of an environment variable, can be told to run in either the evented mode or the swiftiplied mode.
To run a Rails app in evented mode, set the EVENT environment variable. On a Unix-like system:
env EVENT=1 mongrel_rails
To run in swiftiplied mode:
env SWIFT=1 mongrel_rails
Because Swiftiply backends connect to the Swiftiply server, they all connect to the same port. This is important: every backend uses the same port. To make it easier to start multiple Rails backends, a helper script, swiftiply_mongrel_rails, is provided. It is a light wrapper around mongrel_rails that will let one start N backends, with proper pid files, and stop them.
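As a sketch of what the helper automates, a minimal shell loop can start several swiftiplied backends by hand. The port and pid-file paths here are illustrative, and the echo makes this a dry run; remove it to actually launch the backends.

```shell
# Dry-run sketch: start three swiftiplied Rails backends, each with its own
# pid file. All three use the SAME port, because they are clients connecting
# out to the Swiftiply server, not listeners competing for a socket.
for i in 1 2 3; do
  echo env SWIFT=1 mongrel_rails start -d -p 3000 -P "log/mongrel.$i.pid"
done
```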
The merb source (trunk only, at this point), has Swiftiply support that works just like the Rails support, built in.
A couple adapters for Ramaze are included, to allow Ramaze to run with either the evented or the swiftiplied mongrels. They are installed into
Swiftiply has been tested with Camping and Nitro, as well. Direct support for them is not yet bundled, but will be in an upcoming release. In the meantime, all that really needs to happen to use the evented_mongrel or swiftiplied_mongrel with your application is to require the proper library --
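For instance, requiring one of the bundled libraries before the rest of the Mongrel setup is the essential step. This sketch guards the requires so it is harmless on systems where the gems are not installed; in a real deployment the rescue would not be wanted.

```ruby
# Load Swiftiply's Mongrel overrides before anything else touches Mongrel.
# Use swiftcore/swiftiplied_mongrel to connect out to a Swiftiply server,
# or swiftcore/evented_mongrel for standalone event-based operation.
loaded = begin
  require 'swiftcore/evented_mongrel'
  require 'mongrel'
  true
rescue LoadError
  false # gems not installed here; in production this would be fatal
end
```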
Swiftiply takes a single configuration file which defines for it where it should listen for incoming connections, whether it should daemonize itself, and then provides a map of incoming domain names and the address/port to proxy that traffic to. That outgoing address/port is where the backends for that site will connect to. The configuration file uses a YAML format. For the purposes of Swiftiply configuration, what this means is that key/value pairs are placed one per line, with a : between the key and the value, and lists of items are created by leading off a list item with - , one per line.
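In concrete terms, the two YAML forms just described look like this (the key and item names here are placeholders, not real Swiftiply directives):

```yaml
some_key: some_value   # a key/value pair, one per line
a_list:
  - first_item         # list items led off with "- ", one per line
  - second_item
```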
The following items can be specified in a Swiftiply configuration file:
This is the address/IP that Swiftiply will listen on for traffic from browsers.
This is the port, on the cluster_address, that Swiftiply will listen on for traffic from browsers.
If this is set to true, Swiftiply will attempt to put itself into the background. This currently only works on platforms where fork() is supported (i.e. not win32 platforms).
With version 0.8.0, EventMachine supports using epoll() for the event loop instead of select() on platforms that support it (Linux 2.6). The benefit of epoll() is that performance does not degrade as the number of file descriptors increases, and it can handle more than the 1024 descriptors that select() under Ruby is limited to. This lets Swiftiply scale to handle many thousands of connections at the same time. There is no downside to attempting to enable epoll, as it will silently fail on platforms where it is not supported, so Swiftiply currently defaults to attempting to enable epoll support. If, for some reason, you want to ensure that it is not used, you can use
epoll: false to disallow it.
Swiftiply defaults to a descriptor table size of 4096. This means that it will handle 4096 active connections. If you would like that limit to be higher, you may use this config setting to change that table size.
The timeout is the number of seconds that Swiftiply will hold on to a browser's connection, waiting for a backend to take the connection and service it, before giving up and returning a 503 Service Unavailable error to the browser. The default is three seconds. If your web framework actions are slow, you may want to increase that number, but try to keep it as low as you can.
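For example, to raise the wait to ten seconds (the value here is illustrative):

```yaml
timeout: 10
```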
- incoming: iowa.swiftcore.org
- incoming: analogger.swiftcore.org
The map section contains the mapping of incoming hostnames to the outgoing address/port that the backend(s) for the site connect to.
- incoming: foo.bar.com
This is the hostname or hostnames to match against.
This is the address/port that the backend(s) for the site are expected to be connected to.
If this option is set to true, then this incoming/outgoing set is the default set. Any request that comes into Swiftiply that does not match another set is sent to the default.
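Putting the three directives together, a map entry might look like this sketch (the hostname, address, and port are illustrative):

```yaml
map:
  - incoming: foo.bar.com
    outgoing: 127.0.0.1:3000
    default: true
```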
Swiftiply will serve your static files directly, itself. To enable this, just provide a docroot declaration. Swiftiply will check in the docroot for a file that matches the requested uri before dispatching the request to a backend. If a match is found, it will be delivered.
Rails, and other frameworks, can cache generated pages in a directory that the webserver can then use to fulfill requests, eliminating the need for those requests to go all the way back into the framework to be fulfilled. If your app is using a page cache, you can have Swiftiply serve files from it with this directive. Swiftiply will look in docroot/cache_directory if it doesn't find the requested uri in the docroot. If it doesn't find a file to fulfill the uri as requested, it will append
.html to the uri and check again.
If you would like Swiftiply to check more extensions than just
.html, list each of them under a
cache_extensions directive. Each will be checked in order until a match is found. If none is found, then the request will be dispatched to a backend for handling.
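The static-file directives can be sketched together inside a map entry like this. The paths and extensions are illustrative, the cache_directory key name is inferred from the docroot/cache_directory lookup described above, and whether extensions are listed with a leading dot may depend on the Swiftiply version.

```yaml
map:
  - incoming: foo.bar.com
    outgoing: 127.0.0.1:3000
    docroot: /var/www/foo/public   # static files served directly by Swiftiply
    cache_directory: cache         # checked as docroot/cache for page-cache hits
    cache_extensions:
      - html                       # checked in order until a match is found
      - htm
```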
If the redeployable attribute is turned on (it defaults to off), then if a backend goes away -- is killed, crashes, hangs, has a walrus fall from the sky on top of it, etc... -- Swiftiply will detect that and will redeploy the request to the next available backend, so long as the backend had not started to return a response.
There is a limit to the size of the requests that will be redeployed, however. It defaults to 16k, but may be set to a different size by assigning that size to the redeployable attribute, e.g.:
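For instance, to raise the limit to 64k (the value here is illustrative):

```yaml
redeployable: 65536
```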
Requests that exceed the size limit are treated like any other request, and will be dropped instead of redeployed if the backend handling it goes away.