There is a channel on irc.freenode.net, at #swiftcore.
The Swiftiply mailing list is email@example.com. You may register by going to http://rubyforge.org/mailman/listinfo/swiftiply-users.
You are really talking about two completely different scenarios there. So let's start with nginx + evented_mongrel.
evented_mongrel runs just like the threaded mongrel does, as a server. The only real difference is that the request handling is all running inside of an event loop. This lets requests be queued up without using any significant resources and lets them get handled with the full resources of a process, without any threading overhead. The benefit of this is greater throughput and much better behavior under heavy loads. Because evented_mongrel looks identical to threaded Mongrel to external systems like nginx, and to your application as well, you don't have to do anything different with regard to configuration to use evented_mongrel.
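To illustrate the difference in model, here is a conceptual sketch (this is not Mongrel's or Swiftiply's actual code): requests accumulate cheaply in a queue, and a single-threaded loop drains it with the full resources of the process, with no thread-switching overhead:

```ruby
# Conceptual sketch only -- not the evented_mongrel implementation.
# Queued requests cost almost nothing to hold; one loop handles them
# one at a time with the full resources of the process.
queue = Queue.new
5.times { |i| queue << "request-#{i}" }   # requests pile up cheaply

results = []
results << "handled #{queue.pop}" until queue.empty?
```

An event loop generalizes this: instead of a simple queue, an I/O reactor dispatches each ready connection to a handler, still within one thread.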
With nginx, this means that you don't do anything differently. I'm not going to cover nginx configuration here, as it's covered in many other places.
With Mongrel, as of 0.6.0, evented support is a hotpatch on the standard threaded Mongrel. In your own code, require "swiftcore/evented_mongrel" to use it in place of the standard Mongrel. Alternatively, use the mongrel_rails that is bundled with Swiftiply; if you set the EVENT environment variable, it will automatically use evented_mongrel.
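As a sketch, launching the bundled mongrel_rails with the EVENT variable set might look like this (the port and flags are illustrative, not required values):

```shell
# Illustrative launch command; port 3000 is just an example.
# Setting EVENT makes Swiftiply's bundled mongrel_rails use
# evented_mongrel instead of the threaded Mongrel.
EVENT=1 mongrel_rails start -p 3000
```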
The other scenario is to use the full Swiftiply package, with swiftiplied_mongrel. In this scenario you may not need nginx. Swiftiply now has relatively fast static file support (as of 0.6.0) and it will improve further in the next release. If you do want to use nginx with Swiftiply, I would suggest separating your domains so that your requests for static assets are sent to nginx, and your dynamic requests go directly to Swiftiply.
If you want to try proxying through Swiftiply behind nginx, though, you again do it just like you normally would, or like you would with evented_mongrel, with one exception. You don't have your swiftiplied_mongrels running on different ports. All of them run on the same port. This is because, as is mentioned in the documentation, swiftiplied mongrels are clients of Swiftiply, and are not standalone servers. They all connect to the same address and port combination. Within nginx, you then put the location of your Swiftiply instance in as the proxy destination, and let Swiftiply distribute the requests to your mongrels.
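A minimal nginx fragment for that arrangement might look like the following. The address and port are assumptions for illustration; use whatever address your Swiftiply instance actually listens on:

```
# Hypothetical nginx fragment: all dynamic requests are proxied to a
# single Swiftiply instance, which distributes them to the
# swiftiplied mongrels that have connected to it.
upstream swiftiply {
    server 127.0.0.1:8080;   # Swiftiply's listen address/port (example)
}

server {
    listen 80;
    server_name blog.walrusinc.com;

    location / {
        proxy_pass http://swiftiply;
    }
}
```

Note that there is only one proxy destination here, no matter how many mongrels are running; the fan-out happens inside Swiftiply.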
As with evented_mongrel, the bundled mongrel_rails can use swiftiplied_mongrel by setting the SWIFT environment variable. Also bundled is a swiftiply_mongrel_rails wrapper that can be used to launch N mongrels running as swiftiplied mongrels.
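A hedged example of the SWIFT route (the port is illustrative, and for a swiftiplied mongrel it is the port the client connects to, not a port it listens on):

```shell
# Illustrative only: with SWIFT set, the bundled mongrel_rails runs as
# a swiftiplied mongrel, connecting out to Swiftiply rather than
# listening as a standalone server. Check the bundled scripts' --help
# for the exact options they accept.
SWIFT=1 mongrel_rails start -p 11111
```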
Yes. The focus so far has been on the Swiftiply proxy itself. Now that it is nearing the feature set that is desired for the 1.0 release, it is time to start focusing on the cluster management tools more. The 0.7.0 release will feature a nice set of cluster management tools designed to make deploying and managing your Swiftiplied cluster sweet and easy. Until then, I will happily accept patches if anyone wishes to add to the work that Ezra started with the modified mongrel_rails and swiftiply_mongrel_rails scripts.
No. Swiftiply will proxy to one or more backend processes for each host that it is proxying for. What you may be missing is that since Swiftiplied backends run as clients of Swiftiply, all of the backends for a given application connect to the same address/port. Consider the following configuration:
map:
  - incoming: planner.walrusinc.com
    outgoing: frontend.walrusinc.com:11111
  - incoming: blog.walrusinc.com
    outgoing: frontend.walrusinc.com:11112
Swiftiply would proxy requests for planner.walrusinc.com to the backends connected to frontend.walrusinc.com:11111, and blog.walrusinc.com to the backends connected to frontend.walrusinc.com:11112.
Because the backends are clients of Swiftiply, you can have as many of them as you need, all connected to the same point. So if blog.walrusinc.com is running with 2 backends, but walrusinc's thoughts become popular, they can spin up a second machine with a couple more backends on it without having to reconfigure anything. All of the backends connect to frontend.walrusinc.com:11112 and will get traffic evenly distributed to them.
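The registration-plus-rotation idea can be sketched like this (an illustrative model, not Swiftiply's implementation): backends register when they connect in, requests are handed out round-robin, and adding a backend requires no reconfiguration:

```ruby
# Illustrative sketch -- not Swiftiply's actual code.
class BackendPool
  def initialize
    @backends = []
  end

  # Called when a swiftiplied mongrel connects to the proxy's port.
  def register(backend)
    @backends << backend
  end

  # Hand the next incoming request to a backend, rotating the list
  # so traffic is evenly distributed.
  def next_backend
    backend = @backends.shift
    @backends << backend
    backend
  end
end

pool = BackendPool.new
pool.register("backend-1")
pool.register("backend-2")
pool.register("backend-3")   # a new machine's backend just connects; nothing else changes
```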