The Blog

Posts from May 2009

May 29

RailsConf Wrap Up

By Jason LaPorte

Well, we’re back from Vegas! And have been, for a couple weeks… I’ve been meaning to put up some follow-up resources for my talk (PWN Your Infrastructure: Behind Call of Duty: World at War), but there was just so much work to do when I got back… such is the life of a system administrator!

That said, I’ve got some free moments, so I’m putting up some reference materials.

Anyway, for those who didn’t see it, my talk was about the cloud infrastructure we built to power the Call of Duty stats site, and the tools we built to support that. The slides from the talk are available here, though they probably don’t mean too much without the audio behind them.

The puzzle pieces I talked about were:

  • Virtualization, which meant we didn’t have to worry about hardware failures (and gave us a lot of flexibility unrelated to scaling, such as being able to instantly clone a test environment, do some testing, and shut it back down, for virtually no cost). Terremark is our awesome hosting provider.
  • NFS, which allowed us to do away with complicated tools like Capistrano and avoid managing server installations separately. It’s been around for 25 years, so it’s stable, and it’s also dead simple. Unfortunately, it is not suitable for tasks requiring heavy IO or a very large number of files, so it must be used with care.
  • Monit, which allowed us to monitor our hosts and automatically fix certain problems (such as application failures) without requiring human intervention. As mentioned in the slides, if you want to pull XML from Monit (which we do to aggregate data from all of our hosts), the URL for doing so is “/status?format=xml”. This behavior is not documented. There’s a small sketch of pulling that XML just after this list.
  • Overlord, a simple internal tool that distributes server configuration and aggregates monitoring information from each Monit instance. Overlord is currently proprietary, but we’re considering releasing it once it’s ready (read: once it’s properly documented). But it’s really simple. It basically says what files should be on which servers (all of which are just simple text files), and they’re placed there on boot. Any files put into a particular directory are run as scripts. After that, it’s just a big cron job to pull XML from all of our servers and make pretty graphs.
  • RRDTool, which has many uses, but specifically makes the aforementioned pretty graphs. These are vital to determine trends and validate results. Also, they’re really pretty. Double also, RRDTool is extremely well constructed and totally awesome.
  • Using shell scripts everywhere. Thanks to NFS, Monit, and Overlord, our distribution needs are already taken care of, so we can do a lot via simple shell scripts, which makes our infrastructure self-documenting and easy to work with. The simplest example of this is that we switched to deploying our applications with shell scripts instead of using Capistrano or other network tools.
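
Since a few people asked about the Monit XML trick, here’s a minimal sketch of the kind of polling script that Overlord’s cron job boils down to. It is not the actual Overlord code: the host names and port are made up (2812 is just Monit’s default), authentication is left out, and the exact XML layout can vary between Monit versions.

#!/usr/bin/env ruby
# Sketch only: poll each host's Monit instance for XML status and print a
# one-line summary per service. Host names and port are assumptions, and
# Monit's HTTP interface is assumed to be reachable without authentication.
require 'net/http'
require 'rexml/document'

hosts = %w[app1.example.com app2.example.com db1.example.com]
port  = 2812 # Monit's default HTTP port

hosts.each do |host|
  begin
    xml = Net::HTTP.get(host, '/status?format=xml', port)
    doc = REXML::Document.new(xml)
    doc.elements.each('//service') do |service|
      name   = service.elements['name'].text
      status = service.elements['status'].text # "0" means no errors flagged
      puts "#{host}\t#{name}\t#{status == '0' ? 'ok' : 'check me'}"
    end
  rescue StandardError => e
    puts "#{host}\tunreachable (#{e.message})"
  end
end

The real job feeds those numbers into RRDTool instead of printing them, but the plumbing is the same idea.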

The end result is a scalable system that, while it still has a few warts we’re working through, is very simple, self-documenting, and easy for even a non-sys-admin to diagnose and solve problems on; there’s very little magic here, just some elegant abstractions.

Some people were interested in getting more information on our deploy scripts, so I’ve made them available online. They come in two parts: “core.sh” defines the core functionality (logging, rollbacks, and some basic “here’s how you deploy Rails 101” functions), and “deploy.sh” performs the deploy itself and is the script you actually execute.

All told, we had a blast, and are looking forward to future RailsConfs!

May 22

Mike and Agora win an award!

By Mike DelPrete

“The U.S. Small Business Administration and the New York Business Development Corp. recognized more than 20 business owners at the 11th Annual Small Business Excellence Awards luncheon at The Desmond Hotel & Conference Center in Colonie.”

http://blogs.timesunion.com/business/?p=12124

Click the picture to see it bigger! (The glass award looks a little dangerous, doesn’t it?  What if Mike tripped on his way down from the podium and the glass tip got stuck in his eye?  I guess that concern falls under the “worrying about something that is unlikely to happen” category.  It looks dangerous, though.)

From Left to Right:  Some SBA Guy, Eric from Pioneer Bank (is Awesome), Mike (also Awesome), Some SBA Guy

May 20

(Ago(ra)ilsconf) - Part II

By Nicole Plummer

Well, we came, we talked, we kicked some … (you know)!

This year Agora had two talks accepted to RailsConf.  Both were received exceedingly well, and you can find the slides here.

Congratulations to all of our presenters: David Czarnecki, Ola Mork, Eric Torrey, and Jason LaPorte.  Nice work, guys!

May 16

github.com/agoragames

By David Czarnecki

We are starting to open source some of the components behind community sites like Guitar Hero and Call of Duty. Enjoy!

http://github.com/agoragames

action-mailer-with-temporary-delivery-method: Send email using ActionMailer, but without using the templates or changing your smtp_settings

notify-campfire-multi: Notify multiple Campfire rooms from a post-commit svn hook

read-and-write-if-nil: Pass through the value of a block to a cache key if the value is nil when it’s requested

test-runner-benchmark: Benchmarking your tests

May 13

Write if read returns nil

By Ola Mork

Usually we use standard caching methods on our site (primarily fragment caching to avoid DB queries).

Occasionally we need to do something fancier. These instances usually come up when we’re splitting one query into two because Rails doesn’t support :force_index or :adapter_specific_find_options on ActiveRecord::Base.find. We understand this motivation, but we really hate ActiveRecord::Base.find_by_sql and ActiveRecord::Base.connection.execute. These are not rational hatreds.

So when we get into a situation where we’re going to be caching manually, it’s usually in the controller, and we almost always end up with a pattern like this:

@object = Rails.cache.read('really/complicated/and/stinky/key')
if @object.nil?
  @object = what_should_my_object_be?
end

That’s fine in a contrived example, but we were doing this in about 10 different places, and it looked like a good candidate for drying up.

Here’s the solution we use:

# Reopen ActiveSupport's cache store and add a read-through helper: return the
# cached value for key, or compute it from the block, cache it, and return it.
module ActiveSupport
  module Cache
    class Store
      def read_and_write_if_nil(key, options = {})
        object = read(key)
        if object.nil?
          # Cache miss: compute the value and write it back. The options hash
          # is passed straight through to write (e.g. :expires_in).
          object = yield
          write(key, object, options)
        end
        object
      end
    end
  end
end

And the production example looks like this:

account_ids = Rails.cache.read_and_write_if_nil("member_ids_for_clan_#{@clan.id}", :expires_in => 5.minutes) do
  @clan.members.find(:all, :order => 'groupies DESC', :select => 'accounts.id').collect(&:id)
end