The Blog

Posts from June 2011

Jun 27

Nerds: The Take Over

By Brian Corrigan

Several days ago, I was out walking in my Agora Games t-shirt, which happens to bear the Major League Gaming (MLG) logo on the arm, when a young boy excitedly ran out of Dunkin’ Donuts and up to me. After catching his breath, he asked, “Excuse me, do you compete in the MLG?” At that point my car had just been towed and I was on the phone with the police, so I simply shook my head no.

Think back to high school; remember the assortment of social groups? To name a few, we had the jocks with expectations of college sports and the major leagues. We had the dramatic arts students who hoped for the big break that would land them fame and fortune. There’s no forgetting the music students who dreamed of being rap or rock stars, and finally, the nerds. Sadly, there was no glamorous expectation for the nerds. They were expected to take normal jobs, working for big companies or starting their own.

Even when you considered the earning potential, there was still an essence of glory missing, unless you discovered something great. That, however, started to change about eight years ago when Sundance started MLG. Today MLG is accurately described as a huge concert for gamers: the last event housed almost 15,000 attendees while streaming to an audience of at least 22.5 million.

After getting my car back, I started thinking about the influence that MLG has on the new generation. To young kids, MLG is right up there with other professional leagues such as the NBA or the NFL. MLG already opens doors to huge opportunities for sponsorships and endorsement deals, and it will only go further. Additionally, as we attract more sponsors and fans, the growth potential becomes unlimited.

Now consider the change that this brings to that same high school scene. Finally, a means of recognition for those who lack the interest or ability to play sports, act out a scene, or please a crowd with music. Now, when you’re told to do something for your future, it’s arguable that you already are investing in it. So, to those sitting on the couch pwning their friends in Halo: your path to glory awaits.

Jun 24

Of Penguins, Rabbits and Buses

By Aaron Westendorf

Here at Agora we make use of dedicated hardware and virtual machines running on our providers’ respective clouds. In recent months, we’ve moved our RabbitMQ hosts onto hardware because we found that we could far exceed the CPU capacity of our virtual machines, and it was far cheaper to run a small cluster of hardware hosts than a giant cluster of VMs. We used an existing, underutilized host for our primary traffic while awaiting delivery and installation of a new pair of servers. Expecting a simple plug-and-play swap, I set out to test the new hardware before we made the transition. What follows is a harrowing tale into the deepest depths of modern hardware architecture.

Our current primary RabbitMQ host, leviathan, is a 24-core Intel Xeon X5650 machine running at 2.67GHz and fitted with 132GB of RAM. It hosts all our in-memory databases, such as Redis and Memcached, and is vastly underutilized at this time. RabbitMQ runs in a cluster with other nodes hosted on VMs to give us failover capacity.

To replace its role as RabbitMQ host, we purchased artemis and hermes, two 24-core AMD Opteron 6172 machines running at 2.1GHz and fitted with 8GB of RAM each. Recent versions of RabbitMQ page queue backlogs to disk, and our traffic patterns and infrastructure validations are such that this amount of RAM is sufficient.

At first, one might assume that the two processors would deliver near-equal RabbitMQ performance despite their differences. The Intel CPUs have faster clock cycles, but they rely on Hyper-Threading to present 24 logical cores to the operating system. The AMD CPUs are slower, but they present 24 hardware cores to the kernel. Linux reports 5333 bogomips for the Xeons, 4200 for the Opterons.

Using haigha’s load testing script, we were astonished to discover that our brand-new hardware was almost 50% slower than our until-recently-brand-new Intel hardware! What could possibly have gone wrong?

The test we ran consisted of 3 VM clients, each with 4 cores, each running 3 instances of the stress_test script in its standard configuration of 500 channels looping messages over 500 queues. That is, 4500 channels and queues in total, with each channel publishing a message as soon as it receives its previously published message. The test would run for a fixed period of time, usually a minute.
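
To picture the loopback pattern, here is a minimal sketch of it in Python. It uses the third-party pika client (its 1.x API) purely for illustration rather than haigha’s actual stress_test script, and the queue names and counts are illustrative:

 import pika

 NUM_QUEUES = 500  # one stress_test instance's worth, as described above

 connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
 channel = connection.channel()

 queues = ['stress_%d' % i for i in range(NUM_QUEUES)]
 for name in queues:
     channel.queue_declare(queue=name, auto_delete=True)
     # Seed each queue with one message; every delivery below triggers
     # an immediate re-publish, keeping the broker saturated.
     channel.basic_publish(exchange='', routing_key=name, body='x')

 def on_message(ch, method, properties, body):
     # Re-publish to the same queue the message arrived from.
     ch.basic_publish(exchange='', routing_key=method.routing_key, body=body)
     ch.basic_ack(delivery_tag=method.delivery_tag)

 for name in queues:
     channel.basic_consume(queue=name, on_message_callback=on_message)

 channel.start_consuming()  # run for a fixed period, then interrupt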

Our investigation started simply enough. Using top and our new favorite, htop, we observed that the kernel was using a substantial portion of each core’s capacity. We also observed that cores were underutilized, as htop clearly showed a visible gap on the right-hand side of the CPU graphs. Though not scientific, it appeared to be a 10-30% loss. A bit of research suggested that the Ubuntu 10.04.2 kernel, 2.6.32, was released around the same time as our AMD chips and might not fully support them. We tested the latest patches to that release, 2.6.32-32, but did not observe any improvement.

Venturing into unknown territory, we installed the latest kernel backported from maverick, 2.6.35-25. We immediately observed an improvement in CPU usage, such that all cores were near 100% utilization. Sadly though, our message throughput remained nearly the same, as user space consumed only 40% of each core. Yet when comparing a single instance of stress_test, leviathan and artemis performed nearly equally. In no case were we able to induce any IO wait, which was to be expected since we weren’t hitting disk. Why would 24 cores of AMD be so dramatically different from 24 cores of Intel?

With the obvious problems ruled out and the latest kernel installed, I started to dig deeper into the architectural differences between the two companies’ chips. Using lscpu, we can see two very different CPU designs.

 leviathan:~$ lscpu
 Architecture: x86_64
 CPU op-mode(s): 32-bit, 64-bit
 CPU(s): 24
 Thread(s) per core: 2
 Core(s) per socket: 6
 CPU socket(s): 2
 NUMA node(s): 2
 Vendor ID: GenuineIntel
 CPU family: 6
 Model: 44
 Stepping: 2
 CPU MHz: 2666.806
 Virtualization: VT-x
 L1d cache: 32K
 L1i cache: 32K
 L2 cache: 256K
 L3 cache: 12288K

 artemis:~$ lscpu
 Architecture: x86_64
 CPU op-mode(s): 64-bit
 CPU(s): 24
 Thread(s) per core: 1
 Core(s) per socket: 12
 CPU socket(s): 2
 NUMA node(s): 4
 Vendor ID: AuthenticAMD
 CPU family: 16
 Model: 9
 Stepping: 1
 CPU MHz: 2100.172
 Virtualization: AMD-V
 L1d cache: 64K
 L1i cache: 64K
 L2 cache: 512K
 L3 cache: 5118K

The AMD CPUs have nearly double the dedicated cache per core, but a much smaller (shared) L3 cache. Though this was clearly a fundamental difference, it did not seem adequate to explain the vast amount of time the kernel was consuming on each CPU. Yet the only reason the kernel would consume so much time, without any IO wait, is if it were waiting for something. What would Linux be waiting for that Intel was readily delivering?

As I noted, our test was running 4500 unique channels and queues. In a reply to a recent inquiry on the RabbitMQ mailing list, I learned that channels and queues are each allocated an Erlang process. A bit of searching turned up a useful paper [PDF] on the early SMP support in Erlang R12B, circa 2008. The diagrams show a single run queue from which all schedulers pull the next process to run.

By R13B, each scheduler had a dedicated run queue, vastly decreasing lock contention. Additionally, scheduling algorithms, and configuration thereof, were designed specifically to take advantage of the variety of SMP architectures. RabbitMQ is running on R14B01, and so it should have the latest in SMP optimizations, particularly with respect to NUMA, which is how both Intel and AMD implement their SMP architectures.

Linux is also NUMA-aware, and contains scheduling algorithms that try to pair the core that a process or thread will run on with the NUMA node where its memory is allocated. Likewise, it tries to allocate memory on the same NUMA node as the process or thread requesting it. This was a topic area we were already familiar with, but in terms of database applications that consume most of system RAM. That clearly was not the case here: RabbitMQ barely consumed 500MB under the stress test, and because its memory is allocated on demand, it was spread evenly across all NUMA nodes.

So with hardware that benchmarked well, recent releases of the Linux kernel and Erlang VM, and an application that used a small fraction of available RAM, RabbitMQ performance was abysmal. What could possibly cause such behavior?

The final piece of the puzzle lay in the nature of RabbitMQ itself. Though Erlang may try to pair a process with a node-bound scheduler, and Linux may allocate memory on the same node as that scheduler, that’s of little use in practice. When a message is read from a connection (itself a process) on a channel (also a process), the route of that message must be looked up in an mnesia-backed global table to determine which queue(s) the message should be copied to. Memory is allocated and written for that queue (yet another process), and then any consumer of that queue - a channel - must read the bits before sending them out. In short, there is a near-zero chance that the bits necessary to fulfill a single publish-route-consume will be processed by the same scheduler, and a just-slightly-greater-than-zero chance that they will be processed by a scheduler on the same NUMA node. Even if the code is optimized to only copy messages as references, numerous reads and writes must acquire an exclusive lock on a NUMA node’s memory bus.

So what’s the difference between Intel and AMD NUMA implementations?

 leviathan:~$ numactl -H
 available: 2 nodes (0-1)
 node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
 node 0 size: 65525 MB
 node 0 free: 59013 MB
 node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
 node 1 size: 65535 MB
 node 1 free: 59787 MB
 node distances:
 node 0 1
 0: 10 20
 1: 20 10

 artemis:~$ numactl -H
 available: 4 nodes (0-3)
 node 0 cpus: 0 2 4 6 8 10
 node 0 size: 2047 MB
 node 0 free: 1621 MB
 node 1 cpus: 12 14 16 18 20 22
 node 1 size: 2044 MB
 node 1 free: 1758 MB
 node 2 cpus: 13 15 17 19 21 23
 node 2 size: 2048 MB
 node 2 free: 1806 MB
 node 3 cpus: 1 3 5 7 9 11
 node 3 size: 2047 MB
 node 3 free: 1874 MB
 node distances:
 node 0 1 2 3
 0: 10 20 20 20
 1: 20 10 20 20
 2: 20 20 10 20
 3: 20 20 20 10

The AMD cores are split across four nodes, whereas the Intel cores only use two. In the case where a process uses all cores equally, there is a 50% probability of a memory operation being local on a Xeon processor, but only a 25% probability on an Opteron!
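
A quick back-of-the-envelope check bears this out. Using the node distances numactl reports above (10 for local, 20 for remote) and assuming memory accesses spread uniformly across nodes:

 # Expected memory-access "distance" when accesses hit nodes uniformly:
 # a local access costs 10, a remote access 20, per the numactl output.
 def expected_distance(num_nodes, local=10, remote=20):
     p_local = 1.0 / num_nodes
     return local * p_local + remote * (1 - p_local)

 print(expected_distance(2))  # leviathan (Xeon, 2 nodes):   15.0
 print(expected_distance(4))  # artemis (Opteron, 4 nodes):  17.5

By that crude measure alone, artemis pays roughly 17% more per average memory access than leviathan, before any lock contention enters the picture.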

Starting RabbitMQ on just 2 nodes, I instantly gained a nearly 30% improvement while using half the available processing power, and kernel time dropped to a far more normal 20-30% per core. I experimented with this for a few hours, and found that 3 nodes, with memory interleaved across all 4 nodes, was the optimal configuration.
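
The experiments were variations on numactl invocations of roughly this flavor (a sketch: the node choices are illustrative, and the server path is the one from our final configuration below):

 artemis:~$ numactl --cpunodebind=0,1 /usr/local/sbin/rabbitmq-server
 artemis:~$ numactl --cpunodebind=1,2,3 --interleave=all /usr/local/sbin/rabbitmq-server

But what of the Erlang scheduler?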

 artemis:~$ erl
 Erlang R14B01 (erts-5.8.2) [source] [64-bit] [smp:24:24] [rq:24] [async-threads:0] [hipe] [kernel-poll:false]

 Eshell V5.8.2 (abort with ^G)
 1> erlang:system_info(scheduler_bind_type).
 thread_no_node_processor_spread

Erlang is smart enough to recognize that this is a NUMA system, but the scant documentation implies that the default scheduler bind type is best suited to Hyper-Threading architectures. As it turns out, all of the scheduler bind types documented as designed for NUMA were slower than the simple processor_spread type, which outperformed the NUMA-specific options by almost 30%. And what of the number of schedulers and run queues? Though they’re configured for 24 cores, experiments show that the default number of 24 is best, even given only 18 cores of execution. I can’t speak to exactly why either of these two settings is best, but I can entertain any number of educated guesses.

The last question that remained: if we’re to run on 3 out of 4 NUMA nodes, which ones do we choose? It seemed logical to pick the ones not directly connected to the network card, the only other piece of hardware we were trying to push as many bits through as possible.

 artemis:~$ lspci -tv
 -[0000:00]-+-00.0 ATI Technologies Inc RD890 Northbridge only dual slot (2x16) PCI-e GFX Hydra part
 +-04.0-[0000:04]--+-00.0 Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
 | \-00.1 Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
 +-06.0-[0000:05]--+-00.0 Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
 | \-00.1 Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
 ......................

 artemis:~$ ifconfig
 eth1 Link encap:Ethernet HWaddr aa:aa:aa:aa:aa:aa
 inet addr:111.111.111.111 Bcast:111.111.111.111 Mask:255.255.255.255
 inet6 addr: ffff::ffff:ffff:ffff:ffff/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:71583284 errors:0 dropped:0 overruns:0 frame:0
 TX packets:134508244 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:13852822249 (13.8 GB) TX bytes:23156900853 (23.1 GB)
 Interrupt:45 Memory:f4000000-f4012800

 artemis:~$ cat /sys/bus/pci/devices/0000\:04\:00.1/irq
 45

 artemis:~$ cat /sys/bus/pci/devices/0000\:04\:00.1/numa_node
 0

It’s unclear how much of a difference that makes, but when RabbitMQ is under full load, a few of the cores on node 0 show 5-30% kernel usage, a mixture of network card and memory traffic.

Our final configuration looks something like this. Your installation will have these stanzas in various places, depending on the distribution and how your administrator configured the runtime scripts. Note that we turned off async threads; we didn’t observe any benefit from enabling them, and it was unclear whether they degraded performance.

 SERVER_ERL_ARGS="+K true +sbt ps +P 10485760"
 exec setuidgid rabbitmq numactl --cpunodebind=!0 /usr/local/sbin/rabbitmq-server
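
To unpack those flags as we understand them: +K true enables kernel poll support, +sbt ps requests the processor_spread scheduler bind type discussed above, and +P raises the Erlang process limit to accommodate one process per channel and queue. The numactl --cpunodebind=!0 binds RabbitMQ to every NUMA node except node 0, the one handling the network card’s interrupts.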

The final result? Using only 18 AMD cores of execution, artemis achieved 93% of the performance of leviathan’s 24 Intel cores. Given that Linux calculated a 22% performance difference, we’ll call that a win. Who can complain about two hosts capable of 40,000 messages each vs. a single host capable of 43,000?

What can we learn from this? First, know the hardware you’re buying. No matter how this played out, the Intel chips benchmarked faster, and we should have stuck with those when purchasing our RabbitMQ hosts. Second, the physical layout of the data path and the nature of your application together determine the bounds of your capacity. Any multithreaded application configured to use all of your cores, where the memory access pattern is not localized to a single thread, will see its performance degrade as the number of NUMA nodes its threads span increases.

Jun 23

Hero or Villain?

By Leslie Brezslin

When we play video games, we are automatically transported into a world that stretches beyond the line of current reality. Within this world, we are granted complete freedom without consequences. Many people play games because it’s a channel through which we can live out the impossible. While there are many other reasons, I question whether our attention spans are the only things being injected into the games we play.

Today, many games offer the ability to play as either a good or an evil character. For me, the evil character is usually more appealing. The reason is simple: I know it’s fake. I figure, hey, if you’re going to enter a fictional world, why enter as yourself? Why enter just to follow the laws and moral code of the very realm you’re trying to escape?

I was curious to see what the general consensus on the matter was, so I asked around. The answer wasn’t very surprising at all. It seems the majority of people (at least here at the office) enter gaming worlds as heroes serving the laws of “good” and fighting for justice. Some people claimed to enjoy the feeling of cleaning up the crap in this world, while others did it for the honor. Those who chose the villain were those who only cared about an entertaining storyline. I, however, have my own theory on the matter.

For many people, playing games does not eliminate the sense of morality and duty to the world’s wellbeing that we’ve been encoded to uphold. We perceive what goes on in games as we would perceive ourselves in those situations. Some of us even go as far as to subconsciously apply concepts like karma to those situations. Our moral codes enter the game along with our attention spans, and we block ourselves from completely enjoying the total package.

So, what’s the lesson to be learned? Next time you turn on a video game, escape yourself. Forget the world on your shoulders and do as you see fit. If the color of the houses annoys you, burn down the village. You have ample time to spend as a good person in the real world.

Perfect example right here: http://kotaku.com/5815150/saints-row-the-third-revels-in-the-absurd

Jun 22

Secrets to success

By Mike DelPrete

Success pertains to anyone willing to grab it; however, the majority will experience it through different channels. A lot of people measure success by a person’s current status: what they have, who they know, and ultimately where they are. But is this a correct scale upon which to measure a person’s triumphs? When measuring success, should you look at the point where a person has landed, or consider the journey from where they made the initial jump?

Personally, I prefer to look at the whole picture as opposed to one part of it. The fact of the matter is, although all men were created equal, not all were born with the same opportunities. Ultimately, we have the potential to reach the highest form of glory as well as the lowest. That’s why my measure of success is based on analyzing the whole path a person has traveled. Someone starting at a lower position has a lot more work to put in, but can eventually reach the highest outcome.

The world has many opportunities to offer, and many are determined enough to recognize them. One example of this is Mike DelPrete, the CEO of Agora Games. Mike started Agora Games eight years ago, and it has thrived both before and after its acquisition by MLG. When I asked Mike what the secret to his success was, he told me to “get back to work.” I took that to mean stay focused and continue with the task at hand. After surviving his ocean of emails, meetings and conferences, Mike specified “hard work, determination, and having an open mind.”

The pillars of success reside on the platform of determination. Needless to say, that platform isn’t level. This is one of the leading causes of the gap between those who prosper and those who fall behind. The result you can expect to gain from any experience is positively correlated with the sacrifices you’re willing to make. Of course some dream bigger than others, but many fail to make it through the first stage: desire. Without a constant desire for something greater, can you expect progress?

Jun 21

Introducing Haigha

By Aaron Westendorf

We’re pleased to announce the official release of haigha, our Python AMQP client library.

Haigha is the culmination of over 2 years of development. We’ve used RabbitMQ since we launched our game services platform and immediately fell in love with it and the power that AMQP afforded us. At the time we began development, py-amqplib was the dominant client library. It fully supported the 0.8.1 protocol features we made use of but, because it was blocking, fell short of our desire for an asynchronous, highly-scalable messaging layer on which to build our services. At the time, only txAMQP supported asynchronous IO, and we had ruled out Twisted for a variety of reasons. Pika, the official Python client maintained by RabbitMQ, had yet to be born.

With the py-amqplib source in hand, we deployed it directly into our stack and started heavily modifying it to integrate pyevent. By mid-2009 we had a stable and fast fork of py-amqplib and were able to scale our HTTP<>AMQP bridge to tens of thousands of concurrent connections. As these things are wont to go, once we had met our needs, and our schedules demanded focus on client deliverables, our fork languished, unknown to the world and quietly shuffling gigabytes of data across our network.

Throughout 2010 we continually expressed a desire to release what we had done to the community at large, feeling that others could also benefit from a fast asynchronous AMQP client for their Python applications. Pika was maturing rapidly, and interest was clearly growing in building the kinds of applications that AMQP can support. We held off releasing anything, as the changes we had made to support event-driven IO had brought to light many problems in the layout of py-amqplib, and our fork was based on an older version of the code, before a major refactor. We felt that a ground-up rewrite, with a clean and efficient architecture, was the right way to contribute back to the community and improve our own code base.

In the dark pre-dawn hours of Friday 24 September 2010, haigha was born during the 2nd Agora Games hack-a-thon. By the end of the day we had a working demo with architectural details in place, and continued to develop features for the next few weeks before once again putting it on hold in favor of money-making enterprises.

After a restful New Year vacation, we spent several weeks completing haigha, profiling and optimizing it, and integrating it into our game services stack. The transition was seamless, about the best anyone could ask given that a major component we rely upon was completely re-written and deployed against a major upgrade to RabbitMQ, as we transitioned from the 1.7 series to the 2.2 series and the 0.9.1 protocol.

We still felt it poor form to release without comprehensive unit tests, and momentum was building to throw away pymox in favor of a new Mocha-inspired mocking library. In another pre-dawn fit of hack-a-thon inspiration, we launched Chai. With the right tool for the job in hand, we set about completing our unit test suite.

Once again, client deliverables conspired to keep our code from finally seeing the light of day, but after the big crunch, we punched through a quietly-publicized preview release.

After a few more weeks baking in the oven, we’ve nearly completed code coverage, fixed many bugs, and generally cleaned house. We feel that haigha is now ready for the masses, and we’re proud to put it out there for the rest of the community to use. We look forward to your feedback. You can find the source on github and packages on pypi.
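
If you’d like a taste of the API before diving in, publishing a message looks roughly like this. Treat it as a sketch from memory of the project’s examples rather than authoritative documentation; the class names and connection arguments are assumptions, so check the github source for the real signatures:

 from haigha.connection import Connection
 from haigha.message import Message

 # Connection arguments here are assumptions; see the README for the
 # authoritative list.
 connection = Connection(host='localhost', user='guest', password='guest', vhost='/')

 ch = connection.channel()
 ch.exchange.declare('test_exchange', 'direct')
 ch.queue.declare('test_queue', auto_delete=True)
 ch.queue.bind('test_queue', 'test_exchange', 'test_key')
 ch.basic.publish(Message('hello world'), 'test_exchange', 'test_key')

 connection.close()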

Jun 16

The Postman Always Rings Twice

By David Czarnecki

My colleague @logankoester posed the following question in our team chat room: “Can the Github bot notify HipChat on wiki updates as well? I mean, wikis are just git repos, right? I am equally interested in documentation changes as in software changes.”

It is possible to do this in a little bit of a roundabout way. Read on to see how I did this with Hudson, our Continuous Integration server.

  • We have the Jenkins HipChat plugin installed and configured in Hudson.

  • Grab the Wiki Git Access URL for your project, e.g. git@github.com:myorganization/my-project.wiki.git

  • Create a new project in Hudson to monitor the project’s Wiki Git repository. Relevant items to set up in the project are as follows:

Project name: Identify in the project name that it is the Wiki for your project so as to not be confused with actual project software notifications.

Source code management: Set to git and use the Wiki Git Access URL for your project here.

Branches to build: **

Poll SCM: I set it to check every 30 minutes. Example:

*/30 * * * *

In the Post-build actions: check the box for HipChat notification and set the name of the room where notifications on wiki changes should be sent.

And you’re done! If there are no changes to the wiki since the last poll, you won’t get any notices from Hudson. If there are changes to the wiki since the last poll, a notification will be posted to HipChat. Folks can then click the link in Hudson to see the list of changes.
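
As a quick sanity check that the Wiki Git Access URL is what you think it is, you can clone it by hand first; the organization and project names below are the placeholders from the example above:

 $ git clone git@github.com:myorganization/my-project.wiki.git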

You can find more hilarity over on my Twitter account, @CzarneckiD.

Jun 13

Agorian Survival: New employee edition

By An Intern

Around this time last year, I was preparing to return to my internship with a Fortune 50 company. However, the excitement that normally comes with such an accomplishment escaped me, because I knew exactly what to expect. I’d dress up in typical business attire and sit at my desk from 9-5, waiting for the occasional computer crash or random fire drill. With so much to look forward to, who could complain, right?

Wrong. When my last day came, I decided that I had officially retired from the repetition, stress and boredom. After putting my best foot forward, I landed a job at Agora Games, and things were completely different. I seemed to have entered an unfamiliar world, full of fun and exciting individuals with no limit to their creative potential. The plethora of randomness and the nature of our work, combined with the flexibility and surreal environment, formulated a culture that took the work out of the job.

On your first day at Agora, you will be set up at a new computer with a standard four-legged chair. I mention the chair’s regularity because it is then your prerogative to choose a new one from Staples. Your first course of action, however, is to tour the office for a company-wide meet and greet. Of course, no one expects that you’ll remember everyone’s name. Down the line, people will generally re-introduce themselves and try to get to know the family’s new addition.

Like many, I place high value on self-expression. With that, I can proudly say that one of the biggest perks you’ll find at Agora is freedom.  This isn’t a bureaucratic work environment where you’ll get sanctioned for forgetting to tuck in your shirt.  Whether you want to dress like Prince or the old school New York Knicks players is up to you.

Expect to find the necessary hierarchies and team divisions that keep us organized and successful, but you won’t be able to tell who’s who. Just yesterday a few of us went for ice cream with the CEO. We’re structured well enough to power projects like Guitar Hero, Call of Duty, and Mortal Kombat, but at the same time balanced enough to produce happier, more driven employees.

To conclude this rant: I have a lot to look forward to and a lot to learn, but I also have an array of exciting individuals to learn from. With the constant stream of new challenges keeping me enticed, I couldn’t have chosen a better place to work.


Jun 9

The Friendly Point

By Blake Taylor

Do you use points or do you use pixels? I’ve fallen in love with points. If you know me, you might already know this because I talk about them all the time. So what is it that I like about points, or perhaps, what is it that I dislike about pixels?

First off, did you know that a pixel isn’t actually a pixel? As it turns out, what we refer to as a pixel nowadays is more accurately defined as a device-independent pixel, also known as a kludge. Consider this: you’re a developer 10 years back, listening to Bootylicious while styling your bowl haircut. You’re probably specifying things in pixels on the assumption that monitors will never exceed the cutting-edge 96ppi. Now fast-forward to present day, with our 400-some-odd-pixel iPhone displays, and consider how compressed that same page looks. Displays may be increasing in resolution, but our eyes aren’t. We can’t let our pixel-based designs shrink as resolution increases, so what do we do? We redefine the pixel to the critical measure of 1/96 of an inch.

Interestingly enough, this measure does have magical properties of its own. However, it’s a system that seems to be rooted in happenstance, so I feel a little dirty about using it without considering anything more. The point system, on the other hand, not only doesn’t suffer from this shameful introduction, but hell, there’s elegance in its history.

The point (pt) goes way back and has a helpful cousin, the pica, with its own special abilities. So why do you think those crazy typesetters and print practitioners chose the point and the pica, even to this day? I would certainly propose flexibility as the reason. 1 pixel is equivalent to 1/96 of an inch; what the hell does that mean? Now consider 1/72 of an inch, and we’re getting into the territory where I might actually recognize the difference of a single unit. We’re all familiar with 12pt font; there’s also 11, 10, 9, and 8. It’s simple enough: pick a size above 7 and you’ll be able to read it. What about 32pt? That’s a headline font in the print world. Guess what: there’s no reason it can’t be used in the web world as well. It’s just that we realize we’re moving bits around and not ink, so we use relative measures such as the em for fonts, but it should all still be rooted in the friendly point.

So what’s a pica? Well, a pica is 1/6 of an inch. 6, huh, well that’s a nice number. Half of that is 3. I like 3. Get this: if there are 6 picas in an inch and 72 points in an inch, then that means there are 12 points in a pica. 12, now that’s an awesome number. It’s evenly divisible by 2, 3, and 4. How many numbers can you say that about? I’m sure the answer is infinite, but now you’re just being a jerk and you’re missing my point. 12 is cool, 6 is cool, 3 is cool, 1.5 is cool, .75 is cool, even .375 is manageable. Let’s go the other way: 24pt = 2pc, 48pt = 4pc, 96pt = 8pc. There’s that damn number again.

Anyway, what’s my point? (No pun intended.) Use points. They’re there and they’re not fugly. Converting pixels into points is easy: just multiply by .75. Also, since the point is rooted in what’s made print look good for years, rather than what the state of the art was when CSS became popular, the numbers just tend to work out nicer. Give em a try (haha, another pun); what do you have to lose?
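
If you want the arithmetic spelled out, here’s the whole conversion story in a few lines of Python, assuming nothing beyond the 96px-per-inch and 72pt-per-inch definitions above:

 # Unit math from the definitions above: 96 px per inch, 72 pt per inch,
 # 6 pc per inch, so 12 pt per pica and 1 px = 0.75 pt.
 def px_to_pt(px):
     return px * 72.0 / 96.0  # i.e. multiply by .75

 def pt_to_pc(pt):
     return pt / 12.0

 print(px_to_pt(16))  # 12.0 -> the classic 12pt body font
 print(px_to_pt(96))  # 72.0 -> one inch of type
 print(pt_to_pc(72))  # 6.0  -> six picas to the inch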

Jun 7

Did you miss MLG Columbus?

By Brian Corrigan

This weekend, MLG held a highly competitive, highly anticipated competition to test the strengths of pro gamers everywhere. The Pro Circuit, this time in Columbus, Ohio, included a number of competitors facing off for champion status in StarCraft 2, Halo: Reach and Call of Duty: Black Ops. The three-day event concluded as follows:

StarCraft 2

SlayerSMMA, the “Terran” prodigy, defeated second-place finisher IMLosira in the finals with a four-to-one series win. MMA didn’t lose a single series all weekend and dropped only two games in total. Entering game four with a two-to-one lead, MMA survived massive Roach/Baneling attacks and executed excellent drop plays to shut Losira down. In game five, MMA deployed early Reapers to kill off Losira’s drones, eventually following up with a big Marine/Tank force to compel the surrender.

Halo: Reach

Dropping just one map, team Instinct, consisting of Ogre 2, Roy, Lunchbox and Pistola, was crowned MLG Columbus Halo champion. Instinct left with an amazing record of 21:1 after defeating second-place finisher Str8 Rippin. It wasn’t until facing third-place finisher Dynasty in the winners bracket final that they dropped a game. That, however, didn’t faze the team, as they continued on to win the match and dominate the tournament.

Call of Duty: Black Ops

MLG Columbus produced a surprising sequence of events in the Call of Duty matchups. As day three continued, some of the highly favored teams fell into the losers bracket, and the finals ended in an unexpected sweep. OpTic Gaming was the victor that claimed the Black Ops Pro Circuit trophy. Force, originally seeded twelfth, earned second place after falling six to none.

Overall, the event continues to open doors for young teams like Fly Society, who placed ninth but managed to make a name for themselves. Congratulations to all of the winners, and a shout out to everyone who helped make this possible. The next event is in July in Anaheim, CA; can you afford to miss greatness in the making? http://www.majorleaguegaming.com

Jun 6

Cucumber and Behavior Driven Infrastructure Validation

By David Czarnecki

Did you ever think to use Cucumber to write scenarios to validate your infrastructure? Here’s a short guide to help you get started.

I’m going to keep this intentionally short because I don’t want to steal the thunder away from our systems team, but I did get a fair number of pings inquiring about this after tweeting about it, so it’s worth getting the word out.

Start a new infrastructure validation project

Your Gemfile should reference the gems for rake and cucumber at a minimum. The Rakefile can be minimally Cucumber-ified to get started.

{% gist 1011361 %}

Write your Cucumber features and scenarios for the parts of the infrastructure you want to validate

Examples might be the “aliveness” of servers (by pinging them) or making sure processes are running. For example, with MySQL:

{% gist 1011369 %}
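
To give a flavor of what such a feature might look like, here is a hypothetical scenario; the step phrasing is invented for illustration and is not the wording from our actual suite:

 Feature: MySQL infrastructure validation
   Scenario: MySQL is alive on the database host
     Given the host "db1.example.com"
     When I check for a process named "mysqld"
     Then the process should be running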

You’ll need to back those scenario step definitions up with the appropriate code. I’m going to leave that out of this post for now.

Tag your scenarios appropriately

You should tag your scenarios in the manner that makes sense for your organization and in the way that you’ll be running them and responding to them. For example, you may want to run @critical scenarios from cron every 5 minutes (or some appropriate interval) and make sure that if these fail, someone gets more than an e-mail. You may want to run @riseandshine scenarios at 6 AM after other system tasks have run, for example, to validate log rotation.
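
For instance, a crontab entry along these lines would run the @critical scenarios every 5 minutes (the checkout path is hypothetical):

 */5 * * * * cd /opt/infrastructure-validation && bundle exec cucumber --tags @critical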

KISS me baby

Keep It Simple, Stupid. If you just want to use this for sanity checks, that’s cool. You might not need to go full-scale Nagios and cucumber-nagios right away. Integrate this into your existing infrastructure and monitoring processes in the way that makes the most sense.

Hopefully this has been helpful to you. I’ll keep pushing the systems team here to get the bloggity blog blog posts about this coming. If you want to follow those guys, be sure to follow @ironwallaby, @dahui0401 and @gwaldo.

You can find more hilarity over on my Twitter account, @CzarneckiD.