Render farming is hot!
Server upgrade makes data centre unfit for purpose
I recently revisited my all-time favourite consulting experience: the design and construction of a mid-sized render farm. I had never done anything quite like it before; it took a full summer’s worth of research and two months to install and test the final design.
The render farm consisted of 400 render nodes, a rack’s worth of storage, a cluster of three Red Hat servers handing out Linux to the render nodes and a single Windows XP system running the controller software. This farm has just finished its fourth year of continuous operation and the client has reluctantly decided that the time has come to replace it.
I had not heard much about this farm since final delivery and the subsequent completion of the first production render. Sitting down with the client and his in-house systems administrator to discuss their trials and tribulations over the past four years working with my “baby” was an eye-opener.
Apart from a disk upgrade to the storage rack and a CPU upgrade to the render nodes, very little on the render farm had been touched during its operational lifetime. The farm experienced a few disk failures, but nothing above average. The render nodes had grown some bad DIMMs yet had lost but a single motherboard.
Astonishingly, the Linux distribution in use is the same today as on the day it was installed; the configuration files have never once been touched. Considering the evolutionary pace of 3D software, I found this hard to believe, but apparently they have gotten along just fine working with a years-old version of Lightwave.
The client had upgraded their copy of Lightwave shortly after the render farm was completed, and this slightly newer version was apparently still compatible with the render farm. Indeed, it was not the speed of the render farm that was driving the request for an upgrade, but rather a desire to finally move their software beyond a version nearly four years old. The newer versions of Lightwave apparently will not talk to the badly outdated software on the render farm.
More punch, more power
We went back to my lab, lashed together a few test systems and discovered to our amazement that the entire 400-node render system could probably be replaced with two dozen modern servers running sexy video cards. Numbers were juggled and it was decided that the replacement build would start with a 50-node render farm with two video cards per node.
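For those curious about the number juggling, it was nothing more exotic than a back-of-envelope throughput estimate along the lines of the sketch below. Every figure in it is an illustrative assumption, not a benchmark from the client's farm.

# Back-of-envelope sizing: how many dual-GPU nodes replace an old CPU-only farm?
# All figures are illustrative assumptions, not measurements from the client's systems.
OLD_NODES = 400              # existing CPU-only render nodes
SPEEDUP_PER_GPU_NODE = 17.0  # assumed: one dual-GPU test system rendered ~17x faster than one old node

new_nodes_needed = OLD_NODES / SPEEDUP_PER_GPU_NODE
print(f"GPU nodes needed to match the old farm: {new_nodes_needed:.0f}")
# ~24 nodes, i.e. roughly two dozen; a 50-node build therefore starts life
# with about double the old farm's rendering capacity.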
I asked the client what he would do with the space freed up by such an upgrade. I was worried he would do something silly like packing the datacenter full of storage boxes, compromising airflow. This proved to be a fateful question: apparently the upgrade would free up enough physical space in their datacenter to make a bid on a major contract possible. Success would mean fleshing their 50-node farm out into a 500-node farm.
It was here that things got gluey. The servers we had just designed easily pulled more than three times as much power as the old ones. Their existing datacenter has neither the electrical service nor the cooling to handle anything close to 500 of these new nodes.
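The maths behind that conclusion is simple enough to sketch. The wattages and service figure below are assumptions for illustration only; the point is the shape of the problem, not the exact numbers.

# Rough power-budget check: can the existing room feed 500 of the new nodes?
# Wattages and electrical service capacity are assumed values, not the client's real figures.
OLD_NODE_WATTS = 350                 # assumed draw of one old CPU-only render node
NEW_NODE_WATTS = OLD_NODE_WATTS * 3  # the new dual-GPU nodes pull over three times as much
SERVICE_KW = 200                     # assumed capacity of the room's electrical service

def farm_kw(nodes, watts_per_node):
    """Total IT load in kilowatts, before any cooling overhead."""
    return nodes * watts_per_node / 1000.0

print(f"Old 400-node farm: {farm_kw(400, OLD_NODE_WATTS):.0f} kW")       # ~140 kW
print(f"Proposed 500-node farm: {farm_kw(500, NEW_NODE_WATTS):.0f} kW")  # ~525 kW of IT load alone
# Every watt of IT load also has to be removed as heat, so the real demand on the room
# is higher still; a service sized for the old farm doesn't come close.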
The incident illustrates why a little paranoia pays off in the long run. There’s more to upgrades than simply updating software or swapping boxes. Just as we must consider software items such as patches, drivers and the many and varied ways our systems interact with one another, we should always be concerned with how our servers interact with their physical environment.