Building your own PC for AI is 10x cheaper than renting GPUs in the cloud, apparently
Here's the recipe for cooking up your own AMD-Nvidia beast
So, you’ve hunkered down and finally completed that online course on machine learning. It took weeks. Now, you have all sorts of ideas running through your mind on developing your own intelligent code and neural networks.
You assume you'll have to fork out a considerable wedge for a decent GPU-powered number-crunching rig, because your handy lightweight laptop isn't going to cut it during the intensive network-training process. So, seeing as you'll only dabble with this on and off at first, you're looking at renting GPUs in the cloud.
Your heart drops a little when you total up the cloud instance costs. AI is cool and all, but is it worth forking out hundreds or even thousands of dollars for running your code on a remote platform? Probably not. There is a cheaper alternative: build your own machine-learning computer. This is, apparently, ten times cheaper than relying on cloud platforms like Amazon Web Services (AWS), according to one engineer.
Jeff Chen, an AI techie and entrepreneur at Stanford University in the US, believes that a suitable machine can be built for about $3,000 (~£2,300), excluding tax. At the heart of the beast is an Nvidia GeForce GTX 1080 Ti GPU, a 12-core AMD Threadripper processor, 64GB of RAM, and a 1TB SSD for data. Bung in a fan to keep the computer cool, a motherboard, a power supply, wrap the whole thing in a case, and voila.
Here’s the full checklist...
Unlike renting compute and data storage in the cloud, once your personal rig is built the only recurring cost is power. It costs $3 (£2.28) an hour to rent a GPU-accelerated system on AWS, whereas it's only 20 cents (15p) an hour to run your own computer. Chen has done the sums, and, apparently, after two months the home-built rig works out roughly ten times cheaper. The gap narrows slightly over time as the computer hardware depreciates.
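Chen's arithmetic can be sketched in a few lines. This is a rough model using the per-hour figures above; it ignores tax, electricity price variation, depreciation, and any resale value, and the usage pattern is our assumption rather than his:

```python
# Back-of-the-envelope cost comparison using the article's figures:
# ~$3/hour for a GPU-accelerated AWS instance vs ~$0.20/hour in power
# for a home-built rig costing ~$3,000 up front.
BUILD_COST = 3000.0   # one-off hardware cost, USD
CLOUD_RATE = 3.0      # USD per GPU-hour in the cloud
POWER_RATE = 0.20     # USD per hour to run your own box

def cumulative_cost_cloud(hours):
    """Total spend after `hours` of cloud GPU time."""
    return CLOUD_RATE * hours

def cumulative_cost_own(hours):
    """Total spend after `hours` on your own rig (hardware + power)."""
    return BUILD_COST + POWER_RATE * hours

# Break-even: the hour count at which owning becomes cheaper overall.
break_even = BUILD_COST / (CLOUD_RATE - POWER_RATE)
print(f"Break-even after ~{break_even:.0f} GPU-hours")  # ~1071 hours

# Once the hardware is paid off, every further hour is 15x cheaper.
print(f"Marginal saving per hour: {CLOUD_RATE / POWER_RATE:.0f}x")
```

At heavy training loads (many hours a day), that break-even point lands in the region of a couple of months, which is consistent with Chen's claim.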
“There are some drawbacks, such as slower download speed to your machine because it’s not on the backbone, static IP is required to access it away from your house, you may want to refresh the GPUs in a couple of years, but the cost savings is so ridiculous it’s still worth it,” he said this week.
An Nvidia GeForce GTX 1080 Ti isn't the best GPU on the market, but it'll be fine for most people unless you're dealing with extremely large datasets and models. In fact, the performance of your home box will probably be quite similar to that of beefier cloud GPUs like Nvidia's Tesla V100. A V100 will set you back from $8,000 to over $10,000, compared to a measly $700 for a GeForce GTX 1080 Ti.
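Put the article's numbers side by side and the value argument is stark. A quick sketch of price/performance, taking the most generous case for the V100 (75 per cent faster, at its cheapest quoted price):

```python
# Rough price/performance comparison using the figures quoted above:
# a Tesla V100 at $8,000-$10,000+ runs 25-75 per cent faster than a
# $700 GTX 1080 Ti on typical deep-learning workloads.
GTX_1080TI_PRICE = 700.0
V100_PRICE_LOW = 8000.0
V100_SPEEDUP_HIGH = 1.75  # best case: 75 per cent faster

# Throughput per dollar, normalising the 1080 Ti's speed to 1.0.
perf_per_dollar_1080ti = 1.0 / GTX_1080TI_PRICE
perf_per_dollar_v100 = V100_SPEEDUP_HIGH / V100_PRICE_LOW

value_ratio = perf_per_dollar_1080ti / perf_per_dollar_v100
print(f"1080 Ti delivers ~{value_ratio:.1f}x more speed per dollar")
```

Even flattering the V100, the consumer card comes out around six and a half times better value per dollar spent.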
“The datacenter grade V100 is 25 to 75 per cent faster depending on workload, you don’t get the 4 to 8x [speedup] promised because you get bottlenecked by Amdahl's law and slow IO. You get more memory but 11GB on the 1080 Ti should be good for most cases,” Chen told The Register.
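Amdahl's law is why the headline speedup never materialises: if only a fraction of total runtime is GPU compute, the rest (data loading, CPU preprocessing, IO) caps the overall gain. A small illustration, where the 60 per cent GPU-bound figure is our illustrative assumption, not Chen's measurement:

```python
# Amdahl's law: if a fraction p of runtime benefits from a speedup s,
# overall speedup = 1 / ((1 - p) + p / s). The (1 - p) part — IO, data
# loading, CPU work — is untouched by a faster GPU and limits the gain.
def amdahl(p, s):
    """Overall speedup when fraction p of the work is accelerated s-fold."""
    return 1.0 / ((1.0 - p) + p / s)

# Even if the V100 kernels were a full 8x faster, a pipeline spending
# only 60 per cent of its time in GPU compute gains far less than 8x:
print(f"Overall speedup: {amdahl(p=0.6, s=8.0):.2f}x")  # ~2.11x

# Only a 100 per cent GPU-bound workload sees the full factor:
print(f"Fully GPU-bound: {amdahl(p=1.0, s=8.0):.1f}x")  # 8.0x
```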
Nvidia has actually banned the use of cheap GeForce chips for datacenters since it updated its end-user licence agreement last year, forcing cloud providers to splash out on its more expensive options.
Jensen Huang, CEO of Nvidia, said GeForce cards were only really suited for gaming and cryptomining. But that hasn’t stopped deep learning engineers using them in their own servers.
"Building GTX 1080 Ti (GPU) servers with team to ship to a https://t.co/PCELREx5OS customer. Lots of factories still not connected to internet, so have to build many edge deployments!" — Andrew Ng (@AndrewYNg), August 15, 2018
There are also other options, like using Google's Tensor Processing Units (TPUs) or buying pre-built machines, but these aren't cheaper than building your own, Chen said. He also advised people to stick to Nvidia's GPUs.
“Based [on] what I’ve seen, [TPUs] don’t seem that much cheaper but promise to be much faster. I’ve looked into [cheaper alternatives to Nvidia] and decided against it. Most lack full software community support and AMD’s Vega is 25 to 50 per cent slower,” he told us. ®
We'll be examining machine learning, artificial intelligence, and data analytics, and what they mean for you, at Minds Mastering Machines in London, between October 15 and 17. Head to the website for the full agenda and ticket information.