If you’ve been thinking about building your own deep learning computer for a while but haven’t quite got 'round to it, here’s another reminder. Not only is it cheaper to do so, but the resulting machine can also be faster at training neural networks than GPUs rented on cloud platforms.
When you start trying small side projects like, say, building little autonomous drones or crafting a bot to spit out random snippets of poetry, you begin to realise how much compute power is really needed to get interesting results. So you can either fork out money to rent hardware via cloud services like AWS or Google Cloud Platform, or build your own server.
Jeff Chen, an AI engineer and entrepreneur, drew up a handy shopping list for all the different parts needed to craft your own deep learning rig. Now he has measured the amount of time it takes to carry out machine learning tasks using Nvidia's higher-end V100 and K80 chips on AWS compared to Nvidia's lower-end GPUs like the 2080 Ti and 1080 Ti.
"The $6,000 V100 hosted on AWS performed anywhere between 0.86x (underperforms) to 3.08x faster than the $700 1080 Ti, as shown in the benchmark results below," he said.
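Those quoted figures make the value comparison easy to run yourself. Here's a back-of-the-envelope sketch (using only the prices and speedup range Chen cites above) showing the 1080 Ti comes out well ahead on performance per dollar either way:

```python
# Value comparison using the figures quoted above.
v100_price, ti1080_price = 6000, 700   # USD, as cited by Chen
best_case_speedup = 3.08               # V100 vs 1080 Ti, best benchmark
worst_case_speedup = 0.86              # V100 actually slower on this task

price_ratio = v100_price / ti1080_price  # ~8.57x more expensive
print(f"V100 costs {price_ratio:.2f}x as much as a 1080 Ti")

# Performance per dollar, with the 1080 Ti normalised to 1.0:
for label, speedup in [("best case", best_case_speedup),
                       ("worst case", worst_case_speedup)]:
    perf_per_dollar = speedup / price_ratio
    print(f"V100 {label}: {perf_per_dollar:.2f}x the 1080 Ti's perf-per-dollar")
```

Even in the V100's best case, you're getting roughly a third of the 1080 Ti's performance per dollar; in the worst case, about a tenth.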
It depends on the task and how the data is processed, he argued. Nvidia's V100 chips boast a higher number of operations per second, but that peak figure comes from its Tensor Cores, which accelerate mixed-precision FP16/FP32 arithmetic. If your neural network isn't trained in half precision, it can't make use of all that extra compute, and the V100 underperforms.
So, the fancier chips are able to accelerate the training of computer vision networks such as ResNet-50 in FP32 and FP16, but for other jobs, like image segmentation or sentiment analysis, they fare no better than the cheaper 1080 Ti or 2080 Ti GPUs.
"It's obvious that consumer-grade 2080 Ti and 1080 Ti GPUs offer the best value. For example, the 2080 Ti offers at least 50 per cent the performance of the V100 at 25 per cent of the price. This is why builders predominantly use these GPUs...and why these GPUs are almost always out of stock. This is also why building your own Deep Learning Computer is 10x cheaper than AWS," Chen said.
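Chen's 2080 Ti claim can be sanity-checked the same way. A quick sketch, assuming exactly his quoted figures (at least 50 per cent of V100 performance at 25 per cent of the price):

```python
# Sanity check of the 2080 Ti value claim quoted above.
v100_price = 6000                   # USD, as cited earlier in the piece
ti2080_price = 0.25 * v100_price    # "25 per cent of the price" -> $1,500
ti2080_relative_perf = 0.50         # "at least 50 per cent the performance"

# Performance per dollar, with the V100 normalised to 1.0:
v100_perf_per_dollar = 1.0 / v100_price
ti2080_perf_per_dollar = ti2080_relative_perf / ti2080_price
advantage = ti2080_perf_per_dollar / v100_perf_per_dollar
print(f"2080 Ti: at least {advantage:.0f}x the V100's performance per dollar")
```

That works out to at least double the V100's performance per dollar on hardware price alone; the 10x figure comes from comparing a one-off build cost against AWS's recurring hourly rates.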
Luckily, lone hobbyists aren't bound by Nvidia's harsh EULA rule that states no giant data centers are allowed to use the cheaper 1080s because, apparently, they are not well-suited for building large-scale servers. So go ahead and build that computer; it'll only set you back about $3,000. ®