Nvidia builds CUDA GPU programming library for machine learning – so you don't have to

Craft a deep neural network on a graphics chipset

Nvidia has released a set of software routines for accelerating machine-learning algorithms on its highly parallel graphics processors.

Over the weekend, the GPU maker uploaded cuDNN – CUDA Deep Neural Networks – which is a library of primitives for building software that trains neural networks.

The component is optimized for Nvidia's processors and should, in theory, save programmers time: by using the library, developers won't have to reinvent the wheel when tuning parallelized machine-learning algorithms for GPUs, which offload the mathematical work from the host system's CPU.
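To give a feel for what those primitives look like, here is a minimal, hypothetical sketch of a single cuDNN call: a ReLU activation run entirely on the GPU. It uses the descriptor-based C API of later cuDNN releases; the exact function signatures in the original 2014 release differed, so read it as an illustration of the programming style rather than a definitive example.

    /* Minimal cuDNN sketch: apply a ReLU activation to a tensor on the GPU.
       Tensor sizes are arbitrary example values; this follows the
       descriptor-based API of later cuDNN versions, not the 2014 original. */
    #include <cudnn.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        /* One image, 3 channels, 8x8 pixels, stored in NCHW layout. */
        const int n = 1, c = 3, h = 8, w = 8;
        const size_t bytes = (size_t)n * c * h * w * sizeof(float);

        cudnnHandle_t handle;
        cudnnCreate(&handle);

        /* Describe the input/output tensor shape and data type. */
        cudnnTensorDescriptor_t desc;
        cudnnCreateTensorDescriptor(&desc);
        cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   n, c, h, w);

        /* Describe the operation: a ReLU activation. */
        cudnnActivationDescriptor_t act;
        cudnnCreateActivationDescriptor(&act);
        cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                     CUDNN_NOT_PROPAGATE_NAN, 0.0);

        /* Allocate device buffers; in a real network d_x would hold the
           previous layer's output. */
        float *d_x, *d_y;
        cudaMalloc(&d_x, bytes);
        cudaMalloc(&d_y, bytes);
        cudaMemset(d_x, 0, bytes);

        /* y = relu(x), computed on the GPU by the library. */
        const float alpha = 1.0f, beta = 0.0f;
        cudnnActivationForward(handle, act, &alpha, desc, d_x,
                               &beta, desc, d_y);

        printf("ReLU forward pass launched on the GPU\n");

        /* Clean up. */
        cudaFree(d_x);
        cudaFree(d_y);
        cudnnDestroyActivationDescriptor(act);
        cudnnDestroyTensorDescriptor(desc);
        cudnnDestroy(handle);
        return 0;
    }

In practice, deep-learning frameworks wrap calls like these, so most application developers never write them by hand.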

Announcing cuDNN, Nvidia pointed to examples of machine learning and neural networks being used by financial companies, web firms and research bodies in areas such as fraud detection and gaming.

In particular, Nvidia highlighted attempts to perform these tasks by processing images, looking at things like handwriting and facial recognition.

“The success of DNNs has been greatly accelerated by using GPUs, which have become the platform of choice for training large, complex, DNN-based ML systems,” the company’s solutions architect Larry Brown has blogged.

Brown added that Nvidia was introducing the primitives library due to the “increasing importance” of DNNs and the key role played by GPUs. ®

 
