Wanna use your Nvidia GPU for acceleration but put off by CUDA? OpenAI has a Python-based alternative

Plus: Software-detected gunshot withdrawn as evidence from trial

In brief If you’ve always wanted to program your Nvidia GPU to accelerate machine learning, image processing, and other workloads, but find Nvidia's CUDA too daunting or too much of a faff to learn, you’re in luck.

OpenAI late last month released Triton, a Python-based environment that aims to help developers write and compile code to run on Nvidia GPUs much more easily, without having to grapple with CUDA.

The San Francisco upstart has been using Triton to optimize its own software so that its machine-learning algorithms run more efficiently on specialized hardware. Building state-of-the-art models is costly: developers have to be able to train and tweak them quickly, which often means writing custom GPU kernels.

“We’re releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code—most of the time on par with what an expert would be able to produce,” OpenAI said. “Triton makes it possible to reach peak hardware performance with relatively little effort; for example, it can be used to write FP16 matrix multiplication kernels that match the performance of cuBLAS — something that many GPU programmers can’t do—in under 25 lines of code.”
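
To give a sense of what that looks like in practice, below is a minimal sketch along the lines of the project's introductory vector-addition tutorial (not OpenAI's FP16 matmul example): a kernel is an ordinary Python function, decorated with @triton.jit, that loads, computes on, and stores one block of data per program instance.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide chunk of the vectors
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(98432, device='cuda')
y = torch.rand(98432, device='cuda')
out = torch.empty_like(x)
# Launch enough program instances to cover the whole vector
grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```

The point is that the kernel above reads like NumPy-flavored Python, yet Triton's compiler handles the memory coalescing and scheduling a CUDA programmer would otherwise manage by hand.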

You can read more about Triton and its documentation here. Support for other GPUs, such as AMD's, is said to be coming.

Separately, Twitter has offered a bounty to anyone who can find biases in its algorithms, such as those in its image-cropping tool, which was found to favor White people and women.

Evidence of computer-detected gunshot withdrawn from trial

Prosecutors in America have withdrawn from a murder trial evidence of what was said to be a gunshot detected by classification algorithms.

One evening in May last year, Safarian Herring, 25, was shot in the head, and died two days later in hospital. Michael Williams, 64 at the time, was charged with his slaying, and denies any wrongdoing: he says Herring was killed by someone else in a drive-by shooting. Williams was said to have brought Herring to St Bernard Hospital in Chicago.

Cities across the United States have ShotSpotter systems dotted around their streets: microphones attached to computer systems programmed to identify the sound of gunfire and automatically alert the cops to its location.

The evidence against Williams includes a report generated by ShotSpotter's sensors in Chicago that identified gunfire where surveillance cameras had seen Williams stop his car by a south-side Chicago block, right when and where the cops said Herring was shot.

However, Williams' lawyers asked the trial judge to probe the ShotSpotter evidence, alleging in a court filing [PDF] that the system actually picked up the sound of a firecracker a mile away from where police said Herring was shot, and that these details were later changed by ShotSpotter employees to read as gunfire at the spot where officers said Herring was hit. In response, prosecutors withdrew the evidence, and the case was ultimately dismissed. Williams had spent the best part of a year in jail awaiting trial.

ShotSpotter responded by denying at length it improperly altered any data or evidence, and pushed back on any suggestion it had done so to help the police make a case. "The idea that ShotSpotter 'alters' or 'fabricates' evidence in any way is an outrageous lie and would be a criminal offense," it said in a statement. "We follow the facts and data for our forensic analysis. Period."

Editor's note: ShotSpotter has responded to the allegations raised by Williams' lawyers, stating that, for its court evidence, its algorithm identified two data points: the exact coordinates where Herring was shot, at the junction of South Stony Island Avenue and East 63rd Street, and the street address of the entrance to Jackson Park, the edge of which is where Herring was hit. The park's entrance is a mile from where the shooting occurred. These data points were not changed at any time.

ShotSpotter also said the reclassification of the sound from a firecracker to gunfire was innocuous: a human reviewer checked the audio and changed its label from possible firework to gunshot within a minute of detection.

Thus, though Williams' lawyers sought to paint ShotSpotter's location and classification as ambiguous and unreliable, it is clear from the evidence why two data points were given – the precise coordinates of the actual shot, and what the algorithm thought was the nearest relevant street address, the adjacent park – that these data points were not changed by ShotSpotter staff, and that the sound was reclassified immediately by an employee. We are happy to clarify this situation.

Apple Watch data problematic for health study

Algorithms running on Apple Watches to monitor things like heart rate and sleep patterns may not be reliable enough for academic research.

JP Onnela, an associate professor of biostatistics at Harvard University's public health school, discovered this the hard way when he asked his collaborator Hassan Dawood, a research fellow at Brigham and Women’s Hospital, to upload the heart-rate data recorded by Dawood's Apple Watch.

To their surprise, exporting the same data twice produced different results: heart-rate readings covering the same time period, exported once on September 5, 2020 and again on April 15, 2021, should have been identical, yet the two exports didn't match.

Onnela reckons Apple’s code could be to blame. “These algorithms are what we would call black boxes — they’re not transparent. So it’s impossible to know what’s in them,” he told The Verge. The lack of transparency means Apple may have tweaked its software, making it difficult for the researchers to trust the data collected by the iGiant's devices.

Apple, however, said there wasn’t an issue with its algorithms, and that the problem probably lies in the export process. Either way, the discrepancies suggest the watch probably isn't a trustworthy source of data for academic purposes.

You can read more about the experiment here. ®
