
SPEC mulls benchmarks for ML processing performance

Measuring real-world AI training, decision-making abilities on to-do list

Benchmarking organization SPEC has formed a committee to oversee the development of vendor-agnostic benchmarks for machine-learning training and inference tasks.

SPEC, the non-profit Standard Performance Evaluation Corporation, produces a range of benchmarks that are widely used to evaluate the performance of computer systems, especially in the high performance computing (HPC) industry.

According to SPEC, the newly formed Machine Learning Committee will develop practical methodologies for benchmarking artificial intelligence and machine learning performance in the context of real-world platforms and environments.

The goal is to deliver benchmarks that better represent industry practice than today's benchmarks such as MLPerf, by covering major parts of the machine-learning and deep-learning pipeline, including data preparation as well as the training and inference stages. The committee will also work with other SPEC committees to update their benchmarks for ML environments.

Initially, the SPEC ML Committee will focus on devising benchmarks that will measure end-to-end performance of a hardware system under test in handling ML training and inference tasks. The panel is expected to produce a vendor-neutral third-party benchmark that can then be used by system designers to assess competing platforms and technologies.

The benchmarks will also allow users of machine-learning tools, including enterprises and scientific research institutions, to understand how machine-learning solutions will perform in real-world environments, guiding them to better purchasing decisions, according to the chair of the SPEC ML Committee, Arthur Kang.

“IDC expects enterprises to spend nearly $342 billion on AI in 2021, and it's essential that these companies understand what that money will buy,” Kang said in a statement.

Meanwhile, the SPEC ML Committee is inviting anyone with relevant expertise to consider joining it in the development and management of the SPEC ML benchmark, especially users and manufacturers of machine-learning or deep-learning tools. "I encourage anyone interested in the future of ML processing to join the SPEC ML Committee and help shape these invaluable benchmarks," Kang said.

Current members of the SPEC ML Committee include representatives from AMD, Dell, Inspur, Intel, NetApp, Nvidia, and Red Hat, all vendors that have an interest in the machine-learning market.

Other bodies that have had a crack at developing machine learning benchmarks include the Transaction Processing Performance Council (TPC) and MLPerf, backed by Google and Baidu. ®
