OCP supporters hit back over testing claims – but there's dissent in the ranks

Don’t like the message? Then don’t listen to it

Comment Open Compute Project aficionados did not like our story about its allegedly insufficient hardware testing procedures and said so, publicly and loudly.

“Terrible journalism,” whinged one commenter, who added that it was an attack on the entire open source movement. Is that fair? A can of worms has certainly been opened.

Cole Crawford, the founding executive director of the Open Compute Project Foundation (Oct 2013 to Mar 2015) and an OCP evangelist, tweeted that I had “attacked open source in general,” adding that it is easy to complain but far harder to help improve.

He wrote a blog post entitled “An open reply in defence of OCP” in which he said:

If your workload requires fault tolerance, OCP probably isn’t right for you and you’d most likely benefit from insanely expensive certifications that ensure your hardware will last.

If you’ve designed your environment to abstract away the hardware I’d argue there’s no better choice on the market today than OCP.

There’s also a kicker: “To attack an open source project directly is at least courageous. It takes guts. To hide behind a nameless, faceless 'engineer' who hasn’t been transparent is just foolish.”

Foolish eh? OK, if you say so.

Barbara Aichinger, a member of the OCP’s C&I team and contributor to an OCP Server Memory Channel Testing document, added a comment to our story.

She said: “I sit on the OCP Compliance and Interoperability (C&I) committee and I have pushed for more rigorous testing ... and received considerable resistance to my ideas.”

Cost was a problem, in her view: “Since I am a 20-plus year veteran of the T&M industry (Test and Measurement) I was promoting the type of Validation and Compliance testing that the tier 1 vendors use. However, those involved did not want to pay the price for that type of testing.”

She continued:

I was repeatedly told that the test labs will not use any T&M hardware (scopes, analysers, etc) to test the OCP servers.

Aichinger said: “Initially, OCP wanted tier 1 quality at a tier 3 price and it would do this by standardising the HW and using volume. However, to get the tier 1 quality you have to adopt the tier 1 validation strategy. This has not happened.”

She continues to push her ideas and has “submitted a new concept called an 'audit'.” This isn’t full OEM-scale hardware validation. It will “verify signal integrity on the memory bus by measuring the data valid window of all the signals and check for protocol and timing compliance to the JEDEC spec.”

Memory testing is her specialty, and she says: “Facebook has identified Memory as the #2 failure in the data centre. Google has also published several papers on memory errors. So the memory subsystem clearly needs some validation.”

“I would encourage the anonymous test engineer to join me in the battle to bring tier 1 validation to OCP servers.”

Let’s finish up with this line from Shakespeare’s As You Like It: “The fool doth think he is wise, but the wise man knows himself to be a fool.” ®
