
Australia's e-Senate vote count: a good start but needs improvement

Boffins offer audit ideas to improve accuracy and transparency

An international group of security, cryptography, and electoral academics believes Australia's Senate vote-counting software needs an audit.

The group, including researchers from MIT, UC Berkeley, and the University of Melbourne, took a look (PDF) at the Australian Electoral Commission's (AEC's) implementation of electronic counting for this year's July federal election.

The AEC rolled out an electronic count in the Senate for two very good reasons: in each state there was a staggeringly large number of candidates, and that, in turn, made the quota-based count painstaking and slow.

The AEC's system was straightforward: ballots were scanned; scrutineers had the chance to confirm that the scan matched the paper ballot; and the system counted the votes.

That, however, leaves gaps in the process, as explained to The Register by one of the authors, e-voting expert Dr Vanessa Teague of the University of Melbourne.

Dr Teague said the group found that the AEC put in place a “robust” process, but oversight by observers needs work.

“The important thing is having evidence of getting the right outcome” from the count, she said – and the most serious gap is between the screen and the database. Through either malice or simple software bugs, it's feasible that what's shown on the screen doesn't match what gets stored and counted in the system.

“The AEC put a lot of effort into designing a robust process, but there are gaps in the evidence trail that arise because of trust in the software.”

Even an independent computer re-count – as Dr Teague points out, the AEC publishes a complete data file so others can test the count – wouldn't capture an error between scan and storage.

The paper notes that it's also difficult to audit the cryptographic signature process surrounding ballot data storage; and there's no way to scrutinise that the data is accurately sent from the scanning process to the counting process.

Auditing is hard

Vulture South had never given this much thought: the single transferable vote system used for the Australian Senate presents an NP-hard problem.

That makes it hard to work out exactly how accurate an audit needs to be, because, as the paper says, “it is hard to compute how many votes it takes to change the outcome” in any jurisdiction.

As a result, Dr Teague said, audit efficiency depends on how close the result is.

In the 2016 election, the paper notes, the final Senate seat in Tasmania came down to 141 votes out of some 300,000.

“When the margin is very small, you have to audit a lot of ballots to be really confident in the system,” she said. “If the result is as close as in [the State of] Tasmania, we don't know how to do an efficient audit” (the paper notes that state would need an audit sample of 250,000 ballots).

However, “For most of the other States, it seems it would take a lot of errors to change the outcome – so in that case, you can gain significant statistical confidence in a relatively small audit.”

Hence, the paper says, in some states, a simple Bayesian statistical audit of a random sample of ballots would suffice: take a few thousand ballots, confirm the accuracy of the scan, and test the count.
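
To make the idea concrete, here is a minimal sketch of that style of Bayesian audit in Python – our illustration, not the paper's published code – boiled down to a two-candidate contest rather than the Senate's full preferential count. The function name and the figures in the example are ours: draw a posterior over the true vote share from the audited sample, simulate how the unaudited ballots would break, and see how often the reported winner loses.

import numpy as np

def bayesian_upset_probability(sample_a, sample_b, total_ballots,
                               reported_winner="A", trials=100_000, seed=1):
    """Estimate the probability that the reported winner is actually the loser.

    A simplified two-candidate stand-in for the Senate's preferential count:
    sample_a and sample_b are the hand-audited ballots for each candidate,
    and total_ballots is the size of the full count. Uses a uniform Beta
    prior on candidate A's true share of the vote.
    """
    rng = np.random.default_rng(seed)
    audited = sample_a + sample_b
    remaining = total_ballots - audited

    # Posterior over A's true vote share, given the audited ballots.
    shares = rng.beta(sample_a + 1, sample_b + 1, size=trials)
    # For each simulated share, draw how the unaudited ballots would break.
    unseen_a = rng.binomial(remaining, shares)

    total_a = sample_a + unseen_a
    total_b = total_ballots - total_a
    a_wins = total_a > total_b

    upsets = ~a_wins if reported_winner == "A" else a_wins
    return upsets.mean()

# Example: a 2,000-ballot audit sample drawn from a 300,000-ballot count.
print(bayesian_upset_probability(sample_a=1080, sample_b=920,
                                 total_ballots=300_000))

With a sample that one-sided, the simulated upset probability comes out vanishingly small – which is the point Dr Teague makes about comfortable margins needing only modest audits.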

To handle a close result, the paper proposes a different audit: predict how many errors would be needed to change the result (code for this is on GitHub), and test a sample of ballots for errors.

The error rate indicates whether the result can be trusted (in Tasmania's case, that would mean checking just 2,500 ballots; three errors would be enough to trigger a recount).
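
A stripped-down sketch of that kind of check – again our illustration, with hypothetical names, using a simple error threshold rather than the paper's full statistical test – might look like this:

import random

def error_count_audit(paper, scanned, sample_size, tolerated_errors, seed=1):
    """Count scan-versus-paper discrepancies in a random ballot sample.

    paper maps a ballot ID to the hand-read preferences, and scanned maps the
    same IDs to the preferences the system stored. tolerated_errors would be
    derived from the predicted number of vote changes needed to flip the
    result – for example, a recount if three discrepancies turn up in a
    2,500-ballot Tasmanian sample, per the figures reported above.
    """
    rng = random.Random(seed)
    sampled_ids = rng.sample(sorted(paper), sample_size)
    errors = sum(1 for i in sampled_ids if paper[i] != scanned[i])
    return errors, errors >= tolerated_errors

# Example usage (with real ballot data in hand):
# errors, recount_needed = error_count_audit(paper, scanned, 2_500, 3)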

Finally, the paper suggests a fixed-size audit of 0.1 per cent of all ballots, tested both for a measure of risk and as a second Bayesian check.
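
Drawing that fixed sample is the easy part; a minimal, illustrative sketch:

import math
import random

def fixed_size_sample(ballot_ids, fraction=0.001, seed=1):
    """Draw a fixed uniform random sample of ballot IDs for re-checking.

    With roughly 300,000 ballots in a state, a 0.1 per cent sample is about
    300 ballots, which would then be compared against the paper and fed into
    the risk measure and the second Bayesian check.
    """
    size = math.ceil(len(ballot_ids) * fraction)
    return random.Random(seed).sample(list(ballot_ids), size)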

The paper includes links to various GitHub projects presenting audit code.

The other authors of the paper are Berj Chilingirian, Zara Perumal and Ronald Rivest of MIT's Computer Science and Artificial Intelligence Laboratory; Andrew Conway of Australian company Silicon Econometrics (and a member of the Secular Party); Philip Stark of UC Berkeley's statistics department; and the University of Melbourne's Michelle Blom and Chris Culnane. Australian Greens member Grahame Bowland helped the group work with the Australian Senate counting rules, and provided Bayesian audit code. ®
