Using 'AI-based software like Proctorio and ProctorU' to monitor online exams is a really bad idea, says uni panel

Academics find algorithmic surveillance just isn't worth it

Updated A committee at the University of Texas at Austin has advised against using AI software to oversee students' online tests, citing the psychological toll on students and the financial toll on academic institutions.

Acknowledging that some form of online proctoring is necessary to discourage academic misconduct, the committee concluded, "we strongly recommend against the use of AI-based software like Proctorio and ProctorU."

The Report of the Academic Integrity Committee about Online Testing and Assessment, spotted by Megan Menchaca, education reporter for the Austin American-Statesman, is said to have been included in a university official's recent message to faculty.

AI-based software to watch over remote students as they take online tests – "academic surveillance software" to detractors – has flourished during the COVID-19 pandemic. Large numbers of students have been studying remotely and schools believe they need a way to prevent cheating.

But the software that's been deployed has been widely criticized by students and privacy advocates. The concern centers on the inability to audit the software's source code and the possibility that these systems rely on flawed algorithms and biased or arbitrary signals to label students as cheaters.

Critics also worry that the software can't account for varied student living conditions and is vulnerable to racial bias – eg, motion tracking that produces different results with different skin tones – and cognitive bias such as gaze tracking that flags ADHD behaviors as suspicious.

Such criticism last year led UC Berkeley [PDF] and Baruch College in New York to stop using remote proctoring products. In February, the University of Illinois at Urbana-Champaign said it will drop Proctorio after this summer due to "significant accessibility concerns."

When in doubt, sue

Amid this backlash, proctoring software maker Proctorio sued critics, alleging last year that Ian Linkletter, a learning technology specialist at the University of British Columbia (UBC) in Vancouver, Canada, violated Canadian copyright law by linking to the company's publicly viewable videos. That case remains ongoing in Canada, and has forced Linkletter to appeal for funds to defend himself through the costly legal process.

Proctorio also last year filed a Digital Millennium Copyright Act (DMCA) takedown complaint against Miami University computer science student Erik Johnson seeking the removal of posts on Twitter that were critical of the company. Twitter removed the posts and later restored them.

The firm's legal crusade prompted pushback from the Electronic Frontier Foundation, which said the company should not be able "to abuse copyright law to undermine their critics."

The UT Austin committee began working on its report after student councils in the spring of 2021 asked the university to get rid of AI proctoring software, which was used widely during the 2020-2021 academic year.

The committee asked student leaders and faculty to provide information about how the software was employed and decided that it just wasn't worth it.

"The invasive nature of the tools as well as the warnings that the tools may send to the screen during the exam cause high levels of anxiety," the report says.

"Although these tools were used extensively by faculty in academic year 2020-2021, only 27 cases were referred to the Student Conduct and Academic Integrity office as potential violations of academic integrity, and of these only 13 were upheld. Thus, the psychological (and financial) costs of the tool do not seem to be worth the small benefit of using it."

Trust the teachers

The report goes on to suggest alternative methods of watching over students during tests, such as Zoom for small groups, and other academic software like Canvas Quizzes, Gradescope, and Panopto. It also recommends that instructors consider rethinking how they assess student progress in order to reduce online test anxiety.

The University of Texas at Austin, Proctorio, and ProctorU did not respond to requests for comment.

In an email to The Register, Linkletter – still awaiting a ruling on his effort to dismiss Proctorio's copyright complaint under Canada's anti-SLAPP statute, the Protection of Public Participation Act – said what stands out to him from the UT Austin report is the finding that Proctorio just isn't worth it.

"Every institution should be taking a hard look at whether Proctorio is worth the 'psychological cost' mentioned in the report, let alone the expense," he said.

"Over half of the 27 students accused had their academic integrity cases tossed. Thousands of students were surveilled, at great expense, for what? How much faculty and staff time was wasted? How much unnecessary heartbreak caused?

"Students understand that surveillance is wrong. They know how the technology works. There is no technical explanation that will reduce the harm being done – it simply needs to stop.

"The only way institutions can demonstrate they are listening to students is to stop using academic surveillance software." ®

Updated to add

In a statement emailed to The Register, Jarrod Morgan, Chief Strategy Officer of Meazure Learning, the parent company of ProctorU, took issue with the UT report and said that his firm gave up on AI-only proctoring several months ago.

“After our decision and announcement earlier this year to discontinue all AI-only exam monitoring, every single exam from every single test-taker using ProctorU is reviewed by a trained, live proctor,” said Morgan.

“As ProctorU announced at the time, it’s only proctoring if a human does it and, more importantly, it’s the only way to be sure the process is accurate, fair and consistent. Some exam monitoring companies may rely on AI software to monitor exams and score students, ProctorU is not one of them and it is important to understand the differences between true proctoring with a trained live proctor and the less-expensive and in our opinion, unacceptable option of ‘monitoring’ software such as that offered by Proctorio.”
