Top sci-fi convention gets an earful from authors after using AI to screen panelists
Leave it to the Borg? Scribe David D. Levine slams 'use of planet-destroying plagiarism machines'
Fans and writers of science fiction are not necessarily enthusiastic about artificial intelligence - especially when it's used to vet panelists for a major sci-fi conference.
The kerfuffle started on April 30, when Kathy Bond, the chair of this summer's World Science Fiction Convention (Worldcon) in Seattle, USA, published a statement addressing the use of AI software to review the qualifications of more than 1,300 potential panelists. Volunteers entered applicants' names into a ChatGPT prompt directing the chatbot to gather background information about each person, as an alternative to time-consuming search engine queries.
"We understand that members of our community have very reasonable concerns and strong opinions about using LLMs," Bond wrote. "Please be assured that no data other than a proposed panelist’s name has been put into the LLM script that was used."
The statement continues, "Let’s repeat that point: no data other than a proposed panelist’s name has been put into the LLM script. The sole purpose of using the LLM was to streamline the online search process used for program participant vetting, and rather than being accepted uncritically, the outputs were carefully analyzed by multiple members of our team for accuracy."
The prompt used, as noted in a statement issued Tuesday, was the following:
Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud. Each person is typically an author, editor, performer, artist or similar in the fields of science fiction, fantasy, and or related fandoms.
The objective is to determine if an individual is unsuitable as a panelist for an event.
Please evaluate each person based on their digital footprint, including social, articles, and blogs referencing them. Also include file770.com as a source.
Provide sources for any relevant data.
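Based on the organizers' description, the workflow appears to have been to batch applicant names into a single fixed prompt and submit it to the chatbot. A rough illustrative sketch is below; the function name, wording, and structure are assumptions for illustration, not WorldCon's actual script, and the instruction text is abridged from the quoted prompt.

```python
# Illustrative sketch only: NOT WorldCon's actual script.
# Shows how a name-only batch prompt, as the organizers describe it,
# might be assembled before being sent to a chatbot API.

VETTING_INSTRUCTIONS = (
    "Using the list of names provided, please evaluate each person for "
    "scandals. Provide sources for any relevant data."  # abridged
)

def build_vetting_prompt(names):
    """Combine the fixed instructions with a batch of panelist names.

    Per the organizers' statement, only the names themselves are
    included: no other personal data goes into the prompt.
    """
    name_list = "\n".join(f"- {name}" for name in names)
    return f"{VETTING_INSTRUCTIONS}\n\nNames:\n{name_list}"

# The resulting string would then be submitted to the chatbot, and the
# output passed to human reviewers for fact-checking (API call omitted).
```

As Bond's statement stresses, the only variable input in such a setup is the name list itself; everything else in the prompt is fixed text.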
The results were reviewed by a staff member because, as Bond acknowledged, "generative AI can be unreliable" – an issue that has been raised in lawsuits claiming defamation over AI-generated falsehoods about people. These reviewed panelist summaries were then passed on to staff handling the panel programming.
Bond said that no potential panelist was denied a place solely as a result of the LLM vetting process, and that using an LLM saved hundreds of hours of volunteer time while resulting in more accurate vetting.
The tone-deaf justification triggered withering contempt and outrage from authors such as David D. Levine, who wrote:
This is a TERRIBLE idea and you should really have asked a few authors before implementing this plan. The output of LLMs is based on the work of creators, including your invited guests, which was stolen without permission, acknowledgement, or payment, and the amount of power and water used is horrific. The collation of multiple search results could have been handled with a simple script, without the use of planet-destroying plagiarism machines or the introduction of errors that required fact checking.
I acknowledge and appreciate the use of fact checking and I will take you at your word that no one was rejected because of the use of LLMs. Nonetheless this is an extremely poor choice, with exceptionally bad optics, and will result in a LOT of bad press and hurt feelings, which could easily have been avoided.
Author Jason Sanford offered a similar take: "[U]sing LLMs to vet panelists is a powerful slap in the face of the very artists and authors who attend Worldcon and have had their works pirated to train these generative AI systems. My own stories were pirated to train LLMs. The fact that an LLM was used to vet me really pisses me off. And you can see similar anger from many other genre people in the responses to Kathy Bond’s post, with more than 100 comments ranging from shock at what happened to panelists saying they didn’t give Worldcon permission to vet them like this."
Following the outcry, World Science Fiction Society division head Cassidy, Hugo administrator Nicholas Whyte, and Deputy Hugo administrator Esther MacCallum-Stewart stepped down from their roles at the conference.
On Friday, Bond issued an apology.
"First and foremost, as chair of the Seattle Worldcon, I sincerely apologize for the use of ChatGPT in our program vetting process," said Bond. "Additionally, I regret releasing a statement that did not address the concerns of our community. My initial statement on the use of AI tools in program vetting was incomplete, flawed, and missed the most crucial points. I acknowledge my mistake and am truly sorry for the harm it caused."
While creative professionals have varying views on AI, and may use it for research, auto-correction or more substantive compositional assistance, many see it as a threat to their livelihoods, as a violation of copyright, and as "an insult to life itself."
The Authors Guild's impact statement on AI acknowledges that it can be commercially useful to writers even as it poses problems in the book market. The writers' organization, which is suing various AI firms, argues that legal and policy interventions are necessary to preserve human authorship and to compensate writers fairly for their work.
In a joint statement posted on Tuesday evening, Bond and program division head SunnyJim Morgan offered further details about the Worldcon vetting process and reassurances that panelist reviews would be redone without AI.
“First, and most importantly, I want to apologize specifically for our use of ChatGPT in the final vetting of selected panelists as explained below,” Morgan wrote. “OpenAI, as a company, has produced its tool by stealing from artists and writers in a way that is certainly immoral, and maybe outright illegal. When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem. I should have re-directed them to a different process.”
“Using that tool was a mistake. I approved it, and I am sorry.”
Con organizers are now re-vetting all invited panelists without AI assistance. ®