I've been fired, says engineer who claimed Google chatbot was sentient

Plus: How writers are using AI tools to help them write fiction more quickly

In brief Google has reportedly fired Blake Lemoine, the engineer who was placed on administrative leave after insisting the web giant's LaMDA chatbot was sentient.

Lemoine didn't get in trouble for holding his controversial, eyebrow-raising opinion on the model. Instead, he was punished for violating Google's confidentiality policies. He reportedly invited a lawyer to assess potential legal rights for LaMDA and spoke to a US House representative, claiming Google was acting unethically.

A Google spokesperson told the Big Technology newsletter that the company decided to terminate his employment because Lemoine continued to violate "employment and data security policies," jeopardizing trade secrets.

"If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly," the spokesperson said.

"So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."

Many experts at Google and across academia and industry have cast doubt on whether LaMDA, or any existing AI chatbot, is sentient. Language models learn mathematical relationships between words and are good at mimicking human language without any real understanding. None of them are really intelligent, let alone conscious.

Why developers can't run open-source AI code

What's the point of releasing the code for machine-learning models if developers don't have the resources required to run it?

AI software is notoriously difficult to spin up and try for yourself even when it's open source. Bits and pieces of the source are left out, or the datasets required to train the model aren't available. Sometimes these parts are accessible, but developers don't have the computational power to wield such large systems.

Take Meta's Open Pretrained Transformer, for example. The largest version of the language model contains 175 billion parameters. Even though the ad giant released its code, not many will have enough chips at hand to train and use the model from scratch, Matt Asay, who runs partner marketing at MongoDB, noted in a personal capacity.
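To get a sense of the scale involved, consider a rough back-of-the-envelope estimate, sketched below in Python. The bytes-per-parameter figures are common rules of thumb for fp16 inference and mixed-precision Adam training, not numbers from Meta.

```python
# Rough estimate of the memory needed just to hold a 175-billion-parameter
# model. Illustrative rule-of-thumb figures, not Meta's published numbers.

params = 175e9  # OPT-175B parameter count

# Inference: weights stored as 16-bit floats (2 bytes per parameter).
inference_gb = params * 2 / 1e9
print(f"Inference (fp16 weights alone): ~{inference_gb:,.0f} GB")  # ~350 GB

# Training with mixed precision and Adam: weights, gradients, fp32 master
# weights, and optimizer state are commonly estimated at ~16 bytes/parameter.
training_gb = params * 16 / 1e9
print(f"Training (mixed precision + Adam): ~{training_gb:,.0f} GB")  # ~2,800 GB

# Even a top-end 80 GB GPU can't hold the fp16 weights on its own.
print(f"80 GB GPUs needed just to load the weights: {inference_gb / 80:.1f}")
```

In other words, merely loading the weights for inference takes several top-end accelerators, and training from scratch demands a cluster, well beyond what most individual developers have at hand.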

"The key is to provide enough access for researchers to be able to reproduce the successes and failures of how a particular AI model works…As companies and individuals, our goal should be to open access to software in ways that benefit our customers and third-party developers to foster access and understanding instead of trying to retrofit a decades-old concept of open source to the cloud," he argued.

OpenAI's DALL·E 2 enters beta mode

OpenAI's commercial text-to-image generation model DALL·E 2 is being opened up to a million more people who joined its waitlist.

Access was previously limited to select artists, researchers, and developers, as OpenAI wanted to test its system before a full commercial release. People have been using the tool to generate all sorts of pictures, from internet memes to comic books and digital art.

Users will receive 50 free credits during their first month of use, and 15 free credits every month after. Each credit buys one original DALL·E prompt generation, which now returns four images instead of the previous six; editing an existing text prompt returns three images. Those who want more can pay $15 for another 115 credits, good for a further 460 generated images.
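For those keeping score, the credit arithmetic checks out. A quick sketch, using only the figures quoted above:

```python
# Quick check of DALL-E 2's beta credit math, using the figures quoted above.

free_first_month = 50      # free credits in the first month
free_per_month = 15        # free credits each month thereafter
images_per_prompt = 4      # images returned per prompt generation (down from 6)
paid_credits = 115         # credits in a $15 top-up
paid_price_usd = 15

# One credit buys one prompt generation, which returns four images.
print(f"First month: up to {free_first_month * images_per_prompt} free images")
print(f"Thereafter: up to {free_per_month * images_per_prompt} free images/month")

paid_images = paid_credits * images_per_prompt
cost_per_image = paid_price_usd / paid_images
print(f"$15 top-up: {paid_images} images (~${cost_per_image:.3f} each)")  # 460 images
```

That works out to roughly three cents per generated image for paying users.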

Users will also be able to commercialize any images they generate with the model, meaning creators have the right to sell and print their pictures on merchandise and the like. "We are excited to see what people create with DALL·E and look forward to users' feedback during this beta period," the research lab said this week.

Snooping on children with facial recognition

The controversial commercial online facial recognition service PimEyes can be used to search for "potentially explicit" pictures of children.

PimEyes lets anyone upload a photo; the service searches for potential matches and returns matching images along with their corresponding URLs. Marketed as a privacy tool that lets people see where pictures of themselves have been posted on the internet, the service says it has helped combat problems such as revenge porn.

But there is a dark side, too. Anyone can use the website to search for photos of anyone they like, including children. An investigation by The Intercept found it was easy to turn up photos of youngsters, some of which were even labelled as "potentially explicit." The experiment used fake, AI-generated pictures of children, and PimEyes still returned matches, suggesting its software probably isn't very accurate.

"The fact that PimEyes doesn't have safeguards in place for children and apparently is not sure how to provide safeguards for children only underlines the risks of this kind of facial recognition service," Jeramie Scott, director of the Surveillance Oversight Project at the Electronic Privacy Information Center, was quoted as saying. "Participating in public, whether online or offline, should not mean subjecting yourself to privacy-invasive services like PimEyes."

AI-powered writing tools help indie authors publish more books

Content creation is often a difficult business. You gotta come up with fresh material frequently to keep and grow audiences.

One independent fiction writer, who publishes her work on Amazon's Kindle platform under the pen name Leanne Leeds, described the powers and limitations of using a GPT-3-powered text-generating tool as a writing partner.

By feeding sentences into software named Sudowrite, writers like Leeds can get back automatically generated passages of text; it acts like a more intelligent autocomplete. Leeds told The Verge her productivity increased by more than 20 per cent when she used the tool to craft her prose. She edits the software's output and slots the paragraphs into her books, helping her publish at a faster rate and keep readers hooked.
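Under the hood, tools of this kind wrap a large language model's text-completion interface. Here's a minimal sketch of the idea using OpenAI's GPT-3 completions API as it existed at the time; the prompt and settings are invented for illustration, and Sudowrite's actual pipeline is proprietary and more elaborate.

```python
# Minimal sketch of GPT-3-style "intelligent autocomplete" for fiction,
# using the 2022-era OpenAI Python library. Prompt and parameters are
# illustrative; Sudowrite's real pipeline is proprietary.
import openai

openai.api_key = "sk-..."  # placeholder API key

# The writer supplies the sentences written so far...
prompt = ("The lighthouse keeper had not spoken to another soul in three "
          "months, so when the knock came at midnight, she")

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available at the time
    prompt=prompt,
    max_tokens=120,             # length of the generated continuation
    temperature=0.8,            # higher values give more varied prose
)

# ...and gets back a continuation to edit and slot into the manuscript.
print(prompt + response.choices[0].text)
```

The author stays in the loop: the model drafts, the human curates.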

But writers are torn over the use of AI. Some believe it goes against the creativity and magic of literature, while others see its potential for storytelling. Should authors disclose that their books have been written with the help of algorithms? And more importantly, is this auto-generated scribbling any good?

Right now, machines aren't quite good enough and require editing to keep narratives and plot lines from going off the rails. Leeds believes tools like Sudowrite will one day get good enough to write generic fiction. "I think that's the real danger, that you can do that and then nothing's original anymore. Everything's just a copy of something else," she said. "The problem is, that's what readers like." ®
