X may train its AI models on your social media posts
Plus: AI luminary Douglas Lenat passed away, and US newspaper chain halts publishing of AI-generated articles
AI In brief X, the social media platform formerly known as Twitter, updated its privacy policy this week, stating that it may train its AI models on user posts.
The new policy is expected to come into effect on 29 September. "We may use the information we collect and publicly available information to help train our machine learning or artificial intelligence models," the company said.
Owner and former CEO Elon Musk said, however, that private data, such as the text of direct messages, will not be used to train its models. The change should come as no surprise: Musk previously said he planned to use data from the microblogging site to help researchers and engineers at his latest startup, xAI, build new products.
X charges other enterprises $42,000 for access to its data via an API. In April, Musk threatened to sue Microsoft for allegedly "illegally using Twitter data" after Microsoft reportedly removed X from its advertising platforms due to the increased fees. "They trained illegally using Twitter data. Lawsuit time," Musk tweeted.
AI pioneer Douglas Lenat dies at 72
Doug Lenat, a longtime computer science researcher and leading figure in AI, has passed away.
Tributes poured in online from academics and developers, who admired his intellect, impressive career, and tenacity in pursuing artificial general intelligence, software capable of reasoning.
In 1972, Lenat received his bachelor's degree in Mathematics and Physics and his master's degree in Applied Mathematics from the University of Pennsylvania. He then went to Stanford University to complete his PhD, where he worked on software capable of automatically writing computer programs.
He later became an assistant professor at Carnegie Mellon University and Stanford University and was the only person to have served on the Scientific Advisory Boards of both Microsoft and Apple. In 1994, Lenat founded Cycorp, an AI company focused on machine reasoning, where he worked until his death.
Lenat was a pioneer of symbolic AI and tried to teach machines to reason using a combination of a knowledge base and an inference engine. The system had a natural language interface and was used to power products sold to companies working in logistics and healthcare.
- How to ask Facebook's Meta to not train its AI models on some of your personal info
- Microsoft may store your conversations with Bing if you're not an enterprise user
- Writing tool from AI21 Labs won't do all the hard work for you
- Startups competing with OpenAI's GPT-3 all need to solve the same problems
AI21 Labs raises $155 million in Series C round
Large language model maker AI21 Labs has raised $155 million in its latest Series C round, setting its valuation at $1.4 billion.
The Israeli startup is backed by venture firms Walden Catalyst, Pitango, SCB10X, b2venture, and Samsung Next, as well as Google, Nvidia, and its own co-founder Amnon Shashua.
Through AI21 Studio, the company offers API access to its general-purpose large language model Jurassic-2, as well as other systems designed to fulfil specific tasks, like summarisation or question answering. It has also developed more consumer-focused tools like Wordtune, which uses AI to generate and edit passages of text.
AI21 Labs envisions combining neural networks with symbolic systems to tackle issues like hallucination, a term used when models veer off track and generate false information. "The current round will fuel the growth of the company to reach our goal of developing the next level of AI with reasoning capabilities across multiple domains," said founder and Chairman Amnon Shashua.
Co-CEOs Yoav Shoham and Ori Goshen said the company's technology provides "robustness, predictability and explainability" that is needed for enterprises to trust AI.
Gannett halts publication of AI-generated sports stories
US newspaper chain Gannett has paused the publication of AI-generated articles covering high school sports games after readers mocked the writing.
Publishers are increasingly turning to AI to churn out copy. Unlike humans, machines can work tirelessly and quickly, but that doesn't make them better journalists. The articles, written by Lede AI, a startup headquartered in Ohio, contained errors and odd phrases that were just plain bad.
One sentence in a story published in The Columbus Dispatch showed the software hadn't even filled in the team mascots in its writing template: "The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday," Axios reported. The mistakes were later corrected.
Another article described the scoreboard of a football game between the Wyoming Cowboys and the Ross Rams as being in "hibernation in the fourth quarter." When a high school team mounted a comeback in a different game, Lede AI wrote that "[they had] avoided the brakes and shifted into victory gear," according to the Washington Post.
The terrible writing was criticised online, and a Gannett spokesperson later confirmed that "this local AI sports effort is being paused." ®