TV and film extras fear generative AI will copy their faces and bodies to take their jobs
Plus: Apple has spent $22.1B on research for generative AI, and Kickstarter introduces new AI policies
AI in brief Production companies are scanning the faces and bodies of actors and actresses, who fear their likeness will be used to create fake AI doubles for TV shows and films in the future.
Some workers spoke to NPR this week about being subjected to the scans, and feeling like they couldn't say no. Alexandria Rubalcaba, who was working as a background actor, described being called into a trailer and asked to stand in front of cameras.
"Have your hands out. Have your hands in. Look this way. Look that way. Let us see your scared face. Let us see your surprised face," she said. What was most concerning, however, was that she didn't know what or how her images were going to be used. "My first thought leaving the trailer was, 'Oh this might just be the future," Lubsey said. "We might just lose our jobs," Dom Lubsey, an actor from Los Angeles, added.
Studios already use computational techniques to generate synthetic crowds for background shots in films, and it's not far-fetched to think individual extras could be generated too. Andrew Susskind, an associate professor in Drexel University's film and TV department, explained how AI-made background actors would slash production budgets.
"Imagine ballroom scenes, party scenes, any scenes that need tons of extras," Susskind said. "Imagine the amounts of money they would be saving. Not paying $180 a day. Plus meals. Plus costuming," he said.
Hollywood labor union SAG-AFTRA warned last month that producers could cheaply scan background actors' likenesses once and reuse them in perpetuity with generative AI.
Apple's $22.1 billion spending on research is going towards generative AI
As Big Tech goes all out in the rush to produce generative AI products, Apple is noticeably quiet. But it is working to explore and develop the technology internally, according to the company's latest Q3 earnings report [PDF].
Sales fell 1.4 per cent to $81.8 billion compared to the same quarter a year ago. Meanwhile, it had spent a total of $22.1 billion on research and development as of July 1, compared to the $19.5 billion spent over the same period last year.
That increase is down to Apple working on generative AI, CEO Tim Cook told Reuters. "We've been doing research across a wide range of AI technologies, including generative AI, for years. We're going to continue investing and innovating and responsibly advancing our products with these technologies to help enrich people's lives," he said. "Obviously, we're investing a lot, and it is showing up in the R&D spending that you're looking at."
Apple has reportedly built Ajax, its own internal framework for building the tools and software behind large language models. The technology has been used to develop a ChatGPT-like chatbot. Cook previously said there are "a number of issues that need to be sorted" with the technology, and that Apple needed to be "very thoughtful" about future products.
- Netflix offers up to $900,000 for AI product manager while actors strike for protection
- Google fails to get AI engineer lawsuit claiming wrongful termination thrown out
- Read lips? Siri wants to feel them, according to fresh Apple patent
- US biz to blow $120bn on AI by 2025, says IDC
AI trading firms under scrutiny in Massachusetts
Regulators in Massachusetts have launched an inquiry into how AI is being used in the securities industry.
The state's Secretary of the Commonwealth, William Galvin, has sent a series of letters to registered and unregistered investment firms, including top names like JPMorgan Chase and Morgan Stanley, that are known to use the technology in their businesses.
There are concerns that automated algorithms could be biased and designed to optimize a company's profits over its clients' interests, or that their predictions could sway human decisions and lead to financial disaster.
"State securities regulators have an important role to play when it comes to AI and its impact on main street investors," he said in a statement. "If deployed without the guardrails necessary to ensure proper disclosure and consideration of conflicts, I am concerned that this technology could result in harm to investors."
Kickstarter's new AI policies
Popular crowdfunding platform for creative and tech projects Kickstarter is introducing new rules for artists and developers using AI in their work.
Anyone using generative AI tools to create images, text, or code must make this clear in their project description, explaining which content is machine-generated and which they created themselves.
If the project is itself an AI product or tool, developers must disclose what data they're using to train or run their software. If they don't obtain consent from, or appropriately credit, their data sources, Kickstarter won't allow the project on its crowdfunding platform.
"If any use of AI is not disclosed properly during the submission process, the project may be suspended. Attempts to skirt our guidelines or intentionally misrepresent a project will result in restrictions from submitting a Kickstarter project in the future," the company warned this week.
The new policies will come into effect from 29 August, and creators will be asked to give details on how AI will be used when they try to submit their projects on Kickstarter. ®