Experts: AI inventors' designs should be protected in law
Plus: Police release deepfake of murdered teen in cold case, and more
In brief Governments around the world should consider passing intellectual property laws that protect inventions created by AI software, two academics at the University of New South Wales in Australia argued.
Alexandra George and Toby Walsh, professors of law and AI, respectively, believe failing to recognize technologies developed by machine-learning systems could have long-lasting impacts on economies and societies.
"If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge," they wrote in a comment article published in Nature. "Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions."
Today's laws pretty much only recognize humans as inventors, with IP rights protecting their creations from patent infringement. Attempts to overturn these human-centric laws have failed. Stephen Thaler, a developer who insists AI invented his company's products, has sued patent offices in multiple countries, including the US and UK, to no avail.
George and Walsh, meanwhile, would prefer lawmakers consider passing fresh legislation to protect AI-made designs rather than have today's laws bent to fit machine-learning software.
"Creating bespoke law and an international treaty will not be easy, but not creating them will be worse," they wrote. "AI is changing the way that science is done and inventions are made. We need fit-for-purpose IP law to ensure it serves the public good."
Editor's note: The above section was revised to clarify that George and Walsh are not, as we first said, siding with Thaler. "We have suggested that governments should conduct detailed inquiries into whether intellectual property incentives for AI-inventiveness would be in the social interest," Dr George told us. "We have suggested that it would be preferable to create a sui generis AI-IP law, rather than trying to shoehorn AI inventiveness into existing patent laws." We're happy to make this clear.
Dutch police generate deepfake of dead teenager in criminal case
Police have released a video clip in which the face of a 13-year-old boy, who was shot dead outside a metro station in the Netherlands, was swapped onto another person's body using AI technology.
Sedar Soares died in 2003. Officers have not managed to solve the case, and with Soares' family's permission, they superimposed a deepfake of his face onto a kid playing football in a field, presumably to help jog witnesses' memories. The cops have received dozens of potential leads since, according to The Guardian.
It appears to be the first time AI-generated images have been used to try to solve a criminal case. "We haven't yet checked if these leads are usable," said Lillian van Duijvenbode, a Rotterdam police spokesperson.
You can watch the video here.
AI task force advises Congress to fund national computing infrastructure
America's National Artificial Intelligence Research Resource (NAIRR) Task Force urged Congress to launch a "shared research cyberinfrastructure" to better provide academics with the hardware and data resources needed to develop machine-learning tech.
The playing field of AI research is unequal. State-of-the-art models are often packed with billions of parameters, and developers need access to lots of computer chips to train them. That's why research at private companies tends to dominate while academics at universities lag behind.
"We must ensure that everyone throughout the Nation has the ability to pursue cutting-edge AI research," the NAIRR wrote in a report. "This growing resource divide has the potential to adversely skew our AI research ecosystem, and, in the process, threaten our nation's ability to cultivate an AI research community and workforce that reflect America's rich diversity — and harness AI in a manner that serves all Americans."
If AI progress is driven mainly by private companies, other research areas could be left underdeveloped. The task force argued a shared resource would support "growing and diversifying approaches to and applications of AI and opening up opportunities for progress across all scientific fields and disciplines, including in critical areas such as AI auditing, testing and evaluation, trustworthy AI, bias mitigation, and AI safety."
You can read the full report here [PDF].
Meta offers musculoskeletal research tech
Researchers at Meta AI released MyoSuite, a set of musculoskeletal models and tasks that simulate the biomechanical movement of limbs for a whole range of applications.
"The more intelligent an organism is, the more complex the motor behavior it can exhibit," they said in a blog post. "So an important question to consider, then, is — what enables such complex decision-making and the motor control to execute those decisions? To explore this question, we've developed MyoSuite."
MyoSuite was built in collaboration with researchers at the University of Twente in the Netherlands, and aims to help developers studying prosthetics and patient rehabilitation. There's another potentially useful application for Meta, however: building more realistic avatars that can move more naturally in the metaverse.
The models only simulate the movements of arms and hands so far. Tasks include using machine learning to simulate the manipulation of a die or the rotation of two balls. Applying MyoSuite to Meta's metaverse would be a little ironic, given that touching isn't allowed there and hands are restricted to deter harassment. ®