Experts: AI inventors' designs should be protected in law

Plus: Police release deepfake of murdered teen in cold case, and more


In brief Governments around the world should consider passing intellectual property laws that protect inventions created by AI software, two academics at the University of New South Wales in Australia argued.

Alexandra George and Toby Walsh, professors of law and AI, respectively, believe that failing to recognize technologies developed by machine-learning systems could have long-lasting impacts on economies and societies.

"If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge," they wrote in a comment article published in Nature. "Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions."

Today's laws pretty much recognize only humans as inventors, with patent rights protecting their inventions from infringement. Attempts to overturn these human-centric rules have failed. Stephen Thaler, a developer who insists AI invented his company's products, has sued patent offices in multiple countries, including the US and UK, to no avail.

George and Walsh, meanwhile, would prefer lawmakers to consider passing fresh legislation to protect AI-made designs rather than have today's laws bent to accommodate machine-learning software.

"Creating bespoke law and an international treaty will not be easy, but not creating them will be worse," they wrote. "AI is changing the way that science is done and inventions are made. We need fit-for-purpose IP law to ensure it serves the public good."

Editor's note: The above section was revised to clarify that George and Walsh are not, as we first said, siding with Thaler. "We have suggested that governments should conduct detailed inquiries into whether intellectual property incentives for AI-inventiveness would be in the social interest," Dr George told us. "We have suggested that it would be preferable to create a sui generis AI-IP law, rather than trying to shoehorn AI inventiveness into existing patent laws." We're happy to make this clear.

Dutch police generate deepfake of dead teenager in criminal case

Police have released a video clip in which the face of a 13-year-old boy, who was shot dead outside a metro station in the Netherlands, was swapped onto another person's body using AI technology.

Sedar Soares died in 2003, and officers have never managed to solve the case. With his family's permission, they generated a deepfake placing his likeness on a kid playing football in a field, presumably to jog the memories of potential witnesses. The cops have since received dozens of potential leads, according to The Guardian.

It appears to be the first time AI-generated imagery has been used to try to solve a criminal case. "We haven't yet checked if these leads are usable," said Lillian van Duijvenbode, a Rotterdam police spokesperson.

You can watch the video here.

AI task force advises Congress to fund national computing infrastructure

America's National Artificial Intelligence Research Resource (NAIRR) Task Force urged Congress to fund a "shared research cyberinfrastructure" that would give academics better access to the hardware and data resources needed to develop machine-learning tech.

The playing field of AI research is unequal. State-of-the-art models are often packed with billions of parameters; developers need access to lots of computer chips to train them. It's why research at private companies seems to dominate, while academics at universities lag behind.

"We must ensure that everyone throughout the Nation has the ability to pursue cutting-edge AI research," the NAIRR wrote in a report. "This growing resource divide has the potential to adversely skew our AI research ecosystem, and, in the process, threaten our nation's ability to cultivate an AI research community and workforce that reflect America's rich diversity — and harness AI in a manner that serves all Americans."

If AI progress is driven mainly by private companies, other areas of research risk being left out and underdeveloped. A shared national resource would help in "growing and diversifying approaches to and applications of AI and opening up opportunities for progress across all scientific fields and disciplines, including in critical areas such as AI auditing, testing and evaluation, trustworthy AI, bias mitigation, and AI safety," the task force argued.

You can read the full report here [PDF].

Meta offers musculoskeletal research tech

Researchers at Meta AI released MyoSuite, a set of musculoskeletal models and tasks for simulating the biomechanical movement of limbs, with a whole range of potential applications.

"The more intelligent an organism is, the more complex the motor behavior it can exhibit," they said in a blog post. "So an important question to consider, then, is — what enables such complex decision-making and the motor control to execute those decisions? To explore this question, we've developed MyoSuite."

MyoSuite was built in collaboration with researchers at the University of Twente in the Netherlands, and aims to help developers studying prosthetics and patient rehabilitation. There's another potentially useful application for Meta, however: building more realistic avatars that move more naturally in the metaverse.
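
MyoSuite's tasks are exposed as reinforcement-learning environments. Assuming the standard OpenAI Gym interface the toolkit was released against, and using an illustrative environment name rather than one confirmed from Meta's documentation, a minimal interaction loop might look something like this sketch:

    # Minimal sketch: drive a MyoSuite hand environment with random muscle activations.
    # Assumes importing myosuite registers its environments with Gym; the task ID
    # below is illustrative -- check the MyoSuite registry for the exact names.
    import gym
    import myosuite  # noqa: F401 -- import registers the Myo environments

    env = gym.make("myoHandPoseRandom-v0")   # hypothetical hand-control task ID
    obs = env.reset()

    for _ in range(100):
        action = env.action_space.sample()           # random muscle activations
        obs, reward, done, info = env.step(action)   # advance the physics one step
        if done:
            obs = env.reset()

    env.close()

In practice a trained policy would replace the random actions, but the loop structure stays the same.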

The models only simulate the movements of arms and hands so far. Tasks include using machine learning to manipulate a die or rotate two balls in a simulated hand. The application of MyoSuite in Meta's metaverse is a little ironic given that there's no touching allowed there, along with restrictions on hands, to deter harassment. ®


