AI helps scientists design novel plastic-eating enzyme

Plus: Pentagon hires first chief digital and artificial intelligence officer, and more


In brief A synthetic enzyme designed using machine-learning software can break down waste plastics in 24 hours, according to research published in Nature.

Scientists at the University of Texas at Austin studied the natural structure of PETase, an enzyme known to degrade polymer chains in polyethylene terephthalate (PET). Next, they trained a model to generate mutations of the enzyme that work fast at low temperatures, let the software loose, and picked from its output a variant they named FAST-PETase to synthesize. FAST stands for functional, active, stable, and tolerant.

FAST-PETase, we're told, can break down plastic in as little as 24 hours at temperatures between 30 and 50 degrees Celsius. The team believes production and usage of the AI-designed enzyme can be scaled up to industrial levels, providing a new and affordable way to get rid of the world's thrown-away plastic. Generally speaking, biological approaches to breaking up waste plastics use less energy and/or are more ecologically friendly than today's large-scale disposal methods, hence the interest in something like FAST-PETase.

"When considering environmental cleanup applications, you need an enzyme that can work in the environment at ambient temperature," Hal Alper, co-author of the study and a chemical engineering professor at UT Austin, said in a statement. "This requirement is where our tech has a huge advantage in the future."

AI came in handy here, it seems, as it allowed the team to automate the generation of the desired mutations (technically, five mutations in the end).

"This work really demonstrates the power of bringing together different disciplines, from synthetic biology to chemical engineering to artificial intelligence," Andrew Ellington, a synthetic biology professor also at UT Austin, who helped design the machine learning model, added.
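The generate-and-rank loop described above, in which a trained model proposes and scores candidate mutations, can be sketched in outline. The `toy_score` function below is a made-up stand-in for the team's actual model, which the article does not detail:

```python
# Illustrative sketch of ML-guided mutation search. The scoring function
# here is a toy stand-in; a real model would predict activity/stability.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def candidate_mutations(sequence):
    """Enumerate every single-point mutation of a protein sequence."""
    for pos, original in enumerate(sequence):
        for aa in AMINO_ACIDS:
            if aa != original:
                yield pos, original, aa

def rank_variants(sequence, score, top_k=5):
    """Score each candidate mutation and return the top_k best."""
    scored = [
        ((pos, original, aa), score(sequence, pos, aa))
        for pos, original, aa in candidate_mutations(sequence)
    ]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Toy stand-in for a learned model: rewards substitutions at two
# hand-picked positions, purely for demonstration.
def toy_score(sequence, pos, aa):
    hotspots = {3: "S", 7: "N"}
    return 1.0 if hotspots.get(pos) == aa else 0.0

best = rank_variants("MKTAYIAKQR", toy_score, top_k=2)
```

In the real workflow, the handful of top-ranked variants would then be synthesized and tested in the lab, as the UT Austin team did when selecting FAST-PETase.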

Lyft's head of machine learning rides off into US military

The Pentagon has hired its first-ever chief digital and artificial intelligence officer (CDAIO) to figure out how it should use hundreds of millions of dollars from Congress to develop defense capabilities. 

Craig Martell, ex-head of machine learning at Lyft, confirmed he had left his job to join Uncle Sam in an interview with Breaking Defense. 

"I think they [US Dept of Defense] really need someone from industry who knows how to bring real AI and analytical value at scale and speed," he said. "One of the things that industry does well is, in a very agile way, turn on a dime and say, well, that's not working, let's try this and that's not working, let's try this. And you develop that muscle over time in industry and I think that's something DoD really needs."

The Pentagon has invested heavily in machine learning technology, splashing millions of dollars on cloud contracts to scale up data pipelines, analytics, and more. When it announced it was looking for a CDAIO last year, the Dept of Defense said it wanted to develop a more unified technology strategy across different units.

"If we're going to be successful in achieving the goals, if we're going to be successful in being competitive with China, we have to figure out where the best mission value can be found first and that's going to have to drive what we build, what we design, the policies we come up with," Martell said. "I just want to guard against making sure that we don't do this in a vacuum, but we do it with real mission goals, real mission objectives in mind."

Do AI algorithms and radiologists look at the same features in medical images?

Experts and machines inspect breast cancer scans in different ways, according to a study led by researchers at New York University.

Machine-learning models tended to focus on smaller, granular details when looking at soft tissue lesions (areas of abnormal growth in breast tissue), whereas radiologists studied the overall brightness and shapes in the images. Rather than relying on one method alone, the researchers believe machine and human expertise should be combined to diagnose patients more accurately. 

"Establishing trust in [deep neural networks] for medical diagnosis centers on understanding whether and how their perception is different from that of humans," Linda Moy, co-author of the study published in Scientific Reports, a Nature journal, and a researcher at NYU, said in a statement. 

"The major bottleneck in moving AI systems into the clinical workflow is in understanding their decision-making and making them more robust," added Taro Makino, the paper's lead author and a doctoral candidate at NYU. "We see our research as advancing the precision of AI's capabilities in making health-related assessments by illuminating, and then addressing, its current limitations."
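A minimal sketch of the combined human-plus-machine reading the researchers advocate, assuming both assessments arrive as probabilities; the function name and weighting are illustrative, not taken from the study:

```python
# Blend a model's predicted malignancy probability with a radiologist's
# estimate. The 50/50 default weighting is purely illustrative.
def combined_malignancy_score(model_prob, radiologist_prob, model_weight=0.5):
    """Return a weighted average of the two estimates, in [0, 1]."""
    if not (0.0 <= model_prob <= 1.0 and 0.0 <= radiologist_prob <= 1.0):
        raise ValueError("probabilities must be in [0, 1]")
    return model_weight * model_prob + (1.0 - model_weight) * radiologist_prob

score = combined_malignancy_score(model_prob=0.8, radiologist_prob=0.4)
```

The intuition is that the model's sensitivity to fine-grained texture and the radiologist's grasp of global brightness and shape capture complementary signals, so a blend can outperform either alone.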

Anthropic raises $580m in series-B round

The safety-focused AI research lab Anthropic just raised a whopping $580 million in a series-B funding round led by Sam Bankman-Fried, CEO of FTX, a cryptocurrency exchange.

Anthropic co-founder and CEO Dario Amodei said the startup will spend the money building and studying large systems. "With this fundraise, we're going to explore the predictable scaling properties of machine learning systems, while closely examining the unpredictable ways in which capabilities and safety issues can emerge at-scale," he said in a statement.

Training large neural networks with hundreds of billions of parameters is expensive, both computationally and financially. Only a few companies and research labs have the backing and resources to build these types of systems. Some experts believe more intelligent behavior will emerge from these models as they grow larger, and are curious whether there is a limit to how far they can be scaled.
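The "predictable scaling properties" Amodei mentions are often modelled as a power law in which loss falls smoothly with parameter count. A toy sketch, with all constants made up for illustration:

```python
# Toy power-law scaling curve: loss(N) = a * N**(-alpha) + irreducible.
# All constants are made up for illustration; real fits are model- and
# data-specific.
def predicted_loss(n_params, a=10.0, alpha=0.1, irreducible=1.0):
    """Predicted test loss for a model with n_params parameters."""
    return a * n_params ** (-alpha) + irreducible

# Loss falls smoothly, with diminishing returns, as models grow.
losses = [predicted_loss(n) for n in (1e9, 1e10, 1e11)]
```

The `irreducible` term captures the idea that scaling alone may never drive loss to zero, which is one reason labs like Anthropic study where such extrapolations hold and where surprising capabilities or safety issues break the curve.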

"We've made strong initial progress on understanding and steering the behavior of AI systems, and are gradually assembling the pieces needed to make usable, integrated AI systems that benefit society," Amodei said. ®


Other stories you might like

  • China is trolling rare-earth miners online and the Pentagon isn't happy
    Beijing-linked Dragonbridge flames biz building Texas plant for Uncle Sam

    The US Department of Defense said it's investigating Chinese disinformation campaigns against rare earth mining and processing companies — including one targeting Lynas Rare Earths, which has a $30 million contract with the Pentagon to build a plant in Texas.

    Earlier today, Mandiant published research that analyzed a Beijing-linked influence operation, dubbed Dragonbridge, that used thousands of fake accounts across dozens of social media platforms, including Facebook, TikTok and Twitter, to spread misinformation about rare earth companies seeking to expand production in the US to the detriment of China, which wants to maintain its global dominance in that industry. 

    "The Department of Defense is aware of the recent disinformation campaign, first reported by Mandiant, against Lynas Rare Earth Ltd., a rare earth element firm seeking to establish production capacity in the United States and partner nations, as well as other rare earth mining companies," according to a statement by Uncle Sam. "The department has engaged the relevant interagency stakeholders and partner nations to assist in reviewing the matter.

    Continue reading
  • Meta slammed with eight lawsuits claiming social media hurts kids
    Plus: Why safety data for self-driving technology is misleading, and more

    In brief Facebook and Instagram's parent biz, Meta, was hit with not one, not two, but eight different lawsuits accusing its social media algorithm of causing real harm to young users across the US. 

    The complaints filed over the last week claim Meta's social media platforms have been designed to be dangerously addictive, driving children and teenagers to view content that increases the risk of eating disorders, suicide, depression, and sleep disorders. 

    "Social media use among young people should be viewed as a major contributor to the mental health crisis we face in the country," said Andy Birchfield, an attorney representing the Beasley Allen Law Firm, leading the cases, in a statement.

    Continue reading
  • Is computer vision the cure for school shootings? Likely not
    Gun-detecting AI outfits want to help while root causes need tackling

    Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.

    Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras and send notifications to those at risk, notifying police, locking down buildings, and performing other security tasks. 

    In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.

    Continue reading

Biting the hand that feeds IT © 1998–2022