AI helps scientists design novel plastic-eating enzyme

Plus: Pentagon hires first chief digital and artificial intelligence officer, and more

In brief A synthetic enzyme designed using machine-learning software can break down waste plastics in 24 hours, according to research published in Nature.

Scientists at the University of Texas at Austin studied the natural structure of PETase, an enzyme known to degrade the polymer chains in PET (polyethylene terephthalate). Next, they trained a model to generate mutations of the enzyme that work fast at low temperatures, let the software loose, and picked from the output a variant they named FAST-PETase to synthesize. FAST stands for functional, active, stable, and tolerant.

FAST-PETase, we're told, can break down plastic in as little as 24 hours at temperatures between 30 and 50 degrees Celsius. The team believes production and usage of the AI-designed enzyme can be scaled up to industrial levels, providing a new and affordable way to get rid of the world's thrown-away plastic. Generally speaking, biological approaches to breaking up waste plastics use less energy and/or are more ecologically friendly than today's large-scale disposal methods, hence the interest in something like FAST-PETase.

"When considering environmental cleanup applications, you need an enzyme that can work in the environment at ambient temperature," Hal Alper, co-author of the study and a chemical engineering at UT Austin, said in a statement. "This requirement is where our tech has a huge advantage in the future."

AI came in handy here, it seems, as it let the team automate the search for the desired mutation (technically, five mutations in the end).
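For a rough sense of how ML-guided enzyme engineering works in general, the sketch below enumerates single-point mutants of a protein sequence and ranks them with a learned scoring model. The predict_fitness function and the toy sequence are placeholders for illustration only, not the UT Austin team's actual software.

    # Hypothetical sketch of ML-guided enzyme engineering: score single-point
    # mutants of a wild-type sequence with a learned model and keep the best.
    # predict_fitness stands in for a trained scoring model; it is not the
    # software used in the Nature paper.

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def predict_fitness(sequence: str) -> float:
        # Dummy stand-in: a real model would return a learned score for
        # predicted activity and stability at ambient temperature.
        return sum(ord(c) for c in sequence) % 997 / 997

    def propose_mutants(wild_type: str, top_k: int = 5):
        # Enumerate every single-point mutant and return the top_k by model score.
        scored = []
        for pos, original in enumerate(wild_type):
            for aa in AMINO_ACIDS:
                if aa == original:
                    continue
                mutant = wild_type[:pos] + aa + wild_type[pos + 1:]
                label = f"{original}{pos + 1}{aa}"  # e.g. S121E-style notation
                scored.append((predict_fitness(mutant), label, mutant))
        scored.sort(key=lambda t: t[0], reverse=True)
        return scored[:top_k]

    # Example: rank mutants of a toy sequence; the best few substitutions
    # could then be combined into one multi-mutation variant for synthesis.
    for score, label, _ in propose_mutants("MKTAYIAKQR", top_k=5):
        print(label, round(score, 3))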

"This work really demonstrates the power of bringing together different disciplines, from synthetic biology to chemical engineering to artificial intelligence," Andrew Ellington, a synthetic biology professor also at UT Austin, who helped design the machine learning model, added.

Lyft's head of machine learning rides off into US military

The Pentagon has hired its first-ever chief digital and artificial intelligence officer (CDAIO) to figure out how it should use hundreds of millions of dollars from Congress to develop defense capabilities. 

Craig Martell, ex-head of machine learning at Lyft, confirmed in an interview with Breaking Defense that he had left his job to join Uncle Sam.

"I think they [US Dept of Defense] really need someone from industry who knows how to bring real AI and analytical value at scale and speed," he said. "One of the things that industry does well is, in a very agile way, turn on a dime and say, well, that's not working, let's try this and that's not working, let's try this. And you develop that muscle over time in industry and I think that's something DoD really needs."

The Pentagon has invested heavily in machine learning technology, splashing millions of dollars on cloud contracts to scale up data pipelines, analytics, and more. When it announced it was looking for a CDAIO last year, the Dept of Defense said it wanted to develop a more unified technology strategy across different units.

"If we're going to be successful in achieving the goals, if we're going to be successful in being competitive with China, we have to figure out where the best mission value can be found first and that's going to have to drive what we build, what we design, the policies we come up with," Martell said. "I just want to guard against making sure that we don't do this in a vacuum, but we do it with real mission goals, real mission objectives in mind."

Do AI algorithms and radiologists look at the same features in medical images?

Experts and machines inspect breast cancer scans in different ways, according to a study led by researchers at New York University.

Machine-learning models tended to focus on smaller, granular details when looking at soft tissue lesions (areas of abnormal growth in breast tissue), whereas radiologists studied the overall brightness and shapes in the images. Rather than relying on one method alone, the researchers believe machine and human readings should be combined to better diagnose patients.
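As a loose illustration of what combining the two readings could look like (this is not the NYU team's method), a hybrid assessment might be as simple as a weighted fusion of the model's and the radiologist's estimated probabilities of malignancy:

    # Loose illustration only: fuse a model's malignancy probability with a
    # radiologist's estimate into a single hybrid score via a weighted average.
    def hybrid_score(model_prob: float, radiologist_prob: float,
                     model_weight: float = 0.5) -> float:
        return model_weight * model_prob + (1 - model_weight) * radiologist_prob

    # Example: trust the human reader slightly more than the model.
    print(hybrid_score(model_prob=0.82, radiologist_prob=0.60, model_weight=0.4))  # 0.688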

"Establishing trust in [deep neural networks] for medical diagnosis centers on understanding whether and how their perception is different from that of humans," Linda Moy, co-author of the study published in Nature Scientific Reports, and a researcher at NYU, said in a statement. 

"The major bottleneck in moving AI systems into the clinical workflow is in understanding their decision-making and making them more robust," added Taro Makino, the paper's lead author and a doctoral candidate in NYU. "We see our research as advancing the precision of AI's capabilities in making health-related assessments by illuminating, and then addressing, its current limitations."

Anthropic raises $580m in series-B round

The safety-focused AI research lab Anthropic just raised a whopping $580 million in a series-B funding round led by Sam Bankman-Fried, CEO of FTX, a cryptocurrency exchange.

Anthropic co-founder and CEO Dario Amodei said the startup will spend the money building and studying large systems. "With this fundraise, we're going to explore the predictable scaling properties of machine learning systems, while closely examining the unpredictable ways in which capabilities and safety issues can emerge at-scale," he said in a statement.

Training large neural networks with hundreds of billions of parameters is expensive, both computationally and financially. Only a few companies and research labs have the backing and resources to build these types of systems. Some experts believe more intelligent behavior will emerge from these models as they grow larger, and are curious whether there is a limit to how far they can be scaled.
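To put that cost in perspective, here is a back-of-envelope sketch using the common rule of thumb that training takes roughly 6 x parameters x tokens floating-point operations. The model size, token count, and hardware figures are illustrative assumptions, not Anthropic's numbers.

    # Back-of-envelope sketch of why hundred-billion-parameter models are costly.
    # All figures below are illustrative assumptions, not Anthropic's.

    params = 175e9          # assume a 175B-parameter model
    tokens = 300e9          # assume 300B training tokens
    bytes_per_param = 2     # fp16/bf16 weights

    weight_memory_gb = params * bytes_per_param / 1e9   # memory for weights alone
    train_flops = 6 * params * tokens                    # ~6*N*D training rule of thumb

    gpu_flops_per_s = 150e12    # assume ~150 TFLOP/s sustained per accelerator
    gpu_count = 1024
    days = train_flops / (gpu_flops_per_s * gpu_count) / 86400

    print(f"weights alone: ~{weight_memory_gb:.0f} GB")            # ~350 GB
    print(f"training compute: ~{train_flops:.2e} FLOPs")           # ~3.15e23
    print(f"~{days:.0f} days on {gpu_count} accelerators")         # ~24 days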

"We've made strong initial progress on understanding and steering the behavior of AI systems, and are gradually assembling the pieces needed to make usable, integrated AI systems that benefit society," Amodei said. ®
