AI-designed COVID-19 drug nominated for preclinical trial

Treatment could stop coronavirus from replicating inside body


Updated An oral medication designed by scientists with the help of AI algorithms could one day treat patients with COVID-19 and other types of diseases caused by coronaviruses.

Insilico Medicine, a biotech startup based in New York, announced on Tuesday it had nominated a drug candidate for preclinical trials – the stage before you start testing it on humans.

Today's mRNA vaccines boost the body's immunity to COVID-19 by aiding the generation of antibodies capable of blocking the virus's spike protein, stopping the bio-nasty from infecting cells. The small molecule developed by Insilico, however, is intended to treat people already infected, and works by preventing the coronavirus from replicating.

The preclinical candidate has a specialized structure to target the 3C-like (3CL) protease, an enzyme involved in the viral reproduction of the SARS-CoV-2 coronavirus, Feng Ren, Insilico's chief scientific officer, explained.

"This molecule designed by AI has distinct pharmacophores from existing 3CL protease inhibitors and binds to the target protein in a unique, irreversible, covalent binding mode as demonstrated by a co-crystal structure."

Insilico says it has built a machine-learning-based software platform made up of three components to design drugs.

One part, PandaOmics, uses natural-language-processing tools capable of analyzing data in academic papers to highlight genes and proteins related to specific diseases. Another, Chemistry42, generates molecule structures that affect or interact with those highlighted proteins and genes, ranking candidates by their chemical stability and components.

Third, Inclinico predicts how a molecule may perform in clinical trials. Glue all of this together, and you get computer-suggested drugs to test for treating illnesses.

The company began using this software platform to generate antiviral drug candidates for COVID-19 in February 2020, as the novel coronavirus began spreading around the world.

"The COVID-19 pandemic brought global attention to the pressing need for rapid drug development," said Alex Zhavoronkov, CEO of Insilico. "We made an executive decision to start our COVID-19 program early. We were mobilized along with the rest of the scientific community and were able to demonstrate how powerful AI tools can be in the fight against the disease."

Designing drugs is difficult and takes a long time. Although Insilico began generating hundreds of potential molecules in early 2020, it has taken years to whittle down the suggested medicines to the most promising one. AI models may give scientists a decent idea of the chemical components that make up the molecule, but figuring out how to synthesize it, in the lab and at scale, is tricky and expensive.

Insilico was able to speed the process up by building on previous knowledge of coronavirus structures from the SARS outbreak in 2003, and by choosing molecules containing chemical components that can be sourced commercially. Now, it has chosen its best candidate to undergo preclinical testing before the drug can be given to humans in clinical trials proper and sold as medicine.

The new molecule is reportedly effective against SARS-CoV-2 variants as well as other coronaviruses, such as those behind SARS and MERS. The Register has asked Insilico for further comment. ®

Updated to add

"We have a novel molecule designed from scratch by the generative multiparameter optimization AI system. The molecule is very potent and very strongly binds to the 3CL Proteases of SARS-COV-2 and MERS viruses," Insilico told The Register in response to our request for more detail.

"By binding to the 3CL protease, it disables it and inhibits viral replication. The molecule is very stable and we don't think that it will require a combination with another drug to be administered orally."

The company expects to start clinical trials in "a few months."


Other stories you might like

  • Is computer vision the cure for school shootings? Likely not
    Gun-detecting AI outfits want to help while root causes need tackling

    Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.

    Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras and send notifications to those at risk, notifying police, locking down buildings, and performing other security tasks. 

    In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.

    Continue reading
  • Cerebras sets record for 'largest AI model' on a single chip
    Plus: Yandex releases 100-billion-parameter language model for free, and more

    In brief US hardware startup Cerebras claims to have trained the largest AI model on a single device powered by the world's largest Wafer Scale Engine 2 chip the size of a plate.

    "Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."

    The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec memory bandwidth. The specs on other types of AI accelerators and GPUs pale in comparison, meaning machine learning engineers have to train huge AI models with billions of parameters across more servers.

    Continue reading
  • Microsoft promises to tighten access to AI it now deems too risky for some devs
    Deep-fake voices, face recognition, emotion, age and gender prediction ... A toolbox of theoretical tech tyranny

    Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.

    The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.

    This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.

    Continue reading

Biting the hand that feeds IT © 1998–2022