The AI-focused startup Genesis Therapeutics has a new name — and a new AI model it believes has best-in-class performance.
In an interview with Endpoints News, CEO Evan Feinberg said the new model, called Pearl (short for Placing Every Atom in the Right Location), outperformed Isomorphic Labs’ AlphaFold 3, Chai Discovery’s Chai-1, and other top AI models across several protein-ligand benchmarks.
Feinberg called predicting these structures “one of the most notoriously challenging problems to solve.” AlphaFold 3 has been seen as a leader in this space since its May 2024 debut.
“For the problem of modeling protein-ligand interactions, we’re confident that Pearl is the best model that exists today,” Feinberg said. (To be sure, the AlphaFold 3 release is now over a year old. Isomorphic is a “significant generation or two ahead internally in terms of what our models can do, so think like 4, 4.5, 5,” Isomorphic’s chief AI officer Max Jaderberg told Endpoints earlier this month.)
The Bay Area biotech has also changed its name to Genesis Molecular AI. Founded in 2019, Genesis has raised over $300 million, including a $200 million Series B in 2023. Feinberg was formerly a graduate student in Vijay Pande’s lab at Stanford, where he developed the physics-based machine learning models that became the foundation of Genesis. The biotech now has roughly 130 employees and is advancing its own preclinical pipeline, with no disclosed timeline for entering the clinic, as well as partnerships with Gilead and Incyte.
Genesis detailed the claims about its model in a preprint posted Tuesday with more than 30 authors, including some Nvidia engineers. The paper used three metrics, including “Runs N’ Poses,” a test created by academic researchers, which found in February that models like AlphaFold 3 appear to largely memorize their training data. That finding raised the question of whether these models have seen enough data to actually learn and generalize.
Genesis’ Pearl achieved an 85% success rate on the Runs N’ Poses test, compared to 74% for AlphaFold 3, 74% for Boltz-1x, and 70% for Chai-1. The team also tested how these models fared against a higher bar of accuracy, defining success as predictions less than one angstrom away from the actual structure. (The standard test, on which Pearl scored 85%, counts predictions within two angstroms as successful.) Against that tougher metric, Pearl had a 70% success rate, compared to 62% for AlphaFold 3, 57% for Boltz-1x, and 56% for Chai-1.
The release of the preprint and model coincides with Nvidia’s AI conference in Washington, DC.
Feinberg credited Pearl’s outperformance to Genesis’ long-running focus on integrating more physics knowledge into its models.
That included taking a page from how Waymo developed its autonomous cars using synthetic data. The synthetic data, derived from physics-based experiments, allowed Genesis’ models to see more examples, particularly in data-scarce areas like small molecules in the Protein Data Bank.
“We are the first group to show any evidence of scaling law phenomena here, where we can generate more synthetic data with physics, pretrain the model with this data, get better performance and repeat,” Feinberg said, describing this idea as still being in the “early innings.”
The MIT research team behind Boltz has already extended its modeling to other protein design tasks, like predicting binding affinity and, most recently, generating new protein binders. Genesis’ preprint does not include Pearl’s performance in predicting binding affinity or potency, which Feinberg said was beyond the scope of the paper. Having high-accuracy structures is a crucial first step toward tackling predictions like potency, he said.
“A lot of these co-folding models hallucinate, just like any other diffusion model or LLM,” Feinberg said. “They will make something that prima facie looks OK, but any expert looks at it, and is like, ‘That’s not quite right.’”
Editor’s note: A previous version of this story stated Pearl was compared to Chai-2. The story has been corrected to reflect Genesis compared its model to Chai-1.