OpenAI has created an AI model for longevity science.

According to the company, it has developed a language model that can dream up proteins capable of turning regular cells into stem cells, and the model has beaten human scientists at the task by a wide margin.

This is OpenAI’s first model focused on biological data, and it is also the company’s first public claim that its models can produce unexpected scientific results. As such, it is a test of whether AI can make genuine discoveries, which some argue is a key step on the path to “artificial general intelligence.”

The link-up with the longevity startup Retro Biosciences was no coincidence. OpenAI CEO Sam Altman personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro’s goal is to extend the average human lifespan by 10 years. It studies what are called Yamanaka factors: proteins that, when added to a human cell, cause it to transform into a young-seeming stem cell that can produce any tissue in the body.

Researchers at Retro and at Altos Labs see this as a possible starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells.

However, such cell “reprogramming” is not very efficient. It takes several weeks, and fewer than 1% of cells treated in a laboratory dish complete the journey of rejuvenation.

OpenAI’s new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. OpenAI claims that researchers used the model’s suggestions to make two of the Yamanaka factors more than 50 times as effective, at least according to some preliminary measures.

According to OpenAI researcher John Hallman, “the proteins are better across the board than what the scientists could produce themselves.” Hallman, OpenAI’s Aaron Jaech, and Retro’s Rico Meinl were the model’s lead developers.

The results won’t count for much until the companies publish them, which they say they plan to do. Nor is the model available for general use; it remains a demonstration, not an official product launch.

Jaech says the project is intended to demonstrate the company’s commitment to science. “But whether those capabilities will be released to the world as a separate model or whether they’ll be rolled into the mainline reasoning models – that’s yet to be determined.” OpenAI says that because the Yamanaka proteins are unusually unstructured and floppy, they called for a different approach – one its large language models were well suited to.

The model was trained on examples of protein sequences from many species, along with information about which proteins tend to interact with one another. GPT-4b micro is an example of a “small language model” that works with a focused data set. After receiving the model, Retro’s scientists tried to steer it toward suggesting possible redesigns of the Yamanaka proteins. The prompting tactic is similar to “few-shot” prompting, in which a user shows a chatbot a series of example questions paired with answers, then supplies a new example for the model to respond to.
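The few-shot tactic described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the helper name, the prompt layout, and the toy sequences are all invented here, and the real system’s prompts are not public.

```python
# Hypothetical sketch of "few-shot" prompting for protein redesign.
# The user shows the model a few (original, redesigned) example pairs,
# then poses a new sequence for it to complete. The sequences below
# are toy strings, not real proteins.

def build_few_shot_prompt(examples, query):
    """Assemble example pairs followed by the new query to complete."""
    parts = []
    for original, redesigned in examples:
        parts.append(f"Original: {original}\nRedesign: {redesigned}")
    # The final entry leaves "Redesign:" blank for the model to fill in.
    parts.append(f"Original: {query}\nRedesign:")
    return "\n\n".join(parts)

examples = [
    ("MKTAYIAK", "MKTVYIAK"),  # toy example pair
    ("GDVEKGKK", "GDVERGKK"),  # toy example pair
]
prompt = build_few_shot_prompt(examples, "MALWMRLL")
print(prompt)
```

The idea is simply that the examples prime the model on the pattern of the task, so its completion of the final, unanswered entry follows the same format.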

Genetic engineers can direct the evolution of molecules in a lab, but they can explore only a limited number of possibilities. By contrast, a protein of typical length can be altered in a near-infinite number of ways, since it is made up of hundreds of amino acids and each position can hold any of 20 varieties.
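The scale of that search space is easy to put a number on. Assuming an illustrative protein of 300 amino acids (a typical length), the count of possible sequences is 20 raised to the 300th power:

```python
# Back-of-envelope size of the sequence space for a 300-residue protein:
# each of the 300 positions can hold any of 20 amino acids.
n_positions = 300        # assumed "typical" protein length
n_amino_acids = 20       # the 20 standard amino acids
sequence_space = n_amino_acids ** n_positions

# The result has hundreds of decimal digits, far beyond what any
# lab-based directed-evolution campaign could ever sample.
print(f"20^300 has {len(str(sequence_space))} digits")
```

That number has roughly 10^390 possibilities, which is why exhaustive lab screening is hopeless and model-guided suggestions are attractive.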

OpenAI’s model, on the other hand, often generates suggestions in which a third of the amino acids in a protein are changed.

“We put this model in the lab immediately, and we got real results,” says Retro’s CEO, Joe Betts-Lacroix. He says the model’s ideas were unusually good, leading to improvements over the original Yamanaka factors in a significant fraction of cases.

Vadim Gladyshev, a Harvard University researcher on aging who consults for Retro, says better ways of making stem cells are needed. “For us, this would be extremely useful. [Skin] cells are easy to reprogram, but other cells aren’t,” he says. “And to do this in a different species – it’s often extremely different, and you don’t get any.”

As is often the case with AI models, it is not clear how GPT-4b micro makes its guesses. “It’s like when AlphaGo beat the best human Go player, but it took a while to figure out why,” says Betts-Lacroix. “We’re still figuring out how it works, and we believe the way we’re using this is just scratching the surface.”

OpenAI says no money changed hands in the collaboration. But because the work could benefit Retro, whose biggest investor is Altman, the announcement may raise questions about the OpenAI CEO’s side projects.

The Wall Street Journal has previously said that Altman’s investments in private tech startups amount to an “opaque empire” that is “creating a mounting list of potential conflicts,” since some of these businesses also do business with OpenAI. For Retro, association with Altman, OpenAI, and the race toward AGI could boost its profile and increase its ability to hire staff and raise money. Betts-Lacroix declined to answer questions about whether the early-stage company is currently in fundraising mode.

OpenAI says Altman was not directly involved in the project and that it does not make decisions based on Altman’s other investments.
