Artificial intelligence has shown its strength in domains where progress can be tested instantly, from mathematics to software engineering. Biology has been different. Experiments are slow, expensive and bound to physical laboratories. A new collaboration between OpenAI and Ginkgo Bioworks suggests that this constraint is beginning to loosen.
Working together, the two organisations connected OpenAI’s GPT-5 model to a fully automated cloud laboratory operated by Ginkgo Bioworks. The result was an AI-driven autonomous experimental system that reduced the cost of a widely used biological process, cell-free protein synthesis, by 40 per cent. The work demonstrates how frontier AI models can move beyond analysis and simulation to directly influence physical experimentation.
Cell-free protein synthesis, or CFPS, is a method for producing proteins without growing living cells. Instead, the molecular machinery that builds proteins is extracted and run in a controlled chemical mixture. This makes CFPS attractive for rapid testing and prototyping, but also difficult and costly to optimise at scale.
Closing the loop between AI and experiments
In this project, GPT-5 was not simply asked to analyse existing data. It was embedded into a closed-loop system. The model designed experiments, those experiments were executed by robotic lab equipment, the results were returned to the model, and GPT-5 then decided what to test next.
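The loop described above can be sketched in a few lines. This is an illustrative skeleton only: the function names, reagent variables, and the way designs are proposed are invented here, not OpenAI's or Ginkgo's actual interfaces.

```python
# Minimal sketch of a design -> execute -> observe -> redesign loop.
# All names and numbers are hypothetical stand-ins.
import random

def run_on_robot(design):
    """Stand-in for the automated lab: returns a noisy protein yield."""
    return sum(design.values()) + random.gauss(0, 0.1)

def propose_designs(history, n=4):
    """Stand-in for the model: perturb the best recipe seen so far."""
    if not history:
        return [{"mg_mM": random.uniform(5, 15), "pep_mM": random.uniform(10, 40)}
                for _ in range(n)]
    best = max(history, key=lambda r: r["yield"])["design"]
    return [{k: v * random.uniform(0.8, 1.2) for k, v in best.items()}
            for _ in range(n)]

history = []
for round_num in range(6):                    # six rounds, as in the article
    for design in propose_designs(history):   # model plans the batch
        result = run_on_robot(design)         # robots execute it
        history.append({"design": design, "yield": result})  # results fed back

best = max(history, key=lambda r: r["yield"])
```

The essential property is that each round's proposals are conditioned on everything observed so far, which is what distinguishes a closed loop from a fixed experimental plan.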
Over six rounds of experimentation, the system ran more than 36,000 unique CFPS reactions across 580 automated plates. The scale matters. Biology is noisy, and single experiments often reveal little. Throughput and iteration are essential for identifying reliable patterns.
To ensure that AI-generated plans were practical, GPT-5 wrote its instructions in Rover Markup Language, an XML-based format adapted here for laboratory automation, and every plan was subjected to strict validation before execution. This prevented the model from proposing reaction designs that could not be physically carried out by the automated equipment.
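A pre-execution check of this kind might look like the sketch below. The tag names, attributes, and volume limit are all assumptions for illustration; the actual schema of Rover Markup Language is not described in the article.

```python
# Illustrative validation of an XML experiment plan before it reaches
# the robots. Tag names and the 50 uL limit are hypothetical.
import xml.etree.ElementTree as ET

MAX_WELL_VOLUME_UL = 50.0  # assumed hardware constraint

def validate_plan(xml_text):
    """Reject plans that are malformed or physically infeasible."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False, "malformed XML"
    for well in root.iter("well"):
        total = sum(float(c.get("volume_ul", 0)) for c in well.iter("component"))
        if total > MAX_WELL_VOLUME_UL:
            return False, f"well {well.get('id')} exceeds {MAX_WELL_VOLUME_UL} uL"
    return True, "ok"

plan = """<plate><well id="A1">
  <component name="extract" volume_ul="20"/>
  <component name="energy_mix" volume_ul="15"/>
</well></plate>"""
ok, reason = validate_plan(plan)
```

Checks like this are cheap compared with a wasted robotic run, which is why gating model output on feasibility matters at this scale.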
After three rounds of experimentation, GPT-5 established a new benchmark for low-cost CFPS. Protein production costs were reduced by 40 per cent compared with the best previous baseline, while the cost of reagents fell by 57 per cent. The improvements came from novel combinations of reaction components that performed reliably under the constraints of high-throughput automated labs.
Why protein synthesis matters
Proteins sit at the heart of modern biology. Many medicines are protein-based. Diagnostics, industrial enzymes and even household products rely on them. When protein production becomes cheaper and faster, scientists can test more ideas sooner and reduce the cost of turning research into usable products.
CFPS is already valued for its speed, but reagent costs become prohibitive when thousands of reactions are run in parallel. Traditional optimisation methods struggle because the system involves many interacting variables, from energy sources and salts to buffering agents and DNA templates.
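A back-of-the-envelope calculation shows why these interacting variables defeat exhaustive search. The component count and levels below are assumed for illustration; only the 36,000-reaction figure comes from the article.

```python
# Why brute force fails: even a modest grid dwarfs what was actually run.
# Component count and levels per component are illustrative assumptions.
n_components = 10        # e.g. energy source, salts, buffers, DNA template...
levels_each = 5          # concentrations tried per component
full_grid = levels_each ** n_components   # every combination
tested = 36_000          # reactions run in this project
fraction = tested / full_grid             # coverage of the full grid
```

Even under these conservative assumptions the full grid runs to nearly ten million combinations, so 36,000 reactions cover well under one per cent of it; an optimiser has to choose where to look.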
GPT-5’s contribution was not a single dramatic insight but the ability to explore this complex space systematically. The model identified combinations that humans had not previously tested in this configuration and that proved robust under the low-oxygen conditions typical of automated plate-based experiments. Small changes in buffering and energy regeneration components turned out to have outsized effects relative to their cost.
A glimpse of autonomous science
The experiment does have limits. The results were demonstrated on a single protein and one CFPS system, and human oversight was still required for aspects such as reagent handling and protocol refinement. Generalising the approach to other proteins and workflows remains an open question.
Even so, the implications are significant. The work shows that AI models can reason about wet-lab processes, propose executable experiments and improve real-world outcomes through iteration. It points to a future in which autonomous laboratories remove one of biology’s biggest bottlenecks: the speed at which experiments can be designed, run and refined.
OpenAI has said it plans to apply similar lab-in-the-loop optimisation to other biological workflows, while also evaluating potential biosecurity risks associated with more capable AI-driven experimentation. The broader message is clear. As AI systems become tightly coupled with automation, the boundary between digital intelligence and physical science is beginning to blur.