Physics-informed neural operators bring speed and intelligence to subsurface modelling. AI transforms how companies simulate complex underground systems, enabling faster decision-making for oil production and carbon storage.
AI is beginning to unearth answers buried deep below ground, reshaping how energy companies model, manage, and optimise the reservoirs that underpin fossil fuel production and carbon storage. Physics-informed neural operators are at the heart of this shift, offering a faster, data-driven alternative to traditional simulators while retaining physical realism and predictive power.
With carbon capture and storage (CCS) seen as critical to net-zero targets and the efficiency of existing oil and gas infrastructure under intense scrutiny, the ability to model subsurface behaviour with precision, speed and confidence has become an urgent priority. The pressure to act is not merely economic. Geological complexity, legacy data systems, and environmental constraints are converging into one of the most computationally intensive problems in the energy transition. It is here that TotalEnergies is making its move.
From equations to operators
For decades, reservoir simulation has relied on solving complex partial differential equations that describe fluid flow through porous media. These methods are trusted and accurate, but they are also slow, resource-intensive, and inflexible. According to Elias Cherif, Data Scientist at TotalEnergies, that is no longer sustainable. “Subsurface media are highly non-linear and chaotic dynamic systems,” he says. “A slight change in permeability can drastically alter CO₂ plume behaviour. Using traditional high-resolution simulators, with, say, a one-metre grid over a 100-square-kilometre area, involves millions of calculations.
“Machine learning and data science offer an alternative. These approaches directly learn the relationship between inputs and outputs, bypassing the need to solve all underlying equations explicitly. This allows for the development of models tailored to specific geological contexts.”
This leap is enabled by a confluence of factors: the sheer volume of available sensor and simulation data, rapid advances in AI architectures, and a policy-driven need to accelerate CCS deployment. The goal is no longer to replace physics but to encode it. This is the promise of physics-informed neural operators (PINOs), a class of AI models that incorporate governing equations into their architecture to enhance learning and generalisation.
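In practice, encoding the physics typically means adding a residual of the governing equations to the training loss alongside the usual data-fitting term. The sketch below is illustrative rather than TotalEnergies’ implementation: it penalises the finite-difference residual of a steady Darcy-type equation on the network’s predicted pressure field, with the weighting factor lam chosen arbitrarily.

```python
import torch

def darcy_residual(pressure, perm, source, dx=1.0):
    """Finite-difference residual of div(k * grad(p)) - q on a 2D grid.

    pressure, perm, source: tensors of shape (batch, ny, nx).
    Interior cells only; boundaries are ignored for simplicity.
    """
    # Pressure gradients at cell faces.
    dp_dx = (pressure[:, :, 1:] - pressure[:, :, :-1]) / dx
    dp_dy = (pressure[:, 1:, :] - pressure[:, :-1, :]) / dx
    # Simple arithmetic average of permeability at the faces.
    k_x = 0.5 * (perm[:, :, 1:] + perm[:, :, :-1])
    k_y = 0.5 * (perm[:, 1:, :] + perm[:, :-1, :])
    flux_x = k_x * dp_dx
    flux_y = k_y * dp_dy
    # Divergence of the flux at interior cells, minus the source term.
    div = (flux_x[:, 1:-1, 1:] - flux_x[:, 1:-1, :-1]) / dx \
        + (flux_y[:, 1:, 1:-1] - flux_y[:, :-1, 1:-1]) / dx
    return div - source[:, 1:-1, 1:-1]

def pino_loss(pred_pressure, true_pressure, perm, source, lam=0.1):
    """Data-fitting term plus a physics-residual penalty (illustrative weighting)."""
    data_loss = torch.mean((pred_pressure - true_pressure) ** 2)
    phys_loss = torch.mean(darcy_residual(pred_pressure, perm, source) ** 2)
    return data_loss + lam * phys_loss
```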
Speeding up carbon storage simulation
TotalEnergies began by applying Fourier neural operators (FNOs) to model CO₂ injection and migration over time. The task was to forecast pressure and gas saturation in offshore CCS sites across a thirty-year horizon with sufficient accuracy to ensure containment.
“Let me first summarise how a traditional simulator works,” Cherif says. “It involves three steps: inputting the reservoir’s geological model, rock properties, and injection plan; performing numerical simulation by solving Darcy’s law and mass conservation equations using finite difference methods; and producing outputs such as pressure distribution, fluid saturation and production rates. While accurate, this approach is computationally expensive and unsuited for rapid scenario testing.”
Instead, the neural operator model learns from thousands of prior simulations. It transforms permeability, porosity, and other inputs into a frequency domain using Fourier transforms, allowing it to model global interactions without solving the full system of equations at each step.
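The building block behind this is a spectral convolution: the input field is moved into the frequency domain with a fast Fourier transform, a learned complex weight acts on the lowest-frequency modes, and the result is transformed back. A minimal, generic version of such a layer in PyTorch might look like the following; the channel and mode counts are placeholders, not details of the production model.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Minimal 2D Fourier layer: FFT -> keep low modes -> learned mixing -> inverse FFT."""
    def __init__(self, channels, modes=12):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        # Complex weights for the retained low-frequency modes.
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
        )

    def forward(self, x):               # x: (batch, channels, ny, nx)
        x_ft = torch.fft.rfft2(x)       # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Mix channels mode-by-mode for the lowest frequencies only.
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weights
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to physical space
```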
Training on a public dataset from Imperial College London, which includes 24 snapshots over three decades, the team developed separate models for pressure and saturation. They selected tailored loss functions to match the physical characteristics of each variable. Pressure, governed by elliptic equations, was optimised with pointwise squared error. Saturation, affected by sharp fronts and governed by hyperbolic equations, required a more nuanced approach that penalised errors at the plume edge.
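One plausible way to realise that more nuanced approach is to up-weight the squared error wherever the true saturation changes sharply, so mistakes at the plume front cost more than mistakes in smooth regions. The weighting below is an illustrative guess at the idea, not the team’s published loss.

```python
import torch

def pressure_loss(pred, true):
    """Plain pointwise squared error, suited to the smooth, elliptic pressure field."""
    return torch.mean((pred - true) ** 2)

def saturation_loss(pred, true, alpha=5.0):
    """Squared error up-weighted where the true saturation changes sharply (the plume edge)."""
    grad_y = torch.abs(true[:, 1:, :] - true[:, :-1, :])
    grad_x = torch.abs(true[:, :, 1:] - true[:, :, :-1])
    # Build a per-cell weight from the gradient magnitude.
    edge = torch.zeros_like(true)
    edge[:, 1:, :] += grad_y
    edge[:, :, 1:] += grad_x
    weights = 1.0 + alpha * edge / (edge.max() + 1e-8)
    return torch.mean(weights * (pred - true) ** 2)
```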
“Training was performed using the NVIDIA Modulus framework with 2,000 examples over 100 epochs on an H100 GPU,” Cherif continues. “The model achieved inference speeds 1,000 times faster than traditional simulators while maintaining high accuracy on test data. The R² scores remained high across all 24 time steps.”
Errors were mainly found at the leading edge of the CO₂ plume, where saturation gradients are steepest. However, even these could be mitigated by refining loss functions and expanding training diversity. The results offer a new way to run what-if scenarios at scale, with implications for policy risk assessment and real-time reservoir monitoring.
Solving the inverse with PINO and VCAE
Forward prediction is only half the challenge. Understanding how historical production maps to underlying geology is crucial for CCS and oil recovery alike. The problem is reversed here: infer rock properties from surface-level production data. TotalEnergies tackled this with a combination of PINO and a variational convolutional autoencoder (VCAE).
“The forward problem involves predicting pressure and oil saturation from known parameters such as permeability and porosity,” Cherif explains. “This enables computation of oil and water production rates. The inverse problem is more complex, as it requires determining unknown geological characteristics from limited observed data.”
Using the black oil model in a synthetic reservoir of 32,000 cells, the team simulated two-phase flow of oil and water. With only six wells (four producing, two injecting) and significant geological variation, the training set comprised just 600 examples. Nevertheless, the PINO model’s sequential architecture, which uses previous time steps as inputs to predict the next, allowed it to capture complex dynamics without overfitting.
“Only the initial pressure and saturation values at t = 0 are needed to begin testing,” Cherif explains. “The model then propagates sequentially through all 51 time steps. Flow rates are updated using Boussinesq’s equation to maintain physical consistency.” Despite the increased training time compared to the FNO model, PINO produced better accuracy, especially in long-horizon predictions. It proved particularly adept at preserving physical constraints, making it a viable candidate for integration into decision-making workflows.
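A rollout of that kind is straightforward to sketch. The snippet below assumes a generic step_model that maps the current pressure and saturation, together with static inputs such as permeability, to the state at the next time step; the names and shapes are placeholders, not TotalEnergies’ code.

```python
import torch

@torch.no_grad()
def rollout(step_model, pressure0, saturation0, static_inputs, n_steps=51):
    """Autoregressively propagate the state from t = 0 through n_steps snapshots.

    step_model: any module mapping (state, static_inputs) -> next state, where
    state stacks pressure and saturation along the channel dimension.
    """
    state = torch.cat([pressure0, saturation0], dim=1)   # (batch, 2, ny, nx)
    trajectory = [state]
    for _ in range(n_steps - 1):
        state = step_model(state, static_inputs)          # previous step feeds the next
        trajectory.append(state)
    return torch.stack(trajectory, dim=1)                 # (batch, n_steps, 2, ny, nx)
```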
Managing uncertainty with latent space inference
The final challenge was reconstructing the permeability field, a classic ill-posed inverse problem. With more unknowns than observations, traditional methods tend to become unstable. Errors compound, particularly if the forward simulator is imperfect. To solve this, TotalEnergies employed a VCAE to reduce the dimensionality of the permeability field and encode prior geological knowledge into a latent space. Each permeability map is represented as a set of latent variables, which can then be sampled, manipulated, and decoded.
“The model has three components: an encoder that extracts latent variables from the input, a reparameterisation step that creates the latent space distribution, and a decoder that reconstructs permeability fields from latent vectors,” Cherif explains. “This enables generation of new, plausible reservoirs not found in the original dataset.”
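Those three components map directly onto a standard variational autoencoder built from convolutional layers. The sketch below is a generic, illustrative VCAE for 2D permeability maps; the layer sizes and latent dimension are assumptions rather than the model described above.

```python
import torch
import torch.nn as nn

class VCAE(nn.Module):
    """Convolutional variational autoencoder for 2D permeability fields."""
    def __init__(self, latent_dim=64, grid=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * (grid // 4) * (grid // 4)
        self.to_mu = nn.Linear(feat, latent_dim)       # mean of the latent distribution
        self.to_logvar = nn.Linear(feat, latent_dim)   # log-variance of the latent distribution
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, feat), nn.ReLU(),
            nn.Unflatten(1, (32, grid // 4, grid // 4)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                              # x: (batch, 1, grid, grid)
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation: sample the latent vector while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar
```

Sampling latent vectors from a standard normal distribution and decoding them is what produces new, plausible permeability fields.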
The team used the adaptive regularised ensemble Kalman inversion (AREKI) algorithm to match historical production and update latent variables iteratively. Each step compared simulated outputs with real production, adjusting the latent space accordingly. The result: in just 40 minutes on a single H100 GPU, the model generated permeability ensembles that aligned closely with observed production at all wells. The time savings are significant compared to eight hours with a conventional simulator.
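The specifics of AREKI are not given here, but the ensemble Kalman inversion update it adapts is easy to sketch: each ensemble member’s latent vector is nudged towards values whose simulated production better matches the observations, using covariances estimated from the ensemble itself. A minimal NumPy version of one iteration, with forward_model standing in for the decoder-plus-surrogate pipeline:

```python
import numpy as np

def eki_update(latents, observations, forward_model, noise_cov):
    """One ensemble Kalman inversion step on latent vectors.

    latents:       (n_ens, n_latent) ensemble of latent representations
    observations:  (n_obs,) observed production data
    forward_model: maps a latent vector to simulated observations (n_obs,)
    noise_cov:     (n_obs, n_obs) observation noise covariance
    """
    preds = np.stack([forward_model(z) for z in latents])   # (n_ens, n_obs)
    z_mean, d_mean = latents.mean(axis=0), preds.mean(axis=0)
    dz, dd = latents - z_mean, preds - d_mean
    n = len(latents)
    c_zd = dz.T @ dd / (n - 1)                 # latent/data cross-covariance
    c_dd = dd.T @ dd / (n - 1)                 # data covariance
    gain = c_zd @ np.linalg.inv(c_dd + noise_cov)
    # Perturb the observations per member so the ensemble keeps its spread.
    perturbed = observations + np.random.multivariate_normal(
        np.zeros(len(observations)), noise_cov, size=n)
    return latents + (perturbed - preds) @ gain.T
```

Iterating this update until simulated and observed production agree yields the calibrated latent ensemble, which is then decoded back into permeability maps.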
“Comparing the reconstructed permeability map with the true geological model, we found strong similarities, particularly around the well locations,” Cherif continues. “Gaps in reconstruction occurred in areas with no wells, where observational data were lacking. Nonetheless, the main geological features were successfully recovered.”
Toward faster, more agile workflows
While these models are not poised to replace traditional simulators altogether, they are beginning to reshape the boundaries of what is computationally feasible. AI-powered surrogates offer a compelling balance of speed and fidelity for tasks such as scenario generation, history matching, and optimisation.
However, challenges remain. Training still demands high-performance computing, particularly for real-world reservoirs with millions of cells. Memory constraints, data quality, and the generalisability of learned models also present limitations. Parallelisation across GPU clusters and integration into hybrid modelling pipelines will be key.
“Traditional simulators will continue to be the reference for high-fidelity modelling,” Cherif concludes. “The aim is not to replace reservoir engineers but to provide complementary, AI-based tools that offer new approaches to subsurface modelling.”
As AI models become more sophisticated and compute infrastructure continues to scale, subsurface workflows will shift from being a computational bottleneck to a strategic asset. For companies navigating the complexity of hydrocarbon extraction and carbon sequestration, the ability to simulate the unseen may be the most powerful tool.