Cross-validation in an Iterative Ensemble Smoother: Stopping Earlier for Better Performance

Iterative ensemble smoothers (IES) are among the most popular history matching (HM) algorithms for reservoir characterization. The practical deployment of an IES algorithm requires implementing certain stopping criteria, normally adopted for runtime control (e.g., stopping the IES when it reaches the maximum number of iterations) and/or for safeguarding the HM performance (e.g., preventing the simulated data from overfitting the actual observations). In practice, for various reasons, it is often challenging for existing stopping criteria to achieve both purposes simultaneously. One noticeable issue, as illustrated in this work, is that in many situations the quality of the estimated reservoir models may already start to deteriorate before a conventional stopping criterion activates to terminate the iteration process. Following this observation, a practically important question arises: Is it possible to further improve the efficacy of the IES algorithm by designing a different stopping criterion, so that the IES can stop earlier, saving computational cost while achieving better HM performance?

In one of the few such attempts in the community, this work investigates a new IES stopping criterion that has the potential to provide an affirmative answer to the above question. Our main idea is based on the concept of cross-validation (CV), routinely adopted in supervised machine learning (SML) for early stopping to prevent SML models from overfitting the training data. Despite noted similarities between HM and SML problems, some fundamental differences exist, so that a vanilla CV procedure extended directly from SML to HM fails to work well. To tackle this challenge, we design an efficient CV procedure tailored to HM problems and examine the performance of an IES algorithm equipped with this CV procedure (IES-CV) in both synthetic and real field case studies. Our numerical investigations indicate that the IES-CV algorithm achieves promising HM performance in all case studies, confirming that, with the aid of a proper stopping criterion, an IES algorithm can terminate at an appropriate iteration step with near-optimal HM performance. Beyond these numerical findings, it is also our hope that the current work may help improve the best practices of applying IES to HM problems, taking advantage of the effective, CV-based stopping criterion.
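
To make the CV-based stopping rule concrete, below is a minimal, self-contained Python sketch: a generic Kalman-type ensemble update is iterated on a linear toy problem using only a training subset of the observations, while the data mismatch on a held-out subset decides when to stop. The update step, the holdout split, and all parameter values are illustrative assumptions, not the paper's actual IES-CV procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_update(E, y_obs, g, idx, obs_std):
    """One generic Kalman-type ensemble update restricted to the observation
    subset `idx` (a stand-in for the paper's IES iteration, not its algorithm)."""
    Y = g(E)[:, idx]                               # simulated data, (ne, m)
    A, B = E - E.mean(0), Y - Y.mean(0)
    C_md = A.T @ B / (len(E) - 1)                  # parameter-data cross-covariance
    C_dd = B.T @ B / (len(E) - 1) + obs_std**2 * np.eye(len(idx))
    K = C_md @ np.linalg.inv(C_dd)                 # Kalman-type gain
    pert = y_obs[idx] + obs_std * rng.standard_normal((len(E), len(idx)))
    return E + (pert - Y) @ K.T

# Linear toy forward model g(m) = M m, observed with noise.
n_par, n_obs, ne, obs_std = 10, 30, 50, 0.1
M = rng.standard_normal((n_obs, n_par))
y_obs = M @ rng.standard_normal(n_par) + obs_std * rng.standard_normal(n_obs)
g = lambda E: E @ M.T

# CV split: update on `train`, monitor the data mismatch on `hold`.
perm = rng.permutation(n_obs)
hold, train = perm[:6], perm[6:]

E, best_E, best_cv = rng.standard_normal((ne, n_par)), None, np.inf
for it in range(20):
    E = ensemble_update(E, y_obs, g, train, obs_std)
    cv = np.mean((g(E)[:, hold] - y_obs[hold]) ** 2)
    if cv < best_cv:
        best_E, best_cv = E, cv
    else:
        break                                      # held-out mismatch rose: stop early
```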

A survey on multi-objective algorithms for model-based oil and gas production optimization: current status and future directions

In the area of reservoir engineering, the optimization of oil and gas production is a complex task involving a myriad of interconnected decision variables shaping the production system’s infrastructure. Traditionally, this optimization process was centered on a single objective, such as net present value, return on investment, cumulative oil production, or cumulative water production. However, the inherent complexity of reservoir exploration necessitates a departure from this single-objective approach. Multiple conflicting production and economic indicators must now be considered to enable more precise and robust decision-making. In response to this challenge, researchers have embarked on a journey to explore field development optimization under multiple conflicting criteria, employing the formidable tools of multi-objective optimization algorithms. These algorithms delve into the intricate terrain of production strategy design, seeking to strike a delicate balance between the often-contrasting objectives. Over the years, a plethora of these algorithms have emerged, ranging from a priori methods to a posteriori approaches, each offering unique insights and capabilities. This survey endeavors to encapsulate, categorize, and scrutinize these invaluable contributions to field development optimization, which grapple with the complexities of multiple conflicting objective functions. Beyond the overview of existing methodologies, we delve into the persisting challenges faced by researchers and practitioners alike. Notably, the application of multi-objective optimization techniques to production optimization is hindered by the resource-intensive nature of reservoir simulation, especially when confronted with inherent uncertainties. As a result of this survey, emerging opportunities have been identified that will serve as catalysts for pivotal research endeavors in the future. As intelligent and more efficient algorithms continue to evolve, the potential for addressing hitherto insurmountable field development optimization obstacles becomes increasingly viable. This discussion on future prospects aims to inspire critical research, guiding the way toward innovative solutions in the ever-evolving landscape of oil and gas production optimization.
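
For orientation, the distinction between a priori and a posteriori methods mentioned above can be stated compactly in standard textbook notation (not taken from the survey itself). A priori methods scalarize the vector problem on the left into the weighted single-objective problem on the right, with the preference weights fixed before the search; a posteriori methods instead approximate the entire Pareto-optimal set of the vector problem and defer the trade-off choice to the decision maker:

```latex
\min_{x \in \mathcal{X}} \; \bigl( f_1(x), \dots, f_k(x) \bigr)
\qquad \text{vs.} \qquad
\min_{x \in \mathcal{X}} \; \sum_{i=1}^{k} w_i f_i(x),
\quad w_i \ge 0, \ \sum_{i=1}^{k} w_i = 1,
```

where the f_i are production or economic indicators such as negative net present value or cumulative water production.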

Balancing Conflicting Objectives in Oilfield Development: A Robust Multi-objective Optimization Framework

Optimizing production strategies for oil and gas fields is a critical challenge in petroleum engineering, as it involves balancing multiple, often conflicting objectives, for instance, enhancing production rates, reducing operational costs, and mitigating the environmental effects of cumulative water or gas production. This study develops and applies a robust multi-objective optimization framework to the UNISIM-II-D reservoir, which represents Brazilian pre-salt fields, using nine representative models (RMs) to address geological uncertainty while considering three economic scenarios. The study focuses on maximizing the expected monetary value (EMV) and the net present value of RM4 under economic uncertainty (NPVeco of RM4), RM4 being the most pessimistic scenario among the RMs. The optimization variables are the location, type (injection or production), and number of wells, and the non-dominated sorting genetic algorithm II (NSGA-II) is employed for multi-objective optimization. The study indicates that prioritizing EMV, the primary objective function, does not necessarily lead the NPVeco of RM4 to its optimal or near-optimal value. However, by employing the proposed framework, a 3% improvement in EMV and a 28% enhancement in the NPVeco of RM4 are achieved compared to the single-objective optimization of EMV, which highlights the strength and robustness of the framework.
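
As a rough illustration of the two objectives, the Python fragment below computes an EMV and the NPVeco of RM4 from a matrix of NPVs indexed by representative model and economic scenario. The NPV values and the uniform scenario probabilities are assumptions for the sketch, not the UNISIM-II-D study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: NPV of one candidate strategy for each of the nine
# representative models under three economic scenarios.
npv = rng.normal(1.0e9, 2.0e8, size=(9, 3))   # npv[r, e], in USD
p_rm, p_eco = np.full(9, 1 / 9), np.full(3, 1 / 3)   # assumed uniform weights

emv = p_rm @ npv @ p_eco       # expected monetary value over models and scenarios
npv_eco_rm4 = npv[3] @ p_eco   # NPVeco of RM4 (0-based index 3 for RM1..RM9)

# These two quantities form the objective vector that NSGA-II maximizes.
objectives = (emv, npv_eco_rm4)
```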

Fundamental comparison between the pseudopotential and the free energy lattice Boltzmann methods

The pseudopotential and free energy models are two popular extensions of the lattice Boltzmann method to multiphase flows. Until now, they have been developed apart from each other in the literature, and the important question of which method performs better remains open. In this work, we perform a fundamental comparison between the two methods through basic numerical tests. This comparison is only possible because we developed a novel approach for controlling the interface thickness in the pseudopotential method independently of the equation of state. In this way, it is possible to compare both methods while maintaining the same equilibrium densities, interface thickness, surface tension, and equation of state parameters. The well-balanced approach was selected to represent the free energy method. We found that the free energy method is more practical to use, as it does not require preliminary simulations to determine simulation parameters (interface thickness, surface tension, etc.). In addition, the tests showed that the free energy model is more accurate than the pseudopotential model. Furthermore, the pseudopotential method suffers from a lack of thermodynamic consistency even when the corrections proposed in the literature are applied. On the other hand, in both static and dynamic tests we verified that the pseudopotential method was able to simulate lower reduced temperatures than the free energy method. We hope that these results will guide authors in the use of each method.
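
For readers unfamiliar with the pseudopotential model, the sketch below computes the classical Shan-Chen interaction force F(x) = -G ψ(x) Σᵢ wᵢ ψ(x + cᵢ) cᵢ on a D2Q9 lattice in Python. This is the textbook form of the method being compared, not the authors' modified scheme with independent interface-thickness control.

```python
import numpy as np

# D2Q9 lattice weights and velocities.
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def shan_chen_force(rho, G=-1.0, rho0=1.0):
    """Classical Shan-Chen force with the common pseudopotential choice
    psi = rho0 * (1 - exp(-rho / rho0)); G < 0 gives attraction."""
    psi = rho0 * (1.0 - np.exp(-rho / rho0))
    acc = np.zeros(rho.shape + (2,))
    for wi, ci in zip(w, c):
        # psi evaluated at the neighbor x + c_i (periodic boundaries)
        acc += wi * np.roll(psi, (-ci[0], -ci[1]), axis=(0, 1))[..., None] * ci
    return -G * psi[..., None] * acc

# Example: a denser square droplet in a lighter background.
rho = np.ones((64, 64)); rho[24:40, 24:40] = 2.0
F = shan_chen_force(rho)   # F.shape == (64, 64, 2)
```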

A machine learning-assisted decision-making methodology based on simplex weight generation for non-dominated alternative selection

In multiobjective decision-making problems, it is common to encounter non-dominated alternatives. In these situations, the decision-making process becomes complex, as each alternative simultaneously offers better outcomes for some objectives and worse outcomes for others. Nevertheless, decision makers (DMs) still must choose a single alternative that provides an acceptable balance between the conflicting objectives, which can become exceedingly challenging. To address this scenario, our work introduces a decision-making framework aimed at supporting such decisions. Our proposed framework draws upon concepts from the field of Multi-Criteria Decision Making and combines a novel simplex-like weight generation method with expert insights and data-driven machine learning procedures to establish an intuitive methodology that empowers DMs to select a single alternative from a range of alternatives. In this paper, we illustrate the effectiveness of our methodology through an example and two real-world decision cases from the oil and gas industry, each involving 128 alternatives and five distinct objectives.
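
A generic stand-in for the weight-based part of such a framework is sketched below in Python: weight vectors are sampled uniformly from the unit simplex (a Dirichlet(1, …, 1) draw), each non-dominated alternative is scored under every weighting, and win counts give a simple robustness proxy. The paper's novel simplex-like generation method and its machine learning components are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights drawn uniformly from the unit simplex; rows sum to 1.
n_alt, n_obj, n_w = 128, 5, 10_000
W = rng.dirichlet(np.ones(n_obj), size=n_w)        # shape (n_w, n_obj)

# Synthetic objective matrix (to minimize), min-max normalized per objective.
F = rng.random((n_alt, n_obj))
Fn = (F - F.min(axis=0)) / (F.max(axis=0) - F.min(axis=0))

# For each weighting, find the best alternative by weighted sum; win counts
# act as a robustness proxy for selecting a single non-dominated alternative.
wins = np.bincount((W @ Fn.T).argmin(axis=1), minlength=n_alt)
best = wins.argmax()
```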

Estimation of distribution algorithms for well placement optimization in petroleum fields

Optimizing well placement is one of the primary challenges in oil field development. The number and positions of wells must be carefully chosen, as they directly affect the infrastructure cost and the profits over the field’s life cycle. In this paper, we propose three estimation of distribution algorithms to optimize well placement with the objective of maximizing the net present value. The methods are guided by an elite set of solutions and are able to obtain multiple local optima in a single run. We also present an auxiliary regression model that preemptively discards candidate solutions with poor predicted performance, thus avoiding computationally expensive simulations for unpromising candidates. The model is trained on data obtained during the search process and requires no prior training. Our algorithms yielded a significant improvement over a state-of-the-art reference method from the literature, as evidenced by computational experiments with two benchmarks.
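
A minimal sketch of the overall loop, under assumed details, is given below in Python: a Gaussian univariate-marginal EDA refits its sampling distribution to an elite set each generation, while a regression surrogate (here a random forest) trained on previously simulated candidates pre-screens an oversampled batch so that only promising candidates reach the expensive evaluation. The toy objective stands in for the reservoir simulator, and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def npv(x):
    """Toy objective standing in for the expensive reservoir simulator."""
    return -np.sum((x - 0.3) ** 2)

dim, pop, elite_k, n_gen = 20, 40, 10, 30
mu, sigma = np.full(dim, 0.5), np.full(dim, 0.2)
X_hist, y_hist = [], []

for gen in range(n_gen):
    cand = rng.normal(mu, sigma, size=(3 * pop, dim))      # oversampled batch
    if len(y_hist) > 50:                                   # enough data to train
        surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
        surrogate.fit(np.array(X_hist), np.array(y_hist))
        cand = cand[np.argsort(surrogate.predict(cand))[-pop:]]  # keep promising
    else:
        cand = cand[:pop]
    y = np.array([npv(x) for x in cand])                   # expensive evaluations
    X_hist.extend(cand); y_hist.extend(y)
    elite = cand[np.argsort(y)[-elite_k:]]                 # elite set of solutions
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # refit distribution
```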

A benchmark generator for scenario-based discrete optimization

Multi-objective evolutionary algorithms (MOEAs) are a practical tool for solving non-linear problems with multiple objective functions. However, when applied to expensive black-box scenario-based optimization problems, their performance becomes constrained by computational or time limitations. Scenario-based optimization refers to problems subject to uncertainty, where each solution is evaluated over an ensemble of scenarios to reduce risk. A primary reason why MOEAs struggle in these cases is that algorithm development is challenging, since many of these problems are black-box, high-dimensional, discrete, and computationally expensive. For this reason, this paper proposes a benchmark generator to create fast-to-compute scenario-based discrete test problems with different degrees of complexity. Our framework uses the structure of the Multi-Objective Knapsack Problem to create test problems that simulate characteristics of expensive scenario-based discrete problems. To validate our proposition, we tested four state-of-the-art MOEAs on 30 test instances generated with our framework, and the empirical results demonstrate that the suggested benchmark generator can be used to analyze the ability of MOEAs to tackle expensive scenario-based discrete optimization problems.
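
The core idea admits a short Python sketch, with assumed instance sizes and noise model: nominal knapsack item values are perturbed per scenario, and a solution's objective vector is its mean value over the scenario ensemble, mimicking the expensive ensemble evaluations of real scenario-based problems. This illustrates the concept only, not the paper's exact generator.

```python
import numpy as np

def make_instance(n_items=100, n_obj=2, n_scen=30, noise=0.2, seed=0):
    """Scenario-based multi-objective knapsack instance (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.integers(1, 100, n_items)                          # item weights
    v = rng.integers(1, 100, (n_obj, n_items)).astype(float)   # nominal values
    scen = v * (1 + noise * rng.standard_normal((n_scen, n_obj, n_items)))
    cap = int(0.5 * w.sum())                                   # knapsack capacity

    def evaluate(x):                                           # x: 0/1 item vector
        if w @ x > cap:
            return np.full(n_obj, -np.inf)                     # infeasible
        return (scen @ x).mean(axis=0)                         # mean per objective
    return evaluate

evaluate = make_instance()
f = evaluate(np.random.default_rng(1).integers(0, 2, 100))     # objective vector
```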

A comparative numerical study of finite element methods resulting in mass conservation for Poisson’s problem: Primal hybrid, mixed and their hybridized formulations

This paper presents a numerical comparison of finite-element methods that result in local mass conservation at the element level for Poisson’s problem, namely the primal hybrid and mixed methods. These formulations lead to an indefinite system. Alternative formulations yielding a positive-definite system are obtained after hybridizing each method. The choice of approximation spaces yields methods with enhanced accuracy for the pressure variable and results in systems with identical size and structure after static condensation. A mixed formulation with regular pressure accuracy, based on the classical RTk space, is also considered. The simulations are accelerated using the OpenMP (Open Multi-Processing) and Threading Building Blocks (TBB) multithreading paradigms, alongside either a coloring strategy or atomic operations to ensure thread-safe execution. An additional parallel strategy is developed using C++ threads, based on the producer-consumer paradigm, with locks and semaphores as synchronization primitives. Numerical tests identify the optimal parallel strategy for these finite-element formulations, and the computational performance of the methods is compared in terms of simulation time and approximation errors. Additional results emerged during this process: numerical solvers often fail to find an accurate solution to the highly indefinite systems arising from these finite-element formulations, and this paper documents a matrix ordering strategy to stabilize the resolution. A procedure is also presented that enables static condensation through the introduction of piecewise constant functions, fulfills Neumann’s compatibility condition, and still computes an average pressure per element.
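
Since static condensation is central to the comparison, the following generic Python sketch shows the algebra involved: interior degrees of freedom are eliminated through a Schur complement, the smaller condensed system is solved for the boundary unknowns, and the interior unknowns are recovered afterwards. In practice this is done element by element; the dense global version below is only for illustration.

```python
import numpy as np

def static_condensation(K, f, interior):
    """Solve K u = f by eliminating the `interior` DOFs (Schur complement)."""
    n = K.shape[0]
    i = np.asarray(interior)
    b = np.setdiff1d(np.arange(n), i)
    Kii_inv = np.linalg.inv(K[np.ix_(i, i)])   # local block, cheap per element
    S = K[np.ix_(b, b)] - K[np.ix_(b, i)] @ Kii_inv @ K[np.ix_(i, b)]  # Schur complement
    g = f[b] - K[np.ix_(b, i)] @ Kii_inv @ f[i]                        # condensed RHS
    u = np.empty(n)
    u[b] = np.linalg.solve(S, g)                                       # boundary DOFs
    u[i] = Kii_inv @ (f[i] - K[np.ix_(i, b)] @ u[b])                   # interior recovery
    return u

# Tiny usage check on a random SPD system.
A = np.random.default_rng(0).standard_normal((8, 8))
K = A @ A.T + 8 * np.eye(8)
f = np.ones(8)
u = static_condensation(K, f, interior=[0, 1, 2])
assert np.allclose(K @ u, f)
```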

A two-level semi-hybrid-mixed model for Stokes–Brinkman flows with divergence-compatible velocity–pressure elements

A two-level version of a recent semi-hybrid-mixed finite element approach for modeling Stokes and Brinkman flows is proposed. In the context of a domain decomposition of the flow region Ω, composite divergence-compatible finite element pairs in H(div,Ω)×L2(Ω) are utilized for discretizing the velocity and pressure fields, following the same approach previously adopted for two-level mixed Darcy and stress mixed elasticity models. The two-level finite element pairs of spaces in the subregions may have richer internal resolution than the boundary normal trace. Hybridization occurs through the introduction of an unknown (traction) defined over element boundaries, playing the role of a Lagrange multiplier to weakly enforce tangential velocity continuity and the Dirichlet boundary condition. The well-posedness of the method requires a proper choice of the finite element space for the traction multiplier, which can be achieved after a suitable enrichment of the velocity FE space with higher-order bubble fields. The method is strongly locally conservative, yielding exact divergence-free velocity fields, demonstrating pressure robustness, and facilitating parallel implementations by limiting the communication of local common data to at most two elements. Flexible coupling of finite elements with different polynomial degrees or mesh widths is permitted, provided that mild mesh and normal-trace consistency properties are satisfied. Significant improvement in computational performance is achieved by the application of static condensation, where the global system is solved for coarse primary variables: a piecewise constant pressure over the subregions, the velocity normal trace and tangential traction over subdomain interfaces, and a real number used as a multiplier to ensure global zero-mean pressure. Refined details of the solutions are represented by secondary variables, which are post-processed by local solvers. Numerical results are presented to verify the convergence history of the method.
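
As rough orientation, a schematic hybrid formulation with a tangential-traction multiplier can be written as follows (an illustrative textbook form, not the paper's exact two-level system): find (u_h, p_h, λ_h) such that, for all test functions (v, q, μ),

```latex
\begin{aligned}
\sum_{T \in \mathcal{T}_h} \int_T \Bigl( 2\nu\,\varepsilon(\mathbf{u}_h) : \varepsilon(\mathbf{v})
  + \tfrac{\nu}{K}\,\mathbf{u}_h \cdot \mathbf{v}
  - p_h\, \nabla\!\cdot\mathbf{v} \Bigr)\, dx
  + \sum_{e \in \mathcal{E}_h} \int_e \boldsymbol{\lambda}_h \cdot [\![\mathbf{v}_t]\!]\, ds
  &= \int_\Omega \mathbf{f} \cdot \mathbf{v}\, dx, \\
\sum_{T \in \mathcal{T}_h} \int_T q\, \nabla\!\cdot\mathbf{u}_h\, dx &= 0, \\
\sum_{e \in \mathcal{E}_h} \int_e \boldsymbol{\mu} \cdot [\![\mathbf{u}_{h,t}]\!]\, ds &= 0.
\end{aligned}
```

Here the subscript t denotes the tangential trace and [[·]] the jump across the mesh skeleton E_h; the traction multiplier λ_h weakly enforces tangential velocity continuity, while normal continuity is built into the H(div,Ω)-conforming velocity space, and the Brinkman drag term ν/K vanishes in the pure Stokes limit.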