Selection of Representative Scenarios Using Multiple Simulation Outputs for Robust Well Placement Optimization in Greenfields

In greenfield projects, robust well placement optimization under different uncertainty scenarios typically requires hundreds to thousands of evaluations to be processed by a flow simulator. Running so many simulations can be computationally prohibitive, so simulation runs are generally applied to a small subset of scenarios, called representative scenarios (RS), that approximately reproduce the statistical features of the full ensemble. In this work, we evaluated two workflows for robust well placement optimization based on the selection of (1) representative geostatistical realizations (RGR) under geological uncertainties (Workflow A), and (2) representative (simulation) models (RM) under combined geological and reservoir (dynamic) uncertainties (Workflow B). In both workflows, an existing RS selection technique was used, measuring the mismatches between the cumulative distributions of multiple simulation outputs from the subset and from the full ensemble. We applied the Iterative Discretized Latin Hypercube (IDLHC) method to optimize well placements using the RS sets selected in each workflow, maximizing the expected monetary value (EMV) as the objective function. We evaluated the workflows in terms of (1) the representativeness of the RS under different production strategies, (2) the quality of the resulting robust strategies, and (3) computational cost. To obtain and validate the results, we employed the synthetic UNISIM-II-D-BO benchmark case with uncertain variables and the reference fine-grid model, UNISIM-II-R, which works as a real case. This work investigated the overall impact of the robust well placement optimization workflows considering uncertain scenarios and application to the reference model. Additionally, we highlighted and evaluated the importance of geological and dynamic uncertainties in RS selection for efficient robust well placement optimization.
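The RS selection idea described above, matching the cumulative distributions of several simulation outputs between a candidate subset and the full ensemble, can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the mismatch measure here is a simple quantile distance, and candidate subsets are drawn by random search.

```python
import numpy as np

def cdf_mismatch(subset_vals, full_vals, n_quantiles=20):
    """Mean absolute distance between the quantiles of a candidate
    subset and of the full ensemble for one simulation output."""
    q = np.linspace(0.05, 0.95, n_quantiles)
    return np.mean(np.abs(np.quantile(subset_vals, q) - np.quantile(full_vals, q)))

def select_rs(outputs, n_rs, n_trials=2000, seed=0):
    """Random-search selection of n_rs representative scenarios.

    outputs: (n_scenarios, n_outputs) array, one column per simulation
    output (e.g. cumulative oil production, water production, NPV).
    Returns the indices of the subset whose per-output CDFs best match
    the full ensemble (smallest summed quantile mismatch), and its cost.
    """
    rng = np.random.default_rng(seed)
    n = outputs.shape[0]
    best_idx, best_cost = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=n_rs, replace=False)
        cost = sum(cdf_mismatch(outputs[idx, j], outputs[:, j])
                   for j in range(outputs.shape[1]))
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return np.sort(best_idx), best_cost
```

In a real workflow the search over subsets is typically smarter than pure random sampling, but the objective, closeness of the subset's output distributions to those of the full ensemble, has this shape.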

A multi-scale mixed method for a two-phase flow in fractured reservoirs considering passive tracer

In this research, the mathematical model represents two-phase flow in a fractured porous reservoir, where Darcy's law governs the flow in both the fractures and the matrix. The flux/pressure of the fluid flow is approximated using a hybridized mixed formulation coupling the flow in the rock volume with the flow through the fractures. The rock matrix is three-dimensional and is coupled with two-dimensional discrete fractures. The transport equation is approximated with a lower-order finite volume scheme solved through an upwind method. The C++ implementation uses the NeoPZ framework, an object-oriented finite element library, and the geometric meshes are generated with the software Gmsh. Numerical simulations in 3D are presented to demonstrate the advantages of the adopted numerical scheme, and the approximations are compared with results from other methods.
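The upwind finite volume treatment of the passive tracer transport equation can be illustrated in one dimension. This is a minimal sketch, not the NeoPZ implementation: it assumes a constant positive velocity and a uniform grid.

```python
import numpy as np

def upwind_tracer(c0, velocity, dx, dt, n_steps):
    """First-order upwind finite-volume update for a passive tracer
    advected with a constant positive velocity: dc/dt + v dc/dx = 0.
    c0: initial cell-averaged concentrations; the inflow boundary
    (first cell) is held at its initial value.
    """
    c = c0.astype(float).copy()
    nu = velocity * dt / dx            # CFL number, must satisfy nu <= 1
    assert 0.0 < nu <= 1.0, "CFL condition violated"
    for _ in range(n_steps):
        # upwind flux difference: each cell takes information from its
        # upstream (left) neighbour only
        c[1:] = c[1:] - nu * (c[1:] - c[:-1])
    return c
```

In the paper's setting the same idea is applied cell by cell on the 3D matrix/2D fracture mesh, with the upwind direction taken from the Darcy flux.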

A posteriori error estimates for primal hybrid finite element methods

We present new fully computable a posteriori error estimates for primal hybrid finite element methods based on equilibrated flux and potential reconstructions. The reconstructed potential is obtained from a local L2 orthogonal projection of the gradient of the numerical solution, with a continuous boundary restriction that comes from a smoothing process applied to the trace of the numerical solution over the mesh skeleton. The equilibrated flux is the solution of a local problem in mixed form with a Neumann boundary condition given by the Lagrange multiplier of the hybrid finite element solution. To establish the a posteriori estimates we split the error into conforming and non-conforming parts. For the former, a slight modification of the a posteriori error estimate proposed by Vohralík [1] is applied, whilst the latter is bounded by the difference between the gradient of the numerical solution and the reconstructed potential. Numerical results, performed in the PZ environment (Devloo [2]), show the efficiency of this strategy when applied to some model test problems.
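The conforming/non-conforming splitting described above has, in standard notation, the following schematic form (the exact terms and constants depend on the setting of the paper; this is the usual structure of such bounds, written here for orientation only):

```latex
\|\nabla(u - u_h)\|^2 \;\le\;
\sum_{K\in\mathcal{T}_h}\big(\eta_{\mathrm{F},K} + \eta_{\mathrm{osc},K}\big)^2
\;+\; \sum_{K\in\mathcal{T}_h}\|\nabla(u_h - s_h)\|_K^2,
```

where $s_h$ is the reconstructed potential, $\eta_{\mathrm{F},K} = \|\boldsymbol{\sigma}_h + \nabla s_h\|_K$ measures the equilibrated flux $\boldsymbol{\sigma}_h$ against the potential reconstruction, and $\eta_{\mathrm{osc},K} = (h_K/\pi)\,\|f - \nabla\cdot\boldsymbol{\sigma}_h\|_K$ is the data oscillation term. The second sum is the non-conforming estimator mentioned in the abstract.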

Using the de Rham sequence for accelerating mixed finite element computations

Mixed finite element computations arise in the simulation of multiple physical phenomena. Due to characteristics such as the strong coupling between the approximated variables, the solution of this class of problems may suffer from numerical instabilities as well as a high computational cost. The de Rham diagram is a standard tool for building approximation spaces for mixed problems, as it relates H1-conforming spaces with H(curl)- and H(div)-conforming elements in a simple way by means of differential operators. This work presents an alternative for accelerating the computation of mixed problems by exploring the de Rham sequence to derive divergence-free functions in a robust fashion. The formulation is numerically verified for the 2D case by means of benchmark cases to confirm the theoretical results.
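The way the de Rham sequence yields divergence-free functions can be illustrated discretely: in 2D, the rotated gradient (curl) of any scalar potential is divergence-free, and the identity survives discretization whenever the difference operators commute. A minimal numpy sketch, not the actual H(div) construction of the paper:

```python
import numpy as np

def curl_of_potential(phi, dx, dy):
    """Build a discretely divergence-free 2D field from a scalar
    potential phi via the de Rham relation v = curl phi = (d_y phi, -d_x phi).
    Central differences on the interior of a uniform grid (axis 0 = x)."""
    dphidx = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * dx)
    dphidy = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * dy)
    return dphidy, -dphidx   # (v1, v2)

def divergence(v1, v2, dx, dy):
    """Central-difference divergence of (v1, v2) on the interior."""
    dv1dx = (v1[2:, 1:-1] - v1[:-2, 1:-1]) / (2 * dx)
    dv2dy = (v2[1:-1, 2:] - v2[1:-1, :-2]) / (2 * dy)
    return dv1dx + dv2dy
```

Because the x- and y-difference operators commute, the computed divergence vanishes up to round-off, which is the discrete analogue of div curl = 0 exploited by the de Rham sequence.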

Evaluation of the effect of dimensionality variation on the selection of representative geostatistical realizations for risk quantification, using Principal Component Analysis

The selection of representative models (RMs) for decision-making under uncertainty often carries a high computational cost spent on flow simulations (Schiozer et al., 2019). To reduce this cost, Mahjour et al. (2020) simplified the 300,000 dimensions of the geostatistical realizations of the UNISIM-II-D benchmark model down to two using dimensionality reduction. However, this simplification leads to a loss of variability in the data set. This work therefore applies Principal Component Analysis (PCA) for dimensionality reduction, varying the number of generated dimensions to assess how much information is captured by the simplified system and to find the best workflow configuration for risk quantification. For the case studied, it was observed that the dimensions generated by PCA capture little variability, and do so unevenly with respect to the properties the dimensions represent, such as porosity. A simplified system with few dimensions is therefore not only information-poor but also biased. Regarding risk quantification, regardless of the number of RMs and the clustering technique, increasing the number of generated dimensions not only failed to improve the results but also increased the errors in risk representation. This phenomenon is known in the literature as the "curse of dimensionality". We recommend applying the workflow with few dimensions (between 2 and 4), K-means as the clustering method for RM selection, and as many RMs as possible.
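The PCA-plus-clustering workflow evaluated above can be sketched in a few lines of numpy. This is an illustrative skeleton, not the study's implementation: PCA via SVD, a plain k-means in the reduced space, and the realization closest to each centroid taken as a representative model.

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the centered data matrix.
    Returns the projected data and the explained-variance ratio, which
    indicates how much ensemble variability each dimension retains."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    evr = (S**2 / np.sum(S**2))[:n_components]
    return scores, evr

def kmeans_representatives(Z, k, n_iter=50, seed=0):
    """Plain Lloyd k-means in the reduced space; for each cluster, the
    realization closest to the centroid is taken as the representative."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((Z[:, None] - centers[None])**2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    reps = [int(np.argmin(((Z - c)**2).sum(-1))) for c in centers]
    return sorted(set(reps))
```

The explained-variance ratio returned by `pca` is precisely the quantity the study uses to argue that few PCA dimensions capture little of the ensemble's variability.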


A Study on Image Pre-Processing and PIV Processing Techniques for Fluid Flows

Particle Image Velocimetry (PIV) is a non-intrusive, quantitative technique for visualizing and measuring deformation rates in fluid flows. The performance of the PIV technique is determined by the quality of the recorded images and by the treatment of the acquired data: it relies on homogeneous lighting, good contrast, low background noise, and suitable particle displacement. Since these conditions cannot always be achieved, image pre-processing becomes an important tool for an accurate analysis of the problem. The PIV pre-processing step aims to enhance the correlation signal (displacement peak) and, therefore, produce higher-quality vector fields, based on contrast improvement, brightness correction, and noise removal. In the subsequent processing step, the displacement vectors are computed with a PIV correlation algorithm to obtain the velocity field. This work aims to evaluate and compare the performance of PIV image pre-processing and processing techniques. For this, two types of flows were used, Poiseuille flow and a Rankine vortex, created with a PIV image generator and processed using the PIVlab toolbox, both coded in MATLAB. Three image pre-processing methods are analyzed: i) Contrast Limited Adaptive Histogram Equalization (CLAHE); ii) intensity high-pass; and iii) intensity capping. The accuracy of the direct cross-correlation (DCC) and discrete Fourier transform (DFT) algorithms is also evaluated and discussed.
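Two of the ingredients discussed above can be sketched compactly: intensity capping (clipping bright pixels at median + n·std so a few bright particles do not dominate the correlation peak) and an FFT-based cross-correlation displacement estimate, which is the idea behind the DFT option. A minimal numpy illustration, not PIVlab code; the capping threshold form is one common choice:

```python
import numpy as np

def intensity_capping(img, n_std=2.0):
    """Clip pixel intensities above median + n_std * std, limiting the
    bias of overly bright particles on the correlation peak."""
    cap = np.median(img) + n_std * np.std(img)
    return np.minimum(img, cap)

def dft_displacement(win_a, win_b):
    """Integer displacement of win_b relative to win_a from the peak of
    their FFT-based cross-correlation. Returns (dy, dx); sub-pixel
    peak fitting, used in practice, is omitted here."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(win_a) * np.conj(np.fft.fft2(win_b))))
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    return (int(cy - peak[0]), int(cx - peak[1]))
```

Applying the capping before the correlation is exactly the pre-processing-then-processing pipeline the abstract describes.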


Development of a Particle Tracking Velocimetry (PTV) Measurement Technique for the Experimental Investigation of Oil Drops Behavior in Dispersed Oil-Water Two-Phase Flow within a Centrifugal Pump Impeller

The objective of the current work is to present the development of a robust Particle Tracking Velocimetry (PTV) software for analyzing the behavior of oil drops in dispersed oil-water two-phase flow within a centrifugal pump impeller. The oil drop tracking was performed on high-speed camera acquisitions in a transparent pump prototype, which enabled the visualization of oil drops dispersed in water in all the impeller channels. The PTV software is based on a U-NET and standard convolutional networks, which detect oil drop contours in each frame of the high-speed camera videos. To assess the PTV software's capabilities, a single experiment was analyzed in detail. In this experiment, owing to the pump rotation speed and the water flow rate, intense transient fluctuations in the dispersed oil size distribution were observed in the recorded acquisitions. This procedure completely characterized the instantaneous drop dynamics in the impeller channel. According to the results, there is a strong dependence between the oil injection flow rate, the instantaneous drop size distribution, and the average velocity field.
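Downstream of the drop detection described above, PTV reduces to linking detections between consecutive frames. A minimal nearest-neighbour linking sketch (the actual software uses U-NET detections; the distance-threshold matching criterion here is an assumption for illustration):

```python
import numpy as np

def link_frames(pos_a, pos_b, max_disp):
    """Nearest-neighbour frame-to-frame linking, the simplest PTV step:
    each detected drop centre in frame A is matched to the closest
    centre in frame B if it lies within max_disp pixels. Greedy: once a
    frame-B centre is taken, later A-centres cannot claim it.
    Returns a list of (index_a, index_b) matches."""
    matches, taken = [], set()
    for i, p in enumerate(pos_a):
        d = np.linalg.norm(pos_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in taken:
            matches.append((i, j))
            taken.add(j)
    return matches
```

Chaining such matches over many frames yields the per-drop trajectories from which velocities and size statistics are computed.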


Water Cut Estimation in Electrical Submersible Pumps Using Artificial Neural Networks

Artificial lift is a method used to obtain a higher oil flow rate from a well through some scheme that reduces the pressure at the bottomhole. Electrical submersible pumping is a common method in the petroleum industry. Its main component is the electrical submersible pump (ESP), which can operate with complex flows involving mixtures of oil, water, and gas. The presence of water in oil fields is a problem because it favors the formation of emulsions, which are mixtures of oil and water. Emulsions can be found in oil-in-water and water-in-oil forms, depending on which phase is continuous and which is dispersed. Water-in-oil emulsions considerably increase the viscosity of the mixture and affect the pump's efficiency, diminishing its pumping capacity. An increase or decrease of the water fraction in the process may cause a phenomenon called catastrophic phase inversion (CPI), in which the dispersed phase becomes the continuous one and rapidly alters the physical properties of the flow, causing operational instability throughout the production system. To identify and predict this important phenomenon in complex multiphase flows, advanced identification tools based on experimental data have been employed in recent years. In this work, artificial neural networks are used to estimate the water fraction in a flow that runs through an ESP. For that, data such as inlet and outlet pressures, temperature, vibration, and the corresponding water cut values, among others, were collected from an ESP operating with water and oil. Single-phase and two-phase tests were performed to collect data with different water cut values, ranging from 0% (single-phase oil) to 100% (two-phase water and oil). From the laboratory experiments, it was possible to build a data-driven computational tool capable of estimating the water fraction that runs through the pump, based on an optimized artificial neural network structure, which achieved an R-score of 0.9987.
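The data-driven estimator described above can be caricatured with a tiny network. This is a stand-in for illustration only: the paper's optimized architecture, inputs, and training procedure are not reproduced here, and the one-hidden-layer tanh network and synthetic data below are assumptions.

```python
import numpy as np

def train_mlp(X, y, hidden=16, lr=0.05, epochs=4000, seed=0):
    """Minimal one-hidden-layer regression network (tanh + linear),
    trained by full-batch gradient descent on mean squared error.
    X holds sensor features (pressures, temperature, vibration, ...),
    y the water-cut fraction in [0, 1]. Returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        pred = H @ W2 + b2                # linear output layer
        err = pred - y[:, None]           # (n, 1) residuals
        gW2 = H.T @ err / n; gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H**2)    # backprop through tanh
        gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

With real ESP sensor data in place of the synthetic features, the same fit-then-predict structure underlies the water cut estimator reported in the abstract.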