by Seyed Kourosh Mahjour, Antonio Alberto Souza Santos, Susana Margarida da Graça Santos, Denis José Schiozer, published by the Society of Petroleum Engineers, September 2021, SPE-206300-MS.
In greenfield projects, robust well placement optimization under different uncertainty scenarios typically requires hundreds to thousands of evaluations to be processed by a flow simulator. Because simulating so many evaluations can be computationally expensive, simulation runs are generally applied to a small subset of scenarios, called representative scenarios (RS), that approximately reproduce the statistical features of the full ensemble. In this work, we evaluated two workflows for robust well placement optimization using the selection of (1) representative geostatistical realizations (RGR) under geological uncertainties (Workflow A), and (2) representative (simulation) models (RM) under combined geological and reservoir (dynamic) uncertainties (Workflow B). In both workflows, an existing RS selection technique was used, measuring the mismatches between the cumulative distributions
of multiple simulation outputs from the subset and the full ensemble. We applied the Iterative Discretized Latin Hypercube (IDLHC) method to optimize well placements using the RS sets selected from each workflow, maximizing the expected monetary value (EMV) as the objective function. We evaluated the workflows in terms of (1) the representativeness of the RS across different production strategies, (2) the quality of the defined robust strategies, and (3) computational cost. To obtain and validate the results, we employed the synthetic UNISIM-II-D-BO benchmark case with uncertain variables and the reference fine-grid model, UNISIM-II-R, which serves as the real case. This work investigated the overall impact of the robust well placement optimization workflows considering uncertain scenarios and their application to the reference model. Additionally, we highlighted and evaluated the importance of geological and dynamic uncertainties in RS selection for efficient robust well placement optimization.
Access the full paper.
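The RS selection idea described above — picking a small subset whose empirical cumulative distribution stays close to that of the full ensemble — can be sketched in a few lines of numpy. This is a hypothetical, single-output simplification (the paper combines multiple simulation outputs and uses its own selection technique); the greedy search and all names here are illustrative only.

```python
import numpy as np

def cdf_mismatch(full, subset, grid):
    """Mean absolute difference between the empirical CDFs of the
    full ensemble and a candidate subset, evaluated on a value grid."""
    F_full = np.searchsorted(np.sort(full), grid, side="right") / len(full)
    F_sub = np.searchsorted(np.sort(subset), grid, side="right") / len(subset)
    return np.mean(np.abs(F_full - F_sub))

def select_representative(values, k):
    """Greedily pick k representative scenarios minimizing the CDF
    mismatch for a single simulation output (illustrative sketch)."""
    grid = np.linspace(values.min(), values.max(), 200)
    chosen, remaining = [], list(range(len(values)))
    for _ in range(k):
        best = min(remaining,
                   key=lambda i: cdf_mismatch(values, values[chosen + [i]], grid))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
npv = rng.normal(1000.0, 150.0, size=500)   # synthetic stand-in for an NPV ensemble
rs = select_representative(npv, 10)
```

With 10 scenarios selected this way, the subset's CDF tracks the full ensemble far better than an arbitrary subset of the same size would.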
by Salabarría, J.B.V., Lima, P., Devloo, P.R.B., Triana, O.D., presented at XLII Ibero-Latin-American Congress on Computational Methods in Engineering (CILAMCE-2021) | 3rd Pan American Congress on Computational Mechanics, November 2021.
In this research, the mathematical model represents two-phase flow in a fractured porous reservoir medium, where Darcy's law governs the flow in both the fractures and the matrix. The flux/pressure of the fluid flow is approximated using a hybridized mixed formulation coupling the fluid in the volume with the fluid flow through the fractures. The rock matrix is three-dimensional and is coupled with two-dimensional discrete fractures. The transport equation is approximated using a lower-order finite volume scheme solved with upwinding. The C++ computational implementation is made using the NeoPZ framework, an object-oriented finite element library. The geometric meshes are generated with the software Gmsh. Numerical simulations in 3D are presented, demonstrating the advantages of the adopted numerical scheme, and the approximations are compared with results from other methods.
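The transport step mentioned above — a lower-order finite volume scheme with upwinding — can be illustrated with a minimal 1D Python sketch. This is not the paper's implementation (which couples a 3D matrix with 2D discrete fractures in C++/NeoPZ); it only shows the explicit upwind update for a saturation equation under a given positive total flux, with a Buckley-Leverett-style fractional flow assumed for illustration.

```python
import numpy as np

def upwind_transport(s, u, dx, dt, frac_flow):
    """One explicit finite-volume time step with first-order upwinding,
    assuming a positive total Darcy flux u so the upstream cell is
    always the left neighbor."""
    flux = u * frac_flow(s)                 # flux evaluated at the upstream cell
    s_new = s.copy()
    s_new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return s_new

# Quadratic-relative-permeability fractional flow (illustrative choice)
frac = lambda s: s**2 / (s**2 + (1 - s)**2)

nx = 100
dx, dt, u = 1.0 / nx, 0.002, 1.0            # CFL number dt*u/dx = 0.2
s = np.zeros(nx)
s[0] = 1.0                                  # fully saturated injection boundary
for _ in range(200):
    s = upwind_transport(s, u, dx, dt, frac)
    s[0] = 1.0
```

Because the scheme is monotone under the CFL restriction, the computed saturation front stays within physical bounds and remains non-increasing along the flow direction.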
by Victor B. Oliari, Paulo Rafael Bosing, Denise de Siqueira, Philippe R.B. Devloo, presented at XLII Ibero-Latin-American Congress on Computational Methods in Engineering (CILAMCE-2021) | 3rd Pan American Congress on Computational Mechanics, November 2021.
We present new, fully computable a posteriori error estimates for primal hybrid finite element methods based on equilibrated flux and potential reconstructions. The reconstructed potential is obtained from a local L2 orthogonal projection of the gradient of the numerical solution, with a continuous boundary restriction that comes from a smoothing process applied to the trace of the numerical solution over the mesh skeleton. The equilibrated flux is the solution of a local mixed problem with a Neumann boundary condition given by the Lagrange multiplier of the hybrid finite element solution. To establish the a posteriori estimates, we split the error into conforming and non-conforming parts. For the former, a slight modification of the a posteriori error estimate proposed by Vohralík is applied, whilst the latter is bounded by the difference between the gradient of the numerical solution and the reconstructed potential. Numerical results, obtained in the PZ environment, show the efficiency of this strategy when applied to several model test problems.
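The conforming/non-conforming splitting described above can be sketched in standard a posteriori notation (the symbols $u$, $u_h$, $s_h$, $\boldsymbol{\sigma}_h$ and the precise form below are assumed for illustration, not taken from the paper). By the triangle inequality, with $s_h \in H^1$ the reconstructed potential,

```latex
\|\nabla_h(u - u_h)\|
\;\le\;
\underbrace{\|\nabla(u - s_h)\|}_{\text{conforming part}}
\;+\;
\underbrace{\|\nabla_h(u_h - s_h)\|}_{\text{non-conforming part}},
```

where the non-conforming part is directly computable, and the conforming part is in turn bounded, Prager-Synge style, by the equilibrated flux $\boldsymbol{\sigma}_h$:

```latex
\|\nabla(u - s_h)\|
\;\le\;
\|\boldsymbol{\sigma}_h + \nabla s_h\| + \mathrm{osc}(f).
```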
by Jeferson Wilian Dossa Fernandes, Sonia Maria Gomes, Philippe Remy Bernard Devloo, presented at XLII Ibero-Latin-American Congress on Computational Methods in Engineering (CILAMCE-2021) | 3rd Pan American Congress on Computational Mechanics, November 2021.
Mixed finite element computations arise in the simulation of multiple physical phenomena. Due to their characteristics, such as the strong coupling between the approximated variables, the solution of this class of problems may suffer from numerical instabilities as well as a high computational cost. The de Rham diagram is a standard tool for constructing approximation spaces for mixed problems, as it relates H1-conforming spaces with H(curl)- and H(div)-conforming elements in a simple way by means of differential operators. This work presents an alternative for accelerating the computation of mixed problems by exploring the de Rham sequence to derive divergence-free functions in a robust fashion. The formulation is numerically verified in the 2D case by means of benchmark problems that confirm the theoretical expectations.
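The de Rham sequence referred to above is, in 3D, the exact complex

```latex
\mathbb{R} \;\hookrightarrow\; H^1(\Omega)
\;\xrightarrow{\;\nabla\;}\; H(\mathrm{curl},\Omega)
\;\xrightarrow{\;\nabla\times\;}\; H(\mathrm{div},\Omega)
\;\xrightarrow{\;\nabla\cdot\;}\; L^2(\Omega) \;\rightarrow\; 0,
```

which in 2D (the case verified in the paper) collapses to $H^1 \xrightarrow{\mathbf{curl}} H(\mathrm{div}) \xrightarrow{\nabla\cdot} L^2$. On simply connected domains, exactness means every divergence-free H(div) function is the 2D curl (rotated gradient) of an $H^1$ potential, which is what allows divergence-free basis functions to be generated from standard $H^1$ shape functions.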
by Henrique Hungari Rodrigues, Luís Otávio Mendes da Silva, Susana Margarida da Graça Santos, Denis José Schiozer, presented at IV Congresso Nacional de Engenharia de Petróleo, Gás Natural e Biocombustíveis, May 2021.
The selection of representative models (RMs) for decision-making under uncertainty often carries a high computational cost in flow simulations (Schiozer et al., 2019). To reduce this cost, Mahjour et al. (2020) reduced the 300,000 dimensions of the geostatistical realizations of the UNISIM-II-D benchmark model to two using dimensionality reduction. However, this simplification causes a loss of variability in the dataset. This work therefore applies Principal Component Analysis (PCA) for dimensionality reduction, varying the number of generated dimensions to assess how much information is captured by the simplified system and to find the best workflow configuration for risk quantification. We observed that, for the case studied, the dimensions generated by PCA capture little variability, and do so unevenly across the properties they represent, such as porosity. As a result, a simplified system with few dimensions is not only information-poor but also biased. Regarding risk quantification, regardless of the number of RMs and the clustering technique, increasing the number of generated dimensions not only failed to improve the results but also increased the errors in risk representation. This phenomenon is described in the literature as the "curse of dimensionality". We recommend applying the workflow with few dimensions (between 2 and 4), K-means as the clustering method for RM selection, and as many RMs as possible.
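The reduce-then-cluster step evaluated in this work can be sketched with plain numpy: project the realizations onto a few principal components, track how much variance those components capture, then pick as RM the realization closest to each K-means centroid. All names and the simple K-means loop here are illustrative, not the paper's implementation.

```python
import numpy as np

def pca_reduce(X, n_dims):
    """Project realizations (rows of X) onto the first n_dims principal
    components and report the fraction of total variance captured."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S**2) / (S**2).sum()
    return Xc @ Vt[:n_dims].T, explained[:n_dims].sum()

def kmeans_representatives(Z, k, n_iter=50, seed=0):
    """Cluster the reduced realizations and return, for each cluster,
    the index of the realization closest to the centroid (the RM)."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((Z[:, None] - centers[None])**2).sum(-1), axis=1)
        new_centers = []
        for j in range(k):
            pts = Z[labels == j]
            new_centers.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new_centers)
    return [int(np.argmin(((Z - c)**2).sum(-1))) for c in centers]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 300))     # synthetic stand-in: 200 realizations
Z2, var2 = pca_reduce(X, 2)
Z4, var4 = pca_reduce(X, 4)
reps = kmeans_representatives(Z2, 9)
```

Comparing `var2` with `var4` for a real ensemble is exactly the kind of check the work uses to judge how much information a low-dimensional simplification retains.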
by William Denner Pires Fonseca, Rafael Franklin Lázaro de Cerqueira, Erick de Moraes Franklin, presented at 26th International Congress of Mechanical Engineering (COBEM 2021), November 2021.
Particle Image Velocimetry (PIV) is a non-intrusive, quantitative technique for visualizing and measuring deformation rates in fluid flows. The performance of the PIV technique is determined by the quality of the recorded images and by the treatment of the data after acquisition. PIV heavily depends on the quality of the acquired images, i.e., homogeneous lighting, good contrast, low background noise, and suitable particle displacement. However, these conditions cannot always be achieved, and image pre-processing becomes an important tool for an accurate analysis of the problem. The PIV pre-processing step aims to enhance the correlation signal (displacement peak) and therefore produce higher quality vector fields, based on contrast improvement, brightness correction, and noise removal. After pre-processing, the displacement vectors are computed with a PIV correlation algorithm to obtain the velocity field. This work evaluates and compares the performance of PIV image pre-processing and processing techniques. For this, two types of flows were used, Poiseuille flow and a Rankine vortex, created with a PIV image generator and processed using the PIVlab toolbox, both coded in MATLAB. Three image pre-processing methods are analyzed: i) Contrast Limited Adaptive Histogram Equalization (CLAHE); ii) intensity high-pass; and iii) intensity capping. The accuracy of the DCC (Direct Cross-Correlation) and DFT (Discrete Fourier Transform) algorithms is also evaluated and discussed.
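Of the three pre-processing filters listed, intensity capping is the simplest to illustrate: pixel values above a threshold (commonly the image median plus a few standard deviations) are clipped, so a few overly bright particles cannot dominate the correlation peak. A minimal numpy sketch follows (the threshold rule and names are a common convention assumed here, not taken from the paper; the paper's study uses PIVlab in MATLAB).

```python
import numpy as np

def intensity_capping(img, n_std=2.0):
    """Cap pixel intensities at median + n_std * std, limiting the
    influence of overly bright particles on the PIV cross-correlation."""
    cap = np.median(img) + n_std * img.std()
    return np.minimum(img, cap)

rng = np.random.default_rng(0)
frame = rng.normal(100.0, 10.0, size=(64, 64))   # synthetic PIV frame
frame[10, 10] = 4000.0                           # one overly bright particle
capped = intensity_capping(frame)
```

Only pixels above the cap are modified, so the bulk of the image, and hence the displacement information, is left untouched.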