Boosting Productivity and Reducing Costs in Petroleum E&P
A look into how GPUs have modernized and impacted petroleum simulation, and how this could impact your big data efforts.
For the month of April, I am fulfilling a long-time dream of living in Florence and learning the Italian language. I’ve been thinking about the vastly different world it was when my grandparents left Italy in the 1930s. I arrived comfortably by air in about seven hours, while it took weeks for them to cross the ocean on a ship. Vast technological advances allow me to communicate in real time with my family and my office, essentially at will, from almost anywhere in the world. A mere two generations back, however, it took months for letters to cross the Atlantic. There is great value in speed and in embracing the technology that enables it, as the benefits can be both disruptive and transformative to our life experience.
In the field of reservoir simulation, I think of Stone Ridge Technology’s ECHELON software as one such disruptive technology. Over the last two decades there have been numerous advances in hardware, software and algorithms. In hardware, for example, processor clock speeds plateaued more than a decade ago and architectures shifted first to multi-core and now to many-core designs. Consequently, advances in software performance are now achieved only through a complex hierarchy of parallelism, from coarse-grained domain decomposition down to fine-grained thread-level, instruction-level and data parallelism.
In addition, GPUs have joined CPUs as general-purpose compute platforms and have advanced briskly, outpacing CPU hardware in both memory bandwidth and FLOPS, both of which are critical performance factors for reservoir simulation. ECHELON is the first reservoir simulator written explicitly for GPUs, fully employing coarse-grained parallelism and every available level of fine-grained parallelism; every computational kernel runs on the GPU. ECHELON also uses optimal solver algorithms rather than those that are more easily parallelized but algorithmically inefficient. The software was developed from scratch with a modern, high-performance, object-oriented design that allows new and evolving engineering features to be added quickly.
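The trade-off between easily parallelized solvers and algorithmically efficient ones can be seen even in a tiny example. The sketch below (illustrative only, not ECHELON code) solves a small symmetric positive definite system, the kind of linear algebra at the heart of implicit reservoir simulation, with two methods: Jacobi iteration, which parallelizes trivially but converges slowly, and the conjugate gradient method, which is algorithmically far stronger.

```python
# Illustrative comparison: an easily parallelized but slowly converging
# solver (Jacobi) vs. an algorithmically efficient Krylov method
# (conjugate gradient) on a small SPD system from a 1D diffusion stencil.

def make_system(n):
    # Tridiagonal 1D diffusion matrix: symmetric positive definite
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    return A, [1.0] * n

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def residual_norm(A, x, b):
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
    return sum(ri * ri for ri in r) ** 0.5

def jacobi(A, b, tol=1e-8, max_iter=100000):
    n = len(b)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        # Simultaneous update: each row uses only the previous iterate,
        # which is why Jacobi maps so easily onto parallel hardware
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        if residual_norm(A, x, b) < tol:
            return x, k
    return x, max_iter

def conjugate_gradient(A, b, tol=1e-8, max_iter=10000):
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual; x starts at zero
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for k in range(1, max_iter + 1):
        Ap = matvec(A, p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            return x, k
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, max_iter

A, b = make_system(30)
_, jacobi_iters = jacobi(A, b)
_, cg_iters = conjugate_gradient(A, b)
print(f"Jacobi iterations: {jacobi_iters}, CG iterations: {cg_iters}")
```

Even at this toy scale, CG converges in a few dozen iterations while Jacobi needs thousands; the point of a GPU-native design is to get the parallel hardware *and* the strong algorithm, not one at the expense of the other.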
By embracing newer, dense GPU systems, modern algorithms and modern software design, ECHELON achieves performance levels that significantly outpace the competition. Compared to CPU-based hardware and software alternatives, ECHELON is exceptionally fast, running 10 to more than 100 times faster on real field assets containing complex engineering features. Furthermore, given the GPU technology roadmap announced in early April at NVIDIA GTC 2016, the performance gap between ECHELON and its CPU-bound competitors will only widen in the coming years.
We have seen that the combination of ECHELON and the Cray® CS-Storm system’s dense GPU platform makes a compelling solution for organizations focused on increasing the productivity of their field assets, in an economic climate that demands lowered costs. The value of speed, the ability to model at the geo-scale and the ability to employ more compact, powerful hardware have emerged as salient requirements in the E&P industry. Running on Cray/NVIDIA hardware, ECHELON can enhance productivity and lower costs for companies in several ways:
ECHELON better quantifies and reduces the uncertainty in model predictions. The idea that there is a single canonical model that describes subsurface reservoir properties is an outdated assumption. It is more accurate to describe reservoir properties with probability distributions, and any predictions about reservoir productivity should include an efficient sampling of those distributions. However, uncertainty quantification and sensitivity analysis produce a large ensemble of models, and running all such simulations has been difficult or impossible in the past because each model may take days to run. ECHELON’s immense speed and efficiency, coupled with the Cray CS-Storm system’s dense, accelerator-optimized performance, enables an accurate ensemble approach to simulation that is better grounded in statistics.
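The ensemble idea can be sketched in a few lines. The snippet below is a hypothetical illustration (the decline model and parameter distributions are invented for the example, not taken from ECHELON): uncertain reservoir properties are drawn from probability distributions, the "simulator" is run over the whole ensemble, and the prediction is reported as percentiles rather than a single number.

```python
# Hypothetical ensemble workflow: sample uncertain properties, run the
# model per sample, report P10/P50/P90 instead of one canonical answer.
import random

random.seed(42)

def simulate_recovery(permeability_md, porosity):
    # Stand-in for a full reservoir simulation: a toy response surface
    # where recovered volume grows with permeability and porosity.
    return 1000.0 * porosity * (permeability_md / (permeability_md + 50.0))

# Uncertain inputs described by probability distributions (illustrative):
# log-normal permeability in millidarcies, Gaussian porosity.
ensemble = [
    simulate_recovery(
        permeability_md=random.lognormvariate(4.0, 0.5),
        porosity=random.gauss(0.20, 0.03),
    )
    for _ in range(1000)
]

ensemble.sort()
p10 = ensemble[int(0.10 * len(ensemble))]
p50 = ensemble[int(0.50 * len(ensemble))]
p90 = ensemble[int(0.90 * len(ensemble))]
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

In practice each ensemble member is a full simulation run, which is exactly why per-model runtimes of hours rather than days make the statistical approach feasible.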
ECHELON can complete projects more quickly at every stage. When reservoir engineers can turn around multiple models in one day, their productivity is enhanced and they can make more accurate, stable and robust predictions. In addition, the longer a project draws out, the more disruptive events can derail it. Price swings, management reorganizations, asset sales and personnel shifts all negatively affect project productivity. By executing projects more quickly we can reduce the impact of these disruptive events.
ECHELON can simulate at the geo-scale. Typically, models with tens of millions to hundreds of millions of cells run prohibitively slowly in legacy software, so engineers have coarsened their models through upscaling. Unfortunately, upscaling takes time and is most often an irreversible, heuristic process that destroys information and introduces uncertainty. Eliminating this step is a major advance because it allows a direct comparison between the geologic model and the reservoir model. In one test, for example, SRT and Cray demonstrated a 243-million-cell model simulating 45 years of production in about two and a half hours on just two CS-Storm nodes.
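To see why upscaling is irreversible, consider this minimal sketch (an illustration of the general technique, not SRT's workflow): a column of fine-grid permeabilities is collapsed into coarse blocks by harmonic averaging, the standard choice for flow in series across layers. Many different fine-scale fields map to the same coarse values, so the fine-scale contrast that controls flow cannot be recovered from the upscaled model.

```python
# Illustrative upscaling: collapse fine-grid permeability into coarse
# blocks via harmonic averaging (appropriate for flow in series).
# The result is a lossy summary of the fine-scale field.

def upscale_harmonic(fine_perm, factor):
    coarse = []
    for i in range(0, len(fine_perm), factor):
        block = fine_perm[i:i + factor]
        coarse.append(len(block) / sum(1.0 / k for k in block))
    return coarse

fine = [100.0, 1.0, 100.0, 1.0, 50.0, 50.0, 10.0, 10.0]  # md, fine grid
coarse = upscale_harmonic(fine, factor=4)
print(coarse)  # two coarse cells summarizing eight fine cells
```

The high-contrast layers (100 md next to 1 md) and the uniform ones both vanish into single averaged values; simulating directly on the fine grid avoids that information loss entirely.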
Published at DZone with permission of Adnan Khaleel, DZone MVB. See the original article here.