Enterprise Services Blog
Get the latest thought leadership and information about the role of Enterprise Services in an increasingly interconnected world at HP Communities.

Improving Oil & Gas Analysis Results with High Performance Computing

I have been working recently with some of our Oil & Gas clients who want to decrease the time it takes to process and review O&G resource data and determine where to drill and produce potential reserves. While projects to improve time to oil span multiple areas in upstream, some of the most intense activity focuses on the primary questions O&G has addressed since the 1800s: Where should I drill for O&G? How can I produce O&G economically with today's technology? What impacts will exist, and how can I optimize production while producing from a reserve?


Gasoline, natural gas, and other products produced from O&G reserves power the majority of our transportation infrastructure, as well as some of our power generation. The process of extracting them from the ground, refining them into usable products, and distributing them for consumption underpins how society currently functions. Computing technology, analytics, and delivering the right information to the right people at the right time are all components of successfully finding, producing, and optimizing production of these resources.


Analyzing geology to identify where hydrocarbon reserves are located is the first step in this process. It involves analyzing terabytes of data with specialized programs designed to indicate where O&G may be trapped beneath the surface. Because hydrocarbons may be trapped under a dome of rock, along a fault or fracture, or layered under a non-permeable rock formation, different techniques may be required to access and produce them. How the hydrocarbons are trapped, and what technique is used to extract them, can make the difference between profitably producing the resource and failing to cover extraction costs.


This first analysis step is aided by continuing advances in High Performance Computing (HPC). Once the domain of expensive supercomputers, HPC capabilities advanced rapidly in the 1990s as commodity-grade computers were integrated into compute clusters whose software allowed application and data processing to be shared across nodes. This 'scale-out' approach enables cost-effective, high-performance parallel computing, and it precipitated a revolution in analysis for O&G companies. It also supports cost-effective scaling as the number of analysis applications in an O&G company's application inventory increases.
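The scale-out idea described above can be sketched as a simple data-parallel job. This is only an illustration, not any vendor's actual software: `process_trace` is a hypothetical stand-in for a real seismic processing kernel, and the worker pool stands in for cluster nodes.

```python
from multiprocessing import Pool

def process_trace(trace):
    # Hypothetical per-trace computation, standing in for a real
    # seismic kernel (e.g. filtering or migration). Here it just
    # computes the trace's total energy (sum of squared samples).
    return sum(sample * sample for sample in trace)

def run_cluster_job(traces, workers=4):
    # Scale-out: divide the dataset across independent workers,
    # each processing its share in parallel, then gather results.
    with Pool(processes=workers) as pool:
        return pool.map(process_trace, traces)

if __name__ == "__main__":
    # 64 synthetic traces of 1,000 samples each.
    traces = [[float(i % 7) for i in range(1000)] for _ in range(64)]
    energies = run_cluster_job(traces)
    print(len(energies))  # one result per trace
```

Adding workers (or, in a real cluster, nodes) lets the same code chew through more traces in the same wall-clock time, which is the essence of the scale-out approach.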


With O&G software providers such as Schlumberger and Halliburton leveraging an HPC approach, processing cycles can be reduced from weeks or days to hours. This enables not only faster decision making but also improvements in the analysis process, with more precise information and images for targeted subsurface geologies. In fact, one of the largest commercial subsurface analysis systems, with a processing capability of 2.2 petaflops, 1,000 terabytes of memory, and over 24 petabytes of disk space, is currently being built on the west side of Houston, TX. What is possible with this type of computing power? Its owner hopes to model and predict fluid flow from high-resolution 3D subsurface images at a resolution and scale equivalent to 1/50th of the thickness of a human hair.



An HPC cluster doesn't have to be massive to reduce computation times and improve analysis capabilities. With even a modest cluster delivering half a petaflop and storing two petabytes of data, engineers and geotechnical professionals can significantly reduce the time required for analysis, providing payback times measured in months, not years.

Installing these systems can be complex. With 320 cores of processing capability packed into a standard four-unit (4U) enclosure, power, cooling, and network and power cabling must all be carefully considered, especially when these systems need to provide full redundancy. Data center teams must understand the level of density, and its potential impacts, that these new compute environments represent. Graphics is another consideration: dedicated GPUs may be needed to drive analysis across multiple high-resolution 30-inch monitors.
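A quick back-of-the-envelope calculation shows why that density demands attention from data center teams. The 320-cores-in-4U figure comes from the text; the 42U full-height rack is an assumption for illustration (a real deployment would also reserve rack space for switches and power distribution):

```python
CORES_PER_ENCLOSURE = 320   # from the text: 320 cores in a 4U enclosure
ENCLOSURE_HEIGHT_U = 4
RACK_HEIGHT_U = 42          # assumption: standard full-height rack

# How many enclosures fit, and how many cores that concentrates
# into a single rack footprint.
enclosures_per_rack = RACK_HEIGHT_U // ENCLOSURE_HEIGHT_U
cores_per_rack = enclosures_per_rack * CORES_PER_ENCLOSURE

print(enclosures_per_rack, "enclosures,", cores_per_rack, "cores per rack")
```

Thousands of cores drawing power and shedding heat in a single rack footprint is exactly the kind of concentration that forces the power, cooling, and cabling conversations described above.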


This much processing power requires efficient management. Advanced techniques for system self-management and security are mandatory for cost-effective scaling of HPC clusters. While initially targeted at managing support costs, self-management software is enabling more self-regulating behaviors within these compute clusters. I can use these packages to automatically correct errors, power down systems not in use, reconfigure systems to support different software packages and operating systems based on demand, or prioritize specific jobs and tasks based on pre-set conditions. As these capabilities mature, they will evolve beyond cost savings and enable increasingly complex software systems that scale or schedule themselves for optimized productivity.
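Two of the self-regulating behaviors mentioned above, powering down idle nodes and prioritizing jobs against pre-set conditions, can be sketched as a simple policy loop. All names here (`Node`, `Job`, `apply_policies`, the 30-minute threshold) are hypothetical illustrations, not the API of any real management package:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    idle_minutes: int = 0
    powered_on: bool = True

@dataclass
class Job:
    name: str
    priority: int  # higher-priority jobs run first

def apply_policies(nodes, jobs, idle_threshold=30):
    # Policy 1: power down any node idle longer than the threshold,
    # trimming energy costs without operator intervention.
    for node in nodes:
        if node.powered_on and node.idle_minutes >= idle_threshold:
            node.powered_on = False
    # Policy 2: reorder the job queue so pre-set priorities
    # determine what runs next.
    jobs.sort(key=lambda job: job.priority, reverse=True)
    return nodes, jobs
```

Real cluster managers layer many such policies (error correction, on-demand reprovisioning) over live telemetry, but each is fundamentally this shape: observe state, compare against pre-set conditions, act.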


In the future I'll explore more about these capabilities, the security concerns they raise, the role analytics plays in the production process, and how computing capabilities help. Until then, you can find more information about HPC at HP's web site, hp.com/go/hpc. For more information about HP and the Oil & Gas industry, please visit hp.com/go/oilgas.
