
Supercomputers in the oil and gas landscape

Hydrocarbon Engineering


Low oil prices have sent many exploration and production companies scrambling to optimise their current production and find new low cost drilling targets in order to meet consumer demand. In this environment, high performance computing (HPC) has become critical for companies looking to operate ever more efficiently.

Supercomputers enable oil and gas organisations to run simulations, modelling and other tasks with the aim of making more informed decisions. As the hardware has advanced, so have the algorithms used to generate these outcomes, and this has significantly altered the playing field, requiring companies to implement increasingly powerful, purpose-built solutions in order to stay competitive.

Quality and complexity

Complex algorithms are being utilised in many fields of the industry, particularly in seismic processing, an area which requires high-fidelity images. Many processing companies have run into a barrier: the more complex algorithms and dense datasets necessary for today’s seismic simulations put a tremendous strain on commodity clusters.

Though the possibilities presented by mathematics have always outpaced hardware, it has now become possible to complete tasks such as full waveform inversion in a matter of weeks or days, rather than years.
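To make the computational burden concrete, the toy Python sketch below runs a few illustrative full waveform inversion iterations on a one-dimensional model. The convolutional forward model, the wavelet and the step size are all invented for illustration; production codes replace the forward step with 3D wave-equation solvers over dense wavefields, which is precisely the workload that drives supercomputer demand.

    import numpy as np

    def forward_model(model, wavelet):
        # Toy stand-in for a wave-equation solver: convolve a reflectivity-like
        # model with the source wavelet. Real FWI uses 3D finite-difference
        # propagation here, which dominates the compute cost.
        return np.convolve(model, wavelet, mode="same")

    def fwi_update(model, observed, wavelet, step=0.1):
        # One steepest-descent iteration on a least-squares data misfit.
        synthetic = forward_model(model, wavelet)
        residual = synthetic - observed
        misfit = 0.5 * np.sum(residual ** 2)
        # For this linear toy operator the adjoint is correlation with the wavelet.
        gradient = np.correlate(residual, wavelet, mode="same")
        return model - step * gradient, misfit

    wavelet = np.array([0.2, 1.0, 0.2])
    true_model = np.zeros(50)
    true_model[25] = 1.0
    observed = forward_model(true_model, wavelet)

    model = np.zeros(50)
    for _ in range(20):
        model, misfit = fwi_update(model, observed, wavelet)
    print(round(misfit, 6))  # misfit shrinks as the model converges

Each iteration requires at least one forward and one adjoint simulation; scaling this to millions of grid cells and thousands of shots is what turns a toy loop into a supercomputing problem.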

Shifting infrastructure

While innovations in seismic processing have traditionally come from the introduction of new, faster CPUs coupled with algorithms designed to take advantage of that hardware, some algorithms must now be spread across multiple nodes, making interconnects such as Ethernet and InfiniBand inadequate. The input/output (I/O) demands that modern algorithms place on any single node are also becoming more extreme.
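The reason the interconnect becomes the bottleneck is that distributed solvers must exchange boundary data every time step. The minimal mpi4py sketch below shows that pattern under stated assumptions: the array size, neighbour layout and halo width are invented for illustration, and MPI with mpi4py is assumed to be installed (run with something like "mpiexec -n 4 python halo.py").

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns one slice of the model; neighbouring slices must swap
    # boundary ("halo") values every time step, so interconnect latency and
    # bandwidth directly limit how far the computation scales.
    local = np.full(1000, float(rank))
    left = (rank - 1) % size
    right = (rank + 1) % size

    recv_left = np.empty(1, dtype=float)
    recv_right = np.empty(1, dtype=float)
    comm.Sendrecv(sendbuf=local[-1:], dest=right, recvbuf=recv_left, source=left)
    comm.Sendrecv(sendbuf=local[:1], dest=left, recvbuf=recv_right, source=right)

As node counts grow, these small, frequent exchanges happen millions of times per run, which is why purpose-built HPC interconnects matter as much as raw processor speed.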

The new reality is that adding more processing power to existing hardware is no longer a viable solution. In order to meet performance requirements, systems must be built from the ground up.

Applications and architecture

In order to run top-of-the-line applications, hardware with Massively Parallel Processing (MPP) characteristics is necessary. The significant projects many large companies plan to implement would require computing power orders of magnitude greater than what is currently available to them. For example, one multinational energy company estimated that by 2020 it will need more than 100 petaflops to conduct the seismic analysis and processing necessary for newer, highly dense three-dimensional surveys.

Going forward, the oil and gas sector will need to adopt solutions that utilise innovative methods – as opposed to ones based on conventional operating models – in order to run its processing resources at maximum efficiency. There will be a growing focus on HPC systems that specialise in moving data between supercomputing nodes.

Hardware revolution

Graphics Processing Units (GPUs) and other many-core processors are now augmenting multicore CPUs in order to meet the increasing demands of seismic processing workflows.
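The sketch below illustrates how NumPy-style array code can be offloaded to a GPU. CuPy is used here only as one convenient example of such offloading; it is an assumption of this sketch and is not named in the article, and the array size and stencil are invented for illustration.

    import numpy as np
    try:
        import cupy as xp   # GPU path if CuPy and a CUDA device are available
    except ImportError:
        xp = np             # otherwise fall back to the CPU

    # A simple 1D Laplacian stencil of the kind found inside finite-difference
    # wave propagation; identical array code runs on the GPU when xp is CuPy.
    field = xp.random.random(10_000_000)
    laplacian = field[:-2] - 2.0 * field[1:-1] + field[2:]

Stencil updates like this are highly data-parallel, which is exactly the shape of work on which many-core processors outperform conventional CPUs.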

As supercomputing resources become more powerful and scalable, the industry is inching closer to the integration of the complete asset team, which would see the entire staff work from the same model, ultimately resulting in a shorter turnaround time.

Reservoir analysis

Advances in supercomputing are expected to have a massive impact on extraction rates through step-change improvements in reservoir analysis. Increases in speed have allowed reservoir engineers to study more model realisations than ever before. With the right computing power and human expertise, companies can expect to run a 45-year production simulation in as little as two-and-a-half hours.
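Running many model realisations is an embarrassingly parallel workload, which is why extra compute translates so directly into more scenarios studied. The sketch below spreads a set of realisations across local processes; the decline-curve "simulator", seeds and numbers are hypothetical stand-ins invented for illustration, not a real reservoir simulator.

    from concurrent.futures import ProcessPoolExecutor
    import random

    def simulate_realisation(seed, years=45):
        # Hypothetical stand-in for a reservoir simulator: a noisy decline
        # curve. Real simulators solve coupled flow equations on large grids,
        # which is where supercomputing speed-ups matter.
        rng = random.Random(seed)
        rate, total = 1000.0, 0.0
        for _ in range(years * 365):
            rate *= 1.0 - rng.uniform(0.0, 0.0005)
            total += rate
        return total

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            totals = list(pool.map(simulate_realisation, range(100)))
        print(min(totals), max(totals))

On a supercomputer the same pattern scales to thousands of full-physics realisations, giving engineers a distribution of outcomes rather than a single forecast.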

The growth of analytics

New levels of computing power – along with the growth in sensors and volumes of data – will likely put the oil and gas industry on new footing, as data analytics continue to play a larger role in the exploration and production workflow.

High performance data analytics now makes it possible to collect data from thousands of wellheads, machines, sensors, applications and vehicles, though much of this data is still not being used to its full potential. Integrated platforms, however, can combine conventional HPC workflows with newer techniques such as streaming analytics and Spark.
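As a minimal sketch of the streaming side, the PySpark example below aggregates hypothetical wellhead sensor readings in ten-minute windows as they arrive. The input path, schema and window length are assumptions made for illustration; they are not taken from the article.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("wellhead-stream").getOrCreate()

    # Hypothetical stream of wellhead sensor readings landing as JSON files.
    readings = (spark.readStream
                .schema("well_id STRING, pressure DOUBLE, event_time TIMESTAMP")
                .json("/data/wellhead/incoming"))

    # Average pressure per well over ten-minute event-time windows.
    averages = (readings
                .withWatermark("event_time", "10 minutes")
                .groupBy(F.window("event_time", "10 minutes"), "well_id")
                .agg(F.avg("pressure").alias("avg_pressure")))

    query = (averages.writeStream
             .outputMode("append")
             .format("console")
             .start())
    query.awaitTermination()
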

This opens the potential for teams to conduct pattern and correlation analysis concurrently with seismic processing workflows, effectively reducing the number of necessary iterations. Beneficial geophysical attributes can be picked up via seismic wavelet analysis, even if they are not apparent in the mathematical model.
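To illustrate the wavelet idea, the sketch below applies a continuous wavelet transform to a synthetic trace in which a weak high-frequency burst stands in for an attribute of interest. PyWavelets, the Morlet wavelet and the synthetic signal are assumptions chosen for illustration rather than the specific methods referenced above.

    import numpy as np
    import pywt

    # Synthetic seismic-style trace: low-frequency background plus a short,
    # weak high-frequency burst representing an attribute of interest.
    t = np.linspace(0, 1, 2000)
    trace = np.sin(2 * np.pi * 10 * t)
    trace[900:950] += 0.5 * np.sin(2 * np.pi * 80 * t[900:950])

    # Continuous wavelet transform; energy concentrated at small scales around
    # the burst flags the anomaly even though it is faint in the raw trace.
    scales = np.arange(1, 64)
    coefficients, frequencies = pywt.cwt(trace, scales, "morl")
    print(coefficients.shape)   # (len(scales), len(trace))
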

Meeting the needs of the future, now

Companies can now place high performance data analytics in areas formerly occupied by mathematically modelled workflows. Analysing data as it arrives allows companies to determine immediately whether the information they are receiving is reliable, saving both time and resources.

In the current climate, oil and gas companies must be able to find and produce hydrocarbons faster, more efficiently and more safely than ever before. A worldwide shift towards sustainability has made energy efficiency crucial, while increasingly limited resources have intensified competition between companies. The right knowledge can give companies an edge over the competition, while streamlining business functions and increasing safety in the field.

The ever-changing demands of the present and the future cannot be fulfilled by the supercomputing models of the past, but the solutions currently being developed can ensure these needs are met.

Written by Bert Beals, Global Head of Energy at Cray Inc.

Read the article online at: https://www.hydrocarbonengineering.com/special-reports/05012016/beals-supercomputers-in-the-oil-and-gas-landscape-2065/
