
Thursday, 19 September 2019

HPC Challenges and Opportunities in Oil and Gas

Main industry challenges in the HPC area:

As oil & gas prices continue to grow, there is always a need for additional investments for the development of new fields. Hence comes the key trend where operations become deeper, harsher and  more remote.
Therefore and in order to be profitable, all these challenges require higher technology intensity, complexity and level of integration.
In other words, one has to be more efficient to improve the performance of the organisation. Thus we believe that investment in HPC is one of the most reasonable and key steps to take to improve your CAPEX, balance your OPEX and ameliorate the final outcome.
Lease of an oil well comes at high cost and on top of it restarting a reserve requires some serious business implications. As the consequences we see a huge potential for high returns but it does come at high risk and numerous obligations. Therefore in this case it is imperative that everything goes right.

Now let's discuss a number of challenges that the oil & gas industry faces daily in the HPC field:

Big Data:

When we look at the requirements from both the compute and the data standpoints, we can see that they are huge and constantly growing. One of our customers surveyed a land area of about 2,000 sq. km (taking more than 500,000 shots to do so), and the resulting data set was about 200 TB.
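A quick back-of-envelope check of those numbers (our own derivation, not the customer's figures):

```python
# Rough sanity check of the survey figures quoted above.
survey_area_km2 = 2_000
shots = 500_000
total_bytes = 200e12                                                # 200 TB

print(f"~{total_bytes / shots / 1e6:.0f} MB per shot")              # ~400 MB/shot
print(f"~{total_bytes / survey_area_km2 / 1e9:.0f} GB per sq. km")  # ~100 GB/km^2
```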

Here is an example from everyday life: if you were to transfer those 200 TB from your computer to USB storage over the familiar USB 2.0 interface, you would spend more than three months just moving the data! (This is simply an example to give you a better feel for the data size.)

However, even with industry-specific solutions, transferring the data from standard tapes to an HPC data centre would still take you more than two weeks! Hence we see a crucial need for reliable, fast storage to hold and process all of this data.
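To see where those figures come from, here is a minimal sketch; the sustained throughput numbers are illustrative assumptions, not measurements:

```python
# Rough transfer-time estimates for 200 TB at assumed sustained rates.
TB = 1e12

def days_to_transfer(size_bytes, mb_per_s):
    return size_bytes / (mb_per_s * 1e6) / 86_400  # 86,400 seconds per day

print(f"USB 2.0 (~25 MB/s sustained): {days_to_transfer(200 * TB, 25):.0f} days")  # ~93 days, i.e. ~3 months
print(f"Tape drive (~160 MB/s):       {days_to_transfer(200 * TB, 160):.0f} days") # ~14 days, i.e. ~2 weeks
```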

Growing demand for Interconnect:

Here is another real-life analogy. Let's say you bought the latest high-end PC for online gaming, but decided to save some cash: instead of paying an additional 10% for broadband internet, you chose dial-up. Would that affect your online gaming? Yes, and by far more than the 10% you saved.
If we apply this analogy to seismic imaging and modelling, the scalability of high performance computing systems is bounded by parameters such as floating-point capability, memory bandwidth and latency, interconnect bandwidth and latency, and the storage subsystem. So before making a purchase decision, we suggest you thoroughly consider all of these requirements.
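A common way to reason about the interconnect part of that list is the simple latency-plus-bandwidth cost model; the parameter values below are illustrative assumptions only:

```python
# "Alpha-beta" model: time to move a message = latency + size / bandwidth.
def message_time_s(size_bytes, latency_s, bandwidth_bytes_per_s):
    return latency_s + size_bytes / bandwidth_bytes_per_s

# Small messages are latency-bound, large messages are bandwidth-bound,
# so BOTH parameters limit how far an application can scale.
for size in (1e3, 1e6, 1e9):
    t = message_time_s(size, latency_s=2e-6, bandwidth_bytes_per_s=5e9)
    print(f"{size:10.0e} B -> {t * 1e6:12.1f} us")
```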

Increasing OPEX (system power consumption, cooling, etc.):

For the past 20 years, data centre power consumption was about 20-50 kW, and your distribution network could be supported using diesel generators and static UPS units alone. Overhead costs of approximately 15-20% were acceptable.
But today we are in a different era, Petascale (and, 3 to 5 years from now, Exascale), where power consumption has become a real problem: a single HPC system is likely to reach a whopping 25-30 MW (compared with the previous 20-50 kW)! The traditional, low-efficiency approach to power supply is no longer acceptable.
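To put that overhead into money terms, here is a back-of-envelope calculation; the electricity tariff is an assumption:

```python
# What a 20% power-distribution/cooling overhead costs at Petascale.
it_load_mw = 25                       # IT load of a large HPC system
overhead = 0.20                       # 20% on top of the IT load
usd_per_kwh = 0.10                    # assumed electricity tariff

wasted_kw = it_load_mw * 1_000 * overhead
annual_cost_usd = wasted_kw * 24 * 365 * usd_per_kwh
print(f"Overhead alone: {wasted_kw:,.0f} kW -> ${annual_cost_usd:,.0f} per year")
# ~5,000 kW of overhead -> roughly $4.4M per year at $0.10/kWh
```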

HPC = hardware + software + …  people

HPC is not always just about the hardware or software; it is also about people.
How do you train them to use sophisticated HPC tools to manage and control a cluster?
How do you shorten the learning curve for newcomers?
Training new staff is also a growing concern for management teams.


Main solutions (from NOVATTE's previous experience):

  • PARALLEL FILE SYSTEM STORAGE:

For most of our oil & gas customers, we recommend the NOVATTE Lustre Appliance, which uses the same parallel distributed file system (Lustre) used in fifteen of the top 30 supercomputers, including Sequoia, the world's fastest supercomputer on the June 2012 TOP500 list.

Parallel file storage is regarded as a critical element for the timely processing of large volumes of seismic data. Its real advantage is the fast fabric that connects file system clients to many storage servers in parallel.
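The intuition behind that advantage can be captured in one line: aggregate throughput grows with the number of storage servers until the fabric saturates. A simplified sketch with assumed per-server and fabric bandwidths:

```python
# Simplified model of parallel file system throughput: clients stripe
# I/O across many object storage servers, so aggregate bandwidth scales
# with server count until the network fabric becomes the bottleneck.
def aggregate_gb_per_s(num_servers, server_gb_per_s=1.5, fabric_gb_per_s=40):
    return min(num_servers * server_gb_per_s, fabric_gb_per_s)

for n in (1, 4, 16, 64):
    print(f"{n:3d} storage servers -> {aggregate_gb_per_s(n):5.1f} GB/s")
```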

Our Lustre Storage Appliance consists of two types of modules: high-IOPS SSD modules and high-capacity SAS modules.

To increase application performance, we recommend modules with high-end SSD drives. They deliver much lower read access times (0.1 ms vs 13.6 ms for 7.2K SATA), a huge improvement in IOPS (up to 20,000 vs roughly 100 for 7.2K SATA) and 10-15 times lower power consumption than conventional SATA drives.

This gives you more performance and more flexibility when building your high-performance storage; the sketch below puts the IOPS figures in perspective.
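Using the IOPS figures quoted above, a short illustration of what they mean for a burst of random reads:

```python
# Time to service one million random reads at the quoted IOPS figures.
random_reads = 1_000_000

for name, iops in (("7.2K SATA (~100 IOPS)", 100),
                   ("high-end SSD (~20,000 IOPS)", 20_000)):
    seconds = random_reads / iops
    print(f"{name}: {seconds:,.0f} s")
# 10,000 s (~2.8 hours) for SATA vs 50 s for SSD -- a 200x gap
```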

  • POWER CONSUMPTION:

Power is one of the main commodities required by modern technology, so we apply our engineering knowledge to maximise efficiency wherever we can. Several of our projects use networked sensors to monitor data centre temperature profiles, humidity and air pressure under the data centre floor at any point in time. These sensors allow our customers not only to analyse the efficiency of their cooling systems but also to tune the facility for maximum efficiency.
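As one example of what such monitoring enables, here is a minimal sketch of a PUE (Power Usage Effectiveness) check; the reading values are hypothetical stand-ins for whatever your sensor network reports:

```python
# PUE = total facility power / IT equipment power (1.0 is the ideal).
def pue(facility_kw: float, it_kw: float) -> float:
    return facility_kw / it_kw

# Hypothetical readings from a sensor network like the one described above.
print(f"PUE = {pue(facility_kw=1_150, it_kw=1_000):.2f}")  # 1.15 -> 15% overhead
```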

Moreover, with power densities in some data centres heading towards 100 kW per rack, we see the trend moving towards liquid cooling.
There is one main reason for this trend: liquids are hundreds of times denser than air and conduct heat roughly 10-20 times better, so they remove far more heat than air alone.
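The physics behind this is easy to check with textbook values for water and air:

```python
# Heat carried per unit volume per kelvin = density * specific heat.
# Standard room-temperature values for water and air.
water_j_per_m3_k = 1000 * 4186   # kg/m^3 * J/(kg*K)
air_j_per_m3_k   = 1.2 * 1005

ratio = water_j_per_m3_k / air_j_per_m3_k
print(f"Water carries ~{ratio:,.0f}x more heat per m^3 per K than air")
# ~3,500x by volume; water's thermal conductivity (0.6 vs 0.026 W/(m*K))
# is also roughly 20x that of air, in line with the figure above.
```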

  • INTERCONNECT:

In most cases we recommend that our customers use QDR/FDR (40/56 Gbps) InfiniBand interconnect.

Compared with a standard 10GbE interconnect, this delivers up to 10x lower latency, up to 4x higher throughput and RDMA features, helping your CPU work more efficiently, especially if you have bought the latest Xeon E5-2690 series CPUs and want to be sure they deliver 100% of their performance.
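Plugging ballpark figures into the latency-plus-bandwidth model from the interconnect section above shows where those gains come from; the numbers are illustrative, not benchmark results:

```python
# Message time in microseconds: latency + size / bandwidth.
def t_us(size_bytes, latency_us, gbit_per_s):
    return latency_us + size_bytes * 8 / (gbit_per_s * 1e3)  # 1 Gb/s = 1e3 bits/us

for size in (64, 64_000, 1_000_000):
    t_eth = t_us(size, latency_us=10.0, gbit_per_s=10)  # typical 10GbE
    t_ib  = t_us(size, latency_us=1.0,  gbit_per_s=40)  # QDR InfiniBand
    print(f"{size:>9,d} B: 10GbE {t_eth:8.1f} us vs InfiniBand {t_ib:8.1f} us")
```

Small messages benefit mostly from the lower latency, large transfers from the higher bandwidth, and RDMA keeps the CPU out of the data path in both cases.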

As an example of InfiniBand versus 10GbE, have a look at the figure below. Results differ from one project to another, but the primary trend remains the same.

[Figure: application performance with InfiniBand vs 10GbE interconnect]

(Courtesy of HPC Advisory Council, 2009)


Call to action!

If you look at the performance results of the TOP500 supercomputers and compare #1 to #500, you will see the following:

In 1997 you had to reach 1 TFLOPS to be #1, but by 2004 1 TFLOPS+ only earned a much lower position in the TOP500. 1 PFLOPS was first achieved in 2008, and according to specialists' forecasts, by 2014-2016 you will need 1 PFLOPS+ even for a bottom position on the TOP500 list.
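Those milestones imply a remarkably steady growth rate, which you can derive in two lines:

```python
# Growth implied by the milestones above: 1 TFLOPS (1997) -> 1 PFLOPS (2008).
import math

yearly = (1e15 / 1e12) ** (1 / (2008 - 1997))           # 1000x over 11 years
months_to_double = math.log(2) / math.log(yearly) * 12
print(f"~{yearly:.2f}x per year, doubling roughly every {months_to_double:.0f} months")
# ~1.87x per year -- top-end performance doubles about every 13 months
```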

Technologies keep changing, and today you need to look further ahead: not only know how powerful your system is now, but also understand how powerful it will be in 3-5 years, how scalable it is and how efficient it could be.


Reference: HPC Advisory Council (2009). Interconnect Analysis: 10GigE and InfiniBand in High Performance Computing. White Paper.
