
Too much data?

Published by the Assistant Editor, Hydrocarbon Engineering


Scott Lehmann, Petrotechnics, UK, explains why it is essential to get the digital approach right if operators are to achieve operational excellence in the fourth industrial revolution.

Industry 4.0 and its enabling technologies promise enormous value for downstream operators, but the sheer volume of data that these systems will produce threatens to obscure this potential.

Even by the standards of the past few years, the pace of change in the oil and gas industry over the last 12 months has been extremely fast. Very simply, the world has changed – and so too have the stakes for the industry. New geopolitical realities are disrupting established operating practices. There is relentless pressure to cut costs while simultaneously increasing productivity, and hazardous events have prompted more rigorous compliance requirements and amplified public scrutiny.

All of these factors make the drive for operational excellence more critical than ever. The need to simultaneously reduce risk, increase productivity and cut costs is no longer optional. It is the new baseline for the industry.

The advance of digitalisation

Achieving operational excellence requires everyone, from the boardroom to the frontline, to make the most effective decision every time. This is easier said than done. The risk-cost-productivity equation is a delicate balancing act, and a decision made to increase productivity without considering the impact on risk is all too easy and all too dangerous.

Fortunately, digitalisation offers major promise for the industry. In its ‘2016 Global Industry 4.0 Survey’, PwC claimed: “Industry 4.0 will be a huge boon to companies that fully understand what it means for how they do business. Change of this nature will transcend your company’s boundaries and lead to a complete transformation of your organisation.”1

Consultants at Accenture agree: “Digital technology brings more than incremental improvements to operations. It has the potential to transform plants, enable operational excellence and disrupt the competitive landscape in the industry.”2

The pace of advancement in technologies such as smart sensors, digital twins, machine learning, artificial intelligence (AI) and the cloud-based solutions to process it all is breathtaking. These technologies promise greater information technology/operational technology (IT/OT) convergence, making the Industrial Internet of Things (IIoT) a reality for operators.

They also promise to unlock further business process integration within the organisation and across the supply chain. For example, armed with the ability to automatically assess current asset health, recognise patterns that lead to failure, deliver alerts and automatically trigger maintenance processes downstream, operators can stay ahead of the risk curve and optimise productivity.
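
As an illustration only, the sketch below shows what that kind of automation might look like at its simplest: a rolling average of condition readings classifies asset health, and a degraded trend raises an alert and triggers a maintenance work order. The threshold, asset tag and helper functions are hypothetical stand-ins, not drawn from any particular operator's system; real pattern recognition would be far richer, but the shape of the pipeline – assess, alert, trigger – is the same.

```python
# Illustrative sketch only - the threshold, asset tag and helper functions
# below are hypothetical stand-ins, not any specific vendor's API.
from statistics import mean

VIBRATION_LIMIT_MM_S = 6.0   # assumed alarm threshold for a pump bearing
WINDOW = 12                  # number of recent readings to average

def assess_asset_health(readings: list[float]) -> str:
    """Classify asset health from a rolling average of vibration readings."""
    if len(readings) < WINDOW:
        return "insufficient data"
    return "degraded" if mean(readings[-WINDOW:]) > VIBRATION_LIMIT_MM_S else "healthy"

def raise_alert(asset_id: str, status: str) -> None:
    print(f"ALERT: {asset_id} is {status}")   # stand-in for a real notification channel

def create_work_order(asset_id: str) -> dict:
    # Stand-in for an automatic call into a maintenance management system
    return {"asset": asset_id, "type": "inspection", "priority": "high"}

# A worsening vibration trend raises an alert and triggers maintenance downstream
readings = [4.8, 5.0, 5.1, 5.3, 5.6, 6.0, 6.4, 6.9, 7.2, 7.5, 7.8, 8.1]
if assess_asset_health(readings) == "degraded":
    raise_alert("P-101A", "degraded")
    work_order = create_work_order("P-101A")
```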

Danger ahead?

The promise of digitalisation is being directly linked to better decision-making that will, in turn, improve safety and productivity, increase asset uptime and reduce costs.

However, viewing digitalisation as a silver bullet is a mistake. It is a dangerous assumption that we can point analytics at disparate data and apply machine learning with the expectation that actionable insight will come out the other end. Recognising that the excitement around digitalisation comes with an equally large potential for disappointment is an important first step to getting on the right path.

Back to basics

For those who were on the frontline during the hype of the ‘dotcom’ era, the excitement surrounding digitalisation feels oddly like déjà vu. To avoid the same mistakes, the industry needs to ensure technology is used to solve real business problems.

To ensure digitalisation delivers and operational excellence is achieved, operators need to go back to the basics of technology planning and implementation. They need to first define what they are trying to achieve downstream and then look at the data and insight they need to support those objectives.

For example, take risk management – an essential business goal for any organisation. The reality is that people at the frontline constantly intervene to operate, maintain, inspect and fix equipment on an asset. By its nature, this is a dynamic and inherently dangerous place to work. But information on the multiple components of risk is managed differently by different parts of the organisation, held in silos and, for the most part, inaccessible. As a result, a holistic and up-to-date view of risk is not automatically available to decision-makers. Instead, they are forced to resort to manual searches for relevant data, relying on their experience and instinct to judge when situations become unsafe.

Common currencies are key to better understanding

To gain visibility, control risk and ensure operational continuity, operators need to ensure that everyone across the business understands and manages risk against the same criteria – and has a practical understanding of how their decisions directly or indirectly influence the risk picture downstream.

Recognising the potential sources of risk and how they can accumulate is a key challenge that requires a ‘common currency’ approach to managing the disparate sources of data.

The dynamic nature of risk makes it difficult to connect the performance of safety systems and processes to the operational reality of the business in a meaningful way. This is where operational excellence platforms come to the fore. They can unlock the potential in data that is typically hidden away in silos, and provide meaningful insights by translating, aggregating and making information available in real time, based on a common currency. This simple, elegant approach to operational risk management connects process safety and risk-control system performance to frontline operations. It is practical, and it makes major accident hazard risk exposure visible, prominent and available to everyone at any time.

This common currency of risk allows comparison of the concrete, meaningful risk information that is necessary for making decisions. For example, a maintenance engineer understands the impact of delaying valve maintenance – not just on key performance indicators (KPIs), the team’s schedule or the department’s workload for the next few days, but also the likely consequences that will ripple across the organisation. It is how everyone starts to understand that the short-term fix may create as many problems as it solves – and enables them to find the optimal solution instead. It is how everyone within an organisation understands the interrelated, interconnected nature of the risk that the business is undertaking, and can assure themselves that their next activity stays within agreed risk parameters.
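
As a purely illustrative sketch, the snippet below shows the common currency idea in miniature: each category of risk contributor is converted into the same notional risk unit so that the cumulative effect of a decision, such as deferring a valve job, can be compared directly. The contributor names and weightings are assumptions for illustration, not the article's or any vendor's actual model.

```python
# Illustrative only - contributor names and weightings are assumed for the example.
WEIGHTS = {                                   # conversion of each contributor into risk units
    "overdue_safety_critical_maintenance": 5.0,
    "overridden_safeguard": 8.0,
    "concurrent_hazardous_activity": 3.0,
}

def total_risk(contributors: dict[str, int]) -> float:
    """Sum all contributors after converting each into common risk units."""
    return sum(WEIGHTS[name] * count for name, count in contributors.items())

# The risk picture for a unit on a given shift
today = {
    "overdue_safety_critical_maintenance": 2,
    "overridden_safeguard": 1,
    "concurrent_hazardous_activity": 4,
}

# Deferring the valve job adds one more overdue safety-critical item to the same picture
deferred = dict(today, overdue_safety_critical_maintenance=3)

print(total_risk(today))     # 30.0 risk units
print(total_risk(deferred))  # 35.0 risk units - the deferral's ripple effect is now visible
```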

When data is presented in a user-focused way that accurately informs decisions and improves the prioritisation of facility operations, the value is clear. For example:

  • Frontline operations have immediate access to easy-to-read, data-rich information on equipment and activity status, with context to make better-informed, risk-dependent decisions during each shift.
  • Site management and planners have access to tools that show the risk implications of scheduling decisions, protracted deviations and planned activities, as well as what-if scenarios for enhanced future planning (a simple what-if sketch follows this list).
  • Asset leadership can see accurate levels of risk and productivity, as well as trends for plan attainment. This kind of data enables them to compare asset performance and to dive into the data to see what is generating the highest levels of risk.
  • Executives can take an enterprise-wide view to compare asset performance, securing for themselves greater insight into how risk is managed across the whole organisation.
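
Continuing with the same hypothetical risk units, the what-if sketch below illustrates the planners' view described in the list above: each candidate schedule is rolled up day by day and any day that would exceed an agreed risk limit is flagged. The daily limit and the plan data are invented for illustration.

```python
# Illustrative what-if sketch - the daily limit and plan data are invented.
DAILY_RISK_LIMIT = 40.0   # assumed agreed risk parameter for the asset

def flag_days(schedule: dict[str, list[float]], limit: float = DAILY_RISK_LIMIT) -> list[str]:
    """Return the days on which the summed activity risk would breach the limit."""
    return [day for day, risks in schedule.items() if sum(risks) > limit]

# Two candidate plans: per-activity risk contributions by day, in common risk units
plan_a = {"Mon": [12.0, 15.0], "Tue": [22.0, 25.0], "Wed": [10.0]}
plan_b = {"Mon": [12.0, 25.0], "Tue": [22.0, 15.0], "Wed": [10.0]}

print(flag_days(plan_a))   # ['Tue'] - clashing activities push Tuesday over the limit
print(flag_days(plan_b))   # []      - resequencing keeps every day within the limit
```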

Connected, collaborative and operationally excellent

While technology cannot do the heavy lifting on its own, it can provide support for a more collaborative culture, in which a disciplined approach to everyday decision-making enables key business objectives. The crucial part is to start with the business challenges that need to be solved.

The truth is, if downstream operators want to achieve operational excellence through effective management of risk, productivity, and costs, they need to know what is happening, when it is happening, and where it is happening in real time.

So, is there such a thing as too much data? If the oil and gas industry can aggregate once-disparate pieces of information and translate them into actionable insights, downstream operators can access a true view of the operational reality of every asset and facility. When this information is made accessible to the entire organisation in a way that makes sense to everyone, it is possible to make better operational decisions.

References

1 'Industry 4.0: Building the digital enterprise', PwC, https://www.pwc.com/gx/en/industries/industries-4.0/landing-page/industry-4.0-building-your-digital-enterprise-april-2016.pdf

2 'Embracing digital plant operations', Accenture, https://www.accenture.com/us-en/insight-digital-plant-reaping-rewards-disruption

