
Embracing condition-based calibration

Published by Hydrocarbon Engineering.

In the past decade, the oil and gas industry has seen a significant increase in the uptake of primary and secondary instrumentation that makes use of smart transmitter technologies. These devices are capable of outputting big data sets over industrial networks such as WirelessHART, Modbus, Foundation Fieldbus and Profibus, as shown in Figures 1 and 2. The data itself can contain process values relating to the device’s primary function, e.g. fluid flow measurement, as well as diagnostic information that has the potential to be utilised for assessing device performance or gaining secondary information on the process stream.

Figure 1. Simplified example of a fluid flow facility with multiple communication networks servicing a variety of primary and secondary instrumentation.

Figure 2. Overview of a typical digital communications infrastructure.


Historically, this data has been used by metering technicians and commissioning engineers for maintenance and quick checks, to ensure that a device is performing as expected before integration into a facility supervisory control and data acquisition (SCADA) system. Many facilities also use the diagnostic values for simple range checking, to indicate when a given parameter has gone outside acceptable limits. When implemented correctly, this information can alert facility operators to potential problems, allowing targeted investigation and preventative maintenance to be carried out.
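The range-checking approach described above can be sketched in a few lines of code. The parameter names and acceptance limits below are hypothetical, chosen purely for illustration:

```python
# Minimal sketch of diagnostic range checking; parameter names
# and (low, high) limits are hypothetical examples.
LIMITS = {
    "signal_gain_db": (20.0, 60.0),
    "sound_velocity_m_s": (1400.0, 1600.0),
    "electronics_temp_c": (-20.0, 70.0),
}

def check_diagnostics(readings):
    """Return (parameter, value) pairs that fall outside their limits."""
    alerts = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alerts.append((name, value))
    return alerts

# Example: one parameter out of range raises an alert for investigation.
readings = {
    "signal_gain_db": 65.2,        # above the 60 dB upper limit
    "sound_velocity_m_s": 1480.0,
    "electronics_temp_c": 35.0,
}
print(check_diagnostics(readings))  # [('signal_gain_db', 65.2)]
```

In practice the alert would be routed to the SCADA system rather than printed, but the underlying comparison is this simple.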

For the downstream sector, this kind of information can be used to minimise financial loss when metering end products for delivery to customers. For example, if condition-based monitoring (CBM) can determine when a meter is outputting incorrect flow measurement data, the potential financial exposure, whether from providing too much product because the device is under-reading or too little because it is over-reading, can be reduced.
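The scale of that exposure is straightforward to estimate: a small systematic meter error multiplied by throughput and product price. The figures below are hypothetical:

```python
# Hypothetical estimate of daily exposure from a miscalibrated meter.
true_flow_m3_per_day = 5000.0   # actual delivered volume
meter_error = -0.005            # meter under-reads by 0.5%
price_per_m3 = 400.0            # product price, currency units

# Volume the meter reports, and which the customer is billed against.
billed_volume = true_flow_m3_per_day * (1 + meter_error)

# Revenue lost because delivered volume exceeds billed volume.
daily_loss = (true_flow_m3_per_day - billed_volume) * price_per_m3
print(daily_loss)  # approx. 10 000 currency units per day
```

Even a fraction of a percent of drift can therefore justify an early, targeted recalibration.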

There is now an increasing interest within industry in accessing and logging these diagnostic process values. Software packages that make use of machine learning and advanced mathematical modelling techniques can automatically interpret device performance by identifying the correlations between diagnostic parameters across multiple sensors and process control equipment. Such a system gives end-users access to a new level of facility performance analysis and therefore has the potential to streamline decision making with regards to production and maintenance spending.

Calibration methods

A more specific example is the financial and operational desire to move towards a system which embraces condition-based calibration (CBC) as opposed to time-based calibration (TBC) on devices such as flow meters.

For example, with a TBC method there is the possibility that facility operation may be stopped unnecessarily to calibrate a flow meter, which in reality has not deviated from its required operating parameters. The combined costs of meter calibration, pipe fitting, electrical isolation/connection and facility down-time can be significant, depending on the specifics of the facility in question. Conversely, it is also possible for a meter to have deviated from its expected performance envelope, but not be due for recalibration, resulting in fluid measurement errors that may have significant financial consequences to the facility operators.

A CBC schedule has the potential to reduce these types of operating costs by allowing facilities to develop more dynamic operating patterns, based on continuous, automated diagnostic analysis of facility and meter performance. By logging key meter diagnostic values in tandem with standard device outputs and comparing them to known baseline conditions, it is possible to determine whether a flow meter is operating within specification. Additionally, with enough historical information on a specific device, it is possible to predict calibration drift over time. If CBC is implemented in place of TBC, planning becomes more challenging due to the irregular calibration intervals, so this predictive capability is crucial to allow effective planning to continue.
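As an illustration of that predictive capability, even a simple least-squares linear fit of historical error against time can flag when a meter is likely to breach its tolerance. The calibration history and tolerance below are hypothetical:

```python
# Hypothetical sketch: predict when a meter's error will exceed a
# tolerance, using a least-squares linear fit of past drift.

# (days since baseline calibration, observed error in %)
history = [(0, 0.00), (90, 0.04), (180, 0.09), (270, 0.13)]
TOLERANCE_PCT = 0.25  # recalibrate before the error exceeds this

n = len(history)
sum_t = sum(t for t, _ in history)
sum_e = sum(e for _, e in history)
sum_tt = sum(t * t for t, _ in history)
sum_te = sum(t * e for t, e in history)

# Least-squares slope and intercept of error vs. time.
slope = (n * sum_te - sum_t * sum_e) / (n * sum_tt - sum_t ** 2)
intercept = (sum_e - slope * sum_t) / n

# Predicted day on which the error crosses the tolerance,
# i.e. the latest sensible date for the next calibration.
days_to_limit = (TOLERANCE_PCT - intercept) / slope
print(round(days_to_limit))
```

A real deployment would use richer models and confidence bounds, but the planning logic is the same: schedule the calibration before the predicted crossing date, not on a fixed clock.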

Case study 1: identifying the types of installation error occurring within ultrasonic flow meters based on diagnostic variables

Based on experimental data obtained in research carried out by TÜV SÜD National Engineering Laboratory, it was observed that a variety of ultrasonic flow meter types behaved differently under the same installation and failure conditions. In addition, different errors, such as vertical misalignment and horizontal misalignment, incurred the same drift within the same diagnostic variables. In other words, using basic observational diagnostic assessments, it was difficult to distinguish between the causes of error. Motivated by this, different mathematical modelling techniques and machine learning algorithms were used to overcome this challenge and predict, with high accuracy, the condition responsible for the errors.

Through analysing and learning the trends and correlations between different variables under each known condition, high accuracy predictions were achieved using a machine learning model. A sample of the results is summarised in Figure 3, where each unseen data set represented a condition that was not known to the model.

By making use of this machine learning modelling technique, the causes of drift were made identifiable, as illustrated in Figure 3, where blue bars represent correct predictions and red bars represent false predictions. For example, any drifts which occurred within the diagnostic data in ‘Unseen Data B’ were due to the ultrasonic flow meter having been installed misaligned in the vertical orientation. This prediction had a corresponding probability of 0.91. Similar interpretations can be made for the other conditions.

Figure 3. Prediction results from the machine learning model in Case study 1.
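The general approach, learning each fault condition's diagnostic signature and matching unseen data against it, can be sketched with a simple nearest-centroid classifier. The condition labels and diagnostic values below are hypothetical, and the study itself used more sophisticated models:

```python
# Hypothetical sketch of condition classification from meter diagnostics.
# Each vector is (gain_dB, transit_time_us, velocity_ratio); the labels
# and numbers are illustrative only, not data from the study.
import math

training = {
    "baseline":            [(40.0, 52.0, 1.00), (41.0, 51.5, 1.01)],
    "vertical_misalign":   [(46.0, 52.2, 0.93), (47.0, 52.4, 0.92)],
    "horizontal_misalign": [(45.5, 54.0, 1.06), (46.5, 54.3, 1.07)],
}

# Summarise each known condition by the mean of its diagnostic vectors.
centroids = {
    label: tuple(sum(col) / len(col) for col in zip(*vectors))
    for label, vectors in training.items()
}

def classify(sample):
    """Return the known condition whose centroid is nearest the sample."""
    return min(centroids, key=lambda lbl: math.dist(sample, centroids[lbl]))

# An unseen reading closest to the vertical-misalignment signature.
print(classify((46.4, 52.3, 0.93)))  # vertical_misalign
```

Note how two different faults can share a drifting variable (gain rises under both misalignments here); it is the joint pattern across several diagnostics that separates them, which is why multivariate models succeed where single-parameter range checks cannot.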

Case study 2: identifying which transducer ports within ultrasonic flow meters have deposition based on diagnostic variables

Over time, deposition such as wax can build up in the transducer ports of ultrasonic flow meters. If left unattended, ports can become completely blocked, affecting the reliability and accuracy of the outputs generated by these flow meters. However, determining which ports have wax build-up can be a time-consuming and labour-intensive process, and machine learning models can be used to aid it. As with case study 1, a sample of the results is summarised in Figure 4, where each unseen data point represented a condition which was not known to the model.

By making use of this machine learning modelling technique, identification of which transducer ports have wax build-up was made possible, as illustrated in Figure 4, with a probability of 1 for each unseen condition. In other words, the model identified the ports containing wax with 100% accuracy.

Figure 4. Prediction results from machine learning model in Case study 2.


However, depending on the metering technology, the parameters for diagnostic interpretation can vary considerably. For instance, a Coriolis flow meter produces different diagnostic data from an ultrasonic meter due to the differing underlying physics of operation. Both metering technologies also have different installation requirements and environmental conditions to consider. For example, it is known that external influences such as vibration and ambient temperature can affect the quality of data output from Coriolis meters. It is therefore crucial that, when facility operators consider moving to a CBC system, credible scientific data, which is both meter and facility specific, is first obtained, to ensure that any subsequent operational decisions are based on quantifiable data.

There are a number of potential variables associated with a large production facility, such as valves, pipe bends, temperature and pressure effects. When this is combined with the variations in meter design, it becomes clear that implementing a reliable CBC system which has full user confidence is no small task. This is currently one of the key reasons that TBC methods are still widely used in industry.

Modern technology providing a path

Factors that are gradually increasing the uptake of CBC-based facility maintenance patterns are the continued growth and adoption of cloud-based computing and data storage, as well as affordable computing power which is required for complex modelling and prediction. The standardisation of digital communication protocols, as well as individual manufacturers supporting the integration of their devices into cross-platform packages, have also allowed for a number of unique and application-specific software solutions to be developed that support CBC facility operation.

The principle of CBC and monitoring is a component of a much larger concept, broadly referred to as ‘Digital Oilfield’. The overall aim of this is to optimise facility operating costs by streamlining areas such as maintenance, staff scheduling, production and data analysis. The exact parameters of a ‘Digital Oilfield’ system are largely influenced by the specifics of the facility it is to monitor.

The specification and commissioning of such a system requires an in-depth understanding of the facility’s electronic, electrical and mechanical design, as well as its normal operating requirements and capabilities. When predictive information is initially generated it should be validated by staff with relevant knowledge and experience before key decisions are made on the data. Over time, after multiple tuning iterations, confidence in the data is built up and in doing so the facility can start to adopt an efficient and intelligent decision-making process as opposed to a regimented and potentially inefficient one.

Ongoing research

Research is currently underway in multiple industry and academic sectors, with the aim of helping end-users build confidence in the types of systems described herein. Manufacturers of instrumentation, flow meters and diagnostic software packages are, in some documented cases, supporting this endeavour. This level of interaction between researcher, end-user and manufacturer is key to building overall competence and confidence in identifying useful data for informing operational decisions.

The UK’s national standards for flow and density measurement, operated by TÜV SÜD National Engineering Laboratory, are currently tackling such research areas. Using their flow laboratories, which rely on multiple industry-standard digital networks, they aim to develop correlations between the data output from field devices and their operational efficiency. Parameters such as device age and structural integrity, facility ambient conditions and the properties of the fluid will be considered, as well as bigger picture flow facility components such as pump speeds and valve positions.

Additionally, many companies are in the process of analysing the historical big data sets associated with the specifics of their operation. This is not limited to the oil and gas industry. Sectors such as food production, retail, automotive, etc. are all undertaking digitisation strategies, with the aim of getting to grips with the subtleties and unrealised potential in the historical and live data which they hold.


With the ever-growing interest in logging and understanding diagnostic data, it is reasonable to suggest that the day is coming when facility operators and technology end-users in general have the confidence to fully switch over from traditional time-based calibrations to automated and intelligently led condition-based calibrations.

Written by Gordon Lindsay, TÜV SÜD National Engineering Laboratory, UK.
