
A clear view

Published by Hydrocarbon Engineering
Ankur Pariyani, Nancy Zarrow, Ulku Oktem and Deborah Grubbe, Near-Miss Management LLC, USA, discuss methods to reduce cognitive bias in industrial operations.

The chemical processing industry, including chemical manufacturing, oil refining, and gas production, has made astounding advances in its ability to collect and store information at every point in its processes. Yet the ability to pinpoint problems continues to elude many operating companies.

The weak link in the data chain

Today, a modern industrial plant monitors hundreds of parameters, generating upwards of 10 to 50 million data points every day that can be tracked and analysed for current and historical patterns. However, with such an overwhelming amount of data at hand, operating teams struggle to uncover meaningful, actionable insight. The answer may not lie in more analytics, but in a better way to comprehend the analytics – converting this ‘big data’ into ‘smart data’ that helps teams draw conclusions easily and make intelligent decisions quickly.
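As a rough sanity check of the data volumes quoted above, the short sketch below estimates daily data points from a tag count and a sampling interval. The specific tag counts and sampling rates are purely illustrative assumptions, not figures from any particular plant.

```python
# Back-of-the-envelope check of daily data volumes (illustrative assumptions only).

def daily_points(num_parameters: int, sample_interval_s: float) -> int:
    """Data points recorded per day for a given tag count and sampling interval."""
    samples_per_day = 24 * 60 * 60 / sample_interval_s
    return int(num_parameters * samples_per_day)

# A few hypothetical plant configurations.
for tags, interval in [(300, 2.0), (500, 1.0), (600, 1.0)]:
    print(f"{tags} tags @ {interval:g} s -> {daily_points(tags, interval):,} points/day")

# e.g. 500 tags sampled once per second already exceeds 43 million points/day,
# consistent with the 10 - 50 million range cited in the article.
```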

With many tools and analytic engines at hand, humans are often still part of the final steps of deciphering and prioritising what the data is telling them to do. This means natural biases in filtering information play a significant, and sometimes costly, role. Incident investigations and new research studies1,2 have shown that the ‘answers’ are available, but are often disguised as disparate information hidden in the reams of data. Operating team members, responsible for everything from identifying issues and deciding on corrective actions to running the plants reliably, are faced with a monthly inventory of over 1 billion recorded data points for just 500 process variables. Decision science research3,4 shows that when individuals or teams are bombarded with large amounts of information and are required to make decisions within a short period of time, they often resort to mental shortcuts to save time and energy. This introduces cognitive bias into the judgement process, which can lead to inaccurate interpretation and irrational choices. In such scenarios, individuals simply discredit information that does not support their views and heuristics, without recognising or evaluating their underlying assumptions. Even the most experienced operators and engineers are not impervious to these weaknesses in the daily working environment.

One cognitive bias that comes into play, especially when making decisions under uncertainty, is anchoring: relying too heavily on the first piece of information offered (the anchor). In addition, confirmation bias (the tendency to search for or interpret information in a way that confirms one's preconceptions) may lead individuals or teams to ignore evidence of a problem. The barriers created by such biases were illustrated in research5 conducted by the Joint Research Centre of the European Commission and Risø National Laboratory in Denmark, which showed that even quantitative risk estimates based on generic reliability/failure databases can deviate widely because of differences in thought process. In this project, seven partners conducted risk analyses for the same ammonia storage facility, and the study found ‘large differences in frequency assessments of the same hazardous scenarios’.

With growing recognition of the need to take often unrecognised human biases out of the analysis, leaders are turning to objective, autonomous solutions. Operating teams can now guard against bias by leveraging new technologies that help reveal information about a process that would otherwise remain hidden.

Learning from others’ experiences

Interestingly, there are lessons to be learned from another industry struggling with similar data overload – medical diagnostics. As illustrated in a report published in the BMJ Quality & Safety journal in April 2014,6 there are 12 million misdiagnoses a year in the US alone, and at least half of these result in some form of morbidity. That number represents roughly 1 in 20 adults seeking medical help experiencing a misdiagnosis. These failures in healthcare delivery cost lives and money. In 2011, an estimated US$102 billion to US$154 billion in healthcare costs was attributed to delayed treatment resulting from diagnostic failures.7 Within the medical industry, volumes have been published about how to improve diagnostic accuracy, with suggestions ranging from diagnostic assistance software tools to the collection of more data via new patient testing technologies and devices. However, in the last decade, there has been an explosion of research supporting the concept that a critical component of accurate medical diagnosis lies within doctor-patient communication.8

In his book 'How Doctors Think',9 Dr. Jerome Groopman reports that, on average, a doctor interrupts a patient describing his or her symptoms after only 18 seconds. The same studies found that, when left uninterrupted, patients will on average talk for only another six seconds. However, once the doctor interjects, 25% of patients do not go on to mention symptoms outside the scope of the doctor's questions. Such interruptions lead to incomplete descriptions and missed opportunities to gather potentially important patient data. In addition, the economic incentives and pressure to see more patients in less time are immense, and cognitive errors become more common as a result. In the face of acute time pressure, doctors come to rely more on shortcuts and on pattern recognition based on an instantaneous appraisal of the patient. Once a regimen has begun on the basis of a wrong assumption, subsequent reasoning is often pulled further from the truth, demonstrating how cognitive errors lead to diagnostic mistakes.10

Groopman, and others in the medical community studying these phenomena, recommend that medical diagnosticians apply their expertise and experience only after all the information has been gathered and the examination of the patient has been completed. Furthermore, the medical community is uncovering issues not only with diagnosis per se, but also with detection – the predecessor to diagnosis. The medical industry is therefore highlighting that a pivotal part of improving diagnostic skill lies in every step of the data handling process: (a) extensive data collection, (b) detection of abnormalities within that data, and (c) a more accurate diagnosis made in an unbiased manner.

Bringing a more objective lens to identifying problems

Similar to medicine, the process industries can identify and resolve the underlying causes of problems in a process, as evidenced by its dynamic data feed, but the data needs to be ‘listened to’ thoroughly. It is important that the analysis is neither interrupted nor influenced by the particular expertise and experience of the analyser. This happens, for example, with tools that introduce bias from the very beginning of the analysis by requiring engineers to choose all of the elements of the analytics themselves (i.e. to include only their select or favourite priority variables, methods and timeframes). However, with billions of data points, the average human analyser cannot absorb, compare and investigate all of the process data; there is simply not enough time, and the analyser's cognitive bias is invoked to make the work ‘manageable’. What if a software system could be used instead? This is where computational power has its greatest advantage. The right type of software system can handle this magnitude of absorption and analysis, and then point the human reviewer towards a more unbiased set of results. Recent breakthroughs in large scale machine learning and dynamic risk analysis approaches11,12 allow advanced analytics to identify process issues at an early stage, while they are still hidden in the data and long before process variables reach alarm levels. Such objective, autonomous systems are referred to as early risk detection or warning systems. A real-life case study below illustrates how bias can colour the interpretation of an unbiased indication.
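As a minimal sketch of the kind of early warning logic described above (not the method of any particular commercial system), the following snippet flags a gradual drift in a process variable using an exponentially weighted moving average (EWMA), well before the value approaches a hypothetical alarm limit. The signal, thresholds and parameters are all illustrative assumptions.

```python
import numpy as np

def ewma_drift_flags(values, baseline_n=500, lam=0.1, k=4.0):
    """Flag gradual drift in a process variable long before alarm limits.

    A baseline window estimates the 'normal' mean and spread; an exponentially
    weighted moving average (EWMA) of the signal is then compared against
    k-sigma EWMA control limits. Returns a boolean array of drift flags.
    """
    values = np.asarray(values, dtype=float)
    mu = values[:baseline_n].mean()
    sigma = values[:baseline_n].std(ddof=1)
    ewma_sigma = sigma * np.sqrt(lam / (2.0 - lam))  # steady-state EWMA std dev

    flags = np.zeros(values.size, dtype=bool)
    z = mu
    for i, x in enumerate(values):
        z = lam * x + (1.0 - lam) * z
        if i >= baseline_n:                      # do not flag the baseline itself
            flags[i] = abs(z - mu) > k * ewma_sigma
    return flags

# Synthetic fuel-pressure-like signal: noise around 2.0 bar with a slow creep
# that never approaches a hypothetical 3.0 bar alarm limit.
rng = np.random.default_rng(0)
pressure = 2.0 + 0.02 * rng.standard_normal(5000)
pressure[2000:] += np.linspace(0.0, 0.08, 3000)   # gradual, plugging-style drift

flags = ewma_drift_flags(pressure)
print("first drift flag at sample:", int(np.argmax(flags)))
print("maximum value reached:", round(float(pressure.max()), 3), "(alarm limit: 3.0)")
```

The point of the sketch is that the drift is flagged while the variable is still far inside its alarm limits, which is exactly the window in which proactive action is still cheap.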

Case study

Consider a refinery alkylation process that had recently installed such an early risk detection system. The system identified abnormalities in the fuel pressure going to a burner over several weeks. Initially, the operations engineer disregarded them because the fuel pressure was well within its alarm limits (an example of confirmation bias), even though the associated flow variable did not show any corresponding increase. Over the next few weeks, the system continued to highlight a slow but steady increase in the fuel pressure, which caught the attention of another engineer. Upon investigation, the team confirmed that the burner was plugging and submitted a work request to have the burner cleaning added to the upcoming scheduled maintenance. Thanks to the insight from the early risk detection system and the engineers' proactive actions, the team estimated that 5 – 7 days of unexpected downtime were avoided and a potential rupture of the furnace tube was prevented.
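The diagnostic reasoning in this case, that pressure creeping upward while the associated flow stays flat points to a growing restriction, can be expressed as a simple cross-check. The sketch below is purely illustrative: the signals, window length and thresholds are hypothetical and do not represent the actual system's calculations.

```python
import numpy as np

def plugging_indicator(pressure, flow, window=500, trend_threshold=0.02):
    """Crude burner-plugging style cross-check (illustrative only).

    Fits a linear trend over a recent window for both the fuel pressure and the
    associated flow. A rising pressure trend combined with an essentially flat
    flow trend suggests a growing restriction, even when both variables remain
    within alarm limits. Thresholds are fractional change over the window.
    """
    t = np.arange(window)
    p = np.asarray(pressure[-window:], dtype=float)
    f = np.asarray(flow[-window:], dtype=float)

    p_trend = np.polyfit(t, p, 1)[0] * window / p.mean()   # fractional change per window
    f_trend = np.polyfit(t, f, 1)[0] * window / f.mean()

    return p_trend > trend_threshold and abs(f_trend) < trend_threshold / 2

# Synthetic data resembling the case study: pressure creeps up ~3% while flow stays flat.
rng = np.random.default_rng(1)
n = 500
pressure = 2.0 * (1 + np.linspace(0, 0.03, n)) + 0.01 * rng.standard_normal(n)
flow = 10.0 + 0.05 * rng.standard_normal(n)

print("possible burner plugging:", plugging_indicator(pressure, flow))
```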

Conclusion

The Canadian novelist Robertson Davies once said: "The eye sees only what the mind is prepared to comprehend". In that same vein, the need to extract information from rich data sources brings the issue of cognitive bias to the forefront of analytics, as it is an important factor in the selection of tools for diagnostics and the identification of actionable results. As in medical practice, preventing cognitive bias from seeping into operating teams’ analyses can prevent unexpected adverse outcomes. By leveraging advances in machine learning and dynamic risk analysis technology,11,12 an objective, autonomous system can improve the detection and diagnostic capabilities of operating teams and help unveil information that is otherwise unavailable. With the help of such an early risk detection system, operating teams can harness the full spectrum of their data without having to fall back on personal experience, or bias, to make the task manageable.

References

  1. PARIYANI, A.; SEIDER, W. D.; OKTEM, U. G.; and SOROUSH, M., 'Incidents investigation and dynamic analysis of large alarm databases in chemical plants: a fluidized-catalytic-cracking unit case study,' Industrial & Engineering Chemistry Research, 49 (17), pp. 8062 – 8079, (2010).
  2. OKTEM, U. G.; SEIDER, W. D.; SOROUSH, M.; and PARIYANI, A., 'Improve process safety with near-miss analysis,' CEP Magazine – On The Horizon Article, AIChE Publication, (2013).
  3. LAU, A. Y. S. and COIERA, E. W., 'Do people experience cognitive biases while searching for information?', Journal of the American Medical Informatics Association, 14(5): pp. 599 – 608, (2007), available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1975788/.
  4. SIEFERT, W. T. and SMITH, E. D., 'Cognitive biases in engineering decision making', IEEE Aerospace Conference, (2011), available at: http://ieeexplore.ieee.org/document/5747663/.
  5. LAURIDSEN, K.; KOZINE, I.; MARKERT, F.; AMENDOLA, A.; CHRISTOU, M.; and FIORI, M., 'Assessment of uncertainties in risk analysis of chemical establishments,' Summary report of the ASSURANCE project, Risø National Laboratory, Roskilde, Denmark, (2002).
  6. SINGH, H.; MEYER, A. N. D.; and THOMAS, E. J., 'The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations,' BMJ Quality and Safety, (2014), available at: http://qualitysafety.bmj.com/content/early/2014/04/04/bmjqs-2013-002627.
  7. KNOCHE, C. and KALINYAK, J., 'Does a med school degree guarantee diagnosis skills?', MedPage Today, (2015), available at: http://www.medpagetoday.com/primarycare/generalprimarycare/52991.
  8. ROTER, D. L. and HALL, J. A., 'Doctors talking with patients/patients talking with doctors: improving communication in medical visits', Praeger, (2006).
  9. GROOPMAN, J., 'How doctors think', Houghton Mifflin, (2007).
  10. HORTON, R., 'What’s wrong with doctors?', The New York Review of Books, (2007), available at: http://www.nybooks.com/articles/2007/05/31/whats-wrong-with-doctors/.
  11. KHAN, A.; ANUAR, S.; ASRI, A.; PARIYANI, A.; OKTEM, U. G.; and GRUBBE, D. L., 'Predicting process risks for improved safety and operational excellence: a breakthrough technology and case studies,' AIChE’s 61st Annual Safety in Ammonia Plants and Related Facilities Symposium, (2016).
  12. VILLA, V.; PALTRINIERI, N.; KHAN, F.; and COZZANI, V., 'Towards dynamic risk analysis: A review of the risk assessment approach and its limitations in the chemical process industry,' Safety Science, 89, pp. 77 – 93, (2016).
