
Maximising capacity availability with advanced analytics

Published in Hydrocarbon Engineering


The difference between the theoretical and achievable production volumes of a chemical manufacturing asset stems from planned and unplanned impacts on unit capacity availability. Capacity availability losses can manifest as unit downtime or as throughput rate limitations brought on by reaching various constraints. These constraints are related to factors such as process fouling, mechanical reliability, environmental conditions, supply chain disruptions, and more.

Disruption to raw material availability or power supply as a result of a natural disaster can lead to manufacturing units suffering capacity losses outside of their control, but many of these losses can be predicted and limited to decrease downtime. Digitalisation in the form of advanced analytics has improved organisations’ abilities to predict, plan for, and manage constraints. Using software that transparently connects business teams with plant operations provides visibility into all levels of a process, empowering production leaders to make data-driven decisions.

Decision making focused on maximising the long-term production of an asset often requires proactive or opportunistic downtime. While it can seem counter-intuitive to shut down for the purpose of increasing production, throughput rate improvements often justify brief mechanical outages.

With the right digitalisation tools in their arsenal – including advanced analytics solutions – organisations can leverage historical data to understand and provide insight into various failure modes, their impacts on production capacity availability, and the effects of different operating strategies. This presents decision makers with viable alternatives, empowering them to select the best strategies.

This approach was not always easy

The sheer volume of data, numerous legacy storage systems, and a lack of user-friendly analysis tools have historically limited engineers’ abilities to tackle capacity availability optimisation problems. Common approaches relied on cumbersome spreadsheet tools that struggled to meet analytical needs.

Some of the most significant limitations of these approaches include live data connectivity challenges, a lack of computational capability, poor online collaboration, and clumsy visualisation and reporting functionalities. When analysing historical availability loss data, it can be helpful to overlay process data with contextual information, such as operator logbook notes or maintenance work orders. Live connections to these data sources are difficult to set up and maintain, and without them engineers must manually query each individual database, extract the necessary data, then aggregate and align mismatched timestamps in a spreadsheet. When a new time period of interest is identified, the process must be repeated.
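To illustrate the kind of manual wrangling this involves, the sketch below (Python with pandas) joins exported historian samples to maintenance work orders on the nearest earlier timestamp. The file names, column names, and tolerance window are hypothetical placeholders rather than references to any particular system.

```python
# A minimal sketch of manually aligning process data with contextual records.
# File and column names ("historian_export.csv", "throughput_rate", etc.) are
# hypothetical placeholders.
import pandas as pd

# Process data exported from a historian: one row per sample of a throughput tag
process = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])

# Contextual records exported separately, e.g. maintenance work orders
work_orders = pd.read_csv("work_orders.csv", parse_dates=["created"])

# Timestamps rarely line up exactly, so join each work order to the nearest
# earlier process sample within a tolerance window
combined = pd.merge_asof(
    work_orders.sort_values("created"),
    process.sort_values("timestamp"),
    left_on="created",
    right_on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("15min"),
)

print(combined[["created", "description", "throughput_rate"]].head())
```

Every new time period or data source of interest means re-exporting the data and re-running a script or spreadsheet like this, which is exactly the repetition described above.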

With so many hoops to jump through, it is easy to see why nearly 80% of the engineers, scientists, and analysts surveyed in a 2016 CrowdFlower study reported spending more time collecting and wrangling data into a format suitable for analysis than on any other task (see Figure 1).1 This leaves little time for gleaning meaningful insights.


Figure 1. A 2016 CrowdFlower survey revealed that nearly 80% of data scientists spend more time collecting and wrangling data into a format suitable for analysis than any other task.

The performance limitations of spreadsheets when dealing with high data volumes present challenges to understanding historical capacity losses, root causes, and mitigations. This makes it difficult – if not impossible – to model future behaviour. Data dumps into advanced statistical modelling software can meet the algorithmic needs for analysis, but without a live data stream for continuous learning, the model becomes outdated the moment that it is created.

The fact that algorithm and process expertise are rarely held by the same individual adds another layer of complexity to performing loss analysis and prediction. Maximising the effectiveness of data science experts and process subject matter experts (SMEs) requires collaboration, yet these experts may work from different offices and countries. Desktop-based tools are becoming obsolete as organisations recognise that browser-based solutions deliver the online collaboration, knowledge capture, and reporting needed in their digitalisation efforts.

Advanced analytics address challenges

Self-service advanced analytics software applications fill the gaps left by spreadsheet-based calculation and reporting tools. Thanks to browser-based access, server architectures with live data connectivity, and interactive visuals, it is easier than ever to configure auto-updating models, reports, and summary dashboards.

These solutions provide out-of-the-box connectors to time-series sources such as manufacturing data historians and SQL-based contextual databases, making process, maintenance, shift log, and other data available to SMEs for analysis from a single application. Overlaying contextual information – such as maintenance performed to address a constraint – makes knowledge historically held by process SMEs available to all parties analysing the data. The originating system remains the single source of truth because data is not replicated, but instead queried automatically or on demand, maintaining full data integrity.

Leveraging the elasticity and scalability of cloud-based servers empowers SMEs to perform calculations beyond the capabilities of their own computers. Working with large, raw data sets rather than pre-aggregated or down-sampled data improves the quality of analysis performed and the outputs generated. Working in a browser rather than a desktop-based tool ensures that every click in an analysis is captured and saved, eliminating the risk of losing work to a computer crash.

After decades of working within the confines of rows and columns, engineers can incorporate visualisation into the analysis build-up process, driving faster innovation through real-time identification of missteps and successes. Combining a flexible visualisation pane with a robust calculation engine encourages engineers to rapidly iterate on analyses, nail the approach, then scale to a wider time range of data or additional assets. The time recovered compared to manual data wrangling and reporting is motivation to identify the next big process optimisation opportunity.

In an industry where production assets change hands frequently in response to mergers and acquisitions or market volatility, it is common to find multiple sites within the same organisation struggling with the same capacity-limiting issues over time. Adopting an enterprise-wide advanced analytics strategy promotes cross-site collaboration to address these issues, such as sharing operational best practices for predicting and preventing common failure modes across assets.

Use cases demonstrate the impact of an empowered workforce

With the right tools in hand, significant capacity availability improvements can be achieved with existing assets and workforce, driven by new analytics technologies’ insight generation capabilities and ease of implementation.

Historical capacity loss identification, categorisation, and summary

Challenge

Understanding the leading sources of historical losses, their warning signs, and past prevention or mitigation strategies is critical to maximising future production capacity. Process manufacturing companies track and categorise periods of capacity loss to identify bad actors, justify improvement projects, and perform cross-site benchmarking. The loss accounting process is tedious, consuming valuable SME time with each analysis iteration.

It requires identifying losses, performing root cause investigations, and documenting the events leading up to each loss and the ensuing actions. Communicating insights to relevant personnel requires aggregation into reports that summarise the overall bad actors affecting equipment effectiveness and reliability, informing decisions on operational improvement strategies ranging from no-cost procedural changes to capital spending on process debottlenecking projects. This categorisation is most accurately captured by implementing a logic-based assignment that is verified by frontline personnel at the time of, or shortly after, a capacity loss event.

Solution

Advanced analytics applications identify performance losses by comparing actual production to theoretical capacity, and by flagging time periods when operation is constrained relative to the target. Losses are categorised by breaking these broader events or conditions down into multiple sub-categories of similar events. The flexibility of these applications gives teams the option to categorise events manually, or to auto-categorise them logically based on configured thresholds.
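As a rough sketch of this logic-based approach (and not a description of any specific product's implementation), the following Python example flags periods where production falls below a target rate and applies simple threshold rules to categorise each loss. The tag names, capacity figures, and thresholds are hypothetical.

```python
# A minimal sketch of logic-based loss identification and categorisation.
# Column names ("production_rate", "fouling_dp") and limits are hypothetical.
import pandas as pd

data = pd.read_csv("unit_production.csv", parse_dates=["timestamp"]).set_index("timestamp")

DESIGN_CAPACITY = 120.0   # t/h, theoretical maximum for the unit (assumed)
TARGET_RATE = 110.0       # t/h, business plan target (assumed)

# Instantaneous loss versus theoretical capacity
data["capacity_loss"] = (DESIGN_CAPACITY - data["production_rate"]).clip(lower=0)

# Flag samples where the unit is constrained relative to target
constrained = data[data["production_rate"] < TARGET_RATE]

# Simple threshold-based categorisation; rules like these would be tuned and
# then verified by frontline personnel shortly after each loss event
def categorise(row):
    if row["production_rate"] < 1.0:
        return "downtime"
    if row["fouling_dp"] > 2.5:          # high exchanger pressure drop (assumed tag)
        return "process fouling"
    return "uncategorised rate loss"

constrained = constrained.assign(category=constrained.apply(categorise, axis=1))

# Monthly summary of lost production by category, ready for a report table
# (summing t/h over hourly samples approximates lost tonnes)
summary = constrained.groupby([pd.Grouper(freq="MS"), "category"])["capacity_loss"].sum()
print(summary)
```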

Supporting both graphical and tabular methods of summarising data ensures that it can be presented to different stakeholders in the most desirable format. After constructing the calculation methodologies and visualisations, the summary views can be assembled in a report and scheduled to auto-update monthly (see Figure 2). This makes a simple login to the application for viewing updated data the only ongoing effort required by SMEs and the business team.


Figure 2. Diagram from a Seeq production loss analysis dashboard for a petrochemical production unit.

Result

For periodic analyses such as capacity loss investigations, automatically generated monthly reports can save as much as a week of valuable SME time with each reporting period. This time saved can be invested in identifying additional capacity improvement opportunities and driving projects. Easy filtering and aggregation of historical capacity loss data empowers SMEs to spend more time adding value to improvement efforts, and less time wrangling data, to quickly build cost justifications.

Minimising throughput rate loss through optimised maintenance cycles

Challenge

The degradation of process throughput due to fouling over the course of a production campaign is one of the most common issues impacting capacity availability. These constraints can be reversible, but they typically require either online or offline action to restore rates. The time taken to restore throughput capacity comes at the price of downtime or poor product quality.

When timed appropriately, short-term downtime to perform throughput restoration procedures can improve long-term throughput rates, enabling a unit to meet its production goals. Meeting production volume commitments provides additional unit capacity availability and the option to grow annual production volumes. Developing solutions to these types of optimisation problems often requires complex mathematical analysis, typically solved by advanced modelling packages that require programming experience to configure.

Solution

A major US polymer manufacturer experienced rate-degrading process fouling, which they corrected with a periodic ‘defoul’ procedure. While the procedure did not require a shutdown, it resulted in quality degradation of the material produced during the procedure.

The manufacturer implemented Seeq, an advanced analytics application for process manufacturing data, to compute the number of fouling/defouling cycles that would result in the shortest overall production campaign length. With this information, engineers calculated the production rate at which a defoul procedure should trigger, and they combined this with the average degradation rate while running to create a golden profile of the fouling cycles. By forecasting the golden profile and adjusting actual production accordingly, the team optimised the production timeline (see Figure 3).
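The underlying trade-off can be illustrated with a simplified model: assume the production rate degrades linearly while running, each defoul takes a fixed amount of time, and the campaign must deliver a fixed prime volume. The Python sketch below sweeps the number of defoul cycles, picks the count that minimises campaign length, and derives the corresponding trigger rate. All figures are hypothetical and the model is far simpler than the analysis described, so it should be read as an illustration of the optimisation rather than the manufacturer's actual calculation.

```python
# A minimal sketch of the cycle-count optimisation, assuming linear fouling.
# All rates, durations, and volumes are hypothetical.
import math

R0 = 25.0              # t/h, clean (post-defoul) production rate
DEGRADATION = 0.02     # t/h lost per hour of running due to fouling
DEFOUL_HOURS = 12.0    # duration of one defoul procedure (off-spec production)
CAMPAIGN_VOLUME = 40_000.0  # t, required prime production for the campaign

def segment_hours(volume):
    """Hours to produce `volume` starting clean, with the rate declining linearly."""
    disc = R0**2 - 2 * DEGRADATION * volume
    if disc < 0:
        return math.inf  # segment volume unreachable before the rate hits zero
    return (R0 - math.sqrt(disc)) / DEGRADATION

best = None
for n_defouls in range(0, 30):  # arbitrary sweep range for this illustration
    seg_volume = CAMPAIGN_VOLUME / (n_defouls + 1)
    run_hours = (n_defouls + 1) * segment_hours(seg_volume)
    total_hours = run_hours + n_defouls * DEFOUL_HOURS
    if best is None or total_hours < best[1]:
        best = (n_defouls, total_hours, seg_volume)

n, hours, seg_volume = best
trigger_rate = R0 - DEGRADATION * segment_hours(seg_volume)
print(f"defouls per campaign: {n}, campaign length: {hours:.0f} h, "
      f"trigger a defoul when the rate falls to {trigger_rate:.1f} t/h")
```

Too few defouls let the rate decay deeply before each recovery; too many waste time on the procedure itself. The minimum of the total-hours curve defines the cycle count, and the rate at the end of each optimal running segment becomes the trigger for the golden profile.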


Figure 3. Sketch depicting the optimisation problem to be solved (top) and the Seeq graphical solution to the problem (bottom).

Result

Plant personnel running the sold-out polymer production unit had been exploring ways to increase capacity to meet growing market demand. By implementing this data-driven defoul strategy, they were able to decrease production time by 11%, while increasing annual batch delivery by the same value. This freed up reactor capacity, enabling the manufacturer to increase production volumes for multiple polymer grades, growing sales and market share.

Conclusion

As data volumes grow along with pressure to increase personnel productivity, new digitalisation solutions are needed to aid staff and tackle emerging issues. As part of a larger digitalisation effort, advanced analytics applications specifically designed to work with time series process data provide a solution to create value from large data repositories. Implementing the right tools can free up several weeks a year for SMEs, providing them with time to focus on high-value activities to optimise production and increase operational reliability.

Reference

1. ‘2016 Data Scientist Report’, CrowdFlower, (2016), https://visit.figure-eight.com/rs/416-ZBE-142/images/CrowdFlower_DataScienceReport_2016.pdf.

