Editorial comment

Improving the model
We find ourselves in June, six months or more into a global pandemic that has caused incalculable damage to almost every aspect of modern living. Around the world, people in many countries – having endured a period of so-called 'lockdown' – are beginning to emerge from their isolated states and contemplate resuming life as normal, or at least an approximation of it. Businesses are also looking ahead: taking steps to reverse emergency measures that were put in place months ago, making forecasts and plans for the next few quarters or tentatively looking for signs of recovery and restitution.

Many companies will have been forced to adapt to new working conditions, changes in demand and stresses in the supply chain. Being able to move forward confidently with plans, purchases and projects is something we’d all like to do; how confidently you move depends on the information at your fingertips.

In the UK, the government tailored its response to COVID-19 by analysing a software model developed by Neil Ferguson and his team at Imperial College London. While the government did use other sources of advice and evidence, it is understood that ministers relied heavily upon the Imperial model. This was the model presented to the public in nightly news conferences, predicting how quickly the virus would spread and when it would potentially overwhelm the capacity of the country’s National Health Service. The shocking data it generated persuaded policy makers to virtually shut down the economy by closing businesses and severely curtailing freedom of movement.

Recently, the credibility of the software model has been called into question: two Big Data experts writing in The Telegraph called the model "totally unreliable", arguing that the programming language used is outdated, in that it "contains inherent problems with its grammar and the way it assigns values, which can give way to multiple design flaws and numerical inaccuracies."1 The article (along with others of a similar ilk published in May) criticises the modelling code and asks why the British government didn’t seek a second opinion from a computer scientist. The article states that Ferguson’s model "ignores widely accepted computer science principles known as 'separation of concerns', which date back to the early [19]70s and are essential to the design and architecture of successful software systems. Without this separation, it is impossible to carry out rigorous testing of individual parts to ensure full working order of the whole." In essence, they argue that the software model is based on crude maths and that its results would therefore be neither deterministic nor reproducible (reliable models must be both).
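To illustrate the two principles the critics invoke, here is a minimal sketch in Python (not the Imperial code; the function names and parameters are purely illustrative). Separation of concerns means each step of the simulation lives in its own small function that can be tested in isolation; determinism means that seeding the random number generator guarantees identical runs produce identical results.

```python
import random

def new_infections(infected, contact_rate, transmission_prob, rng):
    """One isolated, independently testable step: how many new
    infections arise from the currently infected population."""
    count = 0
    for _ in range(infected * contact_rate):
        if rng.random() < transmission_prob:
            count += 1
    return count

def run_model(days, seed, contact_rate=3, transmission_prob=0.1):
    """Seeding a dedicated generator makes the whole run
    deterministic: the same seed always yields the same output."""
    rng = random.Random(seed)
    infected = 1
    history = [infected]
    for _ in range(days):
        infected += new_infections(infected, contact_rate,
                                   transmission_prob, rng)
        history.append(infected)
    return history

# Identical seeds give identical trajectories, run after run:
assert run_model(10, seed=42) == run_model(10, seed=42)
```

Because `new_infections` is a self-contained unit, it can be verified on its own before trusting the model as a whole; and because the only source of randomness is the seeded generator, any published result can be reproduced exactly.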

Accurate and reliable modelling, simulation and digital technologies are key for the pipeline industry, which relies on a range of analytical tools to solve operational challenges. In this issue of World Pipelines, Denka Wangdi at Emerson’s machine automation solutions business writes about how edge controllers can help pipeline operators modernise and improve their systems by adding new capabilities (p.53). Wangdi posits that adding edge control can simplify system architecture for critical infrastructure: "control, co-ordination, communication and care of pipeline operations can be improved with the latest edge processing capabilities". Any calculations for predictive or prescriptive maintenance work, sensing, corrosion detection and/or surge control can be performed locally at the edge controller, using data from more sources than traditional control and monitoring systems. Thus, edge technology can help to build ever more reliable technology platforms, in which the true condition of the pipeline can be reflected and then analysed.

  1. 'Neil Ferguson’s Imperial model could be the most devastating software mistake of all time', The Telegraph, 16 May 2020.