With testing rarely far from the headlines, Dil Begum examines whether the IT industry can heed the lessons and experiences of scientists battling COVID-19
COVID-19 has blazed a destructive trail around the world – and it’s far from over. This invisible enemy has criss-crossed the globe, challenging governments like never before, irrespective of their size or state of development.
Some have responded well – I’m looking at you, New Zealand and Taiwan – whereas others continue to struggle. No doubt scholars and policymakers, historians and commentators, will long debate this disparity in performance, but we can already draw a dividing line between those governments that are testing frequently and those that are not.
Regular testing can prevent quarantining people for long periods and can identify both symptomatic and asymptomatic cases. Frequent tests also ensure that governments can change direction on these decisions when the results suggest they need to – and this has prompted me to think about decisions in the IT sector and whether the IT systems we build can also change direction when needed.
After all, when we think we need a change to our IT we usually assess our tools, talk to our people and review our processes. Occasionally, we even build new systems if a capability doesn’t exist, but I keep coming back to the question: are we testing change frequently enough and, fundamentally, are we conducting the right tests?
Thankfully, the IT sector already comes equipped with an arsenal of tests. We all know about agile and failing fast to respond rapidly to changes in requirements. We all know about A/B testing and testing the layout of applications to assess whether users are being directed effectively. We even write our business cases with the benefits we expect to realise built into our funding requests and then we test whether those benefits have been realised.
With this in mind, it seems reasonable to surmise that we already do enough testing and can be confident that the decisions we make are the right ones – or can we?
This probably isn’t much of a problem if our aim is to make a decision in the short term, but what if we need to make a decision that will impact us for decades and requires us to spend hundreds of millions on a new capability? How can we confidently assess the opportunity cost of making these decisions?
During this pandemic, scientists the world over are working hard to develop vaccines, but they don’t just check that these vaccines work – they use randomised controlled trials. I wonder how these approaches compare to IT change:
- Testing cause and effect – Scientists take great pains to identify the variables and the causal link between the vaccine and the outcome. Is the new IT system really making a difference or could it be due to a combination of other changes?
- Double-blind trials – Scientists minimise the impact of their involvement on the experiment. What impact does our collective involvement have on the platforms we build and does our governance allow us to minimise the impact of bias?
- Sample size – Scientists perform their experiments on a reasonably large sample before rolling out a change. How many times do we design major IT platforms that become too difficult/expensive to change after a small discovery phase with contributions from only a few key stakeholders?
- Control groups and placebos – Scientists recognise that simply the act of intervention can lead to better outcomes even if there is surprisingly little substance behind the change.
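The control-group and sample-size ideas above map neatly onto the A/B testing we already do. As a minimal sketch – the function name and the adoption figures are illustrative, not drawn from any real rollout – a two-proportion z-test can indicate whether a new system’s uptake genuinely beats the control group’s, or whether the difference is just noise:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Compare success rates of control (A) and treatment (B).

    Returns the z-statistic and a two-sided p-value under the
    normal approximation to the binomial distribution.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pooled proportion under the null hypothesis of "no difference"
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal tail
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical figures: 120 of 1,000 users adopt the old workflow
# (control), 150 of 1,000 adopt the new system (treatment)
z, p = two_proportion_z_test(120, 1000, 150, 1000)
```

With these figures the difference is only just significant at the conventional 5 per cent level – and the same effect measured on a tenth of the sample would not be significant at all, which is the sample-size point above in statistical clothing.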
Identifying the IT world’s sugar pill
In the book Black Box Thinking1 there is a story about an aid agency working to improve the educational outcomes of students. Providing free textbooks was considered a common-sense response, so it took the agency by complete surprise when the free textbooks had minimal impact on students.
They discovered some of the children didn’t speak English, so they provided textbooks in a language the children understood. Yet again, minimal impact. The agency then took a step back and re-assessed which variables could be affecting the students. They rolled out a de-worming medicine, since some students would miss school whenever they were unwell. This small change ran counter to their common-sense approach but was a resounding success.
I wonder how many of the systems we roll out across the public sector suffer the same blind spots the aid agency suffered. Are we often trying out common-sense approaches, new versions of textbooks again and again, yet hoping for slightly better outcomes each time?
- If a technology change or new IT system is the answer, identify the problem – do we really understand the issue before forging ahead?
- Do we make future direction changes too difficult and expensive? – have we already made all the fundamental decisions on system design before hearing from a wide range of users?
- And how can we test our pre-solution thinking with the same rigour as we test the new system – how can we be confident we aren’t attributing the outcomes to a great white sugar pill?
As engineers and developers we always seek to follow the data – it would be remiss of us not to follow the lessons of the pandemic, too.
About the author
Dil Begum is an Account Manager at BAE Systems Applied Intelligence
1 Matthew Syed, Black Box Thinking: Marginal Gains and the Secrets of High Performance (United Kingdom: John Murray Press, 2016), pp. 188/344, Kindle