Nick Wallis’s book “The Great Post Office Scandal” should be compulsory reading for anyone setting out on a career as a software developer, tester, risk manager or internal auditor – or indeed anyone starting to study the law. The author sets out, in exhaustive detail, the story of how the Post Office’s Horizon system, developed by Fujitsu and managed appallingly by both corporations, ruined thousands of lives.
Nick Wallis followed the scandal for over a decade, showing remarkable and commendable commitment to an important cause. Nobody could have written a more complete account of this scandal – and it is a story that the world has to hear.
A tale of flawed software and corporate malpractice might sound dull, but Nick Wallis never loses sight of the human impact. Throughout the book he weaves into his narrative the stories of individuals who have suffered heartbreaking persecution and tragedy. The result is a highly readable, deeply moving and gripping book.
Before reading the book I was already very familiar with the scandal, but I was still shocked on a professional and human level. Nick Wallis has uncovered a wealth of detail that will dismay those, like me, who have worked in IT in more responsible, competent, and professional companies.
If there is any justice “The Great Post Office Scandal” will become a classic. It should be widely read throughout the IT world. People working with these complex IT systems should reflect on how IT affects people and remember Jerry Weinberg’s words: “no matter what they tell you, it’s always a people problem.” Horizon was a human tragedy, caused by people – not technology.
Nick Wallis focuses on the human and legal story rather than the technical issues, but he devotes one brief chapter to the difficulties of working with large, complex systems. His explanation is well informed and he describes clearly how these systems change and evolve rather than behaving like a static machine. A separate chapter relates the disgracefully amateurish development by ICL/Fujitsu. This account is shocking, but Nick’s scathing version of the development’s history has been borne out by the evidence that has emerged in the official inquiry.
Setting aside the technology, the book illustrates aspects of modern Britain that should make us feel deeply uncomfortable. It was not inevitable that the subpostmasters would be vindicated. If a different judge, less technically aware than Justice Fraser, had been appointed to hear the group litigation brought by the subpostmasters it is very likely that the outcome would have been different. It is easy to see how the Post Office could have got away with its appalling behaviour and crushed the innocent victims, leaving them financially ruined, their reputations destroyed, their health broken. The justice system and legal profession have difficult lessons to learn.
However, I hope that Nick Wallis’s book will reach a much wider audience than computer experts and lawyers. It is a deeply moving version of one of the oldest stories in the world; it is a classic tale of good people fighting back to overcome fearsome odds and defeat the villains. It is a wonderful read.
For our small corner of this enormous problem, software testing, are there any particular insights or takeaways? Because, in principle, the whole long chain of misery should all have stopped before it was shipped.
It’s a big question. I’ll try and find time to answer it tomorrow.
This is a big topic. I’m researching and writing about it now.
Yes, you are right that good testing could and should have flagged up problems and risks that would have delayed implementation.
However, Horizon was developed and amended over a couple of decades. It would not be a matter of looking only at the pre-implementation testing.
The Post Office have publicly commented on Fujitsu’s testing, in submissions to the court for the Horizon Issues case. They expressed satisfaction. I have not seen any sign of the Post Office providing any detail about its own acceptance testing, other than saying they performed it. Fujitsu did not offer any evidence at all.
According to the Post Office submissions to the court there were three testing stages: unit, link/integration and acceptance. Regression testing was performed when changes were made. This seems to have been patchy. Bugs introduced by changes were not picked up by the regression testing. There were clearly problems with the configuration management. Changes sometimes resulted in the re-release of code containing bugs that had already been fixed in the live environment.
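To illustrate the point about regression testing and configuration management, here is a minimal sketch. The function and its bug are entirely hypothetical (nothing here is taken from Horizon); the point is simply that every bug fix should leave behind a pinned regression test, so that if configuration management ever re-releases a build containing the old defect, the test fails immediately instead of the defect reaching live branches.

```python
def reconcile(transactions):
    """Sum branch transactions. Hypothetical example: the original
    (pre-fix) version of this code double-counted reversals."""
    total = 0
    for amount, is_reversal in transactions:
        # Fixed behaviour: a reversal subtracts the amount exactly once.
        total += -amount if is_reversal else amount
    return total

def test_reversal_not_double_counted():
    # Pinned after the fix: a sale of 10 followed by its reversal nets to 0.
    # Re-releasing the old buggy code would make this assertion fail.
    assert reconcile([(10, False), (10, True)]) == 0

test_reversal_not_double_counted()
```

Run as part of every release, a suite of such pinned tests is the cheap safety net that the patchy regression testing described above failed to provide.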
There was a pattern of connectivity problems exposing bugs in Horizon. If the branch connection failed or was disrupted, as happened regularly, this could generate discrepancies because transactions had not been completed properly. This should all be basic technical stuff. I suspect that the testing failed to explore the implications of connection problems and simply tested the functionality against requirements in an artificial environment.
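The failure mode described above can be sketched in a few lines. This is a hypothetical simplification, not Horizon’s design: if the link drops between the local write and the central acknowledgement, and nothing rolls back or retries, the two ledgers silently diverge and the branch shows a discrepancy. A test environment that never simulates a dropped link will never see it.

```python
class ConnectionLost(Exception):
    """Raised when the branch-to-centre link drops mid-transaction."""
    pass

def post_transaction(branch_ledger, central_ledger, amount, link_up):
    branch_ledger.append(amount)   # step 1: record in the branch ledger
    if not link_up:
        raise ConnectionLost()     # link drops before step 2 completes
    central_ledger.append(amount)  # step 2: record in the central ledger

def post_atomically(branch_ledger, central_ledger, amount, link_up):
    """Wrap the two writes so a dropped link cannot leave a discrepancy."""
    try:
        post_transaction(branch_ledger, central_ledger, amount, link_up)
    except ConnectionLost:
        branch_ledger.pop()        # roll back the local write;
        # a real system would queue the transaction for retry rather
        # than silently letting the ledgers diverge.

# A connection-failure test simply drops the link mid-transaction and
# asserts that the two ledgers still agree.
branch, central = [], []
post_atomically(branch, central, 100, link_up=False)
assert sum(branch) == sum(central)
```

Testing against requirements in a clean lab proves the happy path; deliberately injecting link failures like this is what exposes the discrepancy-generating paths.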
I am pushed towards that conclusion by Fujitsu’s apparent commitment to traditional testing techniques (standards-based and document driven), and the repeated insistence by the Post Office when bugs were exposed that the system was actually operating as designed.
Further, the Post Office was brutally dismissive of the needs of the branch users. Horizon was a system that served the central Post Office management, and it did not provide branch managers with the information they needed to manage and control branch accounts. That would have made it more difficult for testers to flag up problems that would affect these users. The likely response would have been “if the client doesn’t care then why should Fujitsu?”
That stance was dismantled by Justice Fraser in the Horizon Issues case. He ruled that Horizon did not work reliably for the subpostmasters, and that the system was therefore defective. So even if management doesn’t care, there can still be painful legal problems further down the line. This raises difficult questions for testers. What do they do if they see the system could harm people, and neither management nor the client care? How should they test the system if an important class of users have been clearly neglected by the client and the writers of the requirements?