This is an excerpt from my upcoming book, Humans vs Computers, about wrong assumptions, computer bugs and the humans caught in between.

Kevin Grifka, an electrician based in Chelsea, Michigan, lost his job in 2014. This line of work, especially since the collapse of manufacturing in Michigan, often meant long periods between engagements. Grifka applied for unemployment benefits, as he had many times before, and found a new job three months later. During the holiday season in December 2014, he received a letter from the Unemployment Insurance Agency. Instead of a Christmas card congratulating Kevin on staying employed, it was a notice informing him that the UIA computers considered him a criminal. Kevin was ordered to repay $12,000 in benefits and penalty fees. The great state of Michigan immediately seized roughly $8,500 owed to him through federal income tax returns, garnished part of his salary, and left Grifka on the verge of bankruptcy, left to deal with robotic menu choices in call-centre waiting lines. All this was caused by stupid assumptions hard-coded into a stupid software system, and it painfully illustrates the dangers of automating the wrong things.

Kevin was just one of the many unemployed Michiganians who received similar season’s greetings from the brand new Michigan Integrated Data Automated System, or MiDAS, which launched in October 2013. I’m sure that in the future scientists will find a direct link between pompous software system names and the damage such systems cause, and MiDAS will be right at the top of that list. MiDAS was supposed to automate workflows, improve customer service and help combat fraud. It cost more than $40 million to build, and the original projection was that it would save the state about $2 million a year. With such a long payback period expected, all the stakeholders must have been pleasantly surprised when the system paid for itself in just six months. During its first eight months of operation, MiDAS collected over $63 million in overpayments and fees, 230% more than the state had recovered during the whole of 2010. Unfortunately, a lot of that money came from vulnerable people who were fully entitled to it, leaving a trail of devastation behind.

For Grifka, it took ‘five months of pretty crazy agony’ to finally get things straight. Some people were not that lucky. The state was entitled to fees of 400% of the overpaid amount, and could recover that money from future salaries or tax credits. MiDAS looked back over the previous six years, so the requested repayment amounts often exceeded anything the recipients could realistically repay, especially those who frequently switched between jobs and unemployment. Karl Williams from Lansing was hit with a $9,600 overpayment claim, which he insists was an error. The Unemployment Insurance Agency was taking 25% of his salary until he paid everything off, penalties included. By January 2017, he was still paying for other people’s mistakes. By then, the state had deducted more than $45,000 from his salaries, and was still looking to ‘recover’ another $17,000. The Detroit Metro Times even reported that one woman took her own life after receiving a fraud penalty of $50,000, and that several others attempted suicide. Some people were slapped with fraud notices and repayment requests even though they had never received unemployment benefits.

Unfortunately, amid today’s obsession with automation, people lose sight of the fact that automation makes things faster, not better. Speeding up a beneficial process delivers value sooner. But automating a bad process only makes it spiral out of control, without any chance of oversight.

Between October 2013 and August 2015, MiDAS initiated roughly 50,000 fraud findings. The money saved by automating workflows was likely lost in the additional bureaucracy required to deal with all the court cases. A year and a half after the system went live, the TV station FOX 17 reported how the MiDAS touch had overwhelmed the courts, with a backlog of nearly 30,000 cases waiting to even be heard by a judge. In 2015, amid the media fallout and several lawsuits against the UIA, the agency staff manually reviewed seven thousand cases flagged by the computers. It turned out that only 8% of those were actually fraudulent. With a 92% false positive rate, this system should never have gone live, let alone been allowed to endanger the lives of 50,000 families.

Anthony Paris, a lawyer working for the Sugar Law Center in Detroit, finally pieced together what had actually happened to Grifka. Like many other complex government systems, MiDAS consisted of parts delivered by different contractors, and the various subsystems used different aggregation periods. The fraud detection system inspected individual weeks, checking whether anyone had earned money while receiving benefits. The earnings reporting system aggregated income by quarters. In Kevin Grifka’s case, the three months he was unemployed did not align perfectly with calendar quarters. The computers took the money he earned during each quarter, divided it equally across all the weeks of that quarter, and concluded that he had in fact been employed the whole time. Bugs like that happen, and that’s a fact of life. However, in an epic show of arrogance and stupidity, instead of just flagging such cases for review, MiDAS acted as judge, jury and executioner, and automatically sent out penalty notices.
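To make that mismatch concrete, here is a minimal sketch in Python. The numbers, the thirteen-week quarter and the flagging rule are all invented for illustration; this is not the actual MiDAS logic, just the shape of the bug: a claimant works at the start and end of a quarter, draws benefits in between, and a week-by-week fraud check that only sees a quarterly total averages that total across every week and flags all the benefit weeks.

```python
# A minimal sketch of the aggregation mismatch, not the actual MiDAS code.
# All figures and rules here are invented for illustration.

WEEKS_PER_QUARTER = 13

# Hypothetical quarter: employed in weeks 1-4 and 11-13, unemployed in between.
weekly_earnings = [900] * 4 + [0] * 6 + [900] * 3   # what actually happened
weekly_benefits = [0] * 4 + [360] * 6 + [0] * 3     # benefits paid only while unemployed

# The earnings subsystem only reports a quarterly total...
quarterly_total = sum(weekly_earnings)               # 6300

# ...but the fraud check works week by week, so it spreads the total evenly.
assumed_weekly_income = quarterly_total / WEEKS_PER_QUARTER   # ~484.62 for every week

# Flag any week where benefits were paid while the (assumed) income was non-zero.
flagged_weeks = [
    week
    for week, benefit in enumerate(weekly_benefits, start=1)
    if benefit > 0 and assumed_weekly_income > 0
]

print(f"Actual income during benefit weeks: "
      f"{[weekly_earnings[w - 1] for w in flagged_weeks]}")   # all zeros
print(f"Income assumed by the fraud check:  {assumed_weekly_income:.2f} per week")
print(f"Weeks flagged as fraud:             {flagged_weeks}")
```

Run against these made-up figures, the check flags weeks 5 to 10 as fraud, even though the actual income in every one of those weeks was zero.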

Because it focused too much on replacing humans instead of assisting them, MiDAS actually made it more difficult for clerks to access the data they needed for a review. One case worker apparently had to spend 13 hours collecting all the various pieces of data needed to inspect a single case. When the courts started sending cases back to the agency, this caused even bigger chaos.

At the time I wrote this, in early 2017, the story was far from over. One federal class-action lawsuit against the agency had been settled, but another was still going through the courts. A third lawsuit was filed against Fast Enterprises, the software company that sold MiDAS to the state. The agency went back to manual case checking.

Finding hidden patterns in large data sets is one of the key advantages of our digital assistants, so it’s only logical that computers can detect fraud better than humans. However, if you ever find yourself working on something that can easily destroy people’s lives, especially the lives of a vulnerable group such as the unemployed, make sure you automate the right parts of the work. MiDAS went too far in trying to replace human reasoning with hard-coded rules, which were too rigid to handle the real world, and it ended up being wrong 92% of the time. Imagine for a moment that it had been designed to assist humans instead of replacing them. Instead of mangling the data so badly that it took two working days to review a single case, it could have collected the data and presented it in a way that let someone inspect it in minutes. Instead of automatically blocking money and sending fraud notices, it could have simply flagged potential fraud for people to review.
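For contrast, here is a sketch of what that ‘assist, don’t replace’ shape could look like, again in Python and again with invented names and data (the ReviewCase record and flag_for_review helper are mine, not anything from MiDAS): the automated part stops at gathering the evidence and flagging the case, and the decision field is only ever filled in by a human.

```python
# A sketch of the "assist, don't replace" alternative: the automation gathers
# the relevant data and flags a case, but a human decides what happens next.
# Names and structures are invented for illustration.

from dataclasses import dataclass


@dataclass
class ReviewCase:
    claimant: str
    flagged_weeks: list[int]
    reported_quarterly_income: float
    benefits_paid: float
    notes: str = "Quarterly earnings overlap with benefit weeks; needs human review."
    decision: str | None = None   # filled in by a case worker, never by the system


def flag_for_review(claimant, flagged_weeks, quarterly_income, benefits_paid):
    """Collect everything a clerk needs in one place instead of issuing a penalty."""
    return ReviewCase(
        claimant=claimant,
        flagged_weeks=flagged_weeks,
        reported_quarterly_income=quarterly_income,
        benefits_paid=benefits_paid,
    )


case = flag_for_review("K. Grifka", flagged_weeks=[5, 6, 7, 8, 9, 10],
                       quarterly_income=6300.0, benefits_paid=2160.0)
print(case)

# A clerk reviews the collected data and records the outcome; only after that
# human decision would any repayment notice go out.
case.decision = "No fraud: claimant was genuinely unemployed during the flagged weeks."
```

The important design choice is simply where the automation stops: it does the tedious collection and correlation of data, and a person makes the call that can ruin someone’s life.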

This horrible lesson should serve as a warning to anyone working on business process automation, especially with all the current focus on machine learning and artificial intelligence. All around the world, computers are replacing human interaction at an amazing pace. Often there is no recourse against rigid rules built on wrong assumptions, no way to reach a higher intelligence on the other end of the line and explain. If you’re going to automate a process so that it works without human oversight, you’d better be damn sure that the thing you automated isn’t causing chaos in the background. Alternatively, instead of replacing humans, automate the difficult and time-consuming parts of the work and help people make better decisions. Then, only after enough time has passed and you’re sure the process actually works well, think about taking humans out of the equation.