- Opinion
- 26 Jul 25
UnitedHealthcare: "A 90% error rate is not an error rate. It's a feature."
There’s a rumour that the bullets that killed high-flying health insurance CEO Brian Thompson had three words inscribed on them: deny, delay and depose. It is, after all, what insurance companies do to maximise profits. Now they’re adding the blatant discrimination engineered by AI to the mix, insisting: the computer is always right…
Twenty minutes before dawn, on Wednesday, December 4, 2024, twenty-six-year-old Luigi Nicholas Mangione allegedly shot dead fifty-year-old Brian Thompson, CEO of UnitedHealthcare, on a cold New York street.
Thompson was considered a star executive, having boosted profits at the health insurance giant by billions. According to Andrew Witty, CEO of the overall UnitedHealth Group, he was a “truly extraordinary person who touched the lives of countless people throughout our organization and far beyond.”
Not many people would dispute that, though some would not mean it as a compliment.
On Wednesday, May 7, 2025, a group of investors – or shareholders – sued UnitedHealthcare. Their beef: that the public backlash following the murder of Brian Thompson prevented the company from pursuing “the aggressive, anti-consumer tactics” that it would need to achieve its earnings goals, Matt Lavietes reported, for NBC News.
Some of the bullets used in the killing of Brian Thompson were rumoured to have the words “deny,” “delay” and “depose” inscribed on them. The group of investors were essentially claiming that, because Mangione had become a folk hero to many, UnitedHealthcare had decided not to pursue their usual denying, delaying and deposing when it came to resisting or long-fingering health claims as a matter of routine business practice.

KKK LIES
Flashback: two years earlier, on Tuesday, November 14, 2023, another lawsuit was filed in the United States District Court for the District of Minnesota, where the UnitedHealth Group HQ is located.
Beth Mole described the contents of the lawsuit for Ars Technica:
"UnitedHealthcare, the largest health insurance company in the US, is allegedly using a deeply flawed AI algorithm to override doctors’ judgments and wrongfully deny critical health coverage to elderly patients. This has resulted in patients being kicked out of rehabilitation programs and care facilities far too early, forcing them to drain their life savings to obtain needed care that should be covered under their government-funded Medicare Advantage Plan.”
The lawsuit argued that UnitedHealthcare’s AI system had a 90% error rate, meaning that nine out of ten of its denials were – or would be – reversed on appeal.
A 90% error rate is not an error rate. It’s a feature. It’s part of the design. In the case of a health insurance provider – or indeed any insurance company – it amounts to a form of theft.
This sort of ‘error rate’ is what AI excels at. It is a wonderful tool for denying and delaying healthcare, and the associated costs – and that is why the health insurance industry loves it. If this super-advanced AI system of ours denies you, then denied you are. Nothing we can do about it, I’m afraid. The computer is always right.
So let’s look under the hood.
It doesn’t take much of a mechanic to spot that healthcare AI has been specifically designed to target poor people, minorities and women. Again and again, AI has been found to be reinforcing “harmful, race-based medicine.” All the models that were tested displayed “examples of perpetuating race-based medicine,” wrote Jesutofunmi A. Omiye and fellow researchers, in an analysis published in the Nature journal npj Digital Medicine.
And those AI doctors? Spewing out the old medical KKK lies about how Black men have 10-15% lower lung capacity than white men.
“Analysis of social media language, using AI models, predicts depression severity for white Americans, but not Black Americans,” the US National Institutes of Health found.
ARE YOU REALLY DEAD?
Data is political. What you collect. What you don’t collect. How you interpret it. Data is life and death.
“Women who have a heart attack in the UK are 50 per cent more likely to be misdiagnosed than men,” author Caroline Criado Perez wrote. “They’re also more likely to die. And it’s basically because the vast majority of medical data we have collected historically and continue to collect today, including in cardiovascular research, has been in the male body.”
A report by Dr. Anne-Sophie Morand, an attorney-at-law, said that “a lack of physiological indicators of heart attacks in women led to AI systems being 50% more likely to misdiagnose heart attacks in women compared to men.”
Such behaviour is, of course, merely replicating what male doctors have done forever. Research has found that AI “could transform” such misdiagnosis for the better. That might happen for rich, white women, maybe. However, if AI starts doing proper diagnoses that result in higher costs, AI will be quickly sent back for retraining.
AI must cut healthcare costs. Otherwise, how are senior managers going to get their bonuses and investors the enormous returns they crave? In case you didn’t know, there are huge costs involved in running AI systems, so AI must cut costs elsewhere to justify its existence – and the best way to cut costs, as always with capitalism red in tooth and claw, is to target poor, marginalised and helpless people, including older people.
The World Health Organization has warned about AI ageism and how it could seriously impact elderly healthcare. Does anyone in power care? An editorial in Nature magazine stated that, “Debates on these issues are being starved of oxygen.”
At the other end of the age scale, an AI system had an 83% error rate in diagnosing children. You don’t get 90% and 83% ‘error’ rates by accident. It’s all part of the design.
“AI says that you’re dead,” one poor Indian citizen recounted, according to a report by Kumar Sambhav, Tapasya and Divij Joshi, published by Al Jazeera. “Who are you to say that you’re alive?”
In India, AI marked poor people as dead so as to deny them their pensions. It also denied people food support and other benefits.
This isn’t a problem that happens only in poorer countries. In the UK, the government – for a while – was forced to suspend AI’s ability to automatically deny people benefits. And then there was the Post Office scandal in the UK, caused by a mere piece of accounting and administration software.
ENSHRINING DISCRIMINATION
In case you hadn’t heard, in perhaps the greatest miscarriage of justice in UK history – and as we know there have been a few – thousands of sub-postmasters were wrongly blamed for financial shortfalls that were in fact generated by faults in the UK Post Office’s Horizon computer system, developed by Fujitsu.
Approximately 1,000 sub-postmasters were wrongly convicted as a result of being accused of theft by the computer. Their lives, and those of their families, were utterly destroyed. In a scandal that has dragged on for 16 years (and is still ongoing for many), 236 of those sub-postmasters were imprisoned. Thirteen people died by suicide as a result. Over 7,300 sub-postmasters had to be compensated, at a cost of more than £1 billion in compensation alone. But no money could ever compensate people for what happened to them. One post office worker described the consequences:
“The impact on me of the treatment the Post Office subjected me to has been immeasurable. The mental stress was so great for me that I had a mental breakdown and turned to alcohol as I sank further into depression. I attempted suicide on several occasions and was admitted to a mental health institution twice.”
All because senior management in the UK Post Office believed the crazed mantra: the computer is always right. Every shred of common sense screamed that it could not be true, that thousands of sub-postmasters had turned into thieves and criminals overnight.
But they persisted: the computer is always right. And then they started to cover up, when the truth they should have seen years earlier was too blindingly obvious to deny any longer.
This is the world that we’re entrenching even more deeply with AI, which will always side with the people who design and control it – that is, with those in power.
Under the guise of making everything automated – thereby allegedly doing away with so-called human error, and eliminating bogus claims, fraud and waste – insurance companies are systematically stealing from their customers.
They are enshrining discrimination and prejudice.
They are behaving as judge and jury. They are robbing from the sick, the elderly and those with disabilities.
Unless we turn the tables, we are all part of a cruel world designed to deny, delay and depose.
The way the investors like it. I wonder why?