- Opinion
- 16 May 26
Welcome to the AI Apocalypse – brought to us by largely uncontrolled, mostly US-owned tech corporations
On several levels, and in so many ways, the world is teetering on the brink of disaster. Can the situation be turned around in time to avoid the devastation that would follow on from peak apocalypse? The Hog here reflects on what Bob Dylan characterised as a World Gone Wrong.
What is the apocalypse tolerance of the average person? How many are too many? And are we at peak apocalypse now?
We’re certainly in the middle of a shitstorm – a phrase apparently coined in 1948 by Norman Mailer in his best-selling debut novel about the Second World War, The Naked And The Dead.
Well, we’re back in that kind of territory.
For a start, there’s the climate change apocalypse, about which so many people in power are now in denial. How close are we to a catastrophic breakdown? No one knows for sure. But it could be very close.
In parallel, as war rages in the Gulf, some senior American political and military figures – and probably others in Israel – are talking about it as the great final battle between good and evil: the Armageddon Apocalypse foretold in religious texts. The most prominent example is the increasingly unhinged pronouncements of the self-styled “Secretary of War”, Pete Hegseth, an avowed Christian nationalist around whom accusations of war crimes are mounting.
And now, on top of all that madness, arriving at warp speed, we have the AI Apocalypse, brought to us by largely uncontrolled, mostly US-owned tech corporations.
But, I hear you say, we haven’t yet really dealt with the personal, social, economic and cultural impact of social media and, in particular, the fetid slime that has been sluiced by it into private and political discourse and media coverage.
This is true, of course. But tech waits for nobody, especially if there’s vast amounts of money to be made.
That it has already disrupted a whole galaxy of assumptions, arrangements and agreements may be no bad thing.
The problem isn’t change itself – in many ways, we could do with more of that. It’s the nature of the change, and the impact it is having socially and politically, that’s deeply disturbing.
CONCERNS ABOUT PLAGIARISM
Global Tech is the ultimate expression of pure capitalism.
Look at its vast scale; the pandering to, and exploitation of, humanity’s dark side; the stealthy capture – for power and for profit – of incredible amounts of personal and institutional information and data; and the unimaginable wealth a small number of individuals have been able to extract from what, in some respects, amounts to brazen theft.
It would be silly to deny that there is an upside. Digital communications have ushered in an amazing era of interpersonal, social and transnational connectedness, the speed and depth of which is incredible.
We’ve all benefitted. Just think of the thousands of WhatsApp groups through which countless lives and careers are linked and organised. Think too of researchers, now able to communicate around the globe in a way their predecessors wouldn’t even have dreamed might be possible. Or of the ease with which family members in Dublin and Sydney can see one another and talk at any time of the day or night.
But instead of this being achieved via a series of inter-linked public utilities, it is being controlled by people whose only ultimate concern is making even more money and who, according to the evidence of whistle-blowers, have been going about that ambition in a totally unscrupulous way. We have all seen what happened with the privatisation of water in the UK: under-investment, widespread pollution from sewage, systems collapsing, red flags on beaches, water running out – and worse to come as creaking underground pipes give up the ghost.
In the same spirit, social media companies and platforms promote and exploit humanity’s worst, hidden, propensities – for viciousness and abuse, racism and neo-Nazism, cybercrime and surveillance, entitlement and echo chambers, revenge porn, hacking, stalking and deepfakes. It’s a long list – and it is growing, as bad actors are facilitated in attacking the democratic process and stirring up animosities and grievances, knowing the trigger words and obsessions that successfully play on the psychological and emotional vulnerabilities of individuals and groups...
And now? Some seers forecast that AI will finish us off, bright and dark sides, and all in-between. They stress the potentially disastrous downside, for example, of the way AI is beginning to play out in schools, academia and publishing.

You can, of course, argue that AI carries on doing what’s long been done, only very much faster. And there is some truth in that. Indeed, in some fields, scientific research being one, the technology’s capacity to find, process, synthesise and analyse huge data sets has led to fascinating discoveries, in particular in genetics and medicine.
But, even as this happens, many academics and researchers have been calling for a pause, raising legitimate concerns about plagiarism, false research, cheating and a general dumbing down they associate with increased student use of AI.
FABRICATE AND DISSEMINATE
That is only the tip of the iceberg. Concerns have arisen too about the use of AI chatbots, in particular in counselling. Early research seems to bear those concerns out. Wired recently reported on a study by researchers at Carnegie Mellon, MIT, Oxford and UCLA, which found that using AI chatbots for as little as 10 minutes may have a significant negative impact on people’s ability to think and problem-solve.
Research into the inner workings of the brain confirms that this process is a cumulative one. The act of figuring out complex problems, or even of using our memories to recall facts and information we learned along the way, actually strengthens our neural pathways – just as going to the gym and exercising our muscles strengthens our bodies – so that we become more mentally agile and capable.
These studies suggest that the opposite is the case with AI – and that the hoped-for productivity gains from widespread deployment of AI are likely to come at the expense of our individual and collective ability to develop foundational problem-solving skills.
And, of course, in the absence of robust controls and standards and sanctions, clearly toxic products of AI – like deepfake videos and photos – are even harder to distinguish from genuine ones.
They’re everywhere. And if they are allowed to proliferate, then learning how to tell the real from the fake will surely become a core skill taught from primary school onwards. If, that is, it is even possible.
Two political examples come to mind. The first is a fake video of Tánaiste Simon Harris purportedly advertising fraudulent online investment products.
The second is a set of eight photos of Iranian women posted by Donald Trump on the so-called Truth Social platform, which he owns. As observers quickly noted, they had a look of AI enhancement about them. The accompanying facts Trump gave were wrong as well. Indeed, the very name Truth Social is an example of the kind of misdirection currently being pursued as a core matter of policy by extremist right-wing ideologues, governments and autocrats. You are going to level cities, kill hundreds or even thousands by bombing them? Call it “Project Freedom”.
AI makes it so much easier to fabricate and disseminate this shit. Truth is increasingly elusive. Especially when so many people in positions of power couldn’t care less whether something is true or not, or worse still are actively engaged in spreading lies and disinformation. Russia sees cyber ops, hacking, disinformation and supporting extremist organisations in democratic countries as part of its hybrid warfare model.
And yet, for all the speed and computing power, there are weaknesses in the Large Language Models on which AI systems are built.
Tom Griffiths, professor of information technology at Princeton University, gave this fascinating example in an essay in The Guardian.
OpenAI’s GPT-4 model was asked a question: how many letters are in this sequence: aaaaaaaaaaaaaaaaaaaaaaaaaaaaa?
The GPT-4 model was more likely to answer correctly when given 30 letters rather than 29. Why? Griffiths explains that it’s because the number 30 appears far more often than the number 29 in the text the model was trained on.
Then there’s language itself. Despite efforts to soak up as many languages as possible, the AI systems we encounter have an Anglophone bias, and an American one at that. Up on Hog Hill we foresee the possibilities of using Gaeilge to annoy and confuse AI. Just for the craic, like.
SOCIALLY DIVISIVE ALGORITHMS
AI is also certain to have a huge impact on employment. Taoiseach Micheál Martin commented recently that there’ll be serious disruption. That may well be putting it mildly.
Meta (which owns Facebook and Instagram) is already cutting 10% of its global workforce to trim costs as the company invests in AI. Oracle is to lay off 15% of its Irish workforce. Globally, by next December, Amazon will have cut 30,000 employees, over a period of just two years. Microsoft is cutting too.
But it’s not just the big companies that are slashing employment levels. Tech people agree that, as major AI instruments evolve (rapidly), a lot of jobs will disappear. Anthropic’s Dario Amodei estimates that half of entry-level clerical jobs will go.
This is very often presented as a good thing. And many people bought into the notion that it would be a boon in terms of reducing the “working week” and enabling work from home. But big corporations don’t care about any of that. More people on the dole – if the country is egalitarian enough to have a social welfare system in place at all, that is – is someone else’s problem.
The prospects are unnerving, for Ireland’s jobs market and Government tax take, and also for education, housing, entertainment and personal expenditure.
But is it an apocalypse or an opportunity? That’s worth thinking about.
After all, similar apocalyptic fears were expressed when computers were introduced. And production-line robots. Change isn’t necessarily for the worse.
And sometimes you come across the unexpected. For example, in the course of his acrimonious court case against Sam Altman of OpenAI, Elon Musk said he started the legal action “to prevent a Terminator outcome.”
Can we believe a word that Musk says? That’s a good question. But we can hope that, by bringing information about the race to control AI into the open, the case will have given attentive, properly democratic governments a basis for radical action. That action is increasingly urgent in dealing with the abuse of human rights, anti-democratic chicanery, the promotion of autocrats and dictators – and so on – of which Big Tech has too often been guilty, especially in recent times.
On all of that there is an interesting twist. To see if AI will deepen the divisiveness already fostered by social media, John Burn-Murdoch of the Financial Times conducted a fascinating experiment. He analysed a large 2025 dataset on policy preferences and socio-political beliefs to investigate whether the most widely used AI chatbots shape conversations about politics and society and, if so, how.
The results strongly suggest that, unlike social media’s deliberately socially divisive algorithms, all the AI platforms tend to nudge people away from the most extreme positions towards more moderate and expert-aligned stances.
There are, of course, inbuilt prejudices, depending on who designed the particular LLMs. We all know that there are issues of racial bias – and gender bias – that have yet to be addressed. But even Elon Musk’s Grok – Musk’s support for fascist parties in Germany and the UK notwithstanding – guided conversations about policy and society towards the centre-right. ChatGPT, Gemini and DeepSeek all nudged towards the centre-left.

So, maybe artificial intelligence is smarter than many might have assumed. It’s also, almost certainly, far smarter than the phenomenon we might simply describe as “natural stupidity.”
In time we may be grateful for such small mercies. But for now, it is crucial for governments committed to democracy, social justice, the rule of law, and the international rules-based order to take the necessary steps to break the global tech monopolies – and put the safety and well-being of citizens young and old at the heart of conversations about our future.
• Niall Stokes will return next issue with The Message.