During our recent trip to America, one subject dominated every conversation. Every dinner, every drink, every meeting, every off-camera chat: AI.
Some herald it as the glorious end-state of human civilisation; the utopia that all progress has led to, a paradigm shift that will cast a shadow on all that preceded it. All the petty, material problems we’ve faced in our history will be solved, and the true potential of human beings will be realised for the first time. It can’t come soon enough.
To others, it’s as good as the Antichrist. A hyper-intelligent demonic force that will renegotiate our presence on the planet. It’s convenience at a seismic cost. Once it doesn’t need us, or sees us as an obstacle, then what? It’s not here to enhance the human spirit, but to replace it, to render it obsolete. It’s the end of the world and we can see it coming. We are become death. We need to destroy it, while we still can.
When dealing with the most important philosophical question of our time, it’s important to keep a level head. Trouble is, it’s hard to know who to ask. The architects of AI are invested in you believing their hype, and the Luddites are often just as histrionic.
Eoghan was the perfect candidate. He’s an entrepreneur on the bleeding edge of this wild, new frontier - he co-founded Intercom, the world’s premier AI customer service company, in 2011 and has remained there ever since. But he’s also no preacher - he’s spoken openly about the dangers of AI, the costs of pursuing it, and why its architects must tread lightly.
He strikes the ideal balance, and we knew he’d make for a fantastic interview.
What did we learn?
“It’s been coming for a long time. But the AI we’re talking about now is three years old. ChatGPT came along and blew everyone away - it seemingly could talk and think like a human. And that’s new.”
‘AI’ has been part of the lexicon for decades. Until recently, it was consigned to the realm of theory and fiction. Only in the last few years has ‘AI’ as we commonly understand it become a real possibility. So what changed? Why wasn’t it possible before? How does it work?
Eoghan admits he doesn’t know. In fact, almost nobody does.
“It’s mathematics. It’s probabilities. It’s a lot of stuff that even people like me, who apply AI, barely understand. There are very few people who deeply understand it … [And] even they don’t know what’s coming next. The narratives are constantly changing, constantly evolving.”
Eoghan’s skill isn’t in understanding the intimate, hopelessly complex details of the burgeoning technology. His expertise lies in making predictions, putting reins on it. The outer edge of tech is never static, but as we sit here today, what does he foresee?
“Large amounts of work that’s done by humans today will be done by AI. It’ll be knowledge-work, but also physical work. But it’ll also do work that we can’t afford to give to humans today.”
For a slim few Silicon Valley billionaires, this is tremendous news. An employee that can work all night, never takes their holiday, never gets tired, and doesn’t want a salary? Where do we sign up?
For the rest of us, however, it’s cause for alarm: nobody wants to be made redundant. Not from a job, but from ‘work’ itself. This kind of societal shift is nothing new. For centuries, most people worked in the fields. Then the automated plough moved these farm-hands to the factories. Then automation in the factories sent the factory-hands to the office blocks. Soon, the spreadsheet-fingers will be a thing of the past too. Or will they?
“The company I run works in customer service, and we have 7,000 customers [who use our AI]. The vast majority of them are not letting people go; in fact, the majority are supplementing their workforce with AI. The idea that there will be this radical, overnight change whereby 90% of people won’t have jobs… It’s a low probability. The more likely result will be 10, 15 years of gradual change.”
To Eoghan, AI is not something to be feared, but understood. Like all sociocultural revolutions, it won’t be painless, but the scale of the change is often overstated.
“I am not a utopian - there are gonna be challenges - but those who are fearful, and I understand them… I don’t think it’s going to be as bad as they think.”
Much of the anxiety towards AI comes from that ambiguity. Previous technologies rendered certain professions obsolete, but at least they were predictable: the automated plough was never going to start writing poetry. In the case of supercomputers, it feels like we have no idea. How can we prepare if we can’t say which jobs will still be afforded to human beings?