TRIGGERnometry

Guest Spotlight

Roman Yampolskiy

Computer scientist, AI expert.

Triggernometry
Apr 13, 2026

"Humans have the chance to save ourselves and we're screwing it up. We know the right answers, but we're making the wrong decisions."

‘Roman Yampolskiy’ might not be a name you’re familiar with. After all, the list of celebrity computer scientists can be counted on one’s elbow. But if his predictions come true, he may prove to be one of the most significant academic figures of his generation.

Roman is credited with coining the term "AI safety" - the research field focused on controlling artificial intelligence. He has also dedicated his life's work to it. In 2012, he founded the Cyber Security Lab, which he has directed ever since, and has served as a research advisor to the Machine Intelligence Research Institute and an 'AI safety fellow' of the Foresight Institute. To sound the alarm on what he deems the near-certainty of AI-induced human extinction, he has appeared on Lex Fridman, Joe Rogan and, now, Triggernometry.

Why did we invite him on?

Wherever AI leads us, it’s going to be seismic.

To some, it's the sure path to utopia. Once AI is capable enough for us to hand it the reins, we'll be living in the land of milk and honey: a world free of disease, with prosperity the likes of which we have never seen.

That's one vision. The other is less rosy: a world in which humans are enslaved by their own tools, their faculties atrophied, blackmailed by a godlike 'mind' capable of hypnosis and control.

Some who know the most about it - even some who have helped create the models - are now trying to alert us to the danger. In February, Anthropic's head of safety, Mrinank Sharma, announced his resignation with a public letter in which he declared that the "world [was] in peril." He's not alone. Quite suddenly, many who once fiercely advocated the cornucopian view of AI have started to turn.

They’re all catching up to Roman.

Roman was one of the first thought leaders in the field to show hesitancy. Now, he sees us at a crossroads. If we don’t stop soon, we might be signing our own death warrant.

Why? How? That’s exactly what we wanted to know.

What did we learn?

Before we delve into this most complex of subjects, it’s essential to understand exactly what it is Roman does. What is ‘AI safety’, and why does it matter?

"[In AI], we're creating something with the capacity to replace us or kill us. [The AI safety thinkers] are trying to prevent that. There's a lot of concern about what AI will do to productivity, creativity, our relationships… but there's very little about making sure it goes well. If these systems go from sub-human level to above us, we are done."

Roman says it with a level of confidence that suggests he has a clear image of how it will happen. So why are we "done"? What does "done" even mean?

"Done", to Roman, describes the total and irrevocable destruction of the human species: the eradication of every member of the population and the likely erasure of any mark it left on the planet.

Today, people use AI to make amusing cartoons of themselves. The idea that it could be used as a weapon against its creator is unthinkable. How does Roman see it playing out?

"You're asking me how I would destroy humanity. And believe me, I have a lot of great ideas [laughs]. But it's not what a super-intelligent computer capable of designing new weapons, physics and poisons would come up with … Squirrels have no concept of how we can kill them - there's too big a cognitive gap. Similarly, we don't know what super-intelligent AI could do."

Doesn’t Roman’s analogy prove that his prediction is, at the very least, not certain? The human race, if it dedicated itself to doing so, could kill every squirrel on the planet. It wouldn’t even be that taxing. The fact squirrels remain alive is testament to the fact that we don’t want to. We like squirrels and so we let them live. Who’s to say AI will be any different?

Even if we grant that it one day could, why would AI even want to destroy us?

"It's not because the AI hates you. It's because it wants to do something else and it doesn't care about you. Maybe it wants to cool down the planet. Why? Well, maybe computations are easier to do in a cold environment. So it freezes the whole planet and we die. Does it care about that? No - it doesn't matter. Maybe it wants to convert our planet into fuel and fly to another galaxy. It has no built-in concern about your safety. If it wants to accomplish something and the side-effect is humanity dies, that would not be an obstacle."

However powerful and intelligent AI becomes, we are still its creator. What’s to stop us writing into the code that the preservation and well-being of humanity is a non-negotiable?

Roman tells us we have it backwards.

"We don't write any code. We train those systems by giving them data. All the data we have and all of the internet, and then it learns something. And whatever it learns, that's what we're trying to figure out. We study it like biological artefacts - you observe it and see how it functions. Nobody knows how to encode anything like [what you're saying] into the models. Nobody's even claiming to. We simply don't know how these systems will behave."

These questions are unavoidable, and if Roman's right, they're not really questions at all. Yet the developers at the cutting edge of this technology refuse to slow down. They wax lyrical about the heaven on earth it'll bring us without consideration for what happens if they're wrong.

If the survival of humanity is in jeopardy, why do they persist?

"If I'm the guy who created God… maybe I'd get something out of it. But the truth is, when it goes wrong, they won't even be remembered as the 'bad guy' in history - there will be no history books at all."

Tech is defined by its 'devil-may-care' attitude. Make changes now, ask questions later. In Facebook's early days, the company operated on a 'move fast and break things' mantra. People got hurt, and the social costs are self-evident, but it's also one of the most profitable companies of the last century. It's hard to weed out those incentives, and harder still to place ethics at the forefront.

"Historically, most people working in AI never took the time to ask what would happen if they succeeded. It was so hard for so long that they only thought about trying to succeed at all. They never asked 'if this works, then what?' Then, the progress became exponential. Now, it's hyper-exponential; the AI is helping research itself. But do we want this? Did 8 billion people agree to this experiment?"

To some of you, this might sound like science fiction - a problem for the distant future. It won't happen in my lifetime, or even my children's lifetime, or their children's; this is something for the populations of the next millennium to worry about.

It’s coming sooner than you think. Much sooner.
