A sneaking fear that the machines might turn on us is just not good enough - we need to be able to quantify that risk if we want to avoid it, or at least manage it. Or we could just push on regardless and see how things work out.
Whatever your take, we were thrilled to have Dr Adrian Currie of Cambridge’s Centre for the Study of Existential Risk join us for a Register Lecture on April 25 to discuss How Can We Develop a Science of Existential Risk?
As Adrian puts it, existential risks are threats to the very existence of the human species. Old-school ones, such as meteor strikes, massive volcanic eruptions and climate change, leave traces for us to study. Others are much trickier to track, such as the technological developments that have enabled our species to have unprecedented effects on a global level.
So, to understand how we can reap the benefits of AI, automation, synthetic biology and advanced gene-editing techniques, and so on, without, well, imperilling our very existence, we need to find a way of understanding, communicating and minimizing those risks.
Adrian argued that a science of existential risk must be speculative and creative - and a lot more besides. It's incredible how much ground you can cover without leaving the Reg lecture room.
You can see all our upcoming lectures, and videos of our previous lectures, right here.