Could AI be the Great Filter? What Astrobiology can Teach the Intelligence Community about Anthropogenic Risks
Where is everybody? This phrase distills the foreboding of what has come to be known as the Fermi Paradox — the disquieting question of why, if extraterrestrial life is probable in the Universe, we have not encountered it.
This conundrum has puzzled scholars for decades, and many hypotheses offering both naturalistic and sociological explanations have been proposed. One intriguing hypothesis is known as the Great Filter, which suggests that some event required for the emergence of intelligent life is extremely unlikely, hence the cosmic silence. A logically equivalent version of this hypothesis — and one that should give us pause — suggests that some catastrophic event is likely to occur that prevents life's expansion throughout the cosmos.
This could be a naturally occurring event, or more disconcertingly, something that intelligent beings do to themselves that leads to their own extinction. From an intelligence perspective, framing global catastrophic risk (particularly risks of anthropogenic origin) within the context of the Great Filter can provide insight into the long-term futures of technologies that we don’t fully understand, like artificial intelligence.
For the intelligence professional concerned with global catastrophic risk, this has significant implications for how these risks ought to be prioritized.
Mark M. Bailey
Comments: 19 pages, 2 figures
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Physics and Society (physics.soc-ph)
Cite as: arXiv:2305.05653 [cs.CY] (or arXiv:2305.05653v1 [cs.CY] for this version)
https://doi.org/10.48550/arXiv.2305.05653
Submission history
From: Mark Bailey
[v1] Tue, 9 May 2023 17:50:02 UTC (420 KB)
https://arxiv.org/abs/2305.05653
Keywords: Astrobiology, SETI