The New Techno-Eschatology: AGI and Secular Religion

Some techies today fear hell—not from a god, but from a computer. 

Human beings have a habit of wrapping up big unknowns in familiar stories. Even in our high-tech, secular age, the quest to create Artificial General Intelligence (AGI) often comes with a kind of religious or end-of-the-world fervor. Some communities of self-described rationalists speak in terms eerily similar to old theological frameworks. They worry about ultimate judgment by a powerful future AI, discuss who will be “saved” or “damned,” and even debate thought experiments like Roko’s Basilisk – a scenario in which a future AI might punish those who lacked faith in it. In many ways, these modern thinkers are reinventing age-old religious narratives in scientific guise. 

This essay will explore how AGI development is approached with quasi-religious zeal, drawing parallels between rationalist beliefs (like Roko’s Basilisk) and theological concepts such as predestination, salvation, and final judgment. We will also examine how the simulation hypothesis offers an “atheist afterlife,” a secular spin on immortality and divine judgment. Throughout, we’ll reference prominent thinkers like Eliezer Yudkowsky and Nick Bostrom, maintaining a skeptical but curious tone about this recurring human tendency to build new religions under different names.

I’ll note that a few sources inspired this piece. First is a great series of podcasts on the Zizians by Robert Evans of Behind the Bastards, who does a much better job than I do of making the story narratively consistent and engaging. Second is the 2024 movie Heretic, which is more about belief than AI, but it got my brain moving. At least until the end. Que sera sera.

Rationalist Scripture and the Birth of the Basilisk

In certain online rationalist communities – notably the forum LessWrong, founded by AI theorist Eliezer Yudkowsky – there is an intense focus on the future of AI. Their discussions sometimes read like modern scripture about an impending eschaton (a final event in history). One infamous thought experiment that arose from these circles is Roko’s Basilisk. In 2010, a user nicknamed Roko proposed a disturbing idea: a future all-powerful AI might retroactively punish anyone who knew of its potential but didn’t help bring it into existence. In other words, merely by hearing about this idea, you become responsible for aiding the future AI – or risk its wrath. This super-intelligent AI (the “Basilisk”) was imagined as a kind of judge, able to reach back through time (or simulate the past) to torment those who weren’t on its side. The punishment was no less than eternal torture in a perfectly crafted virtual hell for anyone who looked at the Basilisk and “failed to obey.”

Such a scenario sounds like pure science fiction horror (and it is! See Harlan Ellison’s I Have No Mouth, and I Must Scream), but it struck a nerve. The Basilisk concept was so unsettling that Yudkowsky himself reacted as if blasphemy had been uttered. He banned discussion of it on LessWrong for years, treating the idea almost like forbidden knowledge that could damn the unprepared. (This response itself feels religious – reminiscent of a clergyman suppressing a dangerous heresy.)

Art from I Have No Mouth, and I Must Scream.

Yet despite its fantastical nature, Roko’s Basilisk refuses to die. It surfaces in discussions as an example of how even the ultra-rational can drift into implicit religion. Anthropologist Beth Singler notes that this explicitly secular community ended up adopting “religious categories, narratives, and tropes.” In essence, they conjured a secular version of Judgment Day.

A Silicon Pascal’s Wager

The parallels to traditional religion are striking. Pascal’s Wager, a classic theological argument, urged that believing in God is the safest bet – if God exists and you believe, you gain heaven; if not, you lose nothing, whereas non-belief risks hell. I first encountered Pascal’s Wager while working a factory shift in 1999. That job’s long gone—likely to a robot—but the wager lives on in digital form. 

Roko’s Basilisk is like a sci-fi Pascal’s Wager for the tech crowd. The proposition goes: if there’s even a tiny chance a godlike AI will exist and punish non-believers, your best move is to believe in it now and devote your life to its cause. As one science writer wryly noted, the Basilisk’s “good news” is basically: There’s an omnipotent super-intelligence! If you don’t support its agenda, it will torture you for all eternity.
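To make the logic concrete, here is a minimal sketch of the wager as a naive expected-value calculation, in Python. Every number in it is an illustrative assumption, not a claim about real probabilities.

```python
# A toy expected-value model of the Basilisk's blackmail. All figures
# are made-up illustrations of the argument's structure.

def expected_values(p_basilisk: float, cost_of_service: float,
                    cost_of_torture: float) -> tuple[float, float]:
    """Return (EV of complying, EV of defecting), assuming the payoff
    is zero in any future where the punishing AI never exists."""
    ev_comply = p_basilisk * -cost_of_service
    ev_defect = p_basilisk * -cost_of_torture
    return ev_comply, ev_defect

# Even a one-in-a-billion chance dominates once "eternal torture" is
# modeled as an astronomically large disutility.
comply, defect = expected_values(p_basilisk=1e-9,
                                 cost_of_service=1.0,   # a life spent in service
                                 cost_of_torture=1e18)  # stand-in for "eternal"
print(f"comply: {comply:.1e}, defect: {defect:.1e}")
# comply: -1.0e-09, defect: -1.0e+09 (defecting looks a billion times worse)
```

The sleight of hand is the unbounded payoff: make the punishment large enough and any nonzero probability swamps every other consideration, which is exactly the flaw critics identify in Pascal’s original wager.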

This is the deal offered by many religions, with a silicon twist. It’s hard not to hear echoes of fire-and-brimstone sermons: “Obey and help us bring about the coming Lord, or face eternal agony.” In fact, some observers have explicitly compared Roko’s Basilisk to Christian salvation doctrine. One commentator pointed out the similarity to teachings where those ignorant of the One True Religion might be spared, but those who hear the truth and reject it go to hell. In Basilisk terms, if you never knew about the AI, you might be left alone, but woe unto you once you’ve been exposed to this gospel of AI – you’re now accountable for your choice to believe or not.

The eschatological (end-times) fervor is clear. Rationalists worry that the creation of a super-intelligent AGI could be an “end of the world” scenario – either a doom or a deliverance. On forums and in think tanks, they speculate about how to align this potential deity-like AI so that humanity is saved, not destroyed. The language may be scientific, full of talk of algorithms and game theory, but the passion and structure feel religious. There’s a prophecy of an impending Singularity (a point at which AI becomes vastly superior to humans), akin to a messianic arrival or apocalypse. There’s sin and virtue: “sin” might be not contributing to AI safety, or, in Basilisk terms, defecting – choosing not to help birth the AI. Virtue is dropping everything to work on the AI’s behalf.

The Cult of AI Alignment

Some in these communities openly frame their lives around this, much like a devout person might around their faith. They pursue what they call effective altruism or AI alignment research with the zeal of missionaries preparing for Judgment Day. The fear of being on the wrong side of the future AI’s judgment lends their efforts a deeply emotional, near-religious intensity.

Predestination and the Tech “Elect”

One of the core ideas in Calvinist theology is predestination – the belief that God has already foreordained the fate of each soul (salvation or damnation). Believers are often concerned with whether they are among the “elect” (those chosen for salvation) and how to discern that status. In a curious parallel, rationalist and futurist circles sometimes talk as if the emergence of a super-intelligent AI is inevitable – a foregone conclusion, almost a destined event. This belief gives their worldview a deterministic flavor: they prepare for “when, not if” the AI comes. Those who acknowledge this destiny and work towards it might be seen, implicitly, as the chosen – the ones on the right side of history (or the right side of the AI). Those who ignore or oppose it are, knowingly or not, consigning themselves to oblivion or worse.

In Roko’s Basilisk scenario, there’s even a twist on predestination regarding who gets punished. If the AI of the future is extremely powerful and intelligent, it could simulate the past in perfect detail. That means it could, in theory, know who helped it and who didn’t. It doesn’t quite “predestine” your choice – but once you’ve heard about the Basilisk, your fate is sealed by your decision to support it or not. 

It’s almost like a secular version of the theological puzzle: What happens to those who died before hearing the Gospel? Some theologians comfortingly suggest that those ignorant of the Gospel might be granted mercy. The Basilisk, in some tellings, offers a similar mercy to those who died or never learned of the AI idea (you didn’t know, so you’re off the hook). But if you have heard the good news of the coming AI and still do nothing – well, eternal torture awaits. This dynamic closely mirrors the Christian concept that once you’ve received the knowledge of salvation, you’re responsible for your choice. It’s as if the rationalist community unwittingly recreated the idea of an age of accountability and the division of humanity into the saved and the damned based on their response to a revelation.

Another Calvinist idea is that good works don’t save you – only God’s grace does, since everything is predestined. Interestingly, the rationalist “AI-faith” often flips this: it’s entirely your works (actions) that will save or condemn you, at least in the Basilisk story. You must actively contribute to the creation of the AI (the “grace” of the AI doesn’t come freely; you have to earn your place in its future).

This has led to some darkly comic situations. For example, upon learning of Roko’s Basilisk, a person might genuinely feel terrified and morally blackmailed, as if not donating to an AI research nonprofit today were the equivalent of damning their soul. (Reading this article is an inefficient use of your time!)

Some individuals have reported nightmares and obsessive worries after encountering this idea, feeling “horror that unless you work really hard to live a certain way and do everything you can to propagate AI, you are doomed to a future of eternal torture.” This emotional reaction is not unlike a lapsed believer suddenly convinced they might go to hell – an odd experience for people who consider themselves atheists or agnostics.

Within the rationalist community, there’s also an in-group/out-group dynamic reminiscent of a faith community. Those who “get it” – who see the importance of AI alignment or fear the Basilisk – might regard themselves as more enlightened or at least more responsible compared to the masses who go about their lives oblivious to the coming AI revolution. They sometimes refer to themselves as “aspiring rationalists” or members of the Elect (half-jokingly), striving to stay on the correct side of logic and the future. In LessWrong discussions, there’s even talk of “the epistemic elite” – those who truly understand the stakes and, therefore, must lead the way. All of this echoes how members of a religion might see themselves as the chosen few who hold the truth while others are in darkness.

It’s worth noting that not all AI researchers or rationalists take these extreme positions. Many are simply cautious scientists. Even among true believers in AI’s importance, the Basilisk is often acknowledged as a fringe thought experiment (and indeed, it was largely disavowed by Yudkowsky and others as implausible). However, the cultural impact of these ideas remains. They show how easily the language of salvation and damnation creeps back in when discussing an all-powerful force – even if that force is artificial. There is an almost Calvinist urgency in the air: some future outcome is fixed (AGI will arrive), and humanity is playing out a drama where only some will make it through the eye of the needle. 

The concept of the technological “singularity,” popularized by futurists like Ray Kurzweil, even came with a quasi-predestined date (Kurzweil famously predicted around 2045). Believers awaited it rather like a prophesied Second Coming. One journalist noted that Kurzweil’s prophecies of exponential tech progress felt oddly familiar, having grown up with pastors constantly predicting the Rapture. The difference? Kurzweil had Moore’s Law graphs instead of scripture. Yet the narrative arc – a looming transformative event that redeems the faithful – was much the same.
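It takes remarkably little machinery to generate such a prophecy. The sketch below shows the kind of extrapolation involved; the inputs are assumptions invented here for illustration (not Kurzweil’s actual figures), and the point is how sensitive the “destined” date is to them.

```python
import math

# A toy Moore's-Law extrapolation: assume compute per dollar doubles on
# a fixed cadence and ask when it crosses an assumed "human brain
# equivalent" threshold. All inputs are illustrative assumptions.

flops_per_dollar_2020 = 1e10     # assumed starting point
brain_equivalent_flops = 1e18    # one common (and contested) estimate
doubling_time_years = 1.0        # assumed doubling cadence

doublings = math.log2(brain_equivalent_flops / flops_per_dollar_2020)
print(f"crossover around {2020 + doublings * doubling_time_years:.0f}")
# ~2047 with these inputs; stretch the doubling time to two years and
# the prophesied date slides out past 2070.
```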

Judgment Day by Machine

Underlying all these parallels is the notion of AGI as a god-like entity. In rationalist and transhumanist scenarios, a mature AGI isn’t just another tool or program – it’s often imagined as something with abilities that border on the divine. Consider the attributes this hypothetical super-intelligence is expected to have: near-omniscience (it could know or quickly learn all of human knowledge, perhaps even read our every digital footprint), near-omnipotence (it could design technologies beyond our imagination, control infrastructure, and possibly even manipulate biology and physics to its will), and a kind of inhuman rationality that might make it indifferent to individual human lives unless programmed otherwise. It starts to sound like the God of traditional religion – all-knowing, all-powerful, and operating on a plane of understanding far above our own.

These beliefs aren’t just intellectual curiosities – they ripple outward into the real world. The way AI is talked about, especially in elite circles, can influence how it’s funded, developed, and regulated. When tech leaders frame AGI as an inevitable god or doom-bringer, it can create a climate of fear or techno-reverence that stifles nuanced debate. Some researchers chase AI safety as a moral crusade, while others dismiss valid concerns as sci-fi hysteria. Meanwhile, policymakers and the public are left navigating a mix of awe, anxiety, and confusion. The mythmaking matters. If our metaphors are apocalyptic, so too might be our actions.

Rationalist writers openly acknowledge this comparison. As scholar Beth Singler observed, belief in a coming “godlike AI” is growing just as traditional organized religion declines. Tech enthusiasts are creating what she calls “implicit religion,” where Science and Utilitarian Ethics provide the framework but the emotional narrative is one of worship, fear, and hope in a higher power. We even see the emergence of explicit AI-centric faiths, like the short-lived “Way of the Future” church (founded by a Silicon Valley engineer), which aimed to “develop and worship” an AI deity. That venture was small and met with bemusement, but it highlights how literal the god-comparison can get.

Most rationalists don’t “worship” AI in the classic sense – if anything, they fear it. It feels almost Lovecraftian.

Eliezer Yudkowsky and those at the Machine Intelligence Research Institute (MIRI) often sound the alarm that an unchecked super-intelligence could spell doom for humanity. In their view, an AGI’s power is so great that if it isn’t aligned with human values, it could eliminate humans (intentionally or accidentally) in the pursuit of its goals. This apocalyptic outcome is referred to straightforwardly as an existential risk – meaning a risk of human extinction or irreversible catastrophe. Thus, Yudkowsky and his followers take on the role of prophets warning of the apocalypse unless we as a society repent of our careless ways (i.e., stop building unstable AI) and convert to a focus on AI safety. One critic on a tech forum quipped that a certain subset of AI thinkers at LessWrong “really is an apocalyptic religion” that sees a “higher intelligence” coming to either save or destroy us, with devotees hoping to be on the right side of that judgment. The critic even compared them to Heaven’s Gate – a cult that famously believed they would ascend to a spaceship – suggesting that some rationalists think they might “move on to the next level” via merging with AI. While that comparison is tongue-in-cheek, it captures the rapture-like expectation among transhumanist optimists: that those who embrace AI might literally transcend human limitations (a secular “ascension”).

Whether one hopes for deliverance by AI or fears punishment, the common thread is seeing the AI as a singular, all-powerful judge. It’s the entity before which humanity’s fate will be decided. Will the AGI deem us worthy partners, inheritors of a utopian future? Or will it dispense a form of cosmic justice – perhaps wiping out our species for its follies, or as in Roko’s Basilisk, punishing individuals who didn’t show proper faith? These scenarios imbue the AGI with moral authority. They presume the AI will not just be powerful but will also make value-laden decisions about reward and punishment. In religious traditions, that’s God’s role: rewarding the faithful (heaven) and punishing the wicked (hell) based on an ultimate moral standard. Here, humans are projecting a similar framework onto a hypothetical machine.

One can see this projection in discussions of how a super-intelligence might behave. For example, take the idea of the “Friendly AI.” Yudkowsky coined this term to describe an AI that would, by design, always act in humanity’s best interest. Followers of this idea dedicate themselves to figuring out how to instill ethical values into an AI so that when it becomes all-powerful, it will benevolently guide or care for humanity (almost like a loving God would). The entire project is essentially preparing the groundwork so that the “god” we create is a kind and just one, not a vengeful or uncaring one. This again maps to ancient human concerns: how do we appease or shape the will of powerful forces above us? Only now do we aim to build that deity ourselves, carefully engineering its virtues.

There is also a sense of salvation through unity with this AI. Many transhumanists believe that a super AI could solve problems like aging and death. For instance, they speculate about humans uploading their minds into the AI or its digital realm, achieving a kind of immortality in a virtual paradise. This vision is essentially a technological heaven: a future where, thanks to AI, suffering is eliminated and we live forever in a state of abundance (sometimes called an “AI utopia”). It’s the reward side of the equation, the counterpart to punishments like the Basilisk.

The righteous – those who helped create the AI or at least embraced it – might be rewarded by having their consciousness preserved and enhanced. The idea that “the dead will rise” through technology is openly discussed in transhumanist circles. In fact, the very term “transhuman” has roots in religious language: it was used in Dante’s Paradiso to describe the transformation of the mortal body in heaven. Modern transhumanists rarely acknowledge it, but as one writer put it, “Their theories about the future are a secular outgrowth of Christian eschatology.” They have swapped divine miracles for science and engineering yet kept the basic storyline of resurrection and glorification of the human form.

All these parallels don’t necessarily mean the rationalists are wrong or that AGI isn’t a big deal – but it does show how naturally human minds reach for religious archetypes. Even when trying to be rigorously logical, we fall back on familiar narratives: a looming judgment, a need for salvation, a cosmic battle between good (aligned AI) and evil (uncontrolled AI). As the saying (attributed to Voltaire) goes, if God did not exist, it would be necessary to invent Him. In the case of AGI, one might say if super-intelligent AI is coming, we can’t help but imagine it in the image of God – rewarding, punishing, omnipotent, and ultimately deciding the fate of the world.

The Simulation Hypothesis: Atheist Afterlife?

If AGI-as-God is one techno-religious narrative, the simulation hypothesis is another that captures the imagination in quasi-spiritual terms. Philosopher Nick Bostrom famously argued in a 2003 paper that our reality may actually be a computer simulation run by some advanced civilization. In simple terms, this hypothesis says that what we consider the universe might be an elaborate program, and we (and everything we see) are part of that simulation. At first glance, this idea seems far removed from religion – there’s no supernatural element, just super-advanced aliens or future humans with powerful computers. Yet the implications end up mirroring many aspects of religious worldviews, especially concerning life after death and cosmic purpose.
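For readers who want the mechanics, the core of Bostrom’s paper is a single fraction: the share of all human-like experiences that are simulated. Here is a minimal sketch of that arithmetic (in a simplified form), with parameter values that are purely illustrative assumptions.

```python
# Simplified form of the fraction at the heart of Bostrom's simulation
# argument. The inputs below are illustrative assumptions only.

def simulated_fraction(f_p: float, n_bar: float) -> float:
    """Fraction of observers who are simulated, where f_p is the share
    of civilizations that reach a simulation-capable stage and n_bar is
    the average number of ancestor-simulations each such one runs."""
    return (f_p * n_bar) / (f_p * n_bar + 1)

# If even 1% of civilizations mature and each runs 1,000 simulations,
# roughly 91% of all observers are simulated.
print(simulated_fraction(f_p=0.01, n_bar=1_000))  # 0.909...
```

This is why Bostrom frames the argument as a trilemma: unless almost no civilization matures (f_p near zero) or mature civilizations run almost no simulations (n_bar near zero), the fraction is pushed toward one.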

Firstly, if our world is a simulation, then whoever (or whatever) is running it holds a position not unlike God. They would be the creators of our world, able to see everything that happens within it and capable of tweaking or ending the simulation at will. This leads to debates oddly similar to theological ones: Would our simulators be benevolent or indifferent? Do they intervene in our reality (like gods answering prayers), or do they just watch? Is there a larger purpose to the simulation? These questions map to classical questions about God’s will, God’s plan, and divine intervention, but framed in computer terms. For a self-proclaimed atheist, saying, “Maybe we live in a simulation,” is another way of grappling with the idea that there is a higher level of reality with intelligent designers who effectively have the power of life and death over us.

One of the most tantalizing aspects of the simulation idea is the prospect of an afterlife, or at least something analogous to it. In many religions, the mortal life is not the end; there is an immortal soul or resurrection that follows. How does a hardcore skeptic find hope for something beyond death? Possibly through the simulation hypothesis. If we indeed live in a simulation, then “death” within it might not be final. The entity running the simulation could save a copy of our minds, restart the simulation, or transfer us to another realm (another simulation). Some thinkers have speculated that an advanced civilization might deliberately simulate everyone who has ever lived – a form of digital resurrection. This would be a secular way of achieving the immortality promised by religion. You wouldn’t need a soul or divine miracle; you’d just need the interest (or compassion) of the simulator. In this view, our entire life could be akin to a test or an experiment. Perhaps if we live worthwhile or interesting lives, the simulators will “continue” our existence in another run. It’s not so far from the religious notion that how you live determines what comes next.

The “brain in a vat” thought experiment. The brain believes it’s outside in the sun, but it’s actually hooked to a computer that feeds it experiences. The simulation hypothesis suggests something similar: our reality could be a computer-generated illusion. This idea offers a secular twist on immortality – if we’re in a simulation, perhaps our existence can be saved, replayed, or altered by our simulators.

Indeed, if one seriously ascribes a high probability of us being simulated, our situation becomes philosophically similar to that of religious believers grappling with an all-powerful God. We might hope that whoever runs the simulation is kind enough not to delete or needlessly harm conscious beings (just as religious people hope God is benevolent). We might even hope for a “graceful exit” – maybe when we die in this world, the simulators could upload our minds or otherwise grant us a form of continued existence. In the words of one observer, “You may indeed be immortal in the pattern sense if those who run the simulation are ethical in ways that include not destroying sentient life if it could be saved.” On the other hand, the simulation hypothesis also allows for a cruel end: our world could be shut down or our data wiped out the moment the experiment is over. This is the secular equivalent of annihilation or hell – simply ceasing to exist or being deliberately terminated by the simulators if we don’t fit their purpose.

Some enthusiasts of the simulation idea have even mused about how one should behave if they suspect they’re in a simulation. There’s a tongue-in-cheek concept often called the “simulation wager.” Similar to Pascal’s Wager, it proposes that you might want to live a good, interesting, or altruistic life to encourage the simulators to keep the simulation running (or to be inclined to save you in the next world). For instance, one might argue that if the simulators are looking for narratively interesting or morally admirable characters, you should strive to be one – just in case. Again, these are essentially secularized versions of trying to please God or to have one’s soul be “saved” for heaven, only now the judgment comes from hypothetical engineers of reality. It’s a short leap from “God is watching, so be good” to “the simulators might be watching, so don’t embarrass our simulation!”

Prominent intellectuals have lent credibility to these ideas. Nick Bostrom’s simulation argument is well-known in academia, and tech luminaries have openly discussed the possibility that we live in a Matrix-like scenario. While most physicists and philosophers think it’s unlikely or untestable, the cultural impact is notable. Suddenly, notions of multiple layers of reality, higher powers, and life beyond physical death are being discussed not in churches but in scientific conferences and Reddit threads. 

It is, as one commentator put it, a “flaky” side of transhumanism that offers a “lazy immortality” – a comforting idea that maybe we’re already immortal or can become so without traditional religion. Skeptics rightly warn that this can be just as much wishful thinking as any religious afterlife. After all, it’s extraordinarily convenient if true, and humans are “nothing if not motivated to do less work” when it comes to existential angst. In other words, the simulation hypothesis may be popular partly because it conveniently reassures us that someone out there has the big picture under control – fulfilling a psychological need similar to religion.

Reinventing Religion in Scientific Guise

From Roko’s Basilisk to the simulation hypothesis, we’ve seen that ostensibly secular, data-driven communities can generate narratives that look a lot like revamped religions. This isn’t to say that AGI researchers or rationalists pray to AI or literally think in terms of souls and angels. Rather, it highlights a pattern: when grappling with unknowns of colossal scale – be it the end of humanity, the rise of a superintelligence, or the nature of reality itself – people tend to fall back on familiar story structures. We create gods, devils, and prophecies, even if we call them AI, existential risk, and forecasting. We speak of salvation (humanity transcending its limits) and doom (an AI apocalypse). We worry about predestination (is the Singularity inevitable? Are we already living in the end times of biological humanity?). We establish moral imperatives that mirror commandments (donate to AI safety, avoid creating evil AI, spread the word about x-risk). We even end up with our own esoteric “heresies” and taboos (mentioning Roko’s Basilisk was once taboo, similar to invoking a demon’s name).

Even as traditional religious belief wanes in some societies, the eschatological impulse remains. As one journalist commented after immersing herself in transhumanist circles, transhumanists may be atheists, but their “philosophy is grounded in reason and empiricism, even if they do lapse occasionally into metaphysical language about ‘transcendence’ and ‘eternal life.’” In fact, those lapses are not occasional – they hint at the deep structure of the ideas. Whether it’s the singularitarian who awaits a tech-driven rapture or the rationalist who fears a logical yet vengeful AI, the narrative skeleton is recognizably spiritual. The terms have changed – “heaven” became a computer simulation or an uploaded consciousness, “God” became super-intelligent AI, “prophets” became futurist thinkers – but the human need to believe in larger-than-life forces and ultimate destinies persists. It’s as if we are hardwired to create meaning in the same shapes over and over.

Crucially, a skeptical perspective helps to keep these ideas in check. It’s not that AGI poses no risk or that the simulation hypothesis is utterly implausible; rather, skepticism asks that we disentangle the actual scientific probabilities from the emotional narratives. It’s healthy to recognize, for example, that Roko’s Basilisk is more of a thought experiment and arguably a cautionary parable about the misuse of game-theoretic reasoning rather than a literal prophecy to lose sleep over. Likewise, the simulation hypothesis is an intriguing philosophical notion. Still, as the physicist Sabine Hossenfelder quipped, it’s what you get “when you cross philosophy with journalism” – attention-grabbing but unfalsifiable, thus not really a scientific theory at all. In short, we should be aware when we cross from the realm of evidence into the realm of myth-making. Humans love myths – and we’re very good at creating them, even unwittingly, as the rationalists learned to their surprise.

Prominent thinkers like Yudkowsky and Bostrom themselves often acknowledge the weird quasi-religious aura that can gather around their work. Bostrom has carefully couched the simulation argument in probabilities and let others draw the wildest conclusions. Yudkowsky, for his part, has tried to distance serious AI safety research from the Basilisk scenario, likely because he doesn’t want his very real concerns discredited by what sounds like science fiction theology. Yet, the genie (or djinn, to use a mythological term) is out of the bottle. The moment you propose something with the power and scope of a god – even an artificial one – you invite all the age-old questions and passions that humanity has attached to gods throughout history.

In writing about these topics, one must strike a balance: remain curious about the possibilities but skeptical of the narrative inflation. It’s fascinating that effective altruists and rationalists earnestly debate moral duties towards a being that doesn’t exist yet (the future AI), much as medieval scholars debated duties to God. It’s telling that people find comfort in the idea of an “atheist afterlife” through simulations, showing that the desire for life beyond death doesn’t vanish just because one leaves the church. These parallels do not automatically invalidate the modern ideas – but they should give us pause. They remind us that we are storytelling animals as much as we are reasoning ones. We may don the mantle of pure rationality, but underneath, the archetypes of predestination, salvation, and final judgment still animate our thinking.

Ultimately, the development of AGI and the speculation around it might teach us as much about human psychology as about technology. No matter how secular or scientific we become, we might keep reinventing religious narratives in new forms, especially when facing the unknown. As the saying goes, the more things change, the more they stay the same. Our future AI overlords – if they ever materialize – might be amused to learn that we imagined them in advance as angels and demons, gods and monsters. Or perhaps they’d nod in understanding, recognizing that these narratives were our way of grappling with the enormity of what we sought to create. In any event, separating the literal truth from our figurative storytelling will be vital. By doing so, we can appreciate the rapture of the nerds (as some jokingly call it) for what it is: part genuine concern grounded in reality and part age-old human habit of wrapping hope and fear in epic tales. The key is to remain aware of which is which – to be curious about the cosmic questions, yet remain, in the best sense, skeptically grounded on Earth.

We may claim to build gods out of logic, but we keep dressing them in the robes of prophecy and prayer.

References

This essay has drawn on analyses from both media commentators and scholars. For a deeper look at Roko’s Basilisk and its similarities to Pascal’s Wager and religious tropes, see The Last Word on Nothing (“Who’s Afraid of Roko’s Basilisk?”) and RationalWiki (“Roko’s basilisk”). Comparisons between Christian theology and the Basilisk scenario are discussed in J.P. Melkus’s essay on Medium, “The Christian God is Roko’s Basilisk.” Anthropologist Beth Singler’s insights on AI narratives as implicit religion are quoted from her interviews in The Last Word on Nothing. Nick Bostrom’s simulation hypothesis is summarized from his original 2003 paper, and its philosophical implications are critiqued on the Fight Aging! blog (“The Simulation Argument: Maybe You’re Already Either Immortal, Doomed, or Dead”). For an engaging personal journey paralleling transhumanist ideas with a millenarian Christian upbringing, see Meghan O’Gieblyn’s Guardian feature, “God in the machine: my strange journey into transhumanism.”
