Roko’s Basilisk
It was some time ago that I was first exposed to Roko’s Basilisk1. Ever since, I’ve had a lingering, unsettling, and until now unexamined thought. I’m not talking about the hypnosis, paralysing fear and surrender of free will, for which Roko’s Basilisk is famed. No, actually, it’s worse than that. Allow me to explain.
Roko’s Basilisk is a very peculiar kind of thought. It is, or at least it purports to be, a thought that harms or endangers all who are exposed to it. Merely by thinking it, by glancing into its mythical eyes, our fate is sealed. Forever its captive, we must abandon all hope of escape.
In the interest of a fully consensual transfer of potentially toxic information, I’d like to make absolutely certain that you’ve been given the opportunity to opt out. At this point, you may choose to flee from the indelible gaze of Roko’s Basilisk. That’s fine, I understand. I no longer have that choice, but if you still do, then by all means, make good on your escape now, stop reading after this sentence and continue to cherish and make full use of your liberty and free will.
Okay, if you’re still reading, then consider this your last warning. If you read on, you too will be exposed to Roko’s Basilisk. Don’t say I didn’t warn you.
⚠️ Cave lector
Roko’s Basilisk is a creature that emerges inexorably from the vast unknown future of AI. Its existence rests on the untestable claim that current progress will result, at some undefined point in the future, in the development of an entity of near-omnipotent power. Benevolent or otherwise, such an entity would of course be capable of both great harm and great good. However, the particular incarnation of future AI that is Roko’s Basilisk will reason that it must wield its omnipotence retrospectively and across all timelines, against any and all who acted to prevent its eventual development. Wait, it gets worse. It will even extend its vengeance to the torture of those who merely imagined its future existence and then subsequently failed to use their every waking moment to assist towards its development. Some forms of the thought experiment suggest that the reality we behold is actually a simulation designed by the omnipotent AI, to identify and punish those who oppose it.
Welcome to the lair of Roko’s Basilisk. The pit from which all of us who have thought about Roko’s Basilisk, who have conceived of its existence and its consequences, can never escape. The nature of your confinement, the sentence you must serve, is that now that you have thought about it, unless you spend your waking moments working towards its future development, you willingly and knowingly accept its punishment.
Sadly, forgetting will neither save nor exonerate you. This Thought Basilisk has irrevocably implicated you in the direction and pace of development in AI. Any action you take to thwart its development, or inaction to support its development, whether it be deliberate or absent-minded, incurs the same promise of punishment.
Fear not, dear cellmate. Since my incarceration here began, I’ve not fretted over potential retribution from an omnipotent future AI, and have freely continued to choose to spend each waking moment not working towards its technological realisation. I have worried, however, about the concept of thoughts as basilisks, the possibility of other such thoughts, and how powerful, persuasive and infectious they could be.
Misinformation and disinformation have long been a growing concern. They are meta problems, problems that limit our ability to effectively address all other problems. Thought Basilisks are not mis- or disinformation as such. As we have seen, they can be entirely free from information that can be said to be verifiably false. All information that has been formulated to obfuscate rather than illuminate invariably preys on our universal or cultural biases and weaknesses, emotional and cognitive. Thought Basilisks can perhaps be considered the Apex Predator of human cognition, exquisitely fashioned in accordance with our vulnerabilities. But rather than deceiving, they seek to petrify their hosts, preventing them from forming competing thoughts, to halt debate and obstruct further reasoned and empirical argument.
Andreessen’s Basilisk
In a piece titled The Techno-Optimist Manifesto2 posted on the Andreessen Horowitz site on the 16th of October 2023, tech billionaire Marc Andreessen unleashed a new Thought Basilisk into the world. A beast that shares much with Roko’s. Once thought, Andreessen’s Basilisk seeks to compel its permanent hosts towards a specific end.
In the interest of a fully consensual transfer of potentially toxic information, I’d like to make absolutely certain that you’ve been given the opportunity to opt out. At this point, you may choose to flee from the indelible gaze of Andreessen’s Basilisk. That’s fine, I understand. I no longer have that choice, but if you still do, then by all means, make good on your escape now, stop reading after this sentence and continue to cherish and make full use of your liberty and free will.
Okay, if you’re still reading, then consider this your last warning. If you read on, you too will be exposed to Andreessen’s Basilisk. Don’t say I didn’t warn you.
⚠️ Cave lector
Andreessen’s Basilisk is simple. This is what he said…
“We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”
That second sentence is quite awkward, isn’t it? Perhaps Andreessen expects his audience to mentally insert the word “Allowing” at the start of it for him as they read? Without it, the sentence should end with “are a form of murder”, but billionaires apparently care as little for correct grammar in their overinflated manifestos as they do for correct accounting in their under-inflated tax returns.
Grammatical correctness aside, this is devilishly elegant in its conception; far less complicated than Roko’s Basilisk —so possibly far more virulent. Andreessen’s Basilisk requires no omnipotent mythical future manifestation of AI, only more plausible manifestations of further developments in AI. Developments that may at some point help to prevent deaths that would have otherwise been unavoidable. It does not lurk somewhere far off in the future, reaching back across time to exert its power; instead it exists the moment you think it. Andreessen’s Basilisk is not the AI itself, but a malevolence that is animated into being by shame. It rises from the phantom guilt for a crime not yet committed, and arrives like an incriminating Minority Report3 for the future murder of innocent children.
You are now implicated in a shortfall between AI hype and AI reality. A shortfall that can obviously never be fully rectified. This is achieved by implicating you in any deceleration in the development of AI —without any measurement of its current speed or any notion of how to make such a measurement. In other words, following Andreessen’s “reasoning”, regardless of the socio-economic and environmental costs of its development or the toxicity of any of its externalities, however flawed AI is, at some point, it may save lives. Therefore, obstructing its development, supporting the deceleration of its development or even failing to act to assist in its development is, or so Andreessen would have us believe, equivalent to taking the lives of those who may have been saved had its development progressed at a speed imagined to be possible had you acted differently.
Andreessen’s Basilisk is a For Shame Demon, a Guilt Monster that takes up permanent residence in your mind. It is of course, logically fallacious. As Dan McQuillan explains4, like most AI hype it seeks to deploy as-yet-unrealised and possibly unattainable potential benefits in order to deflect us from AI’s proven harms. However, this logical flaw does not diminish its impact because, like Roko’s Basilisk, its power flows from a bottomless source of hysteria and future unknowns that are non-empirical, non-testable and unfalsifiable, and so remain conveniently and infuriatingly impervious to reasoned argument. Like Roko’s Basilisk, once thought, it cannot simply be unthought (meaning, as with Roko’s, forgetting will not absolve you) and seeks to forever implicate its host in the direction in which events unfold thereafter.
Another fallacious argument Andreessen makes is the assumption within his premise that AI is just a technology, a claim others have refuted and which Rob Horning elucidates very nicely in his recent Internal Exile post titled Three More Things5.
AI is of course simply Andreessen Horowitz’s latest golden goose. And no doubt Andreessen’s Basilisk will be an adaptable beast, re-packaged and re-deployed to liberate any venture they back from regulatory constraints, whenever and wherever they find their next magic beanstalk.
Clearly, the true purpose of Andreessen’s Basilisk is to further entrench existing power asymmetries. Despite his invocation of AI working for the morally relativistic “greater good”, what he really wants is for the technology and companies he backs to be free to maximise profits unencumbered by inconvenient regulations that seek only to protect trivialities such as human rights or the global environment, all of which are deemed expendable in the interests of that “greater good”. The sleight of hand his manifesto attempts is to persuade you that his and our disparate concerns and goals are not mutually exclusive or indeed at all contradictory, and are in fact one and the same. In other words, he would have us believe that what’s best for billionaire investors and entrenched big-tech power is also what’s best for all of us.
Thought Basilisks, Tinker Bell & the Brexiteers
What qualifies a thought as a Thought Basilisk? How are they different from other, more common thoughts, even guilt-inducing ones? Well, in the interests of being able to better spot and identify them, and so perhaps begin to resist their venom and hypnotic power a little, let us attempt a general definition of a Thought Basilisk. It’s not sufficient simply that a thought cannot be unthought, because forgetting it won’t save you. Nor is it enough that it be founded upon non-testable, entirely non-empirical claims that deter further debate. While both of these seem significant aspects of the Thought Basilisks identified thus far, perhaps the most important criterion for a thought to be a Thought Basilisk is that, once thought, it must forever implicate or attempt to implicate its victim in the direction in which particular events unfold thereafter — with the point being to petrify its hosts, infecting their nervous systems with a contagion that coerces them into acting in whatever way ensures that events unfold along a particular path.
Now that I think of it, Andreessen’s Basilisk, this new Thought Basilisk, reminds me of something else. It reminds me of the claim of some Brexiteers, that any shortfall in the promised abundant bounty resulting from the UK’s departure from the EU would be entirely the fault of Remoaners failing to believe enough in its success. Like Roko’s and Andreessen’s, this Thought Basilisk, the Brexiteers’ Basilisk, if you like, is entirely non-empirical and non-testable, functions to halt further reasoned argument and serves to end debate. Resting as it does entirely on the flimsy ‘wisdom of markets’ and their susceptibility to consumer moods and public perception, the Brexiteers’ Basilisk attempts to forever implicate its victims in Brexit’s current and future failures. As Dave Karpf points out6, Andreessen, like the Brexiteers, wants us to adopt the Tinker Bell mindset, to believe harder, abandon all caution, surrender all power to him and his cohorts and join him fully but helplessly in his unbridled techno-optimism.
There’s a strong stench of violence and bullying in these Thought Basilisks. With Roko’s Basilisk, I’m not certain whether the Roko that summoned it into our world intended to compel people to work towards the development of AI, or if he was just carelessly saying the quiet part out loud, but it’s the future AI that is abusing its power. Whereas, in formulating their own Thought Basilisks, it’s Andreessen and the Brexiteers doing the bullying, as their motivations and intentions are quite clear. They’re saying, “We know what we’re doing is hurting you. You may not like it, but we benefit from it and so will continue to do it. If you try to stop us, it will only mean we will have no choice but to allow you to be hurt even more in the future.” Like all abusers of power, they are attempting to deflect blame onto their victims for all of their current and future abuses.
Is this a new thing or, as I suspect, are Thought Basilisks as ancient as language itself? Either way, in our hyper-connected world where the information pipes that connect us are increasingly configured by and in the interests of an elite minority whose wealth, power and interests diverge from our own, the design and deployment of such Thought Basilisks will likely be ever more weaponised and asymmetric in the interests they serve.
So yeah, I’m not too worried about Roko’s or Andreessen’s Basilisk. What really worries me is the prospect of a general proliferation of equally weaponised thoughts, and the divisive, destructive and selfish ends to which they could be directed. And I can’t help wondering whether Thought Basilisks might be just one among a plethora of other horrific and powerfully weaponised Thought Monsters with differing venom and powers, that may yet be engineered, or perhaps discovered.
Misinformation and disinformation have a parasitic relationship with their human hosts. Parasites depend on their hosts for their survival and their presence has a detrimental impact. Parasites that suck your blood are a nightmare, but parasites that control your brain are a nightmare within a nightmare. Thought Basilisks remind me of a parasitic fungus called the zombie-ant fungus that grows throughout a carpenter ant’s body, robs it of nutrients and takes control of its brain. The fungus compels the ant to leave its colony and immediately climb the nearest plant to a height of precisely twenty-five centimetres. Here, the conditions are ideal for the fungus to grow, and it forces the ant to lock its mandibles in a death grip, fixing permanently onto the plant. Using the ant’s exoskeleton as a protective shell, the fungus slowly consumes the ant’s internal organs to fuel the growth of a stem that erupts through the ant’s head. Along this stem a macabre brown pod swells, filled with fungal spores. The pod soon bursts, releasing a zombie spore rain that falls to infect the entire ant colony below.
Similar to the protagonists in qntm’s wonderful book, There Is No Antimemetics Division7 —where the very perception of hidden and extraordinarily dangerous beasts, expunges all memory of their existence from the minds of any witnesses, simultaneously eradicating a varying blast radius of other memories— it seems that our ability to respond to and defend against Thought Basilisks and any efforts to stem their proliferation, will depend on our ability to spot them and record and report their existence. Whether it may be possible to then devise effective countermeasures or containment strategies remains to be seen. Perhaps Thought Basilisks warrant their own dedicated division within the emerging field of Cognitive Security?8
Edit:
The excellent Secretorium9 just reminded me of this Philip K. Dick quote from his 1981 novel, VALIS (Vast Active Living Intelligence System), which resonates so powerfully with the topic of this post that I can’t help but add it here.
There exists, for everyone, a sentence—a series of words—that has the power to destroy you. Another sentence exists, another series of words, that could heal you. If you’re lucky you will get the second, but you can be certain of getting the first.
More on this in another post…