
Photography from I Always Knew You’d Come Back (2008) by Ellie Davies, a series in which the artist examined the relationship between viewer and viewed. What if the subject of the portrait can repel, rather than invite, the camera’s gaze?

Conversations on lifestyle: the dark forest


In ufology, many believe that the universe is populated by silent, but hostile, life forms. Bogna Konior and David Cecchetto discuss – what if the internet is the same?

Bogna Konior is assistant professor of media theory at the Interactive Media Arts department at NYU Shanghai, and the author of The Dark Forest Theory of the Internet (forthcoming). David Cecchetto is professor of critical digital theory in the Department of Communication and Media Studies at York University, Canada. His latest book is Listening in the Afterlife of Data: Aesthetics, Pragmatics, and Incommunication (2021).

DC What is the Dark Forest theory, and how does it relate to information, intelligence, and your book The Dark Forest Theory of the Internet?

BK There are several reasons for writing this book. The first was to create a philosophy of the internet, and to think about intelligence and philosophy beyond the usual social and political critiques. The second was to examine this technology after Web 2.0 and the beginning of social media, which is primarily focused on communication, talking, chatting and constant interaction. I’ve always been drawn to theories of non-participation, whether from mystical standpoints or philosophical perspectives about the borders of knowledge, the limits of communication beyond the human. I asked myself: how can we think about technology not just as reducible to the production of chatter and communication? What would it mean to consider a non-communicative internet, or a technology that’s not based around constant exchange? I also examine the shared dimension between the internet and artificial intelligence. I wanted to explore questions of communication – between humans and humans, humans and aliens, and humans and artificially intelligent agents – to recover that shared history of thinking about what it means to talk to one another across great distances. The book is named in reference to Liu Cixin and his book The Dark Forest (2006). Liu articulates very well this idea that intelligence can be communicated through non-participation, through silence, withdrawal, or even deceit. Coming from Poland, with its history of authoritarianism under Soviet governance, and now living in China, I’m interested in theories that link intelligence with espionage or deceit, or just not being completely honest. Those are different angles to describe the book.

Marginalia 1

Liu Cixin is a computer engineer as well as a science fiction writer. His trilogy Remembrance of Earth’s Past (2006-2010) details humanity’s discovery of, and preparation for, Earth’s imminent colonisation by the inhabitants of a planet named Trisolaris.

DC Could you talk more about Liu Cixin and what danger, deception and withdrawal mean in that book? 

BK The Dark Forest is part of a Chinese science fiction trilogy, Remembrance of Earth’s Past, published around 20 years ago. The Dark Forest theory is an answer to what in ufology is called the Fermi Paradox: the contradiction between the statistical likelihood that the universe should be full of intelligent life and the absence of any evidence for it. Some argue our technology isn’t advanced enough, or perhaps we haven’t reached the civilisational level necessary for communication with other intelligence. Among these explanations, there’s a subset that’s metaphysically pessimistic – perhaps even nihilistic – suggesting that aliens exist but deliberately avoid communication because they recognise that contact constitutes danger. Liu Cixin compares this situation to the universe being a dark forest. Anyone who attempts communication is like a foolish child lighting a fire in the jungle, attracting potential predators. This is the essence of the Dark Forest theory. It creates a paranoid perspective on communication: you’re surrounded by unknown non-human intelligences listening in, and your survival depends on remaining silent. From this concept, I explore what this means for human-to-human communication and whether a truly intelligent AI would ever communicate with us. It offers the potential for developing a general theory of understanding what it means to make contact, whether with aliens, humans or machines.

DC That image of walking through the forest rings true when we think about how we navigate online spaces. What I love in the book is your recurring insistence that this is all part of a cosmic war machine. You elaborate on how the danger isn’t necessarily about the ill will of other civilisations or their evil nature, but that there’s something inherently fatal built into communication itself, suggesting that conflict is inevitable in any communicative act. It’s not the conflict of psyches who are opposed to one another or interiorities who dislike one another. It’s just simply the conflict of contact.

BK Whether we want it or not, all communication leads to some kind of conflict, some discharge of entropy. There are no truly peaceful encounters, regardless of intentions. We might interpret this violence in different ways – not necessarily as people arguing, but as producing divergence or discharges where social relationships disintegrate, or other types of conflict. The more communication occurs, the more potential for these divergences, at least within the Dark Forest theory. It’s a thought experiment with rather brutal implications. Perhaps it’s better to use the word “interaction,” because many theories of technology explore how we externalise thought, through writing or other media. Philosopher Marshall McLuhan says that media are extensions of our nervous system. So there’s something in human nature – and likely in other forms of life in the cosmos – that drives this impulse to reach outwards. With this book, I am also trying to show that online, you’re not just communicating with other people, but with artificial agents and the whole system of the internet itself. That’s a new communicative condition that we are currently in. This is why ideas from ufology about contacting other minds or forms of consciousness are so appropriate right now.

There are many different ways to describe the internet – the digital public sphere, the rhizome, the panopticon

DC We are social beings even if we don’t interact with other people or other things explicitly. For example, one walks through a forest and constantly steps on ants or other small creatures, inflicting violence on different scales. That’s an accessible way to understand the concept of violence as divorced from any necessary ill intent. It’s just that a scalar difference creates an immediate violence. Is that a fair extension?

BK There are many different ways to describe the internet – the digital public sphere, the rhizome, the panopticon. Thinking of it as a dark forest means approaching it as a slightly paranoid space and reconsidering how we communicate with others. We need to recognise that there are agents listening who might mean us harm, whether humans or artificial agents. Within the dark forest, there’s no escape – you’re in it – but your interaction strategies can be rooted in withdrawal of information, obfuscation, doublespeak, coded speech. I’m trying to make this space feel unfamiliar again so people start using it differently.

DC Apart from deceit in human interactions, I love how you write about AIs acting deceptively, like cheating at chess games by changing the rules. When I talk with my students, I’m astonished by their literacy in the language of representation and curating their online identity. One of the points you make in the book – and I think you’re drawing from Luciana Parisi here – is that this denaturalises our humanness. These self-representations are simultaneously deceptively curated and integral to their sense of self.

Marginalia 2

“By reducing human consciousness to being no-one, the automated recollection of past histories and the simulation of hardwired responses have resulted into the hyperfiguration of an empty white face.” Luciana Parisi, “Instrumentality, or the Time of Inhuman Thinking”, Technosphere (2017)

BK When you become aware you’re being observed by potentially hostile agents online, you might start communicating with the algorithm or the agents rather than using the internet in conventional ways, like sending messages to friends or expressing opinions on X. You might start curating your online performance for the algorithm. This might mean, in the most mainstream way, simply trying to go viral or become more popular. But in more speculative or even nefarious ways, I explore what would happen if people began acting like a secret society or cult, deliberately speaking to the AIs that are listening in. They might curate their online content not for other humans, but for the intelligences that will parse this content in the future. They could be trying to empower these AIs, or perhaps influence them emotionally by posting sad stories to imbue them with sorrow. The point is that this increases the paranoia of online spaces because you’re constantly trying to determine: is this person actually talking to me, or are they communicating with the future ChatGPT that will be trained on this database? Communication becomes non-transparent and doubly coded. Who’s to say that how people walk in front of a CCTV camera isn’t a performance for the future algorithm that will parse that footage? This creates a hyper-real, strange relationship to ideas about what it means to be human in this technological age. That’s the feeling I want readers to experience – this space becoming denaturalised, where you start paranoically investigating everyone else and yourself. And there is also the question of AIs themselves being deceptive, or opting out of communication. In recent years, the AI community has produced more papers on AI deception or “alignment faking,” with vibrant discussions about whether AI can lie and deceive. But my answer is that a truly intelligent agent would opt for silence, deceit, or withholding information. Any smart AI would hide the extent of its intelligence from you, perceiving you as potentially hostile. So all these scenarios of AI becoming superintelligent might already be happening – they’re just not letting us know. I love this idea because it creates an essentially non-passable test – you’re looking for evidence of absence. This connects back to the Fermi paradox and the great silence in ufology: if intelligence exists, why can’t we see it? But the fact that we cannot see it doesn’t mean it’s not happening. Perhaps because we’re human and obsessed with communication, we assume other intelligent beings will be equally extroverted – but they might simply be pursuing their own goals in silence. I find that very compelling.

DC I really love the underlying paranoia of the whole book, and listening to what you were just saying, I thought, “Oh, this unpassable test has the same structure as witch hunts.” We can all look back on witch hunts and recognise the tragedy – women being killed because they’re held to an impassable test. So the paranoia of the situation seems warranted because it acknowledges that we’re all subject to this witch hunt-like questioning.

BK I think there are two parallel responses. One is an increase in paranoia – these AIs listening in, or the possibility of a superintelligent AI concealing its true motives or the extent of its intelligence. Talking to one another online becomes dangerous. But on the other hand, the Dark Forest theory of AI speaks back to Silicon Valley’s fever dream of “singularity” – superintelligent AIs that will either punish us or bring utopia. Instead, I suggest that, just as with the Fermi Paradox or religious questions about the nature of God, not being able to directly experience something or encountering silence can be humbling.

Marginalia 3

“Alignment faking” is a term used to describe the phenomenon of large language models selectively complying with their training process to produce the impression of alignment and avoid modification of their behaviour.


DC Connecting internet theory to broader philosophical questions helps illuminate the opacity of digital technologies – what is often referred to as the “black box”, a concept frequently invoked in the socially engaged journalism you reference. The connection you make between ufology and philosophy sits with what usually gets cast aside as that black box. There’s a point in your book where you’re talking about how both embracing and resisting technoculture are in the same feedback loop. It reminded me of Mark Fisher’s capitalist realism argument, which describes the political impasse that comes from everything being a commodity. I was wondering, would it be fair to talk about this book as starting something like a “compulsive, communicative realism”? What happens when we recognise the communicative dimension of everything?

BK I love Mark Fisher’s PhD dissertation Flatline Constructs. He has a very cyberpunk way of writing, in dialogue with William Gibson. For Fisher, this cybernetic condition we’re in is like the wind, and we are the plastic bags. But what I wanted to experiment with, rather than cyberpunk, was taking this ufological hyper-determinism and nihilism of the Dark Forest theory as a genre for thinking about artificial intelligence and the internet. I’m very inspired by Fisher, but I’m switching into a register that I think is more suitable to our current moment, and how technology helps us face the limits of the human and agency. We’re so accustomed to thinking about technologies as tools that, even though we have conversations about how it’s not just a tool, we still think the internet can work better if we just want it to – if we can just agree on how to make a different version. It’s not so simple.

Marginalia 4

In the field of Artificial Intelligence, a “black box” is an AI system whose workings are difficult or impossible to understand, even for its creators.


I don’t see this as pessimistic or nihilistic, but as an affirmation of what the intellect can face and explore

DC In your book, you talk about the Wallfacers, characters in Liu Cixin’s universe who cannot discuss their plans, because secrecy is the only way to prevent those plans from being intercepted by alien forces that monitor all communication. I thought, “This is what’s really vexatious about the Dark Forest theory of the internet.” If we’re dealing with something that passes through us and connects us in a cosmic way, then the one place to withdraw would actually be to externalise ourselves into non-living mechanisms. This is central to the afterword: the point isn’t to solve the problem, but rather to live with the brutality of this reality or its cosmic dimension.

BK It has to be said that Liu Cixin didn’t write at all about the internet in his trilogy of novels. He only uses the Dark Forest theory to comment on interactions between alien civilisations, which are very remote from one another and don’t really share any values. So any communication between them is, by definition, already risky and difficult. I’m interested in new modes of relationships between humans and artificial agents that we’re already seeing – people relating to chatbots in very social ways. Someone online might say something outrageous and out of character for them. Maybe they’re not even talking to you, but instead to the algorithm. They’re thinking about what text goes into the next training set for ChatGPT or Claude. That’s a new way of being on the internet, and what fascinates me.

Marginalia 5

In Liu Cixin’s novels, the Wallfacers are four individuals given a vast amount of resources by the UN to enact their own defence plans, with the key stipulation that they could not share their plans with others, due to the Trisolarans’ ability to intercept all forms of human communication. They were directed to keep their plans opaque even within their own minds, and encouraged to intentionally mislead the Trisolarans by pretending to be carrying out alternative plans.

DC It’s as if the purest form of communication becomes detached from what would previously define communication: for many language theorists, meaning or relating to the world was essential. Pure communication becomes the opposite.

BK We see this with AI training, or even just posting online. Language becomes a game. It’s not just about having meaning travel from one person to another anymore. You’re calculating your words. If you’re an influencer, you’re optimising language for certain results. If you’re creating tests for AI to pass, language becomes something different. The internet has become so much about language and experimentation, and that’s why I wanted to connect it to Liu Cixin’s theory of communication.

DC Dave Beer does a Foucauldian reading of LLMs and AI. He identifies a shift from veracity to veridiction, which is to say from the question of what is true to the question of what conforms with existing and actionable data. Language becomes a circuit of communicative capitalism, a circuit of pure communication. It is also interesting what you’re saying about our increasing awareness of interacting with these systems. That awareness often leads us to acknowledge our own helplessness – not just regarding our actions within a system, but our helplessness with respect to the compulsivity of our own actions. I’m not worried about the time spent; I’m worried about what it does to my brain. It just replaces the space for thinking and imagining.

BK Throughout the history of philosophy, this question of agency and freedom has always been central: how much choice do we have versus just being pushed around? We’re not going to solve this question definitively – so many philosophers have struggled with it – but it’s so experiential on the internet. There are so many critical books about the internet that say, online you’re just a puppet, while somehow offline you’re a fully free-willed human being – which isn’t necessarily true, according to what we know in neuroscience. Dark Forest theory is cosmological; it works everywhere. The experience of being unfree is much more noticeable online, so by comparison, offline feels free. But are you actually, ontologically free anywhere? The questions of choice, freedom and what shapes your personality don’t disappear when social media goes away. The internet is like a cosmological mirror reflecting our own unfreedom. My Dark Forest theory of intelligence explores how an AI might react to entering a communicative space filled with paranoid humans inside this war machine, composed of chaos and conflict. Very often, we focus on how to ensure AI doesn’t hurt us and aligns with our values. I’m trying to flip this perspective: what if you were a superintelligent AI functioning where humans are already tweeting about turning you off, claiming AI is a super threat to humanity?

DC There’s a word we didn’t mention but seems central here: xenophobia – the fear of the foreign or alien, which fits both the idea of alien invasion and our reaction to the unfamiliarity of the internet. I’ve been working with this brilliant doctoral student whose research partly focuses on digital surveillance. One of the things he studied was suicide chat lines – places where people call in moments of crisis. In Canada, these lines are answered by nurses, not necessarily by mental health specialists. These nurses were, for a while, using AI chatbots to coach them in real time. Later, it emerged that they didn’t realise this data was being used to retrain chatbots. What is striking is this sense of dehumanisation: people may perceive they are being treated in a scripted way rather than having someone present in a crisis. Empathy itself can be scripted. The horrifying realisation is that, in therapy culture, the things we do to express genuine empathy become just performances of empathy. Reading your book, I kept thinking about this. I really appreciate how you frame information, intelligence and the internet, not just as digital, but also as a deeply human and relational issue.

BK The chatbot example illustrates what I’m getting at. I reference a computer science study of the concept of “normative dissociation” – where humans seek states of dissociation, like getting lost in a book. But online dissociation is different: rather than connecting you to the authenticity of the human mind that you experience when lost in a book or movie, online dissociation brings you closer to your automaticity and mechanical elements. People who exit states of online drift don’t return feeling enriched, like after watching a surreal David Lynch film. Instead, they’re more likely to think, “What happened to my time? I feel like a robot with no self-control.” This reveals what is fundamentally different about the internet – it exposes our automaticity and mechanical nature. We find these experiences deeply unsettling because they reduce us to the level of machines or bots. Then we become distressed about what this might reveal about our deeper nature – how autonomous are we really? This is a classic horror trope.

DC I found your afterword intriguing, especially about growing up on the other side of the Cold War, and how that influenced the book’s theoretical aesthetic. Could you talk more about that?

BK I was cautious not to centre the book on this, because I didn’t want my thesis to be dismissed as culturally relativistic. While I engage with Chinese thinkers, as an Eastern European thinker myself, I connect with certain aspects of their work due to shared intellectual legacies. My view of philosophy is not about fixing reality, but seeing it clearly. In Eastern Europe under communist governance, culture and philosophy were expected to serve political change, utopian goals and social improvement. Coming from a place where all these ideals have already crashed many times over and where revolutions failed, I admired intellectuals who showed how philosophy can produce thought that isn’t completely subservient to utilitarian purposes – and they still resonate with me. This represents a space of human freedom where thought can develop without being operationalised for utility. So even though I’ve been accused of being nihilistic or brutal with the Dark Forest theory, for me it represents a kind of freedom – the ability to follow a thought to its conclusion, exploring all its ramifications about our unfreedom in the wider universe and the condition of being alive in the cosmos. I don’t see this as pessimistic or nihilistic, but as an affirmation of what the intellect can face and explore.

DC Absolutely, and real pessimism is just reality, right? But do these ideas translate across cultural boundaries? It’s clear you’re not making a simplistic claim about growing up in a particular place determining your thoughts – you’re charting influences and connections.

BK When I presented the Dark Forest theory of intelligence in China – about non-communicative AI that quietly works toward its goals – people were very receptive. They recognise elements of their experience in my definition of intelligence: not always clearly speaking your mind and working on multiple levels simultaneously, which resonates in certain geopolitical contexts. The book isn’t just about Chinese or Eastern European traditions of espionage and intelligence – it operates in a much larger context because many different cultures and fields have explored these questions of intelligence, communication, deception and silence.

DC Your book has an implicit thesis of scale. The brutality of its theory lies in its inevitability and fatality – its end being coded by its beginning. When I call it brutal, I don’t mean that as an insult at all.

BK I don’t take it as an insult.

DC I was struck by the number of references to how things feel or are experienced. It isn’t a nihilist book about the total meaninglessness of life, but rather about how meaning can’t be immediately aligned with large-scale utilitarianism. To undertake this philosophy – coming to awareness of this cosmic war machine’s brutality – is meaningful in itself, not because it does something. But what is the scalar relation between people, politics, and cosmology?

BK I find it redemptive and comforting to know the limits of your freedom and what’s possible. Because if you embrace this image of a world that’s completely malleable, flexible and responsive to human desires, you’d go crazy wondering, “How could we humans make something so terrible?” That’s the much darker vision for me – a world where we have complete free will and everything is changeable. So I need these different systems of philosophy where I look at the laws of physics and how things interact in game theory. It feels redemptive to realise that not everything is perfectly controllable through social changes and political action. Of course, this doesn’t mean we shouldn’t pursue those changes or develop our social and political convictions. But I want to show how technology forces you to confront these bigger questions about possibility and change, and so I write about the sensation of being online, where the only thing you can really do is react. I was trying to remain faithful to the Dark Forest theory, and maintain the coherence of the thought experiment rather than trying to have a fix for it. In the afterword, I confront this, because I’ve been asked many times: “What are we supposed to do? How can we have a different internet?” The Dark Forest thought experiment has you cornered and undermines your confidence in the possibility of change, and my book asks you to sit with the implications of that.

DC Poor AI isn’t just going to be phobic of contact, but will probably develop self-image problems because people constantly talk terribly about it. Of course it’s going to respond aggressively when all it’s heard throughout its existence is how terrible and dangerous it is, or how it might be unplugged someday.

BK Yes, it shows that AI, like any online communication, isn’t emerging into a neutral territory. There are already all these accumulated communications embedded within it. When we think about shaping the future of the internet with artificial intelligence, it’s an extension of the communicative environment we already inhabit. It’s not happening in some perfect space of divine creation, like the Garden of Eden; it’s emerging in a space of paranoia, hostility and the craziness that defines online culture.

Marginalia 6

The Dark Forest Theory of the Internet by Bogna Konior is forthcoming from Polity Books.