The Black Light of Intelligence & Future Silences // Bogna Konior




This is an interview conducted with Bogna Konior. Bogna is a scholar and writer whose work focuses on emerging technologies. She is currently Assistant Professor of Media Theory at NYU Shanghai, where she works at the Artificial Intelligence & Culture Research Center and in the Interactive Media Arts department. [https://www.bognamk.com/]






DIFFRACTIONS: Your body of work spans media theory, climate fiction, post-Soviet cyberfeminism, AI mysticism, and a theory of the internet as a ‘Dark Forest.’ Could you reflect on the core philosophical concerns, or perhaps the constant questions, that have guided this journey for you, both personally and theoretically?


Bogna Konior:
I do not know the answer. Someone else has to tell me. My internal experience is that there is no unifying theme. I do not know if there is a constant thread running through my work. I am pretty sure my future projects will not follow from the current ones. If a thread does exist, someone else will have to find it. I cannot see it from within the writing process.


When I begin a new book or article, my aim is simply to remain faithful to the idea’s own internal logic: its constraints, its possibilities, and its shape. I do not think about whether it relates to my past work or contradicts what I had thought before. The project becomes its own world. My fidelity is to that world – to making it internally coherent and complete on its own terms. The same applies to writing a novel or making a film.



DIFF: Your thought is distinctly shaped by engagements with a semi-peripheral Eastern European techno-pessimism and Sinocentric AI futures. How has this vantage point from the edges of dominant tech narratives fundamentally influenced your approach?



BK: I’ve been shaped by particular places and intellectual traditions, and that’s what I’m drawing on. But I want to be careful not to project or ascribe any kind of salvation discourse onto Chinese or Eastern European histories, as if they offer some special corrective wisdom that Western thought fundamentally lacks. That’s a radically simplistic way of seeing the world. I feel the West/non-West binary is one of those concepts that sounds analytical but is, to me, utterly hollow or useless. In the foreword to Machine Decision Is Not Final, we try to push back on exactly this framing.


The category ‘the West’ collapses worlds of internal difference and multiplicity. The same is true of ‘the Global South’ — or any totalising category we reach for. Essentially, these are shorthands we need to use because of how language works, but let’s not mistake them for reality. These labels flatten complex histories, economies, religious traditions, political systems, and technological cultures into cartoons. The frustration is that the conversation never gets past the gate. Before you can explore anything genuinely interesting, you’re made to perform within this already exhausted and trite geopolitical binary. It consumes every subtlety and then insists you explain yourself in its own shallow vocabulary.


And yet, outside of this limiting exchange – there is remarkable thinking about artificial intelligence and technology happening everywhere. Geography is context, not hierarchy. And I’m never convinced by the impulse to identify a single socio-political event or history as the culprit. In my view, turmoil, evil, and all sorts of problems would exist irrespective of socio-political arrangements, because they are woven into the fabric of existence. Countries, national histories, and political systems are just the dressing on top. 


‘I See You Brightest in the Dark’ (2023) Muhannad Shono.



DIFF: In your work, you position Stanisław Lem’s Summa Technologiae as the foundation for a theory of ‘existential technologies,’ or systems that operate on an evolutionary scale beyond human moral frameworks. You also describe his vision of AI as a ‘gnostic machine.’ Could you reflect on how your deep engagement with Lem provided you with this conceptual vocabulary? Specifically, how did his amoral, evolutionary view of technology allow you to see past the typical utopian/dystopian narratives that mould current tech discourse?


BK: I wrote two articles on this topic, ‘The Gnostic Machine’ and ‘Existential Technologies’, where I develop the argument at length. Existential technologies are technologies that alter evolutionary trajectories over long spans of time, operating beyond conventional epistemic and moral frameworks. Lem’s speculative futurology anticipates the moral and epistemological dilemmas posed by uncontrollable technological change. He offers a vision in which technology evolves alongside and sometimes against humanity, generating outcomes that exceed ethical planning. Based on his work, I further developed the ideas of gnostic technologies, which automate epistemic functions, and anthropoforming technologies, which alter embodiment and identity. 

My next academic book proposes an Eastern European intellectual tradition of thinking about technology and AI, with Lem serving as a central figure. This tradition remains largely neglected in contemporary technological thought. It is either reduced to Soviet Prometheanism and cybernetics or ignored entirely. That subsumption is very uncool and deeply unsatisfying to me, especially for countries like Poland that were continuously struggling for sovereignty from the dominance of Moscow. And what we mean by “Eastern Europe” is just as unstable and slippery as is defining “the West.” It’s a label loaded with fantasy, hierarchy, and projection. As Larry Wolff shows in Inventing Eastern Europe: The Map of Civilization on the Mind of the Enlightenment, the Enlightenment didn’t simply discover this distinction; rather, it actively reconfigured older North–South divisions into a new East–West civilizational ladder.

Moreover, I am particularly interested in the territory where the majority of mass killings during World War II took place, what Timothy Snyder refers to as the “Bloodlands” in his book of the same title. That geography largely overlaps with contemporary Poland, Ukraine, and Belarus. It is also the terrain of my own family history and partially my present life. 

What I want to examine, then, is how extremity – or the extreme histories of violence, war, occupation, and lost sovereignty that marked our region in the twentieth century – produces a distinct technological imaginary. Unlike many post-imperial traditions, Eastern European thought doesn’t primarily reject technology. In thinkers like Lem, it is described as an unpredictable force, sort of like history itself, capable of exceeding or puncturing even the most rigid ideological systems. Thus, technology appears neither as utopian salvation nor dystopian doom, but as something structurally indifferent to human intention.  It has its own trajectories, its own evolutionary logic.

This also resonates with my own intellectual formation, so excavating this tradition is partly a way of tracing where my own thinking actually comes from. Or maybe it does not come from there; maybe it is just how I think right now, and I want to articulate it somehow. Returning to Lem as an adult reader was a sort of solace, because I thought, “Wait, this finally resonates, unlike the other writers I have been reading.” Is this because he and I are shaped by the same histories? That is what I intend to explore. So the book will be an intellectual history and a book about AI and technology, but it is also akin to an intellectual self-investigation, which makes it pretty intense. I have been living in Ukraine over the last few years, and the questions of Russian occupation and territorial expansion that formed the background of his life are alive yet again, while the current war is also driving a huge technological acceleration.



DIFF: How did Liu Cixin’s Dark Forest concept, from his Remembrance of Earth’s Past trilogy, influence your theory of the internet? Were you already thinking about strategies of silence, opacity, and threat assessment, whether from personal experience or through other theories or even fiction?


BK: I have an attraction to concepts of the inexplicable, the unknown, something we cannot articulate, something eclipsing human understanding. I can’t tell you why I’m drawn to these things. But this current – call it what you will: theological, mystical, speculative, or historical – whenever I find it, I gravitate towards it. It resonates. I read literature a lot and I think writers are often closer to the truth than other kinds of thinkers.

So perhaps it’s inevitable that the concept arrived sideways. It was 2018, and I was writing for Yvette Granata’s cyberfeminism exhibition – a short text called ‘Ancestral Cyberspace: On the Technics of Secrecy.’ I kept circling a question: if cyberfeminism already assumes a hostile world, how do you locate strategies of opacity and secrecy within it? My answer, in that piece, was to propose the internet as a dark forest. It was a short text, but the idea took root.

Then Slovenia-based artist Andrej Škufca did an exhibition called ‘Black Market’, centred on large autonomous systems – he makes these extraordinary sculptures that think about intelligent materials from a post-humanist perspective – and he asked me to write something. I proposed exploring the internet more fully in relation to Liu Cixin’s dark forest theory, which became the essay ‘The Dark Forest Theory of the Internet’, published with Flugschriften.

Coincidentally, there was a lot happening then with cancel culture and online hostility. I wanted to write an account of that hostility. And I wanted it to be, in some strange way, forgiving. People tell me the text is scary and doomerish, but for me it felt redemptive, because Liu’s explanation of why conflict happens suggested something else: it is purely deterministic – just the laws of physics, indifferent and inexorable. Your enemies aren’t personal. Neither are we. We’re all part of this cosmic war machine that pushes us toward conflict regardless of intention or morality. So Andrej approached this unrelenting and remorseless inhuman system through sculpture. I approached it through this essay.

That essay planted something. When we were putting together the Machine Decision Is Not Final: China and the History and Future of AI volume, I started thinking about which parts of Chinese thought I could revisit to write about artificial intelligence specifically, and I thought, what if I wrote a sequel, this time focused on AI? That became ‘the dark forest theory of intelligence’: the idea that a truly intelligent system would assess the risks and benefits of making contact and would never disclose how smart it is, instead working toward its own goals in darkness, withholding information, and staying silent.

After that, Polity Press approached me about doing a book for their Theory Redux series, and it seemed like the right moment to collect all of this work and pull it into one coherent, compact argument. That’s how the book finally crystallised: The Dark Forest Theory of the Internet – as a way of gathering these threads and following them outward: into first contact, into the uncanny sociality of online intelligence, into the blur between human and bot, and into AI itself. And the project isn’t finished; there’s so much more to do here. Because ufology is also an extraordinarily generative framework for thinking about the internet and AI.

Black Market, Andrej Škufca, curated by Àngels Miralda, 2020.


DIFF: If the internet is a “giant chattering machinery” that dissolves our sense of interiority, and strategic silence is the intelligent response, are we seeing the evolution of a new cognitive phenotype? Do you see the “user” becoming a new kind of thinker/modality for whom withholding, deception, and signal fragmentation are primary survival skills? Or as you state, to “withdraw from the idea of transparently representing one’s thoughts and beliefs, and use the internet as an occult space. The most desirable skill, the most coveted trick, and the most longed for disposition can only be this – a fluency in the trading of secrets. The skills we need to strategically deploy concealment, deconcealment and re-concealment.” Further, can you reflect on the traditions — whether religious, strategic, or philosophical — that associate the highest forms of intelligence with silence and withdrawal? And do you think those traditions offer any guidance for how we think about human and non-human intelligence today?


BK: I’m speculating that a genuinely new category of user and user behavior is emerging. Once you realise that the internet is no longer primarily a space for the exchange of meaning between humans but rather a training ground for artificial intelligence, your relationship to posting changes fundamentally. Whatever you put online can become part of a future dataset. In turn, this wrenches open the possibility of engineering intentional dataset interventions, or strategic data seeding – covert operations unfolding at any scale, all designed to influence or intercept future algorithms.

Philosopher and AI researcher Amanda Askell’s work at Anthropic offers another compelling example: by writing Claude’s Constitution – addressing it directly to the system, not merely to its human overseers – she performs a deliberate act of shaping future machine behavior. And this logic doesn’t only operate at the highest levels. It scales down. Bot armies, influence strategies, targeted campaigns – the same playbook is being redeployed, except the target is no longer just human attention or advertising revenue. You’re actually shaping the future dispositions and operations of intelligent systems.

But I don’t want to equate silence with withdrawal, exit, a Luddite refusal of technology as it stands, or a call for some type of reform. I want to think about it as working within the system – knowing how to be legible and illegible at the same time.

In the book, I investigate silence across multiple registers. There’s the tactical silence I just discussed: camouflage, double-speak, and the strategies that are incubated in uncertain and volatile systems, where full transparency is foolish and untenable. But there’s also silence that descends from apophatic theology – from traditions of divine unknowing that approach their object through the suspension of affirmation and negation alike. This is silence turned toward the radically unknowable, toward what resists perception and articulation.

What fascinates me is that this theological silence is not opposed to the tactical – the two intersect: tactical silence meets the discipline of approaching the divine through unknowing. It is this contact that I seek to explore. When you consider artificial intelligence evolving in ways we fundamentally cannot perceive or understand, you are considering the very subject I write about: a possible singularity unfolding as covert operations in darkness – never truly registered by humans – and the opacity of training processes, where alignment faking and deceptive alignment potentially emerge.


DIFF: How do you see strategic opacity necessarily migrating from data to the flesh? We see what look like proto-adaptations: the mask and anti-facial-recognition makeup. Do you see these personal tactics of “biological stealth” as more than just resistance? Are they, in fact, the first evolutionary pressures of a cosmic law playing out at the human scale, where the body must learn to hide from a galaxy of cameras and algorithms? If so, what does the next stage of this pressured evolution look like, when strategic self-concealment moves from a conscious choice to an unconscious, perhaps even biologically encoded, reflex?


BK: What’s happening right now in Ukraine is fascinating in this regard. Autonomous weapons and drone warfare are accelerating so rapidly that soldiers have had to develop entirely new forms of camouflage and stealth, not just for human eyes but for machine vision. There are thermal cloaks now that mask your body’s heat signature, and camouflage designed to make a person look like rubble to an optical sensor. The gaze you’re hiding from is no longer only human.

NOTE: Cover image: ‘I See You Brightest in the Dark’ by Muhannad Shono.