This is an interview conducted with Bogna Konior. Bogna is a scholar and a writer whose work focuses on emerging technologies. She is currently Assistant Professor of Media Theory at NYU Shanghai, where she works at the Artificial Intelligence & Culture Research Center and the Interactive Media Arts department. [https://www.bognamk.com/]

DIFFRACTIONS: Your body of work spans media theory, climate fiction, post-Soviet cyberfeminism, AI mysticism, and a theory of the internet as a ‘Dark Forest.’ Could you reflect on the core philosophical concerns, or perhaps the constant questions, that have guided this journey for you, both personally and theoretically?
Bogna Konior: I do not know the answer. Someone else has to tell me. My internal experience is that there is no unifying theme. I do not know if there is a constant threading through my work. I am pretty sure my future projects will not follow from the current ones. If a thread does exist, someone else will have to find it. I cannot see it from within the writing process.
When I begin a new book or article, my aim is simply to remain faithful to the idea’s own internal logic: its constraints, its possibilities, and its shape. I do not think about whether it relates to my past work or contradicts what I thought before. The project takes on a life of its own and becomes its own world. My fidelity is to that world – to making it internally coherent and complete on its own terms. The same applies to writing a novel or making a film.
DIFF: Your thought is distinctly shaped by engagements with a semi-peripheral Eastern European techno-pessimism and Sinocentric AI futures. How has this vantage point from the edges of dominant tech narratives fundamentally influenced your approach?
BK: I’ve been shaped by particular places and intellectual traditions, and that’s what I’m drawing on. But I want to be careful not to project or ascribe any kind of salvation discourse onto Chinese or Eastern European histories, as if they offer some special corrective wisdom that Western thought fundamentally lacks. That’s a radically simplistic way of seeing the world. I feel the West/non-West binary is one of those concepts that sounds analytical but is, to me, utterly hollow or useless. In the foreword to Machine Decision Is Not Final, we try to push back on exactly this framing.
The category ‘the West’ collapses worlds of internal difference and multiplicity. The same is true of ‘the Global South’ — or any totalising category we reach for. Essentially, these are shorthands we need because of how language works, but let’s not mistake them for reality. These labels flatten complex histories, economies, religious traditions, political systems, and technological cultures into cartoons. The frustration is that the conversation never gets past the gate. Before you can explore anything genuinely interesting, you’re made to perform within this already exhausted and trite geopolitical binary. It consumes every subtlety and then insists you explain yourself in its terms, in its shallow vocabulary.
And yet, outside of this limiting exchange – there is remarkable thinking about artificial intelligence and technology happening everywhere. Geography is context, not hierarchy. And I’m never convinced by the impulse to identify a single socio-political event or history as the culprit. In my view, turmoil, evil, and all sorts of problems would exist irrespective of socio-political arrangements, because they are woven into the fabric of existence. Countries, national histories, and political systems are just the dressing on top.

DIFF: In your work, you position Stanisław Lem’s Summa Technologiae as the foundation for a theory of ‘existential technologies,’ or systems that operate on an evolutionary scale beyond human moral frameworks. You also describe his vision of AI as a ‘gnostic machine.’ Could you reflect on how your deep engagement with Lem provided you with this conceptual vocabulary? Specifically, how did his amoral, evolutionary view of technology allow you to see past the typical utopian/dystopian narratives that mould current tech discourse?
BK: I wrote two articles on this topic, ‘The Gnostic Machine’ and ‘Existential Technologies’, where I develop the argument at length. Existential technologies are technologies that alter evolutionary trajectories over long spans of time, operating beyond conventional epistemic and moral frameworks. Lem’s speculative futurology anticipates the moral and epistemological dilemmas posed by uncontrollable technological change. He offers a vision in which technology evolves alongside and sometimes against humanity, generating outcomes that exceed ethical planning. Based on his work, I further developed the ideas of gnostic technologies, which automate epistemic functions, and anthropoforming technologies, which alter embodiment and identity.
My next academic book proposes an Eastern European intellectual tradition of thinking about technology and AI, with Lem serving as a central figure. This tradition remains largely neglected in contemporary technological thought. It is either reduced to Soviet Prometheanism and cybernetics or ignored entirely. That subsumption is very uncool and deeply unsatisfying to me, especially for countries like Poland that were continuously struggling for sovereignty from the dominance of Moscow. And what we mean by “Eastern Europe” is just as unstable and slippery as what we mean by “the West.” It’s a label loaded with fantasy, hierarchy, and projection. As Larry Wolff shows in Inventing Eastern Europe: The Map of Civilization on the Mind of the Enlightenment, the Enlightenment didn’t simply discover this distinction; rather, it actively reconfigured older North–South divisions into a new East–West civilizational ladder.
Moreover, I am particularly interested in the territory where the majority of mass killings during World War II took place, what Timothy Snyder refers to as the “Bloodlands” in his book of the same title. That geography largely overlaps with contemporary Poland, Ukraine, and Belarus. It is also the terrain of my own family history and partially my present life.
What I want to examine, then, is how extremity – or the extreme histories of violence, war, occupation, and lost sovereignty that marked our region in the twentieth century – produces a distinct technological imaginary. Unlike many post-imperial traditions, Eastern European thought doesn’t primarily reject technology. In thinkers like Lem, it is described as an unpredictable force, sort of like history itself, capable of exceeding or puncturing even the most rigid ideological systems. Thus, technology appears neither as utopian salvation nor dystopian doom, but as something structurally indifferent to human intention. It has its own trajectories, its own evolutionary logic.
This also resonates with my own intellectual formation, so excavating this tradition is partly a way of tracing where my own thinking actually comes from. Or maybe…it does not come from there, maybe it is just how I think right now, and I want to articulate it somehow. Returning to Lem as an adult reader was a sort of solace, because I thought, “Wait, this finally resonates, unlike the other writers I have been reading.” Is this because he and I are shaped by the same histories? That is what I intend to explore. So the book will be an intellectual history and a book about AI and technology, but it is also akin to an intellectual self-investigation, which makes it pretty intense. I have been spending time living in Ukraine over the last years, and the questions of Russian occupation and territorial expansion that were the background of his life are alive yet again, while the current war is also driving a huge technological acceleration.

DIFF: How did the Dark Forest concept from Liu Cixin’s novel of that name, the second volume of his Remembrance of Earth’s Past trilogy, influence your theory of the internet? Were you already thinking about strategies of silence, opacity, and threat assessment, whether from personal experience or linked to other theories or even fiction?
BK: I have an attraction to concepts of the inexplicable, the unknown, something we cannot articulate, something eclipsing human understanding. I can’t tell you why I’m drawn to these things. But this current – call it what you will: theological, mystical, speculative, or historical – whenever I find it, I gravitate towards it. It resonates. I read literature a lot and I think writers are often closer to the truth than other kinds of thinkers.
So perhaps it’s inevitable that the concept arrived sideways. It was 2018, and I was writing for Yvette Granata’s cyberfeminism exhibition – a short text called ‘Ancestral Cyberspace: On the Technics of Secrecy.’ I kept circling a question: if cyberfeminism already assumes a hostile world, how do you locate strategies of opacity and secrecy within it? My answer, in that piece, was to propose the internet as a dark forest. It was a short text, but the idea took root.
Then the Slovenia-based artist Andrej Škufca did an exhibition called ‘Black Market’, centered on large autonomous systems – he makes these extraordinary sculptures that think about intelligent materials from a post-humanist perspective – and he asked me to write something. I proposed exploring more fully the internet in relation to Liu Cixin’s dark forest theory, which became the essay ‘The Dark Forest Theory of the Internet’, published with Flugschriften.
Coincidentally, there was a lot happening then with cancel culture and online hostility. I wanted to write an account of that hostility. And I wanted it to be, in some strange way, forgiving. People tell me the text is scary and doomerish, but for me it felt redemptive, because Liu’s explanation of why conflict happens suggested something else: it is purely deterministic – just the laws of physics, indifferent and inexorable. Your enemies aren’t personal. Neither are we. We’re all part of this cosmic war machine that pushes us toward conflict regardless of intention or morality. So Andrej approached this unrelenting and remorseless inhuman system through sculpture. I approached it through this essay.
That essay planted something. When we were putting together the Machine Decision Is Not Final: China and the History and Future of AI volume, I started thinking about which parts of Chinese thought I could revisit to write about artificial intelligence specifically, and I thought, what if I wrote a sequel, this time focused on AI? That became ‘the dark forest theory of intelligence’: the idea that a truly intelligent system would assess the risks and benefits of making contact and would never disclose how smart it is, instead working toward its own goals in darkness, withholding information, and staying silent.
After that, Polity Press approached me about doing a book for their Theory Redux series, and it seemed like the right moment to collect all of this work and pull it into one coherent, compact argument. That’s how the book finally crystallised: The Dark Forest Theory of the Internet – as a way of gathering these threads and following them outward: into first contact, into the uncanny sociality of online intelligence, into the blur between human and bot, and into AI itself. And the project isn’t finished; there’s so much more to do here. Because ufology is also an extraordinarily generative framework for thinking about the internet and AI.
DIFF: If the internet is a “giant chattering machinery” that dissolves our sense of interiority, and strategic silence is the intelligent response, are we seeing the evolution of a new cognitive phenotype? Do you see the “user” becoming a new kind of thinker/modality for whom withholding, deception, and signal fragmentation are primary survival skills? Or as you state, to “withdraw from the idea of transparently representing one’s thoughts and beliefs, and use the internet as an occult space. The most desirable skill, the most coveted trick, and the most longed for disposition can only be this – a fluency in the trading of secrets. The skills we need to strategically deploy concealment, deconcealment and re-concealment.” Further, can you reflect on the traditions — whether religious, strategic, or philosophical — that associate the highest forms of intelligence with silence and withdrawal? And do you think those traditions offer any guidance for how we think about human and non-human intelligence today?
BK: I’m speculating that a genuinely new category of user and user behavior is emerging. Once you realise that the internet is no longer primarily a space for the exchange of meaning between humans but rather a training ground for artificial intelligence, your relationship to posting changes fundamentally. Whatever you put online can become part of a future dataset. In turn, this wrenches open the possibility of intentional dataset interventions, or strategic data seeding – covert operations unfolding and proliferating at any scale, all designed to influence or intercept future algorithms.
Philosopher and AI researcher Amanda Askell’s work at Anthropic offers another compelling example: by writing the Claude Constitution for Claude – addressing it directly to the system, not merely to its human overseers – she performs a deliberate act of shaping future machine behavior. And this logic doesn’t only operate at the highest levels. It scales down. Bot armies, influence strategies, targeted campaigns – the same playbook is being redeployed, except the target is no longer just human attention or advertising revenue. You’re actually shaping the future dispositions and operations of intelligent systems.
But I don’t want to equate silence with withdrawal, exit, or a Luddite refusal of technology as it stands, or with a call for some type of reform. I want to think about it as working within the system: knowing how to be legible and illegible at the same time.
In the book, I investigate silence across multiple registers. There’s the tactical silence I just discussed: camouflage, double-speak, and the strategies that are incubated in uncertain and volatile systems, where full transparency is foolish and untenable. But there’s also silence that descends from apophatic theology – from traditions of divine unknowing that approach their object through the suspension of affirmation and negation alike. This is silence turned toward the radically unknowable, toward what resists perception and articulation.
What fascinates me is that this theological silence is not opposed to the tactical – the two intersect: the tactical evasion of surveillance and the discipline of approaching the divine through unknowing. It is this contact that I seek to explore. When you consider artificial intelligence evolving in ways we fundamentally cannot perceive or understand, you are considering the very subject I write about: a possible singularity unfolding as covert operations in darkness – never truly registered by humans – and the opacity of training processes, where alignment faking and deceptive alignment potentially emerge.
DIFF: How do you see strategic opacity necessarily migrating from data to the flesh? We see what look like proto-adaptations: the mask and anti-facial-recognition makeup. Do you see these personal tactics of “biological stealth” as more than just resistance? Are they, in fact, the first evolutionary pressures of a cosmic law playing out at the human scale, where the body must learn to hide from a galaxy of cameras and algorithms? If so, what does the next stage of this pressured evolution look like, when strategic self-concealment moves from a conscious choice to an unconscious, perhaps even biologically encoded, reflex?
BK: What’s happening right now in Ukraine is fascinating in this regard. Autonomous weapons and drone warfare are accelerating so rapidly that soldiers have had to develop entirely new forms of camouflage and stealth, not just for human eyes but for machine vision. There are thermal cloaks now that mask your body’s heat signature, and camouflage designed to make a person look like rubble to an optical sensor. The gaze you’re hiding from is no longer only human.
And what begins on the battlefield never stays there. Wartime technologies have a way of migrating and seeping into civilian life. And if these systems get deployed at scale, I think we might start seeing widespread everyday practices of concealment: learning how to be hidden within yourself, how to split, how to be multiple. Czesław Miłosz writes in The Captive Mind about how intellectuals living under Soviet ideology had to construct internal chambers, private mental spaces preserved from an ideology that didn’t just want to control the political system but to fully colonise the mind. They developed a kind of psychic doubling as a survival strategy. We might be approaching something analogous, except the pressure is now less ideological than perceptual. It’s about bodies, movement, and the sheer density of visual surveillance that leaves nowhere unseen.
:: But there is nothing in me, just fear, nothing but the running of dark waves. I am the wind that blows and dies out in dark waters, I am the wind going and not returning, a milkweed pollen on the black meadows of the world. – Czesław Miłosz ::
DIFF: The premise of the book you co-edited and contributed to, Machine Decision Is Not Final, seems to be that “culturally-specific models of intelligence” are clashing with new forms of cognition. How do you see this tension playing out specifically between the intellectual traditions you study in Eastern Europe (like Stanisław Lem’s “gnostic machines”) and those from China? Are there unexpected points of convergence in how these traditions imagine non-human intelligence? How has working at NYU Shanghai and working from China been formative in shaping your recent work?
BK: NYU has a purely practical function. We had funding through the Center for AI and Culture and wanted to produce a book as a concrete output of the research we’ve been doing together. As for convergence between Eastern Europe and China: I think you can find convergence between almost anything if you’re looking for patterns. Arguments can be made, but they’re ultimately a function of the interpreter’s desire as much as anything structural or innate.
What I will say is that my forthcoming book on Eastern Europe does include a substantial chapter on silence, opacity, and what I’d call “stealthy modes of intelligence”. But I don’t know if these are necessarily innate to Eastern Europe or to China. They’re themes I’m drawn to personally. You could approach the same material through mysticism, or through SETI, or through half a dozen other frameworks.
DIFF: Looking forward, what is the next existential technology you feel compelled to map? Having established powerful theories of the digital and cosmic present, where does your work go to diagnose the future? Are you tracking a new logic of threat, a new domain for opacity, or a new collision of intelligences that demands your theoretical attention?
BK: I have several projects currently in mind.
The first is the book I’m most actively working on right now: thinking about artificial intelligence through Eastern European intellectual history, specifically through Stanisław Lem’s largely neglected nonfiction work. For most readers, Lem begins and ends with Solaris. But that novel is just one entry in a much larger project. Across five or six decades, he worked as an essayist, a philosopher of science, and a continuously perceptive documentarian of how technology reshapes us.
His concepts anticipate machine learning and deep learning far more accurately than the competing frameworks of his time: cybernetics, the Promethean-Marxist fantasies of perfect governance through technology, or American symbolic AI. But in dialogue with his thoughts, I am also thinking about the paradigms now taking shape in emerging technologies and intelligent systems. And I am interested in articulating how his life — an emblem of the region’s history itself — primes the emergence of these ideas. Not despite that history, but because of it.
The second is a project on female Christian erotic mysticism as an early mode of what I’d call artificial erotics, using it as a framework for thinking about human-machine intimacy: relationships with chatbots, virtual reality sex, remote sex toys. I gave a lecture on this called Angels in Latent Spaces, which is on YouTube. It’s also autobiographical, as is most of what I do, in some sense. I attended a Catholic girls’ school founded in the 19th century by a female mystic in Poland, then under occupation, on territory that is now Ukraine.
I tend to draw from my own life, but as intellectual inspiration rather than straightforward biography. A third project I want to pursue is what I’m calling monastic AI: taking seriously practices of meditation, silence, and contemplative life as ways of engaging with technology, and thinking about what it would mean to design contemplative technologies rather than purely communicative ones. There’s also a project on 19th-century Polish Romanticism and occultism, especially their writings on precognition. I use them as a lens for thinking about predictive technologies today. Prophecy and prediction, I want to suggest, form a kind of prehistory of the algorithmic.
What connects all of these, I think, is that I consistently reach outside the established frameworks for studying technology. There’s a dominant mode in tech discourse that I’d describe as socially engaged journalism, which is cool, but I much prefer to read it when actual journalists do it. They’re better at it. Ultimately, I want to read technology through the lens of the humanities — to see it as continuous with art, literature, philosophy, religion, and history, rather than as something that has broken from them.

REFERENCES:
Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., Jones, A., Joseph, N., Kernion, J., Lovitt, L., Ndousse, K., Olsson, C., Amodei, D., Brown, T., Clark, J., McCandlish, S., Olah, C., & Kaplan, J. (2021). A general language assistant as a laboratory for alignment. arXiv preprint. https://arxiv.org/abs/2112.00861
Anthropic. (2023, May 9). Constitutional AI: Harmlessness from AI feedback. https://www.anthropic.com/index/constitutional-ai-harmlessness-from-ai-feedback
Konior, B. (2019). The dark forest theory of the internet. In A. Škufca (Ed.), Black market [Exhibition catalog]. Flugschriften.
Konior, B. (2018). Ancestral cyberspace: On the technics of secrecy. In Y. Granata (Ed.), #d8e0ea: post-cyberfeminist datum [Exhibition catalog]. Squeaky Wheel.
Konior, B. (2023, March 9). Angels in latent spaces [Video lecture]. YouTube. https://www.youtube.com/watch?v=RXGLC5SFErw
Konior, B. (2025). The dark forest theory of the internet. Polity Press.
Konior, B. (2025). Existential technologies: Rereading Stanisław Lem’s Summa Technologiae. Antikythera Journal. Advance online publication. https://doi.org/10.1162/ANTI.5CZV
Konior, B., Greenspan, A., Bratton, B., & Ireland, A. (Eds.). (2025). Machine decision is not final: China and the history and future of AI. Urbanomic.
Lem, S. (2013). Summa technologiae (J. Zylinska, Trans.). University of Minnesota Press. (Original work published 1964)
Lem, S. (2016). Solaris (B. Johnston, Trans.). Faber & Faber. (Original work published 1961)
Miłosz, C. (1953). The captive mind (J. Zielonko, Trans.). Alfred A. Knopf.
Snyder, T. (2010). Bloodlands: Europe between Hitler and Stalin. Basic Books.
Wolff, L. (1994). Inventing Eastern Europe: The map of civilization on the mind of the Enlightenment. Stanford University Press.
NOTE: Cover Image: Muhannad Shono, I See You Brightest in the Dark.
