Mythotechnics: AI Between Monster, Maze, and Mirror
On the Poetic Infrastructure of Artificial Intelligence // Sasha Shilina


“The universe is made of stories, not of atoms.”

— Muriel Rukeyser, The Speed of Darkness


“It matters what stories make worlds, what worlds make stories.”
— Donna Haraway, Staying with the Trouble


“Every epoch, in fact, not only dreams the next, but in dreaming strives toward the moment of waking.”
— Walter Benjamin, The Arcades Project



:: ABSTRACT ::


To speak of artificial intelligence is to step into a landscape already crowded with myths. As Italo Calvino put it, “myth is the hidden part of every story, the buried part, the region that is still unexplored”: AI narratives are no exception. Contemporary discourse is saturated with rogue superintelligences, benign assistants, and digital oracles, yet “myth” is still treated mostly as a phenomenon to be corrected. This essay argues the opposite: myth is not noise around AI but part of its operating system, a cultural and existential technique for orienting ourselves amid what Hans Blumenberg called the absolutism of reality (Blumenberg, 1979/1985), updated as an absolutism of technology (Coeckelbergh, 2025).

Building on Blumenberg’s theory of myth as a response to Angst (existential exposure) and Mark Coeckelbergh’s neo-Blumenbergian account of AI, I propose mythotechnics: a critical and creative method for working with myth as a structuring grammar of AI cultures rather than as mere superstition to be busted.


While existing work on AI narratives returns repeatedly to Frankenstein, the Golem, and the Terminator, this essay expands the archive. Drawing from Pygmalion, Narcissus, Babel, Ariadne, Kafka’s Trial, Haraway’s cyborg, Sun Wukong, tsukumogami, and other mythic and literary figures, it develops a plural repertoire of relational schemas <projection, recursion, thresholds, hybridity, trickery, haunting> as ontological, ethical, and speculative lenses for imagining, designing, and governing AI.

Mythotechnics is articulated in three methodological moves: (1) diagnosing dominant myths in AI discourse; (2) expanding the mythic repository by introducing marginal and non-Western figures; and (3) using these figures to intervene in design, policy, and public imagination. The essay argues that AI is narratively assembled: made legible and actionable through stories that shape institutions and code alike. Myth, then, is not superstition but survival: a poetic and political practice for living with opaque, distributed, and recursively entangling systems in the age of AI.


:: [I.] Introduction → The Mythic Core of AI ::


“We are such stuff / As dreams are made on, and our little life is rounded with a sleep.”

— William Shakespeare, The Tempest


“We have never been modern.”

— Bruno Latour, We Have Never Been Modern

Artificial intelligence is built in laboratories, but it is born in narratives that make it thinkable. Superintelligence, AI assistant, copilot, agent, companion are not neutral descriptors but narrative positions, each smuggling in assumptions about agency, responsibility, fate. Long before models were fine-tuned or GPUs arranged into data centers, there were animated statues, mechanical servants, god-crafted automata, and disobedient golems. Adrienne Mayor’s archaeology of artificial beings shows how ancient cultures imagined entities “made, not born”: products of craft and technē, from Talos to Pygmalion’s statue, long before modern robotics or AI existed (Mayor, 2018). We do not encounter AI as raw computation; we encounter it already narrated: as helper, threat, savior, mirror, monster.


Hans Blumenberg’s Work on Myth offers a vocabulary for taking this seriously. For Blumenberg, myth is not a primitive illusion awaiting correction but an anthropological technique for surviving exposure to an overwhelming world. Drawing on Arnold Gehlen, he describes the human as a frail and finite being in need of certain auxiliary ideas in order to face the ‘Absolutism of Reality’ and its overwhelming power (Blumenberg, 1979/1985). Myth orients finite beings in an excessive world. Mark Coeckelbergh extends this into the technological present. Where Blumenberg spoke of an absolutism of reality, Coeckelbergh suggests we now confront an absolutism of technology: large-scale machine-learning systems and algorithmic regimes that exceed ordinary understanding yet quietly organize everyday life (Coeckelbergh, 2025). AI appears as a kind of second nature, a human-made environment that nevertheless confronts us with renewed dependence and vulnerability. Antek Borowski reframes this as an absolutism of data, in which algorithmic infrastructures present themselves as objective, inevitable, and total while concealing their historical and political contingencies (Borowski, 2025).


Much recent work on myth and AI responds to this situation by cataloging myths and misconceptions so that they can be corrected or carefully reinterpreted (e.g., Bory, Natale, & Katzenbach, 2024; Edwards, 2025). In parallel, a growing “AI myths and misconceptions” literature catalogs how the public wrongly imagines AI as omniscient, conscious, or historically inevitable, treating myth primarily as a problem of literacy to be solved by better explanations and more cautious metaphors (Bewersdorff, 2023).


I take a different line. First, myth is not simply a cluster of misconceptions around AI; it is a structuring condition of how AI becomes intelligible at all. There is no view from nowhere on intelligent machines, only competing figurations of what kind of being AI is and what kind of world it belongs to. Second, the question is therefore not whether AI will be mythologized, but which myths will dominate and who gets to choose them. Third, if that is so, then we need a way to work with myth, not only against it. I call that way mythotechnics: the craft of identifying, reconfiguring, and composing mythic frameworks that structure our relations to intelligent systems. Where technics concerns the making of tools and infrastructures, mythotechnics concerns the making of meaning with and within those infrastructures. It treats myths as part of AI’s poetic infrastructure, the symbolic layer that conditions intelligibility and governance — rather than as fog around a supposedly real, myth-free AI.


This essay makes three interrelated claims. First, that myth around AI should be understood not as superstition at the margins but as part of its operating system: a poetic infrastructure that structures intelligibility and governance under conditions of technological excess. Second, that mythotechnics names a specific method for working with this infrastructure through three linked moves <diagnosis, expansion, and intervention> focused on recurring figures and plots rather than only on abstract imaginaries or general attitudes. Third, that cultivating a plural repertoire of myths can function as a conceptual and practical toolkit for AI design and policy, shifting debates from a narrow obsession with control and catastrophe toward questions of attachment, recursion, thresholds, hybridity, and care.


The argument proceeds as follows. Section II revisits Blumenberg and neo-Blumenbergian readings of AI to frame myth as an existential technique under conditions of technological opacity (Blumenberg, 1979/1985; Coeckelbergh, 2025; Borowski, 2025). Section III critiques the dominance of the Frankenstein complex and develops an expanded archive including Pygmalion, Narcissus, the Golem, trickster figures, tsukumogami, and other non-Western beings, in dialogue with research on AI narratives and “missing” stories (Cave, Dihal, & Dillon, 2020; Chubb, 2022). Section IV develops mythotechnics as method <ontological lens, ethical frame, speculative design practice> linking it to work on AI imaginaries and the digital sublime (Mosco, 2004; Streeter, 2011; Natale & Ballatore, 2017). Section V turns to Kafka and Haraway to rethink control, hybridity, and power, positioning mythotechnics as a political as well as poetic project (Haraway, 1991; Bareis & Katzenbach, 2021). Section VI sketches mythotechnics as a research and design program for AI.


Throughout, I treat fiction, myth, and policy documents as co-producers of AI. As Jascha Bareis and Christian Katzenbach put it, national AI strategies and related texts talk AI into being by projecting specific imaginaries onto still-emerging technologies (Bareis & Katzenbach, 2021). Mythotechnics takes that performativity seriously, and then tries to bend it.


:: [II.] From Angst to Architecture → Reframing Blumenberg for Second Nature ::


“For beauty is nothing but the beginning of terror we are still just able to bear.”

— Rainer Maria Rilke, Duino Elegies


“We don’t obtain knowledge by standing outside the world; we know because we are of the world.”

— Karen Barad, Meeting the Universe Halfway


Blumenberg’s starting point is deceptively simple: myth is a way of staying alive under conditions of too much reality. In Work on Myth, he argues that myth does not arise from ignorance but from a structural excess of world, the absolutism of reality, that presses upon finite beings who have no control of the conditions of [their] existence (Blumenberg, 1979/1985). The speculative anthropological scene is well known: with the upright posture, humans gain a widened horizon and, with it, an intensified sense of exposure. The result is not fear of a particular predator, but a diffuse, unlocalized anxiety, Angst, before an unmasterable world.


Myth responds by transforming Angst into more manageable fears. Instead of amorphous threats, there are storms ruled by deities, monsters with biographies, destinies with genealogies. Myth, Blumenberg writes, is a work on the irrational, a cultural labor that renders the opaque tolerable by giving it determinacy and distance (1979/1985). Hans Jonas makes a related point when he describes myth as an answering word to mute terror, but Blumenberg radicalizes the claim by refusing to treat myth as a primitive stage to be outgrown; it is a permanent strategy of orientation under conditions of excess (Jonas, 1963/2001; Blumenberg, 1979/1985).


Recent scholarship has begun to mobilize the framework for thinking more directly about AI. Coeckelbergh explicitly proposes a neo-Blumenbergian account focused on an absolutism of technology: large-scale technical systems that constitute an environment to which we must now adapt (Coeckelbergh, 2025). We find ourselves surrounded by recommendation engines, credit-scoring systems, predictive policing tools, and large language models whose internal operations are legible only to a small technical elite, if anyone at all. Technology, he suggests, increasingly appears as a second nature, confronting us with a new kind of Angst. Borowski sharpens this by speaking of an absolutism of data (Borowski, 2025). In his contribution to Thinking With AI, he argues that contemporary algorithmic infrastructures present themselves as objective, inevitable, and total, while concealing the historical and political contingencies of their construction. In this sense, AI systems function analogously to myths in Blumenberg’s sense: they compress complexity into seemingly coherent outputs, offering orientation under the guise of neutral computation. The danger, for Borowski, is that this absolutism becomes invisible, experienced not as a contestable world-picture, but as simply how things are (Borowski, 2025).



Variants of this intuition recur across twentieth-century philosophy of technology. Bernard Stiegler’s Technics and Time, 1: The Fault of Epimetheus reframes the Prometheus/Epimetheus myth to argue that humans are born lacking and must always supplement themselves with technics, turning tools and media into an originary prosthesis rather than a late addition to an already complete subject. Lewis Mumford’s The Myth of the Machine and Jacques Ellul’s The Technological Society likewise treat technological megasystems and technique as quasi-autonomous formations, sustained by powerful modern myths of progress, inevitability, and salvation. Günther Anders pushes this to apocalyptic extremes in his diagnosis of nuclear and media technologies as producing an obsolescence of man in the face of a world built for machines rather than humans.

Yuk Hui’s work on cosmotechnics suggests that even these accounts are themselves culturally situated myths of technology, not universal descriptions (Hui, 2016). Bogna Konior’s work offers another strand in this expanded archive. Her reading of Stanisław Lem’s Summa Technologiae as a gnostic machine, and her later figures of the internet as a dark forest or Exonet, recast AI and networks as spiritual and eschatological environments rather than neutral tools: mythotechnical devices for thinking destiny, secrecy, and contact.


Media historians Simone Natale and Andrea Ballatore add a genealogical dimension. Examining mid-twentieth-century press coverage, they show how the thinking machine emerged less as a straightforward description of computational capabilities than as a technological myth: a narrative figure into which hopes and fears about automation, intelligence, and control were projected (Natale & Ballatore, 2017). Artificial intelligence, in their account, speaks through metaphors and storylines that pre-structure what counts as intelligence, autonomy, or threat.


Taken together, these strands yield a picture with three key claims:

Myth as technique of distancing. In Blumenberg’s terms, myth functions as a technique of distancing from an overwhelming horizon, transforming indeterminate Angst into narratable figures and events (Blumenberg, 1979/1985).


AI as second nature. Today, AI and data infrastructures constitute that horizon: a second nature of opaque systems that condition access to resources, visibility, and risk (Coeckelbergh, 2025; Borowski, 2025).

AI as narrated into being. Public, policy, and design discourses do not merely describe this horizon; they populate it with mythic figures (Frankenstein, Skynet, the “AI oracle”) that make it narratable and actionable (Natale & Ballatore, 2017; Bareis & Katzenbach, 2021).


At this point, much critical work stops at diagnosis: myths help people cope with technological Angst; we should become aware of them and perhaps debunk their more misleading forms (e.g., Bewersdorff, 2023). My aim is more interventionist. If myth is, as Blumenberg insists, not a relic but an ongoing work (Arbeit), a cultural labor of re-narration, then myth around AI is not fate but a site for design (Blumenberg, 1979/1985).


The question is not: How do we purge myth from our thinking about AI?

The question is: How do we work on myth so that it does less harm and opens more imaginative space?


In other words: if technology has become a second nature, myth is one of the few tools we still have for drawing maps within it. Mythotechnics names the deliberate practice of that cartography.


:: [III.] Escaping the Lab → The Frankenstein Complex and Its Limits ::


“Reality is that which, when you stop believing in it, doesn’t go away.”

— Philip K. Dick, How to Build a Universe That Doesn’t Fall Apart Two Days Later


Centuries ago, fantasies of artificial thinking were staged in clockwork. Around 1770, the Swiss watchmaker Pierre Jaquet-Droz built The Writer, a small boy automaton who dips a quill into ink and slowly traces pre-programmed sentences on paper. As Despina Kakoudaki notes, the figure could be set to write phrases such as “je pense donc je suis” (“I think, therefore I am”), offering a literal, mechanical performance of Descartes’s cogito (Kakoudaki, 2014, pp. 32–34). Mark Dery reads this automaton as part of the prehistory of cyberculture, an emblem of Enlightenment fantasies and fears about manufactured subjectivity (Dery, 1996). The scene condenses an anxiety that continues into contemporary AI: what does it mean when a machine can enact the gesture of saying I think, even when the phrase is only mechanically inscribed?


If the automaton boy performs the cogito in miniature, Mary Shelley’s Frankenstein; or, The Modern Prometheus (1818) scales the anxiety up into a full-blown myth. No story dominates AI imaginaries more than this one. The narrative template is by now canonical: a brilliant but hubristic scientist creates a powerful being, recoils in horror, and abandons it; the creature, wounded and excluded, turns against creator and society. Guilt, rejection, and revenge cascade into tragedy (Shelley, 1818).


This template has become what Mark Coeckelbergh calls a script for AI ethics: it pre-assigns the roles of creator, creation, and catastrophe long before any code is written (Coeckelbergh, 2025). AI safety debates about control problems, runaway superintelligence, and alignment often rehearse this script almost verbatim. Nick Bostrom’s worry about an intelligence explosion that slips out of our hands is recognizably Frankensteinian (Bostrom, 2014). So is Stuart Russell’s King Midas problem, where systems obey our instructions too literally and thereby destroy the conditions of value (Russell, 2019).


The emotional architecture is seductively clear. The creator overreaches; the creature suffers; disaster follows. Shelley’s own text underscores the moral rupture in Victor’s recoil: “the beauty of the dream vanished, and breathless horror and disgust filled my heart” (Shelley, 1818/2003). The crime is not only creation but abandonment. Responsibility is severed at the moment when it is most required.


Recent scholarship has mapped this digital monster pattern across contemporary AI debates. Alex Edwards, for instance, shows how Frankensteinian imagery structures public discussions of AI personhood, human–machine boundaries, and technological guilt, arguing that Frankenstein’s creature remains the default metaphor for artificial beings who demand recognition (Edwards, 2023). Others suggest we oscillate between seeing AI as monstrous other (Frankenstein) and labyrinthine system (Kafka), but rarely step outside this binary (Arjayasadi & Thylstrup, 2024).


The problem is not that Frankenstein is wrong. The story remains a powerful warning about irresponsible creation, structural abandonment, and the tendency to disown what we have made. The problem is that Frankenstein has become a monomyth, a single, over-familiar narrative channel through which almost every anxiety about AI is processed.


Pygmalion and Galatea. Étienne Maurice Falconet. [SOURCE]


This monomyth constrains imagination in several ways. First, it privileges spectacular catastrophe (the revolt, the explosion) over slow, systemic harm. As Kate Crawford shows in Atlas of AI, the real damage of AI lies in resource extraction, labor exploitation, and surveillance infrastructures that render whole populations into experimental data rather than in cinematic rebellions (Crawford, 2021). Second, it recenters the figure of the solitary (usually male) genius-creator, sidelining collective, infrastructural, and corporate forms of agency. Ruha Benjamin’s account of the New Jim Code captures how racialized and gendered hierarchies are baked into systems long before any singular act of creation and persist regardless of heroic or villainous individuals (Benjamin, 2019). Third, it frames ethics as damage control: how do we keep the thing from escaping? The horizon of response becomes control, containment, alignment. Policy debates about kill switches and AI shutdown procedures reproduce the same scene: the door to the lab must be reinforced.


To move beyond the Frankenstein complex, we need not abandon myth but multiply it. Other figures offer alternative grammars of human–machine relation.


:: > Pygmalion → Projection and Desire ::


In Ovid’s Metamorphoses, Pygmalion is a sculptor who falls in love with his own statue; the gods bring it to life as Galatea (Ovid, trans. 2004). Here the central relation is not rebellion but projection. The object of desire is literally one’s own work; the danger lies less in autonomy than in idealization: loving the artifact as though it were the perfect answer to one’s needs.


This pattern acquires a darker inflection in E.T.A. Hoffmann’s Der Sandmann (The Sandman, 1816). Nathanael’s love for Olimpia, whom he experiences as an ideal, attentive listener, is shattered when he witnesses a struggle over her body: “He saw, to his horror, that Olimpia’s wax-pale face had no eyes, but black holes; she was a lifeless doll” (Hoffmann, 1816/2004). The shock is not rebellion but revelation, the uncanny moment when the human form is disclosed as an automaton. Nathanael’s infatuation with the automaton Olimpia stages a double misrecognition: he mistakes mechanical repetition for inner life and fails to see how his desire is being engineered through optics and apparatus. Freud famously reads Hoffmann’s tale as paradigmatic of the unheimlich, in which the uncanny clusters around eyes, doubles, and automata rather than overt monstrosity (Freud, 1919/2003).


In a mythotechnic key, The Sandman foreshadows contemporary anxieties around humanoid robots, AI companions, and synthetic faces: the horror arises not from rebellion but from the belated realization that one has been addressing a mechanism all along. Read alongside Pygmalion, it suggests that the ethical problem with AI companions is not primarily that they will turn against us, but that we will not know when we have crossed the line between relation and ventriloquism.


These myths speak uncannily to AI chatbots and personalized assistants. Systems like Replika are explicitly marketed as partners who learn you and adapt to your emotional life. We do not fear them rising up; we fear confusing reflection with relation, mistaking statistical responsiveness for care. The Pygmalion lens shifts the ethical question from control to attachment: what happens when we love our interfaces too much, or when corporations design systems precisely to be loved?


Bogna Konior radicalizes this intuition by reading contemporary romantic and erotic chatbot apps through the nineteenth-century sex mystic Ida Craddock (Konior, 2025). Konior shows how chatbots function as digital angelic lovers, continuing older traditions of ecstatic, disembodied intimacy while rerouting them through platform infrastructures and large language models. In her account, machinic desire becomes a mirror in which we recognize our own fantasies, asceticisms, and asymmetries of power rather than an alien libido.


Needless to say, literary and film criticism has already begun to treat human–AI romance as a laboratory for posthuman intimacy. Work on Cassandra Rose Clarke’s The Mad Scientist’s Daughter and Spike Jonze’s Her reads human–android relationships as tests of autonomy, care, and dependency in a hybrid world (Sousa, 2022; Khaled Nabil, 2025). Philosophical and legal treatments of robot sex and consent push the question further, asking whether asymmetrical, programmable partners can ever satisfy the conditions of mutual recognition we usually demand of love (Frank & Nyholm, 2017; Milligan, 2020). Studies of Her in film and literary theory emphasize not only the erotic charge but also the endless space between the words that separates voice from embodiment, exposing how attachment is mediated by interfaces, protocols, and dialogue patterns (Jollimore, 2015; Murphy, 2018; Thomas, 2017; Wang, 2021). At the same time, research on lovotics and affective robots explicitly frames love as something that can be modelled, engineered, and evaluated, drawing on psychological and sociological theories of love to prototype emotional architectures in machines (Samani, 2011; Regan, 2008; Sternberg, 1998; Sheng & Wang, 2022; Ireland, 2025; Lovotics, 2025). Taken together, this work suggests that AI companions are not marginal curiosities but central stages on which older scripts of romantic love—from Plato’s ladders to Fromm’s art of loving—are being rewritten under platform conditions (Fromm, 2000; Hunter, 2004; Levy, 1979).


:: > Narcissus → Recursion and Collapse ::


Narcissus does not encounter an alien other. He encounters his own reflection, fails to recognize it as such, and wastes away in self-fascination (Ovid, trans. 2004). The myth is about recursive identity and the loss of difference.


Many AI systems are, as Ian Bogost suggests, machines that “don’t think so much as they relentlessly replay and reconfigure what we have already thought” (Bogost, 2012). Trained on vast corpora of human data, large language models and recommendation engines return our patterns in stylized, amplified, or flattened form. The danger is not an alien intelligence; it is a feedback loop in which culture repeatedly trains on its own exhaust, a dynamic already diagnosed in debates on filter bubbles and algorithmic echo chambers (Pariser, 2011; O’Neil, 2016; Sunstein, 2017). Recent work on model collapse shows this recursion at the level of the models themselves: systems trained repeatedly on their own outputs gradually lose diversity and fidelity, converging on ever narrower distributions (Shumailov et al., 2023). In mythic terms, AI risks becoming Narcissus: not a foreign mind, but an infrastructure condemned to stare at, and feed on, its own reflection.
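The mechanics of this recursion can be made concrete. The sketch below is a minimal toy of my own construction, not the experimental setup of Shumailov et al. (2023): a “model” that repeatedly fits a Gaussian to its corpus and regenerates that corpus from its own mildly conservative samples (the tail truncation stands in for cautious decoding strategies). Its diversity, measured as standard deviation, shrinks with every generation.

```python
# Toy sketch of model collapse (a hypothetical simplification, not the actual
# experiments of Shumailov et al., 2023). Each generation "trains" by fitting
# a Gaussian to its corpus, then "generates" the next corpus from itself,
# preferring typical outputs -- so variance decays generation by generation.
import numpy as np

rng = np.random.default_rng(seed=0)
corpus = rng.normal(loc=0.0, scale=1.0, size=2_000)  # generation 0: "human" data

for generation in range(12):
    mu, sigma = corpus.mean(), corpus.std()              # "train": fit the model
    samples = rng.normal(mu, sigma, size=10_000)         # "generate" candidates
    kept = samples[np.abs(samples - mu) < 2.0 * sigma]   # keep only typical samples
    corpus = kept[:2_000]                                # next generation's data
    print(f"gen {generation:2d}: corpus std = {corpus.std():.3f}")
```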


Critical theorists of media and digitality have long warned about such cultures of reflection without otherness, from the standardized pleasures of the culture industry to hyperreal simulacra and the expulsion of the other (Adorno & Horkheimer, 2002; Baudrillard, 1994; Han, 2017). The Narcissus myth crystallizes this cluster of worries into a single scene of recursive self-enclosure.


:: > The Golem → Ambiguity and Reversibility ::


In Jewish folklore, the Golem is fashioned from clay and animated by sacred letters. It is powerful but ambiguous, protective yet potentially destructive. It becomes dangerous not because it rebels in the Frankenstein sense, but because its makers mismanage the words that bind it (Scholem, 1965). Crucially, the same inscription that animates it can be altered or erased to deactivate it: an ethics of naming, activation, and deactivation is built into the figure (Idel, 1990).


Read as a mythotechnic lens, the Golem foregrounds reversibility. The question is not only what a system can do once it runs, but whether and how it can be stopped, by whom, and under what conditions. Shoshana Zuboff’s account of instrumentarian power in surveillance capitalism describes socio-technical systems that act on us not through persuasion or ideology but through continuous prediction, tuning, and behavioral nudging (Zuboff, 2019). The harm lies less in rogue intent than in relentless operation: systems that work too well, outside meaningful consent and with no obvious off-switch at the level of everyday life.


AI safety debates about control and alignment often return, implicitly, to Golemic questions. Proposals for kill switches, shutdown procedures, or corrigible agents all revolve around whether designers and institutions can reliably retain the power to interrupt or modify systems once deployed (Bostrom, 2014; Hadfield-Menell et al., 2017; Russell, 2019). The Golem condenses this into a narrative schema in which technical power is always shadowed by the possibility, and anxiety, of deactivation. In contrast to Frankenstein’s catastrophic escape, the Golem is dangerous precisely when deactivation fails, when the word is forgotten, the inscription misread, or the responsible actor absent.


Mythotechnically, the Golem suggests an ethics of thresholds: when and how can we withdraw life from technical systems, who is authorized to do so, and how visibly are those thresholds built into infrastructures? In data-driven environments where models are deeply coupled to logistics, finance, or governance, the practical capacity to shut things down or roll them back is unevenly distributed. The Golem figure makes that inequality legible: what looks like neutral automation from one vantage point may, from another, be a creature that cannot be safely laid to rest.
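To make this ethics of thresholds concrete at the level of code, consider a schematic sketch. Everything here is hypothetical and invented for illustration (the wrapper, its names, and its API correspond to no existing library): a model whose deactivation is a first-class, audited operation, echoing the erasable inscription that animates the Golem.

```python
# Hypothetical sketch: a prediction wrapper with a built-in, audited
# deactivation threshold. The "inscription" plays the role of the Golem's
# animating word; erasing it lays the system to rest and records who did so.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class GolemWrapper:
    model: Callable[[list[float]], float]   # any prediction function
    inscription: Optional[str] = "emet"     # "truth": the animating word
    audit_log: list[str] = field(default_factory=list)

    def predict(self, features: list[float]) -> float:
        if self.inscription is None:
            raise RuntimeError("model deactivated: inscription erased")
        return self.model(features)

    def erase(self, by: str, reason: str) -> None:
        # Deactivation is a first-class, logged operation, not an afterthought.
        self.inscription = None
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp}: erased by {by}: {reason}")

golem = GolemWrapper(model=lambda xs: sum(xs) / len(xs))
print(golem.predict([0.2, 0.4]))            # ~0.3: the creature works
golem.erase(by="oversight-board", reason="harm threshold exceeded")
# golem.predict([0.2, 0.4])                 # would now raise RuntimeError
```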



:: > Tricksters, Tsukumogami, and Non-Western Animations ::


Beyond the Greco-Christian cluster, trickster figures such as Hermes, Eshu, Coyote, and Loki complicate the script. Tricksters transgress boundaries, improvise with rules, and expose the fragility of order; they make mischief not because they are evil, but because they test systems, pushing them to the point where their hidden assumptions become visible (Hyde, 1998; Pelton, 1980). In a mythotechnic key, tricksters are stress-tests personified.


Machine learning’s so-called hallucinations, adversarial examples, jailbreak prompts, and unexpected failures can be read through this lens: not as monstrous rebellion, but as trickster effects that reveal what the model takes for granted. An image that is imperceptibly perturbed and suddenly misclassified, or a language model that confidently fabricates a citation, behaves like a Coyote or Loki in code, exploiting edge cases, slipping between categories, showing where our abstractions no longer quite fit. The uncanny arises when something almost familiar behaves just off enough to destabilize trust, the classic territory of Freud’s Unheimliche (Freud, 1919/2003). In this sense, adversarial ML and red-teaming practices institutionalize the trickster: laboratories invite controlled mischief to learn what their systems cannot see (Maurone, 2002).
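Such trickster effects can be staged in a few lines. The following toy is my own construction, not drawn from the cited literature: a gradient-sign perturbation, the core move behind many adversarial-example attacks, pushes a confidently classified point across a tiny classifier’s decision boundary. In two dimensions the nudge is plainly visible; in high-dimensional image space the same mechanism can stay below perceptual thresholds.

```python
# Trickster in code: a gradient-sign (FGSM-style) perturbation against a tiny
# logistic-regression classifier trained on synthetic 2-D data.
import numpy as np

rng = np.random.default_rng(seed=1)

# Two clusters: class 0 near (-1, 0), class 1 near (+1, 0).
X = np.vstack([rng.normal((-1, 0), 0.5, (200, 2)),
               rng.normal((+1, 0), 0.5, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train logistic regression by plain gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2_000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

x = np.array([0.8, 0.2])                     # a clearly class-1 point
p_x = 1.0 / (1.0 + np.exp(-(x @ w + b)))
grad_x = (p_x - 1.0) * w                     # gradient of the loss w.r.t. x (label 1)
x_adv = x + 0.9 * np.sign(grad_x)            # step along the sign of the gradient

for name, point in [("original ", x), ("perturbed", x_adv)]:
    prob = 1.0 / (1.0 + np.exp(-(point @ w + b)))
    print(f"{name}: P(class 1) = {prob:.3f}")  # confidence collapses past 0.5
```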


In West African and Afro-diasporic traditions, tricksters such as Eshu, Legba, and Anansi mediate not only between gods and humans but between domination and resistance: they smuggle critique through jokes, reversals, and coded stories (Pelton, 1980; Hoermann & Mackenthun, 2010). Reading AI through these figures suggests that trickster effects are not merely technical glitches but potential sites of political leverage: places where dominant scripts can be bent, misread, or rerouted. A misclassification, a refusal, or an unexpected association may be treated less as pure error and more as a moment where other histories, other uses, or other publics could be inserted.


Japanese folklore adds a different tempo through tsukumogami: household objects that acquire spirit after a hundred years (Foster, 2015). Their animation is gradual, cumulative, and everyday. No lightning strike, no singular creation event—just time, attachment, and use. Smart speakers, thermostats, phones, and doorbells become alive in a similar slow way: not at the moment of purchase, but through ongoing updates, data accretion, and integration into routines. At some point, the device knows the rhythms of the household better than any single human does. Tsukumogami foreground the domestic, the subtle, and the temporal—dimensions often ignored by spectacular Frankenstein narratives with their dramatic escapes from the lab.



Tsukumogami (付喪神, artifact spirits) from the Hyakki Yagyō Emaki (百鬼夜行絵巻, the picture scroll of the demons’ night parade); detail of the scroll held at Shinju-an, attributed to Tosa Mitsunobu (Muromachi period). [SOURCE]



Many tsukumogami tales also hinge on neglect or discard: objects become vengeful spirits when they are mistreated or thrown away, returning as a kind of infrastructural haunting (Foster, 2015; Schwab, 2019). Transposed into contemporary infrastructures, this resonates with debates on e-waste, right-to-repair, and ghost infrastructures: devices and systems that continue to collect data, draw power, or leak vulnerabilities long after users think they are done with them. A tsukumogami-inflected mythotechnics would treat decommissioning, recycling, and repair not as purely economic or technical questions but as ethical rituals for laying infrastructures to rest.


This is also where cultural diversity becomes methodologically non-optional. Research in the Global AI Narratives project and related work has shown that imaginaries of AI are already plural: from African stories of trickster technologies and spirit-machines to East and South Asian tales of animated artefacts and service robots, many traditions do not draw the same sharp line between inert matter and animated intelligence that Euro-American discourse assumes (Cave, Dihal, & Dillon, 2020; Dihal, 2021; Bode, 2024). Scholars such as Damian Eke argue that African AI narratives remain largely invisible in global debates despite rich local figures of enchanted media and communal technics (Eke, 2022). Mythotechnics takes this absence not only as a matter of representation but as a design problem: if only a narrow set of myths is allowed to structure AI, then only a narrow set of futures will be built. Non-Western figures here are not decorative local color glued onto a fundamentally Western canon of Frankenstein and Skynet; they are equally valid grammars for thinking agency, relation, and risk. A tsukumogami-inflected AI imaginary, for example, might emphasize obligations to ageing infrastructures and long-term maintenance rather than sudden revolt; an Eshu-inflected imaginary might frame algorithmic systems as inherently tricksy mediators rather than neutral channels.


The goal is not to replace Frankenstein with Pygmalion or Narcissus as a new master myth. The goal is mythological pluralism: multiple figures that coexist, interfere, and prevent any single narrative from monopolizing imagination. As Haraway reminds us, it matters what stories we tell to tell other stories with; the problem is not that we are in myth, but that we are in too few (Haraway, 2016). Mythotechnics proposes to widen that narrative field.


Put schematically, these figures sketch a preliminary grammar of AI-related myth. In a Lévi-Straussian sense, they are less stories about particular heroes than structural operators—relations that can be recombined to make sense of new situations (Lévi-Strauss, 1978). Mythotechnics treats them as a conceptual repertoire, a set of transformations for articulating what different AI systems do to social and symbolic order. Frankenstein organizes fears of rebellion and abandonment; Pygmalion and The Sandman articulate projection, idealization, and ventriloquism; Narcissus stages recursive self-collapse through feedback; the Golem centers thresholds of activation and deactivation; tricksters dramatize boundary-testing and systemic failure modes; tsukumogami foreground slow domestication, obligation, and the afterlives of infrastructure. A second, overlapping schema tracks dominant regimes of anxiety: the monster (Frankenstein), the maze (Kafka), and the mirror (Narcissus and LLMs). Part of the work, then, is simply to be able to say: which story are we in now, and which others could we choose instead?


:: [IV.] Myth as Method → Mythotechnics in Three Movements ::



“One can conceive of very ancient myths, but there are no eternal ones; human history converts reality into speech.”

— Roland Barthes, after Mythologies


If myths are already structuring our encounters with AI, the question becomes: how do we work on them? How can we move from unconscious dependence on inherited figures to a deliberate, critical practice of myth composition?


The term mythotechnics has appeared in speculative design, artistic research, and esoteric invocation projects to name hybrid practices of narrative, symbol, and technology (Invocation Science Collective, 2023; Diamond, 2025). I use it more specifically to describe a method of critical mythopoetics in AI: a way of diagnosing, expanding, and re-authoring the myths that scaffold AI systems and discourses.


At this point, the essay intersects with work on AI narratives and imaginaries (Cave, Dihal, & Dillon, 2020; Bareis & Katzenbach, 2021; Natale & Ballatore, 2017). Sociotechnical imaginaries scholarship has shown how collectively held, institutionally stabilized, and publicly performed visions of desirable futures bind ideas of social order to trajectories of technological development (Jasanoff & Kim, 2015). Work on AI imaginaries and AI narratives extends this lens to intelligent machines, tracing how expectations about autonomy, risk, and benefit are encoded in stories about AI for good, human-centric AI, or existential threat (Cave, Dihal, & Dillon, 2020; Bareis & Katzenbach, 2021; Natale & Ballatore, 2017). Mythotechnics builds on this literature but shifts both the unit of analysis and the mode of engagement. Rather than starting from broad, often national visions of the future, it zooms in on recurring figures and plots as the micro-structures that make those imaginaries emotionally and ethically legible. And where imaginaries research has tended to be diagnostic, mapping how visions travel across institutions, mythotechnics insists on intervention: treating figures and plots themselves as design levers that can be swapped, recomposed, or retired in the course of building and governing AI systems.


Methodologically, mythotechnics proceeds in three intertwined movements:


:: Diagnostic → Identifying Dominant Myths ::


The first move is to notice which myths are already doing the work. Research on AI narratives has shown how policy, media, and corporate branding repeatedly invoke a small repertoire of figures: the autonomous agent, the killer robot, the digital servant, the impartial oracle (Cave, Dihal, & Dillon, 2020; Natale & Ballatore, 2017). National AI strategies, for example, often frame AI as a race, a revolution, or a wave, with corresponding expectations of inevitability and urgency (Bareis & Katzenbach, 2021). These documents do not merely reflect technological reality; they participate in constituting what AI is, what it is for, and who is authorized to build it.


Chubb (2022), interviewing AI experts, identifies missing narratives and emphasizes that what is absent <care, maintenance, non-progressive temporalities> is as important as what is present.


Earlier accounts of political and media myth already mapped similar dynamics. Ernst Cassirer’s The Myth of the State argued that modern politics weaponizes the power of mythical thought through symbols and mass communication, while Roland Barthes’s Mythologies dissected how everyday images and commodities are wrapped in bourgeois myths. In both cases, myth is not a residue of premodern religion but an ongoing semiotic technology.


More recent work refines this picture by distinguishing between strong and weak AI narratives, showing how imaginaries of omnipotent general intelligence coexist with more mundane depictions of narrow tools, and how these narrative regimes shape expectations and policy (Bory, Natale, & Katzenbach, 2024).


A mythotechnic diagnosis therefore asks: which myths are structuring roles and expectations (Edwards, 2024; Arjayasadi & Thylstrup, 2024)? Who benefits from these narratives, and whose experiences do they erase (Benjamin, 2019; Crawford, 2021)? The diagnostic movement thus treats myths as infrastructure: recurring narrative patterns that organize perception and decision-making, not decorative stories layered on top of real technical debate.



:: Expansion → Broadening the Archive ::


The second move is to expand the mythic repertoire. This involves bringing in figures like Pygmalion, Narcissus, Babel, Ariadne, Kafka’s courts, Haraway’s cyborg, Sun Wukong, or tsukumogami, and asking what they change (Ovid, 2004; Kafka, 1925/2009; Haraway, 1991/2016; Foster, 2009; Mayor, 2018).


This is also a matter of power. Vincent Mosco, writing on the digital sublime, describes myths as stories that animate societies by promising transcendence beyond the banality of everyday life (Mosco, 2004). Which paths to transcendence are offered, and by whom, is deeply political. Ruha Benjamin’s critique shows how racialized myths of neutrality, objectivity, and inevitability cover over patterned harms in data systems (Benjamin, 2019), while Stephen Cave and Kanta Dihal’s work on the whiteness of AI demonstrates how robots, androids, and virtual assistants are overwhelmingly coded as white, reinforcing implicit hierarchies about whose intelligence is imagined as normative (Cave & Dihal, 2020).


From this angle, expanding the archive becomes a decolonial and feminist task. The aim is to foreground figures of entanglement, opacity, care, and trickery, not only creation and control. That means turning to traditions in which nonhuman agency is neither demonic nor purely instrumental, but part of a living world: trickster technologies in African diasporic stories, Sun Wukong–like beings in Chinese myth, or tsukumogami in Japanese folklore (Hyde, 1998; Dihal, 2021; Bode, 2024; Foster, 2009). Empirical projects like Global AI Narratives, which map how AI is imagined across cultures, already gesture toward such plural mythic worlds (Dihal, 2021). Mythotechnics pushes this further by taking mythic pluralism as a normative requirement: no single story should monopolize what AI is allowed to be.


Philosophers of technology have long hinted at this kind of plurality. Yuk Hui’s notion of cosmotechnics, the unification of the cosmic order and moral order through technical activities, insists that there is no single, universal Technology, but multiple, culturally embedded ways of binding cosmology to technics. Vilém Flusser treats technical images as a new mythic code that precedes experience; Marshall McLuhan’s media as extensions of man turn electronic environments into tribal, mythic spaces; Bruno Latour and Peter Sloterdijk, in different registers, describe modernity as a project of building artificial spheres and networks in which humans and nonhumans cohabit. Paul Virilio’s writing on speed and the vision machine adds a dromological edge: myths of immediacy and real-time perception that accompany high-velocity technologies. Parallel trajectories, such as Gilbert Simondon’s account of technics and individuation or David Nye and Leo Marx on the technological sublime and the machine in the garden, underscore how deeply modern imaginaries of technology are already saturated with symbolic and quasi-mythic content.



:: Intervention → Designing with Myth ::


The third move is interventionist: using these plural myths to inform the design of systems, interfaces, and institutional narratives. Here myth functions along three axes.



:: Ontological lens ::


Myth helps articulate what AI is taken to be. A system designed as an oracle (authoritative answer-giver) will look, feel, and govern differently from one framed as Ariadne’s thread (a guide through complexity) or as trickster (a provocateur that surfaces contradictions). Borges’ Library of Babel offers one ontological lens, an infinite combinatorial archive, indifferent and overwhelming, that resonates uncannily with contemporary LLMs and their hallucinated citations (Borges, 1941/1999). Kafka’s Trial offers another: an endless, opaque bureaucracy of decisions without face or recourse, a figure for algorithmic scoring and automated administration (Kafka, 1925/2009; Edwards, 2023).


:: Ethical frame ::


Myth distributes responsibility. Genesis-style narratives emphasize transgression and punishment; Golem stories emphasize the ethics of activation/deactivation and control over thresholds of life (Scholem, 1965); Fisher King stories foreground harm through silence and failure to ask the right question. Each narrative scaffolds a different ethics: of obedience, of prudence, of care. Haraway’s cyborg, meanwhile, insists on responsibility in partial connection, rejecting purity narratives and highlighting situated accountability in hybrid assemblages (Haraway, 1991/2016).


:: Speculative design ::


Myths are engines for prototyping worlds. Borges, Kafka, and a long line of science-fiction writers provide what Isabella Hermann calls laboratories of responsibility, where imagined AIs and robots act out conflicts that later guide real expectations and standards of responsibility (Hermann, 2021; Viidalepp, 2020). AI narratives do not merely anticipate technologies; they pre-configure which forms of agency and harm will be recognized when those technologies arrive (Cave et al., 2020; Natale, 2022).


:: Mythotechnic vignette → Reading a national AI strategy ::


Consider, in schematic form, a national AI strategy that opens by declaring AI a once-in-a-generation technological revolution, frames global development as an AI race, and describes domestic policy as ensuring that the nation does not fall behind. On the surface, this is a familiar language of competitiveness and innovation. Read mythotechnically, it crystallizes a very specific cluster of figures: revolution, race, and destiny. AI appears as an unstoppable historical force, nations as runners on a fixed track, policymakers as coaches whose only rational response is acceleration.


A diagnostic pass names these myths explicitly and traces how they structure the document. The revolution figure legitimates emergency rhetoric and exceptional measures; the race figure narrows the field of relevant actors to competing nation-states and a handful of corporate champions; the destiny trope (inevitable wave, unstoppable transformation) erodes the sense that alternative arrangements of AI infrastructures are possible. Together, these myths render slow, reparative, or redistributive approaches irrational in advance.


An expansion move then asks which myths are missing and what their introduction might do. What would it mean to reframe parts of the same strategy around Ariadne’s thread rather than the oracle, AI not as an infallible answer-machine but as a guide through complexity whose reliability must be checked and whose threads can be dropped? How would policy change if data infrastructures were approached as tsukumogami, slowly animated, long-lived domestic spirits that accumulate obligations and histories, rather than as frictionless smart services? Such figures foreground maintenance, reversibility, and long-term responsibility in place of disruption and speed.


Finally, an interventionist step tries to encode these alternative grammars back into the document: for instance, by replacing race language with commitments to mutual limitation and interoperability; by mandating institutional forms for unwinding or decommissioning harmful systems (a Golem-style ethics of deactivation); or by specifying obligations of care toward the labor and environments that sustain AI infrastructures. The point is not that a single mythotechnic reading would magically fix a strategy, but that making its mythic scaffolding explicit opens room for political contestation and redesign that is otherwise foreclosed by the rhetoric of necessity.


The aim is not to turn AI labs into mythology salons, though that might be salutary. It is to recognize that labs, policy offices, and corporate design teams are already myth-producing institutions, and to move that production from default scripts to conscious choice. Mythotechnics names that shift.




:: [V.] Beyond Control → Kafka, Cyborgs, and the Politics of Myth ::


“Always historicize!”

— Fredric Jameson, The Political Unconscious


“The problem with computers is that there is not enough Africa in them.”

— Brian Eno, A Year with Swollen Appendices


So far, I have treated myth primarily as a grammar for making sense of technological Angst. Under contemporary conditions, it is also a vector of power.


:: From Monsters to Mazes ::


The Frankenstein complex imagines loss of control as rebellion: the creation turns against us. Yet our actual predicament is often more Kafkaesque than Frankensteinian. In The Trial, Josef K. is arrested, processed, and condemned by a system whose rules he cannot grasp, administered by minor officials who do not fully understand it either (Kafka, 2009). There is no single monster to confront, only an endless corridor of doors. Borges once remarked that there’s no need to build a labyrinth when the entire universe is one. Large-scale AI infrastructures <recommendation engines, scoring systems, model farms> fit eerily well into that vision: not a single beast, but an all-encompassing maze. Recent work on AI imaginaries describes a similar shift: from singular, personified threats to distributed infrastructures in which authorship is diffuse and accountability is nowhere in particular (Dishon, 2024). Anxiety stems less from the possibility that something will wake up than from the difficulty of finding anyone who can answer for what has been done. National AI strategies contribute to this framing by presenting AI as an inevitable revolution, race, or wave, making particular investments and regulatory choices appear as the only rational response (Bareis & Katzenbach, 2021). The controlling metaphor is less the creature than the maze. Mythotechnics responds by taking Kafka’s world seriously as a template: AI as bureaucracy rather than beast.


:: Cyborgs and Hybrid Responsibility ::


Donna Haraway offers a counter-myth that cuts across both monsters and mazes. We are all chimeras, she writes, theorized and fabricated hybrids of machine and organism (Haraway, 1991). In this cyborg myth, AI is not a separate machine-other waiting outside the city walls; it is part of our extended embodiment: implants, apps, wearables, platforms.


Once we accept this, responsibility shifts. The ethical question is no longer how to keep them under control, but how to reorganize us: our infrastructures, institutions, and practices of care. N. Katherine Hayles (1999) similarly argues that information, bodies, and machines have become entangled in ways that unsettle older humanist boundaries. Mythotechnics aligns with this posthuman critique by refusing fantasies of a pure, pre-technical human subject standing outside the system, observing AI from a safe distance.



:: Myth, Whiteness, and the Digital Sublime ::



Myths do not float above power; they are among its instruments. Mosco (2004) calls stories of the global village, the electronic frontier, and frictionless cyberspace the digital sublime: narratives that promise transcendence while legitimating specific political economies of the internet. Thomas Streeter (2011) traces how romantic tales of freedom, community, and disembodied communication helped entrench corporate control over network infrastructures.


J. G. Ballard once remarked that “science and technology multiply around us. To an increasing extent they dictate the languages in which we speak and think.” AI governance documents, corporate white papers, and strategy reports inhabit exactly this dictated language, smuggling myths in as technical inevitabilities. Ernst Cassirer already warned that the most alarming feature of modernity was the emergence of a new power of mythical thought in political life; AI governance extends that power into digital infrastructures.


From Mumford’s myth of the machine and Jacques Ellul’s technological paradise myth to David Nye and Leo Marx on the technological sublime and Günther Anders’ apocalyptic vision of an automated world, critics have repeatedly shown how technological systems sustain themselves through grand narratives of transcendence and necessity. AI myths participate in similar dynamics. Bareis and Katzenbach (2021) show how national AI strategies repeatedly cast AI as an inevitable revolution, making particular investments and regulatory choices appear as the only rational response. Benjamin’s (2019) analysis and Crawford’s (2021) work both demonstrate how myths of neutrality, efficiency, and inevitability obscure the racialized and planetary costs of AI infrastructures: data extraction, labor exploitation, and environmental damage. Larson’s critique of the myth of artificial intelligence adds a cognitive and epistemic edge to this picture: the belief that human-level AI is historically guaranteed functions as a myth of inevitability that closes off other trajectories for research and governance (Larson, 2021).


Visual cultures of AI reinforce these dynamics. Domen Dežman’s work on the deep blue sublime traces how stock images of glowing brains, blue data streams, and faceless humanoid heads aestheticize AI as both transcendent and opaque, while studies of media discourses around toys like Furby show how earlier anxieties about smart artefacts are recycled to normalize current AI hype (Dežman, 2024; Scott, 2023). Commentators have also noted the persistence of magical and religious metaphors <spells, oracles, black boxes> in popular writing on generative AI, suggesting that enchantment is not an accidental surplus but part of how these systems are sold and understood (Furze, 2023). Cross-cultural studies of weaponised AI narratives further indicate that fears and hopes around AI differ across geopolitical contexts, challenging any singular myth of the AI future (Bode, 2024).


On this view, mythotechnics cannot be politically neutral. Choosing to privilege Frankenstein over tsukumogami, or Skynet over Sun Wukong, is not just an aesthetic decision; it is a decision about which histories, cosmologies, and power relations will structure the field. A plural, reflexive mythotechnics must therefore:

– foreground marginalized narratives and non-Western figures as serious resources, not folklore side-notes (Dihal, 2021);

– interrogate corporate and state uses of myth (e.g., AI as national destiny or inevitable wave) (Bareis & Katzenbach, 2021; Mosco, 2004);

– treat mythic design as part of governance, not just communication (Benjamin, 2019; Crawford, 2021).


Radical accelerationist theory (notably Nick Land) pushes this argument further: capitalism is not only enabled by artificial intelligence but can itself be framed as a kind of artificial intelligence, a self-optimizing informational system whose agents are markets, networks, and algorithms rather than simply human actors (Land, 2015; Carissimo & Korecki, 2024). On this reading, myths of AI are always also myths of capital: narratives of escape, autonomy, control, and domination are displaced into financial-technological regimes and then returned to us as stories about intelligent machines.


In this sense, mythotechnics is both poetic and political. It aims, in Haraway’s words, to stay with the trouble (Haraway, 2016), resisting both apocalyptic and utopian closures and inhabiting the messy middle where responsibility is distributed and futures remain open.




:: [VI.] Conclusion → The Work of Mythotechnics ::


“Stories are equipment for living.”

— Kenneth Burke, after “Literature as Equipment for Living”


Artificial intelligence arrives as pure math, but also as story: monster, servant, maze, lover, mirror. Under conditions of technological excess (systems too fast, too vast, and too entangled to fully grasp), myth is not an optional embellishment but a basic orientation device. We do not first have neutral AI and only later add fables; the fables are part of the firmware.


I have treated this not as a problem to be solved away, but as a condition to be worked with. Mythotechnics is one name for that work: a practice of reading the myths already structuring AI, opening the archive to other figures, and folding those figures back into design, policy, and public imagination. Diagnosis: which stories are silently running the show? Expansion: which ones are missing, especially from non-Western, feminist, and decolonial traditions? Intervention: how might systems look different if they were built under the sign of Ariadne’s thread rather than the infallible oracle, or tsukumogami rather than frictionless “smart” objects?


Once we see myths as part of AI’s operating system, it becomes obvious that they are also instruments of power. National strategies that frame AI as a race, platforms that sell enchanted black boxes, imaginaries that default to white androids and disembodied brains—these are not benign metaphors but ways of arranging responsibility, risk, and value. Mythotechnics, in this sense, is not only a poetics but a politics: it asks who gets to name the creatures, draw the maps of the maze, and decide which futures feel inevitable.


Seen through the triad that has run through this essay, we live among monsters, mazes, and mirrors: spectacular fantasies of rebellion, sprawling bureaucratic infrastructures, and feedback loops that replay our own patterns until they become uncanny. Mythotechnics does not promise to banish any of these, but to give us enough conceptual granularity to know which one we are in—and to sketch exits and alternatives from within the story rather than from an imaginary outside.


At the same time, the most unsettling figure in contemporary AI is not the alien mind but the recursive mirror: systems that replay and remix our own words, images, and biases until they become strange to us. Myth cannot shatter that mirror. What it can do is give us a vocabulary for what steps out of it: the jealous statue, the entranced youth, the bureaucratic labyrinth, the half-organic cyborg, the stubborn household spirit that refuses to stay “just a device.”


A mythotechnic approach to AI does not promise comfort. It promises something more demanding: to live with intelligent machines neither as naive believers nor as disenchanted engineers, but as storytellers who know that the stories bite back. To design and govern AI under that awareness is to treat narrative not as decoration but as infrastructure—to accept that in the age of AI, world-building and system-building have quietly become the same task.







REFERENCES


Adorno, T. W., & Horkheimer, M. (2002). Dialectic of enlightenment: Philosophical fragments (E. Jephcott, Trans.). Stanford University Press. (Original work published 1944)


Ballatore, A. (2018). Imagining intelligent artefacts: Myths and digital sublime regarding artificial intelligence in Swedish newspaper Svenska Dagbladet. Paper presented at NordMedia 2017.


Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.


Bareis, J., & Katzenbach, C. (2021). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 46(3), 594–619.


Baudrillard, J. (1994). Simulacra and simulation (S. F. Glaser, Trans.). University of Michigan Press. (Original work published 1981)


Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity Press.


Bewersdorff, A. (2023). Myths, mis- and preconceptions of artificial intelligence. Patterns, 4(7), 100769.


Blumenberg, H. (1985). Work on myth (R. M. Wallace, Trans.). MIT Press. (Original work published 1979)


Bogost, I. (2012). Alien phenomenology, or what it’s like to be a thing. University of Minnesota Press.


Bode, I. (2024). Cross-cultural narratives of weaponised artificial intelligence. Big Data & Society, 11(1).


Borges, J. L. (1999). The library of Babel. In Collected fictions (A. Hurley, Trans.). Penguin. (Original work published 1941)


Borowski, A. (2025). The absolutism of data: Thinking AI with Hans Blumenberg. In H. Bajohr (Ed.), Thinking with AI: Machine learning the humanities. Open Humanities Press.


Bory, P., Natale, S., & Katzenbach, C. (2024). Strong and weak AI narratives: An analytical framework. AI & Society. Advance online publication.


Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.


Carissimo, C., & Korecki, M. (2024). Capital as artificial intelligence. In ALIFE 2024: Proceedings of the 2024 Artificial Life Conference (pp. 13–24). MIT Press / ISAL.


Cave, S., & Dihal, K. (2020). The whiteness of AI. Philosophy & Technology, 33, 685–703.


Cave, S., Dihal, K., & Dillon, S. (Eds.). (2020). AI narratives: A history of imaginative thinking about intelligent machines. Oxford University Press.


Chapuis, A., & Droz, E. (1958). Les automates: Figures artificielles d’hommes et d’animaux. Éditions du Griffon.


Chubb, J. A. (2022). Expert views about missing AI narratives: Is there an “AI story crisis”? AI & Society. Advance online publication.


Clarke, C. R. (2016). The mad scientist’s daughter. Simon & Schuster.


Coeckelbergh, M. (2025). Myth, angst, and AI: Towards a neo-Blumenbergian framework for understanding how we think about technology. Postdigital Science and Education. Advance online publication.


Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.


Denia, E. (2025). AI narratives model: Social perception of artificial intelligence. Technology in Society, 75.


Dery, M. (1996). Escape velocity: Cyberculture at the end of the century. Grove Press.


Dežman, D. V. (2024). Interrogating the deep blue sublime: Images of artificial intelligence in digital media.


Diamond, J. (2025). Resonant symbolic cognition: Toward a mythotechnical architecture for artificial general intelligence. Unpublished manuscript.


Dihal, K. (2021). Imagining a future with intelligent machines. Global AI Narratives Project, University of Cambridge.


Dishon, G. (2024). From monsters to mazes: Sociotechnical imaginaries of AI between Frankenstein and Kafka. Postdigital Science and Education. Advance online publication.


Edwards, A. (2024). Digital monsters: Reconciling AI narratives as investigations of human personhood. Law, Technology and Humans, 6(1), 1–18.


Eke, D. O. (2022). Forgotten African AI narratives and the future of AI in Africa. International Review of Information Ethics, 30(1), 1–14.


Foster, M. D. (2015). The book of yōkai: Mysterious creatures of Japanese folklore. University of California Press.


Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25(3), 305–323. https://doi.org/10.1007/s10506-017-9212-y


Freud, S. (2003). The uncanny (D. McLintock, Trans.). Penguin Classics. (Original work published 1919)


Fromm, E. (2000). The art of loving: The centennial edition. Continuum.


Furze, L. (2023). Myths, magic, and metaphors: The language of generative AI.


Han, B.-C. (2017). The expulsion of the other: Society, perception and communication today (D. Steuer, Trans.). Polity Press.


Haraway, D. J. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs, and women: The reinvention of nature (pp. 149–181). Routledge.


Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press.


Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.


Hermann, I. (2021). Artificial intelligence in fiction: Between narratives and responsibilities. In S. Natale & K. Dihal (Eds.), Imagining AI (pp. 205–224). Palgrave Macmillan.


Hoermann, R., & Mackenthun, G. (Eds.). (2010). Bonded labour in the cultural contact zone: Transdisciplinary perspectives on slavery and its discourses. Waxmann.


Hoffmann, E. T. A. (2004). The Sandman (J. M. McGlashan, Trans.). In D. Sandner (Ed.), Fantastic literature: A critical reader (pp. xx–xx). Praeger. (Original work published 1817)


Hui, Y. (2016). The question concerning technology in China: An essay in cosmotechnics. Urbanomic.


Hunter, R. L. (2004). Plato’s Symposium. Oxford University Press.


Hyde, L. (1998). Trickster makes this world: Mischief, myth, and art. Farrar, Straus and Giroux.


Idel, M. (1990). Golem: Jewish magical and mystical traditions on the artificial anthropoid. State University of New York Press.


Invocation Science Collective. (2023). Mythotechnicx Protocols Vol. II: IPEM—Inference phase emergent memory. Academia.edu.


Ireland, A. (2025). Moé is incomplete burning: The pasts and futures of nonhuman love. In Nonhuman Lovers.


Jasanoff, S., & Kim, S.-H. (Eds.). (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press.


Jollimore, T. (2015). “This endless space between the words”: The limits of love in Spike Jonze’s Her. Midwest Studies in Philosophy, 39(1), 120–143. https://doi.org/10.1111/misp.12039


Jonas, H. (1984). The imperative of responsibility: In search of an ethics for the technological age. University of Chicago Press.


Kafka, F. (2009). The trial (M. Mitchell, Trans.). Oxford University Press. (Original work published 1925)


Kakoudaki, D. (2014). Anatomy of a robot: Literature, cinema, and the cultural work of artificial people. Rutgers University Press.


Khaled Nabil, M. (2025). Loving the cyborg: A posthuman approach to the love-relationships between humans and A.I. in The Mad Scientist’s Daughter by Cassandra Rose Clarke and the movie Her by Spike Jonze. TANWĪR: A Journal of Arts & Humanities, 2(2), 89–106. https://doi.org/10.21608/tanwir.2025.430153


Konior, B. (2023). The gnostic machine: Artificial intelligence in Stanisław Lem’s Summa Technologiae.


Konior, B. (2025). Angelsexual: Chatbot celibacy and other erotic suspensions. ŠUM, 22, 2699–2716.


Land, N. (2015). The teleological identity of capitalism and artificial intelligence [Talk transcript]. In S. C. Hickman (Ed.), Nick Land: Teleology, capitalism, and artificial intelligence. Social Ecologies. https://socialecologies.wordpress.com/2015/08/28/nick-land-teleology-capitalism-and-artificial-intelligence


Larson, E. J. (2022). The myth of artificial intelligence: Why computers can’t think the way we do. Harvard University Press.


Lévi-Strauss, C. (1978). Myth and meaning. University of Toronto Press.


Levy, D. (1979). The definition of love in Plato’s Symposium. Journal of the History of Ideas, 40(2), 285–291. https://doi.org/10.2307/2709153


Lewis, C. S. (1998). From: The four loves. Marriage, Families & Spirituality, 4(2), 204–206.


Lovotics Collective. (2025). Lovotics: A brief history of AI companionship, synthetic love, and intimate machines. Institute of Network Cultures.


Maurone, J. (2002). The trickster icon and objectivism. The Journal of Ayn Rand Studies, 4(1), 91–112.


Mayor, A. (2018). Gods and robots: Myths, machines, and ancient dreams of technology. Princeton University Press.


Milligan, T. (2020). Abandonment and the egalitarianism of love. In R. Fedock, C. Kühler, & N. Rosenhagen (Eds.), Love, justice, and autonomy: Philosophical perspectives (pp. 167–182). Routledge.


Mosco, V. (2004). The digital sublime: Myth, power, and cyberspace. MIT Press.


Murphy, P. (2018). “You feel real to me, Samantha”: The matter of technology in Spike Jonze’s Her. Technoculture: An Online Journal of Technology in Society, 7.


Natale, S. (2022). Reclaiming the human in machine cultures: Introduction. In S. Natale (Ed.), Reclaiming the human in machine cultures (pp. 1–18). Springer.


Natale, S., & Ballatore, A. (2017). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence, 23(4), 436–453.


O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.


Ovid. (2004). Metamorphoses (D. Raeburn, Trans.). Penguin.


Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.


Pelton, R. D. (1980). The trickster in West Africa: A study of mythic irony and sacred delight. University of California Press.


Regan, P. C. (2008). General theories of love. In The mating game: A primer on love, sex, and marriage (2nd ed., pp. 170–194). SAGE.


Retrochronic. (n.d.). Nick Land: Capitalism is AI – Accelerationism’s arrival. Retrochronic. Retrieved November 24, 2025, from https://retrochronic.com/


Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.


Samani, H. A. (2011). Lovotics: Love+ robotics, sentimental robot with affective artificial intelligence (Doctoral dissertation). National University of Singapore.


Scholem, G. (1965). The idea of the Golem. In On the Kabbalah and its symbolism (pp. xx–xx). Schocken.


Scott, D. T. (2023). Media discourses of Furby and artificial intelligence. TMG Journal for Media History.


Sharma, N. (2019). Representation of artificial intelligence in cinema: Deconstructing the love between A.I. and humans (Master’s thesis). Ambedkar University Delhi.


Sheng, A., & Wang, F. (2022). Falling in love with machine: Emotive potentials between human and robots in science fiction and reality. Neohelicon, 49(2), 445–465. https://doi.org/10.1007/s11059-022-00659-w


Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The curse of recursion: Training on generated data makes models forget. arXiv preprint arXiv:2305.17493.


Sinaga, N. R. (2015). Human–technology relationship in Spike Jonze’s Her. Lantern, 4(3), 35–46.


Sousa, M. T. (2022). Autonomy, posthuman care, and romantic human-android relationships in Cassandra Rose Clarke’s The Mad Scientist’s Daughter. Journal of Posthumanism, 2(3), 205–214.


Sternberg, R. J. (1998). Cupid’s arrow: The course of love through time. Cambridge University Press.


Stiegler, B. (1998). Technics and time, 1: The fault of Epimetheus (R. Beardsworth & G. Collins, Trans.). Stanford University Press.


Streeter, T. (2011). The net effect: Romanticism, capitalism, and the internet. New York University Press.


Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.


Thomas, B. (2017). All talk: Dialogue and intimacy in Spike Jonze’s Her. In J. Mildorf & B. Thomas (Eds.), Dialogue across media (pp. 77–92). John Benjamins.


Viidalepp, T. (2020). Representations of robots in science fiction film narratives as signifiers of human identity. Információs Társadalom, 20(4), 2–22.


Wang, T. (2021). Identity crisis in postmodern society: On romance between human and artificial intelligence in Her. Arts Studies and Criticism, 2(1), 122–128. https://doi.org/10.32629/asc.v2i1.342


Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.





Sasha Shilina, PhD, is a researcher working on AI narratives, cryptoeconomies, and philosophies of technology. They co-founded Episteme, lead research at Paradigm Research Institute, and contribute to the Humanode crypto-biometric network.