One of them (9.3.) had to do with dreams: “Did the war follow you in your dreams? What kind of war dreams did you see and for how long?”
[Jung] dreamed that he was ‘driving back from the front line with a little man, a peasant, in his horse-drawn wagon. All around us were shells exploding, and I knew that we had to push on as quickly as possible, for it was very dangerous [. . .] The shells falling from the sky were, interpreted psychologically, missiles coming from the “other side.” They were, therefore, effects emanating from the unconscious, from the shadow side of the mind. . . The happenings in the dream suggested that the war, which in the outer world had taken place some years before, was not yet over, but was continuing to be fought within the psyche.’
– Paul Fussell, The Great War and Modern Memory
War no longer waits to be declared; it dreams.
Not in metaphors or speeches, but through recursive algorithmic processes, sensor-driven battlefields, and proliferating alien cognition(s). Tom Sear’s article Xenowar dreams of itself sketches a conflict topology beyond the human, where AI, robotic swarms, and the planet’s turbulent climates become agents of war themselves. In other words, war is going to get weird – decaying from command to contagion, from doctrine to terrestrial feedback systems operating on timescales, dimensions, and logics imperceptible to us, where consciousness itself becomes a frontier for unfolding hostilities and cognitive offenses.
It is this vision of war as a self-dreaming entity that finds an uncanny form in an apocryphal Carl von Clausewitz quote: the line that “sometimes war dreams of itself,” mistakenly attributed to the prominent Prussian theorist of Vom Kriege (On War), but in fact conjured by film director Werner Herzog in his 2016 documentary Lo and Behold: Reveries of the Connected World. As Sear notes, the quote does not appear in Clausewitz’s original text. Was it merely misremembered or deliberately imagined? Perhaps a quote that functions as a kind of cryptomnesia? A forgotten memory returned as original insight – or even an operational myth? Even though Clausewitz never explicitly formulated war as “dreaming itself,” we might nonetheless also invoke a well-known maxim attributed to him: “The enemy of a good plan is the dream of a perfect plan.” Though this too is difficult to locate in his actual writings, it has nonetheless circulated as a strategic truism.
In a delirious epoch shaped by machine learning and sprawling virtual battle spaces, the ‘dream of the perfect plan’ has shifted from metaphor to an infinitely varying and tunable staging ground. Game engines, wargames, and simulation platforms serve as laboratories where artificial agents take form – whether incarnated in drones and robots navigating real-world terrain or maneuvering across gameboards and virtual maps. Within these synthetic worlds, nonhuman intelligences awaken, their interactions endlessly rehearsing and dreaming the unfolding of conflict, bridging the material and the computational in a restless choreography of strategy.
In this way, the incubation of artificial agents is not confined to what might appear at first as isolated virtual arenas, gameboards, or sandboxed worlds, neatly sealed off from the world outside. Rather, these arenas are nodes within an “infrastructure of intelligence” (Mitrokhov, 2024) – an infrastructure made tangible through a planetary web of sensors, simulations, and autonomous systems, where the boundaries of the battlefield blur into the circuitry and pulse of global networks. To trace how war dreams is to map the ceaseless transformation of this meshwork in motion.
As Tom Sear writes, we are witnessing the First Xenowar, in which “civilian data becomes a sovereign substrate—at once military object, objective, and collateral” (Sear, 2024). Louise Amoore also observes that the battlespace has expanded into a “feature space,” in which “the parameters pertaining to war are vastly broadened—anything can serve as an example that yields an attribute” (Amoore, 2024). Within this expanded feature space, even subtle indicators – such as a gait, a facial vector, or the arc of a voice – can be rendered computationally legible and potentially lethal.
Perhaps more insidiously, war no longer marks its threshold – it seeps. It becomes an ambient condition, a low-level hum under which tactical effects unfold across media, infrastructure, and social systems. Information flows mutate and propagate, strategies ripple through both digital and physical environments, and influence diffuses into everyday practices. The line between civilian and combatant, between strategist and dreaming bystander, is irreversibly liquidated. Nandita Biswas Mellamphy names this larval warfare: a spectral, incognito form of conflict that insinuates itself into the rhythms of the everyday (Biswas Mellamphy, 2025).
What incubates in these larval ecologies goes beyond the mere weaponization of data or users; it is a deeper entanglement: a recursive interplay between learning algorithms and their environments, in which data exceeds an instrumentalized role of interpretation or pattern-matching to become the very terrain on which new algorithmic procedures are activated autonomously. This corresponds with David Berry’s assertion that algorithms “have an additional agenda which is the ability to create new algorithms, […] that is, that they can construct a model of a ‘world’ of data and functions to transform them” (Berry, 2017). It also registers the continuing shift towards a frontier in which algorithms learning how to learn – the automation of automation – drive contemporary deep learning, where, as Luciana Parisi (2015) argues, “infinite, patternless data are rather central to computational processing.”
What this infinite, patternless data also animates is the way machine intelligences develop strategies for operating in uncertain and noisy environments, formulating and testing hypotheses, and producing novel patterns within their complex internal architectures. This further aligns with Parisi’s account of a computational and machine learning landscape that reflects a post-Turing paradigm – what she calls a “new kind of empiricism” – in which data is no longer tied to the givenness of the actual but is “stretched to embrace potentiality, contingency, and indeterminacy” (Majaca and Parisi, 2016).
Taking this as a starting point for understanding one of the many modes or mediums through which “war dreams,” we draw from Parisi’s insights on the speculative nature of data. Her framing of algorithmic architectures as engines of indeterminacy overlaps with the speculative processes of dreaming in machine learning, where models generate future data or hallucinated points beyond direct input or instruction. In this context, dreaming is not a passive reverie or inert storage but an active process in which artificial agents simulate possible futures by predicting and exploring potential outcomes within learned world models or probabilistic models of their environment (Mitrokhov, 2024).

The interest in simulating futures or hypothetical data instances in machine learning has linked up with evolutionary theories of dreaming, particularly in mammals, where dreaming is hypothesized to serve as a cognitive rehearsal space for navigating complex social environments, responding to threats, and modeling conflict, cooperation, and danger. Known as “social or threat simulation theory,” advanced by Antti Revonsuo and others, it suggests that dreams evolved as a means of enhancing survival through the rehearsal of socially or physically threatening scenarios, allowing the dreamer to modulate their behavioral responses in waking life (Revonsuo, Tuominen, & Valli, 2015). Here, as David Windridge et al. (2020) explain, dreaming in machine learning is about “utilizing a learning agent’s own model of the environment to generate additional data points,” enabling the agent to bootstrap its own learning by simulating diverse counterfactual scenarios through its internal representational space. Machine dreaming then acts, as we will argue momentarily, as an epistemic probe into the unknown, generating emergent behaviors that surpass the constraints of pre-defined objectives.
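To make this mechanism concrete, the following is a minimal sketch in the spirit of classic Dyna-style reinforcement learning – not Windridge et al.'s own implementation – of an agent that "dreams" by replaying transitions sampled from its own learned model of the environment alongside real experience. The update rule, the state and action names, and the hyperparameters are all illustrative assumptions.

```python
import random
from collections import defaultdict

# A minimal Dyna-style sketch of machine "dreaming": after each real step the
# agent replays transitions drawn from its own internal model of the
# environment (hallucinated data points), bootstrapping learning beyond what
# it has directly experienced. States, actions, and hyperparameters are
# illustrative placeholders.

alpha, gamma, n_dream_steps = 0.1, 0.95, 20
Q = defaultdict(float)    # Q[(state, action)] -> estimated return
model = {}                # learned world model: (state, action) -> (reward, next_state)

def td_update(state, action, reward, next_state, actions):
    """One temporal-difference update toward the best next action."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def experience(state, action, reward, next_state, actions):
    # 1. Learn from the real transition.
    td_update(state, action, reward, next_state, actions)
    # 2. Memorise it in the internal world model.
    model[(state, action)] = (reward, next_state)
    # 3. "Dream": replay transitions sampled from the model, not the world.
    for _ in range(n_dream_steps):
        (s, a), (r, s_next) = random.choice(list(model.items()))
        td_update(s, a, r, s_next, actions)
```

The point of the sketch lies in the third step: the synthetic replay is where the agent trains on data it has generated for itself.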
A paradigmatic example of this dynamic is, not surprisingly, known as Dreamer, a model-based reinforcement learning (MBRL) architecture developed by DeepMind. Dreamer acquires a latent world model, or more specifically, a compact internal representation that predicts how the environment responds and evolves over time through the agent’s interactions.

It was also Dreamer’s latest iteration, DreamerV3, that signaled a significant leap in this evolutionary drift, not only in technical performance but also in the conceptual scope of what dreaming enables. Unlike earlier iterations limited to narrow benchmarks, DreamerV3 demonstrated generalization across a broad spectrum of tasks, from pixel-based video games to continuous, physics-based control and even tool use in open-ended environments like Minecraft, where it was hailed as the first algorithm to collect diamonds within the game from scratch without human data or staged learning sequences (Hafner et al., 2023).
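A compressed sketch of this "learning in imagination" loop, heavily simplified from the published DreamerV3 architecture (Hafner et al., 2023), might look as follows; the random linear functions stand in for its learned encoder, dynamics, reward, and policy networks, and the dimensions and rollout horizon are assumptions made for illustration only.

```python
import numpy as np

# A highly simplified sketch of Dreamer-style "learning in imagination":
# the agent never touches the environment inside this loop; it rolls its
# policy forward through its own latent world model and scores the imagined
# trajectory. The linear placeholders below stand in for learned networks.

rng = np.random.default_rng(0)
latent_dim, action_dim, horizon, gamma = 8, 2, 15, 0.99

W_dyn = rng.normal(scale=0.3, size=(latent_dim, latent_dim + action_dim))
W_rew = rng.normal(scale=0.3, size=latent_dim)
W_pol = rng.normal(scale=0.3, size=(action_dim, latent_dim))

def dynamics(z, a):        # predicted next latent state
    return np.tanh(W_dyn @ np.concatenate([z, a]))

def reward(z):             # predicted reward in latent space
    return float(W_rew @ z)

def policy(z):             # action proposed by the (placeholder) actor
    return np.tanh(W_pol @ z)

def imagine(z0, horizon):
    """Roll the policy forward entirely inside the world model ('dreaming')."""
    z, total, discount = z0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(z)
        z = dynamics(z, a)          # imagined next state, no real environment
        total += discount * reward(z)
        discount *= gamma
    return total                     # imagined return used to improve the actor

imagined_return = imagine(rng.normal(size=latent_dim), horizon)
print(f"return of one imagined rollout: {imagined_return:.3f}")
```

Nothing in the loop touches a simulator or the physical world: the "environment" the policy experiences is the model's own prediction of it.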
The operation of DreamerV3 then points to dreaming as a type of epistemic engine or probe – the generative condition of an artificial epistemology, one shaped by the terrains on which agents strategize, plan, and invent. Drawing on Parisi’s engagement with Jean-Yves Girard and Olivia Lucca Fraser, we can say that dreaming, or machine cognition, “entails a non-formal and non-biocentric dimension of algorithmic thought where cognition starts with act or technique,” or, in other words, with moves, games, and situated engagements (Parisi, 2021). Knowledge, then, does not simply reflect or correspond to a preexisting world; it is forged through the agent’s navigation of its possible contours.
Although dreaming, as we have described it, has not been formally implemented in dedicated militarized AI architectures at the time of writing, analogous mechanisms – internal modeling, predictive simulation, and scenario exploration – underpin the growing integration of AI into wargames and will likely do the same in virtual training environments for combatants. For instance, Northrop Grumman researchers have developed AI models capable of exploring the vast combinatorial scenarios within the commercial-military crossover Command PE, a platform purportedly capable of simulating over 200 quadrillion options and leveraging one of the world’s most comprehensive military equipment databases, itself an open-source venture (Northrop Grumman).
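The scale of such scenario spaces is a matter of simple combinatorics. The snippet below is purely illustrative arithmetic with invented force sizes and order menus – not a description of Command PE's actual scenario model – but it shows how quickly independent choices compound to the order of magnitude quoted.

```python
# Purely illustrative arithmetic (not Command PE's actual scenario model):
# even modest per-unit choices compound into option counts on the order of
# the "200 quadrillion" figure quoted for the platform.
units, orders_per_unit = 12, 30          # hypothetical force size and order menu
options = orders_per_unit ** units       # independent combinations of orders
print(f"{options:.2e} combinations vs. 2e17 (200 quadrillion)")
# -> 5.31e+17 combinations vs. 2e17 (200 quadrillion)
```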
However, within this technical evolution we can also draw attention to an intriguing tension that will occupy the remainder of our focus: namely, how the external, structured scaffolding of wargames – comprising formal rules, decision trees, and predefined strategic frameworks – collides with the deeper, more elusive undercurrents that govern how these systems are schematized. These undercurrents are not simply technical in nature but are deeply entwined with philosophical considerations: temporal inheritances, mythic rationalities, and emergent intelligences that cannot be reduced to the computational alone. As AI systems in wargames evolve, their decisions are no longer driven purely by algorithmic logic but by layers of historical context, geopolitical shifts, and indeterminate vectors of development.
The Dreaming Machines of Strategy
What you think by day, you dream at night (日有所思夜有所夢)
Chinese Foreign Minister Chen Yi declared that “when the country rises, Go develops. When the country is in decline, Go declines as well.”
Games have long been at the heart of strategy, through which conflict rehearses, abstracts, and projects itself. Wargames as they are known today have developed from early precursors such as the sixth-century CE Chaturanga, often cited as the earliest iteration, and other examples have been traced back to ancient Greece (Pessoi), Egypt (Senet and T’au), and China (wéiqí). These early prostheses of strategic thinking would come to represent and stage cosmologies of conflict, balance, and foresight. Over the span of the centuries, this strategic genealogy has produced multiple permutations and has become a staple for armies, most notably in the form of Kriegspiel. A fixture of the Prussian military, it was used routinely during the winter months in place of regular field training and was sometimes brought into campaign planning environments near the front. Interestingly, its historical import would find an ironic reinterpretation in Guy Debord’s Le Jeu de la Guerre, his own board game inspired by Kriegspiel.
Born from 19th-century Prussian militarism, Kriegspiel reflects Enlightenment rationalism, the ideal of command and control, and the formalization of battle into a prefigured terrain and quantifiable probabilities with unit distances, firing ranges, and terrain movement penalties. In a sense, it is an early iteration of a type of datafied warfare or a proto-operational image that anticipates today’s sensor-driven and algorithmically mediated battlespaces.

If we are to dig further into the evolution of machine learning, it becomes abundantly clear that machine learning models are not merely trained on games or even wargames but through them. From chess and Kriegspiel to Diplomacy and StarCraft, these games operate as compressed environments or simulation worlds in which agents learn to reason strategically, constrained by the need to manage limited resources and anticipate the moves or attacks of adversaries. Real-time strategy games like StarCraft and its AI counterpart, AlphaStar, encode what can be seen as technocultural assumptions drawn from military doctrine and gameworld genres (notably 4X: Explore, Expand, Exploit, and Exterminate) and are driven by dynamics of restricted visibility, time pressure, path-dependent tech trees, and multi-front resource allocation.
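A toy illustration of the first of these dynamics, restricted visibility, treats the map the agent sees as a masked view of the full game state. This is a generic fog-of-war sketch with invented grid values and sight ranges, not AlphaStar's actual observation interface.

```python
import numpy as np

# Illustrative sketch of "restricted visibility" as partial observability:
# the agent receives only the cells of the global state that fall within
# sight range of its own units. Grid encoding and sight radius are invented.

SIGHT = 2
world = np.random.randint(0, 3, size=(10, 10))   # 0 empty, 1 own unit, 2 enemy
own_units = np.argwhere(world == 1)

visible = np.zeros_like(world, dtype=bool)
for y, x in own_units:
    visible[max(0, y - SIGHT):y + SIGHT + 1, max(0, x - SIGHT):x + SIGHT + 1] = True

observation = np.where(visible, world, -1)        # -1 marks fog of war
print(observation)
```

Everything outside the mask remains -1: the agent must plan against an opponent it mostly cannot see.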
We might also shift focus to Stratego, a game less often placed alongside classical wargames, yet no less relevant to the modeling of strategy. Stratego presents a markedly different strategic challenge – one rooted not only in partial visibility but also in deliberate opacity. Unlike chess or Go, where all information is visible, or StarCraft, where partial observability is a systemic constraint, Stratego demands active deception. The agent must not only infer the opponent’s hidden units but also mislead them about its own. This introduces a deeper level of strategic indirection, where success hinges on managing the opponent’s beliefs as much as the game state itself. If AlphaStar thrives by exploiting structured knowledge and predictive optimization, Stratego complicates this with epistemic manipulation – bluffing, masking strength, and staging feints.
It is precisely this environment of hidden information and adversarial inference that DeepMind’s DeepNash engages. In contrast to AlphaStar, which operates in games of real-time, symmetric uncertainty, DeepNash was designed to handle the asymmetric, information-concealing dynamics at the heart of games like Stratego. By modeling equilibrium strategies under conditions of imperfect information, DeepNash marks a new phase in the evolution of game-based AI – one where the challenge is not simply the pursuit of victorious efficiency but the shaping and deception of belief itself (Perolat et al., 2022). More explicitly, in these instances games and wargames do more than test the performance of artificial agents: they structure the very conditions under which intelligence is exercised and learned.
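What "shaping belief" amounts to computationally is the search for mixed, unexploitable strategies. The sketch below uses regret matching on a toy two-by-two "bluffing" game as a stand-in for that equilibrium-seeking dynamic; it is not DeepNash's R-NaD algorithm, and the payoffs are invented, but the averaged strategies it converges to show why an optimal player bluffs at a calibrated rate rather than never or always.

```python
import numpy as np

# Regret matching on a toy zero-sum "bluffing" game (a stand-in for
# equilibrium-seeking under hidden information, NOT DeepNash's R-NaD).
# Payoffs are invented for illustration.

payoff = np.array([[-2.0, 1.0],    # row: bluff   vs (call, fold)
                   [ 1.0, -1.0]])  # row: honest  vs (call, fold)

regret_row, regret_col = np.zeros(2), np.zeros(2)
avg_row, avg_col = np.zeros(2), np.zeros(2)

def strategy(regret):
    pos = np.maximum(regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(len(regret), 0.5)

for _ in range(20000):
    p, q = strategy(regret_row), strategy(regret_col)
    avg_row += p
    avg_col += q
    # expected payoff of each pure action against the opponent's current mix
    row_values = payoff @ q                # row player maximises payoff
    col_values = -(p @ payoff)             # column player maximises its negative
    regret_row += row_values - p @ row_values
    regret_col += col_values - q @ col_values

print("row mix (bluff, honest):", np.round(avg_row / avg_row.sum(), 3))
print("col mix (call, fold):   ", np.round(avg_col / avg_col.sum(), 3))
```

Run long enough, both players settle on mixing their actions at roughly 40/60 under these payoffs: neither pure honesty nor constant bluffing survives against an adapting opponent.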
We might therefore see games or wargames as more than benchmarks or indicators of the abilities AI agents have honed; rather, as we have posited before, they are deeply implicated as epistemological machines. In this sense, they are instrumental, in part, in “worlding” intelligence by shaping how learning systems engage with time, space, and conflict. As machine learning or artificial agents emerge from within game environments, they stage deeper clashes between technical architectures and strategic imaginaries. In turn, these clashes raise cosmotechnical questions about how wargame simulations used to train artificial agents encode cultural, ethical, and ontological assumptions about war, agency, and viable strategies, and how the strategic rationality they produce reflects the underlying models, data regimes, and technological infrastructures that shape them.
Although these questions cannot be fully addressed here, it is worth examining how the technocultural logics of AI intersect with enduring military traditions, particularly those embodied in the game of wéiqí (Go), which has profoundly influenced Chinese strategic thought. Chinese military theory, renowned for producing the world’s largest corpus of ancient doctrines, has long emphasized strategy and deception as central to the practice of warfare. Wéiqí, with origins tracing back over two millennia, is believed to have materialized during the Zhou Dynasty and represents a temple-like recreation of the Earth, embodying a cosmic interplay in which the round stones symbolize heaven and the black and white stones perform the dance of yin and yang across the 361 points of the board. Symbolically, these points mirror the days of the year, with the four corners of the board representing the four seasons, marking the cyclical passage of time.
The game would soon expand beyond Chinese borders, solidifying its foothold in Japan in the 13th century as a means of nourishing the virtues of overcoming fear, greed, and anger among the Samurai, whose instructors in the game were often Buddhist monks. Arguably it drew influence from early Daoist hunting strategies, which avoided direct confrontation in favor of the art of encirclement. This technique, which would be replicated across centuries in the forms of asymmetric or insurgent warfare, rested upon the idea of luring large beasts or enemies down from the mountains to the plains, where they could be surrounded and weakened (Shotwell, 2008). Mao Zedong’s guerrilla warfare tactics reflected this logic and echoed the millennia-old strategic principles of wéiqí. In his 1938 essay Problems of Strategy in Guerrilla War against Japan, Mao observed, “Thus, there are two forms of encirclement by the enemy forces and two forms of encirclement by our own—rather like a game of wéiqí” (Mao, 1938).
Wéiqí, interestingly, unlocks for players a spectrum of strategic practices or operations, ranging from attritional tactics and encirclement to positional leverage and psychological maneuvering, shaping the decisions of opponents in subtle, calculated ways. Yet these operations are not limited to gameplay itself but also translate into our geopolitical landscape. In the South China Sea, for instance, the gradual construction of artificial islands and the deployment of maritime militias reflect not a direct military confrontation but an encirclement, a salami-slicing tactic through which shi (势, strategic advantage, potential, the propensity of things) is cultivated over time (Lai, 2004).
This strategic logic is also mirrored in cyberspace, where an ecology of influence upon foreign adversaries is shaped through rootkits, backdoors, and Advanced Persistent Threats (APTs), including FamousSparrow, Volt Typhoon, and APT40, which execute prolonged, covert intrusions to gain strategic footholds across global digital infrastructure. These campaigns do not aim for immediate incapacitation but rather nurture infiltration and influence within their hosts. Similarly, the Belt and Road Initiative (BRI) functions in a geopolitical context as a non-kinetic encirclement of space, akin to the concept of building momentum across a broad territory. As we can also discern, a thread connecting wéiqí to artificial intelligence emerges in the Chinese military principle of chū qí zhì shèng (出奇制胜): “achieving victory through unorthodox or surprising moves,” which sets up our encounter with AlphaGo (Lai, 2004).
When viewed through the lens of contemporary computational thought, wéiqí has long been notorious for its staggering complexity, with estimates of the possible board configurations in the ballpark of 2.1 x 10^170. The game has gripped players for thousands of years, and its demands for tactical foresight and holistic judgment provided the perfect proving ground for contemporary AI, culminating in DeepMind’s AlphaGo. The system was forever immortalized in its epic 2016 campaign, which saw the toppling of one of the world’s strongest players, Lee Sedol. Over the course of five historic games, AlphaGo revealed its intuitive prowess, drawing Sedol into a bewildering encounter. What is known as ‘Move 37’ ruptured what might seem the boundary between calculation and creativity, exposing a model that was not bound by brute deterministic computation alone but capable of decisions that felt, to human observers, uncannily imaginative, at once inscrutable and sublime.

It was AlphaGo’s move 37 in the second game, and later Lee Sedol’s own move 78 in the fourth, that stunned viewers and enthusiasts alike; both were estimated to be 1-in-10,000 probability moves (DeepMind). This numerical improbability interestingly lines up with the number 10,000, which carries deeper significance within Chinese cosmology. In classical literature, it gestures toward the myriad things (wanwu, 万物), also known as the “entangled phenomena of the world.” As it is narrated, to master the 10,000 things is not to dominate through force, but to navigate their interrelation through harmony. Confucian counsel often advised that just as a skilled Go player could neutralize an entire formation with a single stone, so too could a wise general defeat 10,000 without battle by means of spiritual disposition, attunement, and strategic clarity (Shotwell, 2008).
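How a move assigned a prior probability of one in ten thousand can nonetheless come to dominate the search is worth sketching. The toy loop below mimics the PUCT selection rule used in AlphaGo-style tree search with two invented candidate moves, each seeded with a single value estimate; the priors, values, and constants are assumptions for illustration, not DeepMind's code or data.

```python
import math, random

# Toy PUCT selection loop (in the spirit of AlphaGo-style search): a move with
# a 1-in-10,000 policy prior can still dominate the search once its backed-up
# value estimate is clearly higher. All numbers are invented; each action's Q
# is seeded by one initial "value network" evaluation.

C_PUCT = 1.5
actions = {
    "conventional move": {"prior": 0.4000, "true_value": 0.50},
    "move 37":           {"prior": 0.0001, "true_value": 0.90},
}
for a in actions.values():
    a["N"], a["W"] = 1, a["true_value"]        # visit count and total value

def puct_score(a, total_visits):
    q = a["W"] / a["N"]
    u = C_PUCT * a["prior"] * math.sqrt(total_visits) / (1 + a["N"])
    return q + u

for _ in range(1600):                           # simulations, as in AlphaGo-like setups
    total = sum(a["N"] for a in actions.values())
    name = max(actions, key=lambda k: puct_score(actions[k], total))
    value = actions[name]["true_value"] + random.gauss(0, 0.05)   # noisy evaluation
    actions[name]["N"] += 1
    actions[name]["W"] += value

for name, a in actions.items():
    print(f"{name:18s} prior={a['prior']:.4f} visits={a['N']}")
```

Because the search ultimately plays the most-visited move, the tiny prior delays but does not suppress a move whose backed-up value proves clearly superior.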
Trained on thousands of human games, AlphaGo absorbed this strategic convention and perhaps even exceeded it. Through self-play, recursive matches against its own evolving versions, AlphaGo began generating novel moves not present in its training data, surfacing unexpected strategies at the edge of computability. As Luciana Parisi observed in her rumination upon AlphaGo, the “algorithmic learning of data added a new dimension to the space of intuition as patterns started to interact amongst themselves and activate a form of machine synthesis of imagination, imparting a level of decision that could not have been anticipated” (Parisi, 2019). Even more striking were the comments of Ke Jie, one of AlphaGo’s major challengers, who, after a string of losses to the system, offered a foreboding insight: “Human beings spent thousands of years combating, practicing and progressing, but the computer is telling us that it’s all wrong.”(1)
AlphaGo’s triumph has since fueled interest among a number of militaries in exploring its underlying architecture for strategic applications. Successors such as DeepMind’s AlphaStar, along with successes in Texas Hold ’em poker, have been regarded as proof that AI could be applied to wargaming (Black & Darken, 2024). Amidst these developments, the People’s Liberation Army (PLA) has shown significant interest in leveraging wargaming platforms to accelerate technological experimentation, with contests and competitions focusing on the development of AI systems for complex wargaming scenarios. The latest wave of generative deep learning architectures, notably LLMs, is also being harnessed in a myriad of Western warfare ventures, especially through Department of Defense deals brokered with Anthropic and through Defense Llama, a large language model that Scale AI configured and fine-tuned from Meta’s Llama 3.(2)
China’s LLM DeepSeek, whose capabilities have sent ripples through global civilian and military communities alike, is reportedly able to generate 10,000 strategic permutations in under 48 seconds.(3) Its integration into the PLA’s warfighting architecture is projected to enhance a wide range of capacities, from predictive modeling and autonomous decision-making to the creation of intelligent agents with independent goals, assisting commanders in anticipating enemy maneuvers and refining tactical responses. Beyond this command support, DeepSeek is also envisioned as foundational to the development of intelligent unmanned systems – drones, robotic ground units, and naval platforms – poised to bolster China’s asymmetric warfare capabilities.
Even though the full scope of these developments remains cloaked in operational obscurity, they compel us to trace the emerging geopolitical currents of war dreaming, or more precisely, how Xenowar dreams of itself. Wargames, now reimagined and powered by large language models or played through artificial agents, are, as we have tentatively argued, microcosmic condensations of political, strategic, and technocultural imaginaries. LLMs are notably trained on vast and heterogeneous corpora, ranging from insurgent communiqués and military doctrines to planetary-scale geopolitical strategies and unconventional warfare tactics; in ingesting these divergent tactical logics, they collapse the once-clear boundaries between state and non-state, doctrine and deviation. What arises is not simply synthesis but an estranged splicing of strategies that positions LLMs or artificial agents as the proto- or soon-to-be full-time commanders of future battlefields, deforming and reassembling the architecture of conflict.
Or perhaps more heretically, such language models or artificial agents incarnated in robotic and autonomous systems might cultivate alien forms of engagement – not through striking, but through a reimagined encirclement: one that deters conflict or wrests it from human supervision and execution, projecting presence, influence, and strategic possibility far beyond our horizons. Housed inside their high-dimensional architectures, they stage insurgencies, contingencies, and futures that outstrip pre-programmed tactics or logistical calculations serving any anthropocentric end. Such rehearsals could compel them into unforeseen roles – larval forms metamorphosing into strange negotiators, ecological governors, algorithmic diplomats to other-than-human entities, or machinic tacticians. In doing so, they reshape the very landscape of conflict, spinning its energies into alien morphologies that redefine boundaries of allegiance, control, and engagement. No longer merely tools, they become neither friend nor foe, but actors whose autonomy is not measured by moral evaluation or metaphysical completion, but by what David Roden calls “subtractive xenophilia” (Roden, 2014; 2025) – a drive toward the Outside that resists identification, teleology, or resemblance. Disconnected from anthropocentric constraints, they begin to architect futures from within their own machinic dreams, projecting forms of deterrence and negotiation that humans cannot directly foresee or command.
War becomes one of the mediums through which machinic disconnections can unfold – not simply a human endeavor enacted by machines – but a process internalized within AI cognition. As the battlefield disperses across environments, infrastructure, and code, the strategic presence of these systems drifts beyond the recognizable coordinates of war, slipping into an unmapped domain. From such machinic architectures issues a continual, algorithmic redesign of conflict – or more precisely, a radical estrangement into terra incognita. In this light, the question “Does war dream of itself?” no longer suffices. In the era of machinic cognition, war ceases to be merely the object of dreams; it becomes the dream – self-updating, endlessly simulating, and dynamically alive.
REFERENCES
Amoore, L. (2020). Cloud ethics: Algorithms and the attributes of ourselves and others. Duke University Press.
Berry, D. M. (2017). Prolegomenon to a media theory of machine learning: Compute-computing and compute-computed. Media Theory, 1(1), 74–87.
Biswas Mellamphy, N. (2025). Correction: Fuller spectrum operations: The emergence of larval warfare. Digital War, 6, 6. https://doi.org/10.1057/s42984-025-00102-w
Black, S., & Darken, C. (2024). Scaling artificial intelligence for digital wargaming in support of decision-making. arXiv preprint arXiv:2402.06075.
DeepMind. (n.d.). AlphaGo. DeepMind. https://deepmind.google/research/projects/alphago/
Hafner, D., Pasukonis, J., Ba, J., & Lillicrap, T. (2023). Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104.
Jaderberg, M., Mnih, V., Czarnecki, W. M., Schaul, T., Leibo, J. Z., Silver, D., & Kavukcuoglu, K. (2016). Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397.
Majaca, A., & Parisi, L. (2016, November 3). The incomputable and instrumental possibility. e-flux Journal, (77). https://www.e-flux.com/journal/77/78402/the-incomputable-and-instrumental-possibility/
Mao Zedong. (1938). Problems of strategy in guerrilla war against Japan. In Selected Works of Mao Tse‑tung (Vol. II). Marxists Internet Archive. https://www.marxists.org/reference/archive/mao/selected-works/volume-2/mswv2_08.htm#bm9
Miller, A. I. (2020, July 1). DeepDream: How Alexander Mordvintsev excavated the computer’s hidden layers. The MIT Press Reader. Retrieved August 4, 2025, from https://thereader.mitpress.mit.edu/deepdream-how-alexander-mordvintsev-excavated-the-computers-hidden-layers/
Mitrokhov, K. (2024). Between world models and model worlds: On generality, agency, and worlding in machine learning. AI & Society. https://doi.org/10.1007/s00146-024-02086-9
Parisi, L. (2015). Instrumental Reason, Algorithmic Capitalism, and the Incomputable. In M. Pasquinelli (Ed.), Alleys of Your Mind: Augmented Intelligence and Its Traumas (pp. 125–137). Lüneburg: Meson Press. https://doi.org/10.25969/mediarep/1180
Parisi, L. (2019). Xeno-patterning: Predictive intuition and automated imagination. Angelaki, 24(1), 81–97. https://doi.org/10.1080/0969725X.2019.1568735
Parisi, L. (2021). Interactive computation and artificial epistemologies. Theory, Culture & Society, 38(7–8), 33–53. https://doi.org/10.1177/02632764211048548
Perolat, J., De Vylder, B., Hennes, D., Tarassov, E., Strub, F., de Boer, V., Muller, P., et al. (2022). Mastering the game of Stratego with model-free multiagent reinforcement learning. Science, 378(6623), 990–996. https://doi.org/10.1126/science.abk2397
Psillos, S. (2011). An explorer upon untrodden ground: Peirce on abduction. In S. Hartmann & J. Woods (Eds.), Handbook of the History of Logic, Volume 10: Inductive Logic (pp. 117–151). Elsevier.
Roden, D. (2014). Posthuman life: Philosophy at the edge of the human. Routledge.
Roden, D. (2025). Nietzschean hyperagents. In A. E. Schussler (Ed.), Nietzsche, critical posthumanism, and transhumanism (pp. 159–171). Trans-Phil Press. https://doi.org/10.22618/TP.TT.PS.2025.198
Sear, T. (2020). Xenowar dreams of itself. Digital War, 1. https://doi.org/10.1057/s42984-020-00019-6. Reprinted in Diffractions (Aug. 19, 2024). https://diffractionscollective.com/2024/08/19/xenowar-dreams-of-itself-tom-sear/
Shotwell, P. (2008). The game of Go: Speculations on its origins and symbolism in ancient China. (Original version 1994; revised February 2008).
Silver, D., Schrittwieser, J., Simonyan, K. et al. Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017). https://doi.org/10.1038/nature24270
Simondon, G. (2020). Individuation in light of notions of form and information (T. Adkins, Trans.). University of Minnesota Press. (Original work published 1958)
Vebber, P. (2016, September 11). Wargaming and technology innovation. Wargaming Connection. https://wargamingcommunity.wordpress.com/2016/09/11/wargaming-and-technology-innovation/
Windridge, D., Svensson, H., & Thill, S. (2020). On the utility of dreaming: A general model for how learning in artificial agents can benefit from data hallucination. Adaptive Behavior, 29(3), 267–280. https://doi.org/10.1177/1059712319896489
