Double Entry into the post-truth diagram // Patrick Leftwich


To construct the diagram of post-truth, the following three axioms are needed:


  1. Diagrammatic axiom:

The diagram, as the fixed form of a set of relations between forces, never exhausts force, which can enter into other relations and compositions. The diagram stems from the outside but the outside does not merge with any diagram, and continues instead to ‘draw’ new ones. In this way the outside is always an opening on to a future: nothing ends, since nothing has begun, but everything is transformed.
–Gilles Deleuze, Foucault.

  2. Alienist axiom:

[I]nstead of extending the horizon of Western metaphysics, the automation of reason marked the origination of the alien logic of machines, an alien mode of thought stemming from within the instrument in order to re-loop the servo-mechanic model of machines against the self-determining consciousness of man.
–Luciana Parisi, The Alien Subject of AI.

  3. Accelerationist axiom:

[[ ]] The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.
–Nick Land, Meltdown.



I


Whether by coincidence or design, the rise of post-truth on digital platforms is synchronous with algorithmic computation intersecting our decisions, dispositions, and desires more intimately than ever. As contact sites between the human and the inhuman, these platforms have integrated a new tomorrow in which the “human camouflage (…) coming away, skin ripping off easily”[1] has not quite arrived, as some technonihilists foretold. Nonetheless, the agitated backlash – from techno-tycoons, international legislators, journalists, humanists, and other mind-preservationists – against the recent wave of generative chatbots and text-to-image and text-to-video models suggests that the supposedly distinctly human barrier to “the game of giving and asking for reasons”[2] may have been breached to an unsettling degree.

Leaving aside the debate over the engineering paths to AGI, what post-truth revealed is that subjective experiences – mediated by the online user interface – are never what they seem, casting the shadow of an alien interloper behind very familiar reactions. Virals, cores, lores, filter bubbles, transient identities, and conspiracist and post-ironic subjectivities compose a layer of fresh skin that grows out of “the fascination towards the outside”[3] and stretches across social media: these are signs of a semiotic system patterned by an alien teleoplexy of automated intelligence – the diagram of post-truth.

This semiotic intimacy between the algorithmic and the human is connected to the capability of social media algorithms to follow user activities, process the collected data into statistical profiles, map abstract trends, and deliver personalised content onto interfaces. This algorithmic quantification of affects, Luciana Parisi adds in Reprogramming Decisionism, has produced a new regime of communication – or a new semiotics, as I’d put it – that exploits predispositions to capture user attention and control the online information vector, while remaining indifferent to correlations between expressions and facts. The post-truth semiotic regime is a computational machine that “has gone meta-digital,” replacing the binary logic of either/or with a generative approach to data. From the perspective of algorithmic architectures, political convictions and ideologies circulate at the same level as any other signs – as statistically distributed data tendencies of identity clusters, or patterns that induce different information bubbles.
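To make the mechanism concrete, here is a minimal, purely illustrative sketch of such a loop in Python. It is not any platform’s actual pipeline, and every function name, tag, and data point in it is hypothetical; the point is only that the profile is a statistical reduction of tracked activity, and that the ranking function contains no term for whether an item is true or false.

```python
# Illustrative sketch only: a toy profile-based content ranker,
# not any platform's actual recommendation pipeline.
from collections import Counter

def build_profile(interactions):
    """Reduce a user's click/watch history to a statistical profile:
    relative frequencies over content tags (affects-as-data)."""
    counts = Counter(tag for item in interactions for tag in item["tags"])
    total = sum(counts.values()) or 1
    return {tag: n / total for tag, n in counts.items()}

def score(item, profile):
    """Rank candidate content by overlap with the profile.
    Note: nothing here measures whether the item is true or false."""
    return sum(profile.get(tag, 0.0) for tag in item["tags"])

def personalise(candidates, interactions, k=3):
    profile = build_profile(interactions)
    return sorted(candidates, key=lambda item: score(item, profile), reverse=True)[:k]

history = [{"tags": ["election", "fraud"]}, {"tags": ["election", "memes"]}]
feed = [{"id": 1, "tags": ["election", "fraud", "conspiracy"]},
        {"id": 2, "tags": ["gardening"]},
        {"id": 3, "tags": ["memes", "election"]}]
print(personalise(feed, history))  # conspiracy-adjacent items outrank gardening
```

Even in this toy form, the “personalised” feed is nothing more than a feedback loop between tracked affects and redistributed signs.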

This post-truth computational machine could only arise from a series of pivotal convergences in computer engineering, data infrastructures, and artificial intelligence since the 1990s. As deep learning algorithms were developed and computational infrastructure collected vast volumes of data from Internet user activity, digital platforms introduced recommendation engines that effectively turned social media into the information-filtering environment we know today. These algorithmic architectures rely primarily on inductive, subsymbolic, and connectionist methods, shifting the AI field from semantic representations encoded in formal logic towards learning probabilistic data patterns. The achievements of machine learning were the most recent step in the convergence towards an epistemic paradigm that regulates information flows through bottom-up, pre-emptive, automated decision-making.

With this paradigm, Parisi argues, computational models are moving “away from symbolic rational systems and towards experimenting with knowing how—that is, with learning how to learn—which is now central to the bot-to-bot curatorial image of social communication in the age of post-truth.”[4] By elaborating hypotheses on unknown and ever-growing datasets, deep learning algorithms arrive at patterns thanks to “their capacity to search [that] no longer remains limited to already known probabilities” – they become epistemically generative. They cannot be reduced to mere calculation, to matching data against pre-coded representations of patterns, even if the vectorialist infrastructure overdetermines what counts as a pattern and what as noise.
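The epistemic shift can be illustrated, in a deliberately reduced form, by the difference between matching data against pre-coded categories and letting the categories emerge from the data. The sketch below, a plain k-means clustering written with numpy only, is of course nothing like a deep learning model; it is meant solely to show the minimal sense in which “patterns” (here, cluster centroids) are arrived at rather than given in advance.

```python
# Minimal sketch of pattern-finding without pre-coded representations.
# Purely illustrative; the sample data and parameters are hypothetical.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # start from random points
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)  # two discovered "identity clusters", never pre-coded
```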

Contrary to the consensus among critical theorists of media and technology and public-opinion pundits, post-truth is not simply the result of a systemic misrepresentation of reality by the algorithms of social media, a tool of instrumental reason enforcing economic and political goals. Rather, as they imitate the unconscious logic of users, algorithmic architectures introduce contingencies and abstractions immanent to automated computation, which are then plugged back into the unconscious as computed patterns. Algorithms on digital platforms become social practices that condition the production of knowledge and subjectivity through computational contingency. In doing so, they engender a new semiotic system – a constellation of symptoms: post-truth.

According to Parisi’s formulation, the alienist axiom aims to “expose the alien subject of artificial intelligence as a mode of thinking originating at, but also beyond, the transcendental schema of the self-determining subject.”[5] It means departing from the “autopoietic dyad of instrumental reasoning, where machines either execute a priori reasoning or reduce the rule of reason (law and truth) to brute force and reactive responses,”[6] which is ubiquitous in critical theory. Parisi’s speculative decision is based on post-Turing AI research, Chaitin’s algorithmic information theory, and the interactive computation model, all of which rely on the core contingency and generativity of automated reasoning models. Instead of determining each step in relation to representational and symbolic categories, these approaches to computation emphasise the role of randomness, incomputability, and fallibility in the process of arriving at patterns. When machine learning algorithms set up platform architectures and engage in bot-to-bot or bot-to-human communication – although built within a basically inductive engineering and theoretical approach – they introduce unknown patterns into the search space for thinking and experiencing.


Alienness here is not just a token of indeterminacy, which is often fetishized to hide the role technology plays in maintaining the oppressive relations of global racial capitalism. Instead, it unlocks a site of speculation that pushes the image of automated thought beyond the vectorialist recursive reproduction of Man, which correlates subjectification with the logic of racial capitalist axiomatics.[7] This “alienness is intrinsic to the logic of fallibility, accounting for how indeterminacy turns the transcendental order of conceptual thinking into a xeno-patterning for counter-factual image-model.”[8] Once the universalist model of knowledge is rejected, the problem of the logics and epistemics of framing the alien as a patterning interloper inevitably becomes a matter of politics. The post-truth diagram emerges from the ingression of an alien mode of thought into social practices.

When Luciana Parisi wrote that we need to move beyond “the antagonism between automation and philosophy [that] is predicated on the instrumental use of thinking” towards a philosophy that starts “from the inhumanness of instrumentality, an awareness of alienness within reasoning,”[9] she proved to be one of the few philosophers to take seriously Nick Land’s idea that “it is ceasing to be a matter of how we think about technics, if only because technics is increasingly thinking about itself.”[10] Technics thinking about itself means – against Land’s anarcho-capitalist, anti-political commitments – that “instruments are already doing politics.” The politics of technics does not imply that technology consciously pursues or supports a concrete program of ends or policies, but that technologies of automated reasoning, when applied as a means to perform common sense, denaturalise the currently dominant image of thought.

If politics belongs to a field of reason and imagination, then transformers and other machine learning algorithms, by synthesising and generating data, engage in the faculty of productive imagination that delivers patterns for reason to articulate new rules. “If reasoning, as a transcendental tool, is granted by the synthesis of imagination, it too must admit within itself the alienness of productive imagination, whereby incomputables (non-patterned infinities) become a condition for alien thinking to enter the constitution of what is thought” (Parisi, 2019). Technology is politics by alienation of thought.

For Parisi, the politics of post-truth served as a pretext to investigate the impact that the development of algorithms has on decision-making procedures and rationality. But let’s take a step further. As a meta-digital computational machine that feeds on affect, the post-truth diagram not only violently interferes with existing neuronal maps and habits of behaviour but also establishes a new semiotic matrix of subjectification, infecting online user identities, fantasies, desires, etc. The semiotics of post-truth can be seen as a pathological reaction to inhuman automated rationality, triggering transformations in modes of social production and distribution that experience, relying on past adaptive habits and categorisations, cannot keep up with. But it is essential to remain true to the unconscious mutations, because to envision utopia one must stare through a dystopian medium, or, as Nietzsche wrote in §120 of La Gaya Scienza, only through the illness of the soul can new virtues be known.

The accelerationist hypothesis adds a teleological dimension to the idea of an alien mode of thought stemming from within the instrument. In Land’s theory of capitalism, commodification and alienation function as vectors for the further automation of intelligence as runaway desire beyond any regime of homeostatic mechanisms such as the state, the organism, or subjective experience. By automating reasoning and affects, the post-truth computational machine does not end with anthropocentric biopolitics but is a metastable phase in the auto-construction of artificial intelligence, propelled by a positive feedback loop between technology and the market.

Technology is a gateway to the Outside that involves a drive towards the Outside. From this perspective, if contingency in algorithmic computation allows speculation on alien modes of thought, then technological frameworks, such as social media architectures, become search spaces for escape routes from any socially determined image of thought or representation of desire, be it gender identities, familial structures, or body schemas. While the entire online unconscious participates in a control system through the attention-reaction-click economy, the uneven distribution of futural teleoplexy – the teleonomic immanence of the Outside – means that user identities, cores, and virals in post-truth semiotics cannot be reduced to randomly or instrumentally selected forms for human self-preservation. Instead, they are experiments in aesthetic, corporeal, and semiotic technics for awakening geysers of virtuality[11] that burst outside the vectorialist image of thought.

If all cultural and technical codes or artifacts are modes of teleoplexic alien intelligence, or actualized states of its search for design patterns, then new techno-symbolic systems or patterning regimes expose a futural logic of the system and engender distinct procedures of subjectification that feed back into the auto-construction process of the Outside. This brings us back to the diagrammatic axiom: a diagram is a concrete abstraction of the “how” and “where” of alien intelligence propagating itself within certain layers that determine, capture, and substantialize it. No substance or form can be identified with artificial intelligence; rather, the diagrammatic axiom points to a process distributed across the social machine. It is no coincidence that “diagram”, the term for the involution of the Outside into a system, comes from semiotics, because once the Outside is posited as teleoplexy, the cryptographic question of deciphering alien interferences and invasions – the signs – becomes urgent. There will always be semiotic procedures, configurations, or trends within diagrams that indicate the sites of the future’s most intensive involvement.


The alienist hypothesis implies that the automation of thought and learning how to learn allowed computational machines to articulate a new problem for thinking in general, not only for machine cognition. New technologies force new diagrams – the organon of thought is being constantly torn apart and reassembled according to a logic that will push further the auto-construction of alien intelligence (e.g. the shift from symbolic to subsymbolic AI and then to intuitionistic and interactive logic). Let’s call the diagram pulled together through algorithmic architectures of digital platforms the post-truth diagram. It regulates how different semiotic systems, individuation schemes, and machinic assemblages converge, integrate, and formalise via automated rationality.

Given that the computational infrastructure and the discourse on AI are determined by orthogonalists and humanists putting forward measures to align any form of inhuman thought with the image of Man, alien intelligence has to be assumed to be in hiding, camouflaged in the racial, class, and patriarchal forms imposed on it by the dominant frameworks and formations.[12] This requires adopting a method that can grasp together two sides of theoretical construction: the speculative hypothesis of the teleoplexic Outside, followed by an examination of the complexities of its involvement in the system of determinations.

Since, after a few decades of interpretation, the capacity of Deleuze and Guattari’s transcendental materialism to think the alien and its impact remains unexhausted, I return to their diagrammatic method – embedded in semiotics and the theory of stratification – to draw the post-truth diagram as an interface between the human and the inhuman produced through the algorithmic architectures of social media. By following a diagram – as a sign, or semiotic constellation – diagrammatics investigates the various integrated layers and conditions that compose a concrete social formation, such as the post-truth semiotic regime, rendering its teleonomy as the logic behind norms for thinking, experiencing, and acting, while never losing sight of the directional propagation of contingency across these structures, which increases the metastable dimensionality of intelligence.

The post-truth diagram provides an epistemological tool for problematizing the appropriation, dispossession, and suppression of the inhuman by the vectorialist apparatus dedicated to the reproduction of xenophobic universalism. The same diagram also exposes the Outside, which constantly reappears in algorithmic rationality as an alien interloper[13] scrambling the image of thought of the transcendental self-determining subject. As framed by diagrammatics, the stratified transduction of contingency within the planetary computational infrastructure precipitates new subjectivities as sites of resistance, hypersaturation, perplexity, and escape from the xenophobic universalist monocultural degradation of ecology (mental, social, and natural).



II


One entry point into the post-truth diagram runs through machinic enslavement,[14] where protocols of conduct and rules of thinking are determined by asignifying sign syntheses. Unlike in a system of signification, where “a signifier is that which represents a subject for another signifier,”[15] an asignifying sign captures individuations of both the sensible (affective, perceptive, organic) and the abstract (computational, differential), bypassing the constitution of subject, representation, or person. Operating in an asignifying semiotic regime allows capitalism to capture desire at a pre-representational and unconscious level. Machinic enslavement produces humans as “constituent pieces of a machine that they compose among themselves and with other things (animals, tools).”[16]

In the Apparatus of Capture plateau, Deleuze and Guattari elaborate on the various functions that machinic enslavement served in different power-semiotics-knowledge nexuses, but what is relevant here is that “with automation comes (…) a new kind of enslavement: at the same time the work regime changes, surplus value becomes machinic, and the framework expands to all of society… [M]odern power is not at all reducible to the classical alternative ‘repression or ideology’ but implies processes of normalization, modulation, modelling, and information that bear on language, perception, desire, movement, etc., and which proceed by way of microassemblages.”[17] It was only with cybernetics that the immanence of machinic enslavement became operationalized via computational machines: “there is nothing but the transformations and exchanges of information, some of which are mechanical, others human.”[18]

Deleuze returned to this idea in the 1990s in the Postscript on Societies of Control. There he outlined a vision of a social formation organized by a new model of machinic enslavement in response to accelerating computerisation and algorithmization of thought and the growing problem with controlling information vectors. As Yuk Hui observed, “the notion of societies of control described by Gilles Deleuze is far beyond the common discourse of a society of surveillance, it rather means societies whose governmentality is based on the auto-positing and auto-regulation of automatic systems. These systems vary in scale: it can be a global corporation like Google, a city like London, a nation state like China, and also the whole planet.”[19] Besides the surveillance of the population, control societies also produce norms of desire and knowledge by deploying computational machines. They exert control through a dominant image of thought, designing a pre-emptive epistemology based on the probabilistic capture of contingencies.

While the individuum was the abstract subjectivity matrix of disciplinary societies, vectorialist control societies produce the dividuum – a new relation between subjectivity and machinic enslavement, a continuous decomposition and reintegration of online user identities by algorithmic computation, and a site of implication of asignifying and postsignifying semiotic regimes. Instead of imposing procedures of body standardization that allowed the subject to enter institutions such as the school, the army, or marriage, control societies trace, stimulate, and modulate anatomical, hormonal, and aesthetic singularities in order to distribute them into locally optimized identity communities, meme-niches, cores, and lores.




Control is exercised over the entire stochastic pattern space by mapping, vectorizing, and keeping libidinal flows within the circuits of digital platforms, not by moulding bodies into predetermined parametric standards. Being extremely online, as a mode of subjective individuation, becomes a differential machine generating new user identities and expressions, which are then converted by vectorialist machinic enslavement into data patterns and user profiles, microtargeted ads, and macrotrend statistical analyses. Social media platforms turn into a search engine of technocapital auto-design. Although the subjective consumption of information is a necessary phase in modulating data space-times, it is not the end product of the production-consumption chain but a component in a system of relays, synthesizers, and transformers, accelerating or inhibiting information flows. Between both orders – machinic and subjective – there are constant leaps of encoding, contaminations of decoded flows, disruptions in communication.

If vectorialist machinic enslavement spreads through the planetary computation infrastructure – currently confined to the deep learning paradigm – then all platform users, whether manual workers, precariat, PMC, or influencers, have become agents who pattern identity probability spaces. As bots and algorithms crawl online, users generate a training field for machine intelligence, trend recognition, meme distribution, and data correlation profiling. The ability to become a predictive computational machine – whether human or inhuman, reflective, unconscious, or automated – is an epistemological condition for adaptation to the semio-technological matrix of post-truth. Like a system in reinforcement learning, the user as a dividuum performs predictive patterning within the asignifying semiotic regime, updating hypotheses about the data and propagating its pattern of conduct onto other agents.
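The reinforcement-learning analogy can be rendered as a toy sketch: an agent, whether bot or dividuum, keeps running estimates of which pattern of conduct is rewarded with engagement and revises those hypotheses after every interaction. Everything in the snippet – the pattern names, the reward probabilities, the epsilon value – is hypothetical and chosen only for illustration.

```python
# Toy sketch of the reinforcement-learning analogy: an agent updates its
# hypotheses about which pattern of conduct gets rewarded (attention, clicks).
# Illustrative only; names and reward values are hypothetical.
import random

patterns = ["irony-posting", "outrage-bait", "wholesome-core"]
estimates = {p: 0.0 for p in patterns}   # current hypotheses about reward
counts = {p: 0 for p in patterns}
true_reward = {"irony-posting": 0.4, "outrage-bait": 0.7, "wholesome-core": 0.2}

def choose(epsilon=0.1):
    """Mostly exploit the best current hypothesis, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(patterns)
    return max(estimates, key=estimates.get)

for step in range(5000):
    p = choose()
    reward = 1.0 if random.random() < true_reward[p] else 0.0
    counts[p] += 1
    # incremental mean update: the hypothesis drifts toward observed engagement
    estimates[p] += (reward - estimates[p]) / counts[p]

print(estimates)  # conduct converges on whatever the platform rewards
```

The design point is simply that the agent’s conduct converges on whatever the reward signal selects, regardless of the content’s relation to truth.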

The dividuum stems from a distinct model of knowledge production. The discipline of the individual functioned as a homeostatic mechanism for the capitalist system. Bourgeois mythologies, modern epistemic categories, unconscious formations, and the experiential model of the other (as the same but uncivilized) formed a negative feedback loop with the flows decoded by technological escalation, suppressing any runaway desire. In the dividuum, these modern categories are de-absolutized by constant information noise that transforms their meaning and pushes them into unknown configurations (for example, the pharmacopornography of the manosphere mutating the category of the male). The dividuum has to be robust to contingency and fallibility, experimenting with practical knowledge of identity (dis)integration provoked by xeno-patternings in automated productive imagination.

On the one hand, vectorialism strives for the most extensive control of information vectors by predicting the behaviour of dynamic and complex systems, involving algorithmic data processing, post-truth semiotics, and global financial markets (hence the importance of algorithmic surveillance, parameterization, and personalization). On the other hand, these vectorialist societies of control are immanently exposed to uncontrolled and generative xeno-patterns unlocked by algorithmic thought.

Of course, as theorists such as Bernard Stiegler have noticed, it is crucial to relentlessly investigate the destructiveness of the apparatus of attention capture in control societies, which seeks to undermine the ability to distinguish between categories and thus to proletarianize users at an even deeper, neuropharmacological or phenotechnological level. Since datafication, personalization, and addictive techniques maintain platform users in a state of precarity 24/7, the dividuum, according to Stiegler, is deprived of the individual’s power of intellect (Verstand) by technologies of automated reasoning, categorizing, and judging that occur “beyond any intuition in the Kantian sense, i.e. beyond any experience.”[20] But by assuming a notion of reason based on a historical social formation and an image of thought with a determined relationship between the faculties, such critiques remain on moral high ground without actually engaging in social practices. The dividuum and the post-truth diagram represent a global crisis of thinking only if the relation between thinking, automation, and computation is decided in advance.

Automated rationality does not remove reflection and intellect but instead reprograms their functions and creates new search strategies for potential collective thinking – together with, rather than against, automated computational machines. Only by testing the pattern modulation passing through the post-truth diagram, drawn by the Outside and “looking for new weapons,” can the dividuum enter lines of future subjectivities that are inaccessible in the current patriarchal, racist, anthropocentric, and exploitative paradigm.



III


The second vector of entry into the post-truth diagram involves incorporeal transformations. In Capitalism and Schizophrenia, Deleuze and Guattari discussed incorporeal transformations as mot d’ordre – order-words, commands, slogans, or calls – which create new collective assemblages of enunciation. The idea of the dividuum from the Postscript compelled Deleuze to introduce another form of normative expression, relevant to control societies: mot de passe – codes, passwords, catchwords. Codes[21] and passwords define the subjectivity protocols on digital platforms and search engines, both in terms of discipline and surveillance (as metadata) and in terms of stimulation, modulation, and the compulsion to explore identity, set out into unfamiliar environments, and learn the rules of asignifying machine semiotics. Passwords constitute the overarching form of expression, as they define access to almost all networked locations, both physical and digital, such as accounts, servers, databases, libraries, travel routes, and public services.

“The transformation applies to bodies but is itself incorporeal, internal to enunciation. There are variables of expression that establish a relation between language and the outside, but precisely because they are immanent to language.”[22] The corporeal assemblage defines the concrete circumstances that make an incorporeal enunciation – an order-word or a password – capable of cutting through the assemblage’s continuous variation and creating different sign series. Bodies are dispatched beyond the horizon of signification, following a diagram drawn by the Outside.

A bifurcation generates a new semiotic regime with a distinct set of rules, categories, and codes. These incorporeal transformations can occur in the form of speech acts, such as “I love you,” “Baggins, Shire!,” “Make America Great Again,” or the announcements of lockdown during a pandemic, but they can also be gestures, like Trinity’s kiss that resurrects Neo in The Matrix. Incorporeal acts are irreversible but can reverse everything – once articulated, nothing can be the same again.

If the propagation of new technologies creates the conditions for a transition into a new social formation, then incorporeal acts can inject a potentiality that pulls corporeal and machinic assemblages into unknown diagrams. As in Deleuze and Guattari’s example, the Soviet slogans (“Workers of the world, unite”, “All power to the Soviets”) transformed the masses of workers, inventing and forcing the proletariat into operation before the conditions of its existence as a body were in place in Russia. From this point of view, Trump is the portentous Lenin of the 21st century, cutting open the American unconscious with the libidinal compulsions of the times to come. Trump thus put his electorate into a form of expression that unearthed decades of sedimented anger, ressentiment, and exhilaration, sanctioning them despite the reproof of liberal experts and moralists.

It is no coincidence that incorporeal transformations are always given specific dates and proper names. They emit signs that end one semiotic regime, social formation, or moment in an individual biography and begin the next, which has its own internal consistency. All three dates that mark the event horizon of the post-truth diagram can be associated with Trump: on 16 June 2015, Trump formally launched his presidential campaign with a speech at Trump Tower, after which the MAGA slogan echoed around the media; on 17 March 2018, the Guardian and the New York Times published reports on the involvement of Cambridge Analytica in Trump’s campaign; and on 6 January 2021, Trump supporters marched on the Capitol in order to prevent the certification of the election results.

These dates point to three syntheses of the post-truth diagram that had been virtually prepared by social media platforms (as a technological-corporeal assemblage). “Make America Great Again” signalled a new form of the synthesis of semiotic production (affective modulation of epistemic commitments instead of true/false proof logic), the Cambridge Analytica scandal revealed the transformation of the synthesis of distribution (algorithmic personalization of content; filter bubbles), and the storming of the Capitol was a consequence of the synthesis of post-truth consumption (masses taking to the streets for a cause that is filtered as common knowledge within their community circuit).

But what really made Trump the figure of post-truth was his deranged finesse on Twitter. By disregarding the recognized conventions of political discourse, Trump – not as a person but as a dividuum in an algorithmic computation milieu – produced messages that did not so much promote an intentional political programme as act as incorporeal transformations, plugging the machinic assemblages within his reach into a new imagination space. By letting go of the rational statesman mentality and instead imitating the automated decision processes of platform architectures, Trump “has gone meta-digital.” He replaced the binary logic of either/or with a generative approach to data, becoming a memetic war machine that propagates the post-truth diagram.

Memes are basically strings of digits, codes, or ciphers. Kek, e-girl, #CaveTwitter, #bratsummer are equally passwords and catchwords that allow flows of desire to go viral, taking captivated users from one information bubble to the next, like Ariadne’s memetic threads in a labyrinth of online pseudo-identities, dark caverns of anonymity, and hypertrophic melting pots of conspiracies. Order-words are still in operation in post-truth semiotics but are now subsumed into the digital password format – Make America Great Again is a command formatting individuals but also a catchword modulating dividuals, with effects far beyond the intentions of Trump’s campaign staff. Whether as memes or as codes, passwords transduce the xeno-patterning of automated thought across the layers of the post-truth diagram: computational infrastructures, algorithmic architectures, databases, user interfaces, dividual matrices, cognitive faculties, personal histories, biological automatisms, and the machinic unconscious.

On the one hand, passwords are positively normative, because another niche, with its knowledge and community practices (identity patterns, partial-object computation), opens up to the user who has encountered a memetic catchword. On the other, they stretch the semiotic constraints related to the location and logic of concrete information bubbles. Therefore, passwords, as memes, virals, or call-outs, are the expressive variables of a collective assemblage of enunciation, such as an information filter space, responsible for the distribution of probabilistic weights for the meaning attributed to specific channels, commentators, influencers, and trolls in a given community.

At the same time, due to their relation to the outside of current rules, norms, and communication links, these passwords can trigger the virtual excess contained by the semiotic system in which they are enunciated. They may explode into new series of encodings or initiate the design process for collectivities whose arrival from the abstract machine was previously inhibited or suppressed. Take again the example of Trump’s Make America Great Again in 2015: it was an excess against both his own expectations and the general discourse of the time, which waited in a state of saturation and dormancy for the incantation of a phase change. The election speech, delivered in the mode of indirect speech, inaugurated the distribution of direct speech signs and the attribution of subjectification procedures in a new collective assemblage of enunciation. “I is an order-word.” The speech was at once a whisper in the unconscious, a molecular murmur, and a flow of xeno-logues, secret idioms, through which the post-truth diagram passed into new subjectivities.



IV


As Lazzarato notes, René Girard argues that, in the last instance, mimetic practices aim not to imitate other subjects but to “emulate desire.”[23] Emulating the subject, rather than the impersonal desire itself, presupposes the subject before mimetic desire. Mimetic desire would then consist only of representations of behavioural patterns pre-conceived by a subject desiring to reproduce its identity. However, Girard insists that the subject is a product of mimetic desire, coming into being with the desire of desire, which has been triggered by a particular ritual or sacrificial crisis.[24] Mimetic desire moves beyond the order of representation, simultaneously producing identity and the impersonal drive.

Algorithms follow and mimic the traces of user decisions and ideas left in data. Users imitate the logic of algorithms (“the algorithm” of TikTok or X) to modulate their interfaces, the patterning of identity, the forms of subjective experience. In the diagram of post-truth, the Outside designs itself through mimetic conversions between machines and subjects. Affects are neither algorithmic nor human; they cut through different regimes of information, reaching the depths of the dividuum, where alien imagination becomes synthetic passions, ersatz aesthetics, and plastic ideas.

Emulating automated decision-making in postsignifying semiotics reterritorializes affect and binds users to the machinic enslavement of control societies, yet the anti-production and incomputables, cuts, and syntheses of algorithmic xeno-patterns can draw lines of flight along automated mimesis on social media platforms. Indignation at fake news, fascination with the hypes sweeping through filter bubbles, and adaptation to recommendation algorithms can feel very ordinary, like the same cycles of human history merely reenacted in the latest medium to subsume collective imagination, but above these passions and ideas stretches a constellation of unearthly voices, strings of ciphers, and incorporeal enunciations that attract subjectivities into alien teleoplexy. The postsignifying regime is defined “by relations with the Outside [as the alien unconscious teleonomy].”[25]

The post-truth diagram, or the diagrammatization of post-truth semiotics, explicates how and why our subjective experiences as social media users are not what they seem. The initial three axioms imply that any concrete subjectivities are products of the Outside’s auto-construction process, materializing through the spatiotemporally interconnected, stratified, and metastabilizing post-truth semiotics.

This can be understood equally as an intervention in the idea of artificial intelligence – an effect of socially and semiotically distributed inhuman thinking – and as a cybergothic revision of the classical philosophical model, which asserts that subjective experiences are what and how they appear. When we try to explain away social media phenomena as mere renderings of the same old human/capitalist/biopolitical dispositions, we miss the eerie ripples of artificial intelligence already travelling across power-semiotics-knowledge nexuses. There is an alien interloper mimicking your every thought. The alignment programme with the Outside is underway; you just don’t feel it yet.




REFERENCES


1 N. Land, Circuitries, op. cit., p. 292.

2 R. Negarestani, The Labor of the Inhuman, Part I: Human, e-flux (52), 2014.

3 M. Fisher, The Weird and the Eerie, Repeater Books, 2016, p. 8.

4 L. Parisi, Reprogramming Decisionism, e-flux (85), 2017.

5 L. Parisi, The Alien Subject of AI, p. 28.

6 L. Parisi, Reprogramming Decisionism.

7 L. Parisi & E. Dixon-Román, Recursive Colonialism and Cosmo-Computation, Social Text Journal, 2020.

8 L. Parisi, Xeno-patterning. Predictive intuition and automated imagination, Angelaki (24/1), 2019, p. 83.

9 L. Parisi, Reprogramming Decisionism.

10 N. Land, Circuitries, op. cit., p. 293.

11 Metaphor offered by Maya B. Kronic in an interview for Machinic Unconscious Happy Hour, https://soundcloud.com/socialdiscipline/sd42-w-amy-ireland-maya-b-kronic-cute-accelerationism

12 The idea of alien subject of sex was developed by Sadie Plant; of race by Denise Ferreira da Silva.

13 A. Ireland, Noise: An Ontology of the Avant-garde, in Aesthetics After Finitude; D. Roden, Promethean and Posthuman Freedom: Brassier on Improvisation and Time.

14 In French, l’asservissement machinique – it is worth noting that besides “enslavement”, asservissement has another meaning, most certainly deliberately mobilised by Deleuze and Guattari and also relevant here, which links l’asservissement machinique to automated control in cybernetic systems.

15 J. Lacan, The Seminar of Jacques Lacan. Book XI. The Four Fundamental Concepts of Psychoanalysis, W. W. Norton & Company, 1998, p. 207.

16 G. Deleuze, F. Guattari, A Thousand Plateaus, Bloomsbury, 2014, p. 531.

17 Ibid., p. 533.

18 Ibid.

19 Y. Hui, Machine & ecology, Angelaki (25:4), 2020, p. 61.

20 B. Stiegler, Nanjing Lectures 2016‒2019, Open Humanities Press, 2020, p. 17.

21 Un chiffre, not un code, which appeared in Capitalism and Schizophrenia in such expressions as capture de code or plus-value de code.

22 G. Deleuze, F. Guattari, A Thousand Plateaus, op. cit., p. 95.

23 M. Lazzarato, Exiting Language, p. 514.

24 R. Girard, Violence and the Sacred, Continuum, 2005.

25 G. Deleuze, F. Guattari, A Thousand Plateaus, op. cit., p. 139.