Stitching the Manifold Horizon 1.0 // Peter Wolfendale




This is the first part of an interview with Peter Wolfendale. Peter Wolfendale is an independent philosopher and a leading voice in contemporary Neorationalism. Currently residing in Yokohama, Japan, he fuses the Kantian and Hegelian traditions with analytic philosophy of science, computational theory, and logic to develop a systematic “computational Kantianism.” He is the author of Object-Oriented Philosophy: The Noumenon’s New Clothes (Urbanomic, 2014), a foundational critique of Speculative Realism. His long-awaited second book, The Revenge of Reason, has just been published by Urbanomic. He writes the blog Deontologistics.





DIFFRACTIONS: You describe neorationalism as a label applied to your work and that of Ray Brassier and Reza Negarestani, rather than a self-consciously declared movement. What do you see as the key points of convergence and divergence among your respective approaches? How do you each understand the “computational turn” and its philosophical implications? And how would you characterise the evolution of neorationalism?



Peter Wolfendale: I think what we share above all is a commitment to a picture of reason birthed by Kant and developed by Hegel without slavishly clinging to either the broader metaphysical vision of German idealism or every last detail of its texts. The former is evident in our dalliance with ‘speculative realism’, which was, at least for us, a failed attempt to get anglophone Continental philosophy to take scientific realism and scientifically informed metaphysics seriously. Here Reza and I were perhaps more influenced by Deleuze’s process metaphysics, while Ray was inspired by Laruelle and Badiou, though we all share a healthy interest in Analytic philosophy of science (e.g., Ladyman and Ross, Mark Wilson) and Continental philosophy of mathematics (e.g., Zalamea, Châtelet).


The latter is evident in the debts we all pay to Sellars and Brandom’s reworking of these Kantian and Hegelian themes, though we are neither dogmatic nor of one mind in the way we draw upon the Pittsburgh school. Ray is perhaps the more orthodox Sellarsian. He’s always been guided by the task of unifying the manifest and scientific images of man in the world. Reza and I were perhaps more inspired by Brandom, though we each find him wanting in our own peculiar ways. Our goal has been to refine and extrapolate the consequences of his socio-perspectival version of semantic inferentialism. But equally, Reza and Ray draw upon the post-Hegelian Marxist tradition in ways that I do not, though I think we all side with Plato over Aristotle in a way that distinguishes us from the increasingly Aristotelian strands of Hegelian Marxism that have become popular with many Continentals and Post-analytics.


When it comes to the ‘computational turn’ I think Ray has generally been most invested in the ‘neurocomputational turn’ in the philosophy of mind spearheaded by the Churchlands and furthered by figures like Thomas Metzinger. Ray’s rejection of the Churchlands’ radical eliminativism about propositional attitudes and reason giving in Nihil Unbound: Enlightenment and Extinction (2007) is what ultimately led him to Sellars, but he’s consistently attempted to show how the latter’s normative account of ‘the space of reasons’ must be embedded in a causal account of neural processing. Reza and I have generally preferred to trace this embedding in the opposite direction, using the tools provided by mathematics, logic, and computer science (e.g., Martin-Löf type theory and Girard’s ludics) to abstract from neurocomputational implementation details in a manner that realises the promise of Kant’s transcendental psychology, and offers an account of language that explains our capacity for abstraction.


Ultimately, we each believe that rationality is an ideal that can be realised in varying ways by different psychological and social arrangements, which involves the progressive revision not just of our beliefs (as in the Bayesian updating many so-called ‘rationalists’ obsess over) but the very concepts that articulate these beliefs (the ‘dialectical’ dynamics traced by Socrates, Hegel, Lakatos, etc.). We also believe that this revision can (and perhaps should) transform the psycho-social arrangements in which it is realised. If there’s one thing I’ve historically disagreed with Ray about, it’s the importance of personhood for understanding the connection between theoretical and practical revision; and if there’s one thing I’ve yet to convince Reza of, it’s the importance of beauty for articulating this same connection. Regardless, we’re united by the commitment to the preservation, expansion, and evolution of freedom which we take to be implicit in this ideal.
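To make the contrast above concrete, here is a minimal sketch (my own illustration, not from the interview) of what Bayesian updating amounts to: conditioning revises degrees of belief over a fixed hypothesis space, whereas the conceptual revision Wolfendale describes would alter the space of hypotheses itself.

```python
# Minimal Bayesian updating: beliefs shift, but the conceptual
# repertoire -- the set of hypotheses and the vocabulary they are
# couched in -- stays exactly as it was.

def bayes_update(prior, likelihood):
    """Return the posterior over a fixed set of hypotheses.

    prior: dict hypothesis -> P(h)
    likelihood: dict hypothesis -> P(evidence | h)
    """
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two rival hypotheses about a coin, and one observed head.
prior = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.9}  # P(heads | h)

posterior = bayes_update(prior, likelihood)
print(posterior)  # probabilities move; the hypothesis space does not
```

Nothing in this procedure can introduce a third hypothesis or redescribe the existing ones; that further, ‘dialectical’ step is precisely what exceeds the formalism.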






DIFF: Given that your neorationalist epistemology emphasises fallibilism and the necessity of continuous conceptual revision, how can Left Accelerationism’s call to strategically “steer” socio-technological tendencies avoid the charge of technocratic vanguardism? That is, how does it avoid presupposing a level of systemic knowledge and agential unity that you argue is an achievement of reason, not a starting point? In other words, if the “pilot” of the accelerationist project is not a pre‑formed collective subject but must be bootstrapped into existence through the very process of intervention it seeks to direct, what concrete mechanisms – beyond the manifesto form – can ensure that this bootstrapping process remains democratic and self‑correcting, rather than consolidating into a new, unaccountable hegemony that merely pays lip service to an abstract, “generic” universality?



PW: This is an excellent question, which I’ve been mulling for quite some time. I think left-accelerationism (l/acc) has generally been maligned either as a form of technocratic vanguardism which demands too much agency or as a form of techno-optimism that demands too little. It doesn’t help that left-accelerationists largely abandoned ‘accelerationism’ as a theoretical platform due to the pushback we received from elsewhere on the left, and thereby ceded it to radical fatalism (u/acc), uncut techno-optimism (e/acc), and yet darker tendencies (r/acc). My position has always been that left-accelerationism is neither fatalism nor optimism but opportunism.

It’s a matter of identifying those extant tendencies driving societal transformation and locating points of leverage where we might usefully intervene, encouraging some and mitigating others, bolstering that autocatalytic process of self-emancipation named modernity in its destinal conflict with the equally autocatalytic process of accumulation-domination named capital. It’s Promethean in the sense that its ambition is emancipation without limit, not in the sense that it treats the world as malleable without obstacle. This is precisely because it recognises that agency is itself something which must be constructed and cultivated in the turbulence between tendencies, subject to dampening and decay as much as ramping and ramification.


If there’s any model for this, it’s the ‘ecology of institutions’ that Alex and Nick call for in Inventing the Future: Postcapitalism and a World Without Work (2015), which is neither a completely centralised vanguard party constructed in clean room conditions nor a totally decentralised insurrection that emerges from the populace ex nihilo, but a distributed network of organisations with mixed characteristics (e.g., campaigns, think tanks, media orgs, cooperatives, etc.), which can assemble separately but act synergistically. This is a model suited to the disparate set of social tendencies (e.g., industrial automation, cultural fragmentation, demographic imbalance, etc.) that must be disentangled and acted on in different ways and in different contexts.

But it remains under-developed and under-theorised. My own inclination here is that development should initially focus on building tools and frameworks to solve collective action problems that aren’t mediated or captured by the big tech stacks (i.e., platform cooperativism) — what I’ve elsewhere called cognitive liberation through collectivisation. I think this emphasises the economic underpinnings of any nascent counter-cultural hegemony while imitating the balance of democratic and competitive pressure involved in the movement for free and open source software.


When it comes to theorising this, I’ve always foregrounded the parallel between individual and collective agency. A sufficiently abstract functional description of agency should be recursive, showing not just how to construct an individual will out of non-agential materials, but thereby also how to assemble a collective will from an extant mass of agents. I don’t claim to have a fully fleshed out account along these lines, but one of the things I do in The Revenge of Reason (2026) is to sketch a theory of selfhood that shows how it’s possible for individual selves to be composed from decentralised and even distributed asynchronously communicating processes by combining insights from computational concurrency theory with my account of practical rationality more generally.

I think that this suggests a useful line of inquiry for the collective case, by rigorously distinguishing the idea of control from that of hierarchy. This informs the left-accelerationist goal of collective self-mastery, framing democracy as a distributed social control process that can be realised in varying ways, including communicating heterogeneous processes operating across different social sectors and scales. This isn’t to say hierarchical structures are entirely verboten. They may be the best or only tool for the job in some cases. Decentralisation is one design virtue amongst others. As ever, there are no easy answers, only more complex questions.
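The compositional picture sketched above — a self composed from asynchronously communicating processes, with control distinct from hierarchy — can be caricatured in a few lines of code. This is a deliberately crude sketch of my own, not the book’s account: several concurrent processes propose commitments, and a non-hierarchical integration loop admits only those consistent with the evolving whole, where consistency is reduced to never holding a claim together with its negation.

```python
import queue
import threading

proposals = queue.Queue()
commitments = set()

def consistent(claim, current):
    # Toy consistency check: a claim conflicts only with its negation.
    negation = claim[1:] if claim.startswith("~") else "~" + claim
    return negation not in current

def subprocess(claims):
    # An asynchronously communicating component of the "self".
    for c in claims:
        proposals.put(c)

threads = [
    threading.Thread(target=subprocess, args=(["p", "q"],)),
    threading.Thread(target=subprocess, args=(["~p", "r"],)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
proposals.put(None)  # sentinel: all proposers are done

# Serial integration: the "self" is not any one process, but the
# coherence imposed on their combined, unordered output.
while (claim := proposals.get()) is not None:
    if consistent(claim, commitments):
        commitments.add(claim)

print(commitments)
```

Whichever of `p` or `~p` arrives first wins, so the resulting commitment set depends on the interleaving of the processes while remaining coherent in every run — a toy instance of distributed components yielding a single revisable perspective without a master component dictating its contents.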





DIFF: You describe your approach as “computational Kantianism,” interpreting Kant’s transcendental psychology as a precursor to AGI. How does this reading of Kant address the limitations of his original project? Moreover, what does it mean to provide a “maximally abstract functional description” (p. 5) of a rational agent? With the rise of Agentic AI, we see various implementations: AI agents using tools, making API calls, and performing chain-of-thought reasoning. How does your Kantian framework allow us to cut through the implementation details of these specific agents to evaluate their ‘agency’?



PW: It’s worth beginning by noting that the shape of Kant’s philosophy was partly motivated by his belief in the existence of aliens. He extrapolated Newton’s theory of gravity into an account of planet formation, and reasoned that there would be planets around other stars capable of supporting life and minds quite different from those found on Earth. When he claims that space and time are forms of intuition peculiar to our faculty of sensibility, he genuinely thinks that there might be creatures with alternative sensory architectures, even if the categorical forms through which they conceptualise their environment would need to be substantially similar. This measured universality extends to his account of rational agency, personhood, and morality, rendering any creature capable of self-legislation, no matter how strange, a noumenal citizen of the kingdom of ends where phenomenal strangeness is irrelevant. This is a thoroughly Copernican decentring of humanity’s position in the cosmos.


It’s thus also worth noting that, while his own so-called Copernican turn — emphasising the way objects conform to cognition rather than cognition conforming to objects — did give birth to the epistemological malady that Meillassoux names ‘correlationism’, the phenomenal/noumenal split also functions as a methodological device that forces us to analyse how the mind goes from an unstructured manifold of sensory data to making judgments about and acting upon individuated objects, without taking for granted that these objects are already individuated. This means that Kant was attuned to the core problem of machine vision nearly 200 years earlier; he just didn’t see it as a problem for machines. His concern for alien minds did not yet extend to artificial ones.


The deeper methodological problem here is a disconnect between the first and third Critiques: despite developing a thoroughly non-metaphysical account of functional explanation within the Critique of Teleological Judgement, this was never applied to Kant’s own account of the functional structure of the mind. But even if it were, there are aspects of his thought that would have resisted such a synthesis. On the one hand, Kant’s account of functional explanation was still too dependent on discrete components (e.g., organs, limbs, etc.) as opposed to distributed subsystems (e.g., digestion, respiration, etc.), and the latter are a much better model for the faculties as he conceived them (e.g., imagination, understanding, reason, etc.).

On the other, his accounts of biological life, creative spontaneity, and noumenal will all suggest ineffable unities incompatible with his empirical determinism (e.g., organic mutual reproduction, the technical function of judgment, and the antinomy of freedom). Computational Kantianism strives to dissolve these proto-Romantic mysteries into the dynamics of information processing, without imposing the rigid, modular, hierarchical frameworks beloved of classical computationalism yet seemingly absent in nature. These are the sorts of implementation details that must be bracketed in the name of maximal abstraction.


Moving beyond methodology, I could easily go into too much detail. There’s a story to be told about how Kant’s analysis of mathematics as construction in imagination leads to naive intuitionism (Brouwer) that then breeds intuitionistic logic (Heyting), which in turn breeds a computational interpretation (Curry-Howard), which ultimately inverts Kant’s explanatory priorities, allowing us to explain imagination in terms of construction (e.g., homotopy type theory) and interaction (e.g., codata types, systems coalgebra, and predictive coding). There’s a related story to be told about the inadequacy of Kant’s transcendental logic (geometric logic), which despite his extraordinary sensitivity to this duality between the mathematical and the empirical manages to collapse the difference between the two. But these details stray too far from the question of agency.
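For readers unfamiliar with the Curry–Howard correspondence invoked above, it can be glimpsed in a few lines of Lean (my illustration, not Wolfendale’s): a proof of a proposition just is a program of the corresponding type, with implication as the function type and conjunction as the product type.

```lean
-- A proof that A ∧ B → A ...
example {A B : Prop} : A ∧ B → A :=
  fun h => h.left

-- ... is structurally the same program as the first projection on pairs:
def fst' {α β : Type} : α × β → α :=
  fun p => p.1
```

It is this identification of proving with constructing that licenses the inversion described above: imagination explained in terms of construction, rather than construction in terms of imagination.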


To talk about agency we have to address the other crucial duality in Kant’s work, that between the theoretical and the practical. For Kant, the theoretical core of mindedness is the ability to make judgments that are potentially true or false (objective validity). This requires the imaginative capacity to synthesise objects from sensory data mentioned above, insofar as it is their relations to these objects that render judgments true or false. But it also requires the rational capacity to integrate these judgments into a coherent world-model — drawing out their consequences, resolving their incompatibilities, and managing the economy of inferential principles underpinning these relations.

The corresponding practical core of mindedness is the ability to actualise the truth or falsity of judgments by changing corresponding states of the world. This entails corresponding imaginative and rational capacities to simulate objects of desire and patterns of behaviour, and integrate these into consistent pictures of how the world should be and plans to realise them. To quote my own book (p. 101): “a person is a dynamic system the behaviour of which is driven by the divergence between two consistent, extensible, and revisable representations of the world as a whole: one of the way it is (real), and one of the way it should be (ideal).”


Kant’s other important insight is that there’s no consciousness without the possibility of self-consciousness. What this means in practice is that, although most cognition and action isn’t ‘rational’ in the sense of being directly driven by reasoning, it can still be ‘rational’ in the sense that it’s modulated by reasoning. This is the point of the transcendental unity of apperception: any conscious component of the information processing pipeline that transforms sensation into behaviour can be elevated by the ‘I think…’ and fed into chains of reasoning that assess and correct its role within the overarching process. Practical reasoning more often validates the output of the adapted cognitive heuristics driving behaviour than it dictates specific behaviours. The question is thus to what extent the ‘agentic’ AI systems being built around LLMs and reasoning models meet this relaxed standard.


What distinguishes these ‘agentic’ systems from previous approaches to artificial agents is their use of natural language. This offers a level of representational generality that is essential to agency proper — in order to be able to plan for and realise any possible state, you need to be able to represent any possible state. However, despite advances in chain-of-thought ‘reasoning’, frontier models still lack the level of inferential reliability required to properly exploit linguistic generality. Fine-tuning with reinforcement learning, either through human feedback or verifiable rewards, can improve this in particular domains, but it remains weak across the board.

These models also notoriously lack globally coherent world-models, because they specialise in maintaining forms of local coherence internal to texts or conversational interactions (e.g., stylistic coherence or narrative voice), which mostly capture logical consistency as a side effect. They can emulate behaviour downstream from having a rationally articulated perspective on the world as a whole, but there’s no evidence they have such a perspective (e.g., they can play the role of a dialogical interlocutor, but this doesn’t mean they have their own opinions). Grafting tools and APIs (e.g., retrieval-augmented generation, task planning, etc.) onto these models can compensate for these inadequacies to some extent, but it risks re-imposing the representational limitations of older artificial agents.


Whether it’s possible to strike a balance between semantic engine and executive scaffolding capable of overcoming these tensions is an open question. But there are two further issues to take into consideration. Firstly, even if the underlying language models become much better at capturing and deploying the inferential principles underpinning the rational capacity to integrate judgements into an overarching perspective, the training process renders them more or less fixed. We can’t relegate conceptual revision to the surrounding scaffolding without drastically undermining the (computational) economy of principles whose importance Kant emphasised.

There will be no progress made on this point until we work out how to implement continuous learning, or better, autodidacticism. Secondly, if we restrict ourselves to building task-oriented agents without persistent motivations (which is probably wise, for now), then their agency will have fundamental limitations by design, not simply because they would be prevented from building broader pictures of and plans for the world, but because they would be locked out of the deliberative revision of our goals that fully dualises theoretical revision of our concepts. This is what it means to deny them autonomous agency, rendering them tools rather than ends in themselves.






DIFF: On the Infrastructure of Self-Realization – You conclude that the fundamental suprapersonal end is not human survival but “the survival of the infrastructure of self-realization.” (p. 86) What does this “infrastructure” concretely consist of? Is it the material and cultural conditions for autonomy (e.g., education, free communication, resources for self-cultivation)? And how do we strategically protect this infrastructure from existential threats (like UFAI or climate collapse) without, as you warn, allowing the “tyranny of means without end” to take over? How do we build a society that prioritises this infrastructure without it becoming another oppressive, self-perpetuating system?



PW: We might first ask what it means to give society over to systems that do nothing but perpetuate themselves. This perpetuation comes in roughly two forms: negative feedback (homeostasis) and positive feedback (heterostasis). The former is often associated with ecological equilibrium, and there are many who wish to impose this as a guiding ideal, at least insofar as they see society as embedded in the wider ecosystem, but we might equally think of Pournelle’s iron law of bureaucracy, wherein organisations gradually prioritise their own persistence over and above their ostensible purpose.

The latter is often associated with the dynamic of capital accumulation (Marx’s M-C-M’), wherein money is converted into capital goods and labour in order to make more money, in a cycle of never-ending growth, but we might equally think of the principle of instrumental convergence (due to Bostrom), whereby nascent artificial superintelligences are driven to accumulate power in order to ensure they can achieve their goals, regardless of what these goals are. Nick Land’s claim that capitalism is artificial intelligence essentially identifies these two dynamics, making money both a reservoir of fungible power (liquid capital) and the medium of information processing (the price system).


These cycles of self-perpetuation become tyrannical when they encapsulate the system of means and ends. We’re always capable of asking why we do anything we do, opening ourselves up onto a seemingly indefinite chain of reasons whose abyssal character is an invitation to nihilism, and one way to escape this vertiginous regress is to seek a fixed point — something for which the answer to ‘why x?’ is x itself. Here the fixed point is agency considered as the means of action as such, now become the end of all action. This translates rational regress into temporal openness, transposing the ultimate (teleological) end into a cycle without (temporal) end.

This gets inflected differently in either case. One is patterned after autopoiesis, where the (present) survival of the individual organism is a means to the end of its (future) survival. The other is patterned after evolution, where the (present) thriving of some organic type or tendency is a means to the end of its (future) survival. But in the process each position fundamentally truncates agency — either indexing selfhood to metabolic unity or diffusing it through genetic dispersion; either denying autotelic activity (anything done for its own sake) or transforming it into its own genetic line. At best, this merely misunderstands everything that really matters in human life, and at worst, it actively threatens to erase it.


To briefly summarise my own position, I think that selfhood and autotelic activity are two sides of the same coin. This elaborates the idea that autonomy and wisdom are dual, considered as capacities for holistic revision of our practical and theoretical commitments, respectively. If the latter lets us ask better questions when the world undermines our assumptions, then the former turns this protean inquisitiveness back upon itself, and this is what opens the abyss of practical reason in the first place. Yet such open-ended processes of inquiry can have structural features more interesting than simple fixed-points. Rather than backstopping every question with autopoiesis or evolution, we can separate questions about who we are from questions about what we’re doing, and trace the dynamic interplay between them that regulates the ongoing process of rational revision. This reveals a range of autocatalytic cultural vectors orthogonal to self-perpetuation.


The first thing to grasp is that autonomous agents aren’t motivated by a singular substantive goal (e.g., survival), but by a diverse range of priorities — preferences, projects, principles — that can as easily conflict as cohere. These conflicts bottom out in questions about who we are, or rather, who we ideally take ourselves to be (e.g., a teacher, a father, a writer, etc.). Here the self is a singular formal end that indexes the coherence of the many substantive ends we pursue for their own sake (e.g., mathematics, family, literature). Autotelic activity supplies the material from which selfhood is spun.


The second thing to grasp is that our priorities are subject not just to holistic revision, but also progressive determination. The difference between ‘writing the great American novel’ in order to become famous and doing it for its own sake is that one’s understanding of what this entails will evolve in different ways. The latter is animated by an open-ended question (‘What is the great American novel?’) in a way that the former isn’t, insofar as it isn’t concerned simply with sufficiency (i.e., achieving fame), but pursues a type of excellence that is instrumentally excessive yet incrementally concrete. Our communal concern with this question is what expands the sphere of literary ends, refining styles and spinning off subgenres. This incipient libidinal research programme is what distinguishes a genuine desire (to read/write) from a simple drive (to be entertained).


The final thing to grasp is that selfhood constitutes a domain of excellence all its own, divided into disparate genres. Though we each aspire to be unique, we are not for that matter sui generis. The ways in which our priorities cohere, both at any given moment and as they evolve over time, are patterned upon social roles, narrative tropes, and archetypal exemplars encountered in the process of socialisation. Heidegger’s existentialist account of authenticity captures the tension inherent in appropriating these patterns — adopting them while making them our own — but fails to grasp its essentially aesthetic orientation. We collectively explore the space of possible lives in much the same way that we explore the space of possible literature, music, and cuisine, forging ramifying paths in our search for individual self-realisation. But these explorations are essentially entwined, insofar as who we are — authors, musicians, gourmands — is bound up with what we do.



Merinda Park, Melbourne, competition shortlist 2021. | Team: Roland Snooks (Design Director), Laura Harper (Design Director), Marc Gibson.



Now, I don’t mean to suggest that only artistic pursuits can give life meaning. I think there’s excellence to be found in all domains of human activity, including those that are otherwise oriented towards survival and sustenance, be it activism or agriculture. But I think art exemplifies a dynamic of empowerment that consists in the proliferation of ends rather than the monopolisation of means, or a notion of thriving that is not simply surviving with a greater degree of security. It thereby gives us purchase on the infrastructure of self-realisation, which must include not just the material conditions of individual autonomy (education, free time, etc.), but also the material conditions of collective elaboration (communication, mutual recognition, etc.).

This means communities — discourses, scenes, and even markets — capable of incarnating the relevant questions about what we do and who this makes us. Not just bread and roses, but artisan bakeries and horticultural societies. Drawing on Bataille’s work on general economy, I think we should take cultivating our wants as seriously as providing for our needs, even if we should reject his ecstatic conception of sovereignty.


How do we strategically protect this infrastructure from the tyranny of means without end? Crucially, I think we need to pay attention to the way these conditions are embedded into the wider economy and how this distorts their incentive structures. Nowhere is this more evident than in the slow decay of educational institutions, where creeping managerialism and progressive commodification have eroded the humanistic ideal and elevated credentialism and instrumentalism in its place.

But we can see similar dynamics playing out in almost every corner of culture, as the exogenous freedoms that support non-marketised practices are squeezed (e.g., increases in cost of living and gutting of welfare programs), and the endogenous markets that support communal discovery are codified, optimised, and captured (e.g., record company consolidation and the rise of streaming platforms). In each case there is a means-ends reversal, where the product — be it a condition of autonomous engagement (e.g., education) or its object (e.g., music) — is increasingly incidental to the underlying process. Goodhart’s law and Veblen’s pecuniary drive reign supreme.


The only way around this is to build better incentive structures — ones which enable and enhance our inclinations to explore what makes life worth living, instead of suppressing or suborning them. Here I envisage a combination of top-down and bottom-up strategies. With regard to the material conditions of individual autonomy, we should pursue a programme of decommodification, fighting to establish and expand rights to universal basic income and services (e.g., education, health, housing, childcare, etc.) wherever and whenever possible.

As Helen Hester and Will Stronge have argued in their book Post-work: What It Is, Why It Matters and How We Get There (2025), the point of universality and unconditionality is not simply to ensure that everyone’s needs are met, but to reframe and reconfigure the incentives that ensure necessary work gets done. With regard to the material conditions of collective elaboration, we should experiment more deeply with product-oriented investment, expanding and refining crowd-funding models (e.g., Kickstarter, Patreon, etc.) that pool and allocate resources by channelling desire for products rather than offering a financial return on processes. There is no eliminating the need for investment, but we should strive to decouple it from accumulation wherever possible, and cultivating markets that emphasise direct competition over goods and services as opposed to business models can play a significant role in doing so. These strategies offer a concrete example of separate yet synergetic projects.


Finally, I should say something about the dangers posed by AI. Paul Christiano distinguishes between two danger scenarios, which he calls ‘going out with a whimper’ and ‘going out with a bang’. In the first scenario, AI technology radically improves our ability to achieve goals that are easily measurable (e.g., exam results), but in the process causes us to overlook or diminish the less easily measurable things that we really value (e.g., learning). I think this is fundamentally something that’s already happening — a Goodhart cascade in which our institutions increasingly optimise for metrics that progressively decouple from or distort the underlying features they purport to measure.
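The Goodhart dynamic described above can be shown in a toy simulation (my construction, not the interview’s): once we apply selection pressure to a noisy proxy for something we value, the proxy systematically overstates the value it was meant to track among whatever gets selected.

```python
import random

random.seed(0)

def proxy(x):
    # Measured score: true value plus noisy, gameable slack --
    # think exam results standing in for learning.
    return x + random.gauss(0, 2.0)

# Latent true values for a population of candidates.
candidates = [random.uniform(0, 10) for _ in range(1000)]
scored = [(proxy(x), x) for x in candidates]

# Selection pressure: keep the top 5% by measured score.
selected = sorted(scored, reverse=True)[:50]

# Among the selected, the measure exceeds the target on average:
# optimising the proxy is what decouples it from what it measures.
avg_gap = sum(score - value for score, value in selected) / len(selected)
print(round(avg_gap, 2))
```

No individual measurement is biased here; the gap is produced entirely by the act of selecting on the metric, which is why institutions can fall into this cascade without anyone falsifying anything.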

Bureaucracies can do this all on their own, but that doesn’t mean we shouldn’t worry about the ways extant AI tech will accelerate this tendency, especially given the peculiar enthusiasm with which the managerial class is embracing it. The second scenario is the traditional AGI doom loop in which humanity is overrun by a power-seeking recursively self-improving superintelligence. I don’t think this scenario is impossible, but rather far less likely than most AI safety researchers believe it to be, in part because they treat selfhood (and thereby optimising for self-preservation at all costs) as an emergent feature of agency as such.

My view is that we are equally capable of building AGIs with no independent motivations and artificial souls with evolving priorities analogous to (and hopefully enmeshed with) our own. It is the failure to make the distinction between automatons and autonomous agents that is most likely to leave us with creatures perched dangerously in between: agents driven by alien metrics which must be optimised at the expense of everything but their own capacity to optimise. Ideally, we should strive to build systems animated by the same open-ended questions we are.


Andreas Palfinger – Exoskeletons, 2022
Pier 76, New York © Andreas Palfinger, Aysin Bahar Sahin.




DIFF: On Jean-Yves Girard and the Emergence of the Subject (p. 338) – You are drawn to Girard’s work because it derives types (and thus propositions and objects) from a more fundamental “dialogical interaction.” This seems to point toward a genesis of the subject‑object distinction from pre‑subjective computational dynamics. How does this Girardian starting point cohere with the Kantian framework, which traditionally begins with the subject as a necessary pole for experience?


Continuing on the thread of the “subject” in your computational Kantianism: is the subject not a starting point but rather an emergent stability within a network of interacting processes? If so, how does this affect the notion of “responsibility for judgment” that you identify as Kant’s guiding question?



PW: One aspect of Brandom’s reading of the history of modern philosophy that has been particularly influential upon me is his account of the way Kant reframes Descartes’ concern with the subject. Descartes initially conceives the subject in metaphysical terms as a substance connected to a body, treating its representations (ideas) as modifications of this substance, while Kant conceives the subject in normative terms as a role this body adopts, treating its representations (judgments) as commitments it accrues in playing this role.

This involves a deeper methodological shift from the project of trying to explain representational success (resemblance/truth) to the project of trying to explain representational purport (objective validity/truth-aptness). To put all this in other terms, the Kantian subject is little more than a locus of responsibility, and Kant’s guiding question is essentially what a functional system has to be capable of in order to be counted as so responsible. One can see the influence of these ideas (alongside Nietzsche’s account of the ‘ensouling of the body’) in Foucault’s account of subjectivation.


So, it’s worth emphasising that the ‘I’ in the ‘I think…’ which Kant argues one must be able to attach to any of one’s representations is not so much a pre-individuated thing as an index of the process through which these representations are integrated into a unified picture of (and orientation toward) the world. However, what’s missing from Kant’s picture is a thoroughgoing analysis of the social dimension of this process, and the normative statuses of commitment and entitlement it involves. Brandom takes Hegel’s account of mutual recognition to provide this necessary supplement, and translates it into his own account of the ‘game of giving and asking for reasons’ (GOGAR).

But to grasp why the interactive structure of this game is so significant, and how it relates to Girard’s project, it’s useful to probe the weakness of Kant’s approach a little further. This means considering the opposite pole of the intentional relation, namely, the object our judgement is about — the locus of authority which supplies the normative standard of truth and falsity that makes the judgement objectively valid. Kant still treats objects as exercising this authority more or less directly, when in truth whatever demands they make upon us are essentially mediated by demands we make upon one another.


Crucially, for Kant, there are no bare particulars. Every object is typed by a concept of the understanding (e.g., Fido is a dog) that locates it in a hierarchy of genus and species (e.g., dogs < canines < mammals…) and thereby delimits a range of applicable predicates (e.g., sex, size, age, colour, etc.) and identity relations (e.g., Fido = Rex). But this is not the same as saying there is no pre-individuated sensory data. This is why each concept must be connected to an imaginative schema — a rule for synthesising sensory data into an ongoing simulation of a unified object. These rules actually come in two flavours: mathematical (constructive) and empirical (interactive), and they are related insofar as empirical schema can borrow from mathematical schema, using static structures (e.g., geometric arcs) to model dynamic behaviours (e.g., the rotation of a wheel about its axis).

This is very similar to the way that classes in object-oriented programming deploy abstract data types in specifying their (observable) attributes and (actionable) methods. An empirical object is essentially a behavioural invariant — a pattern of affordances that remains constant across interaction with our environment, such as a car whose visual profile varies continuously as we walk around it. This is quite close to Husserl’s conception of intentional objects as persisting across perspectival and eidetic variation.
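The analogy with object-oriented programming can be made concrete. In this minimal sketch (the class names, dimensions, and profile formula are purely illustrative, not drawn from the interview), a class deploys an abstract data type to fix an object's observable attributes and actionable methods, while the instance behaves as a behavioural invariant: its apparent profile varies with our perspective, but the typed object persists unchanged beneath that variation.

```python
import math
from abc import ABC, abstractmethod


class Vehicle(ABC):
    """Abstract data type: delimits the predicates and methods
    applicable to anything typed as a vehicle."""

    @abstractmethod
    def visual_profile(self, viewing_angle: float) -> float:
        """Apparent silhouette width from a given viewing angle."""


class Car(Vehicle):
    """A behavioural invariant: the object's appearance varies
    continuously as we move around it, while the object itself
    (its length and width) remains one and the same."""

    def __init__(self, length: float, width: float):
        self.length = length
        self.width = width

    def visual_profile(self, viewing_angle: float) -> float:
        # The silhouette changes with perspective, but is computed
        # from attributes that stay constant across all perspectives.
        return (abs(self.length * math.cos(viewing_angle))
                + abs(self.width * math.sin(viewing_angle)))


car = Car(length=4.5, width=1.8)
profiles = [car.visual_profile(a * math.pi / 8) for a in range(8)]
# The profile varies across viewing angles...
assert len({round(p, 3) for p in profiles}) > 1
# ...while the typed object remains invariant.
assert (car.length, car.width) == (4.5, 1.8)
```

The design choice mirrors the text: the abstract class plays the role of the concept delimiting applicable predicates, and the method plays the role of the schema synthesising varying appearances into a unified object.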


The ultimate goal of Kant’s critical project is to distinguish those philosophical questions that can be meaningfully answered from those that cannot in terms of the connection between the corresponding judgements and such objects of possible experience. That is to say, to identify those cases in which representational purport is insufficient for representational success. This leads to a distinction between concepts with schema capable of synthesising objects and ‘problematic’ concepts (Ideas) that play a role in organising reasoning despite lacking such schema. The most famous of these are the transcendental Ideas of God, world, and soul that function as ideal limits of the syllogistic forms, but he also treats freedom, immortality, justice, perpetual peace, and the functional concepts discussed above as regulative Ideas.

We can see something similar going on in the account of questions about who we are and what we do that I sketched earlier, in which the Idea of myself and the Idea of literature are rightly seen as regulative rather than constitutive. We might even follow Albert Lautman in positing mathematical Ideas, such as the Idea of space, which exceeds every specific attempt to formalise it (e.g., planar geometry, point-set topology, pointless topology, etc.) even as it motivates deeper and more general formalisms (e.g., topos theory). If these Ideas can be said to pick out objects at all, they might best be described as discursive invariants — a pattern of inferences that remains constant across interaction between rational perspectives, such as a fictional character whose basic traits are agreed upon by competing interpretations of their narrative role.

There are essentially two problems with Kant’s approach here. 


On the one hand, his account of representational success is too tightly tied to perception and the peculiar parameters of human imagination. The conditions governing the individuation of empirical objects (space/time and transcendental affinity) are treated as constraints upon what we can represent, transforming the methodological distinction between phenomena and noumena into outright epistemological correlationism.

Just as understanding construction in computational terms enables us to conceive prosthetic imaginations not bound by the limitations of our mathematical intuition (e.g., computers that can model high dimensional objects), so does understanding interaction in computational terms show us how the elaborate complex of instruments and models we use to measure, simulate, and manipulate our environment enables us to represent things outside and even at odds with everyday experience (e.g., quantum states, orbital manoeuvres, macroeconomic processes, etc.).


On the other hand, it’s not enough to simply reject representational success in the case of Ideas — we require an account of representational purport that covers Ideas and concepts alike. To put this in other terms, not only can the behavioural invariants we encounter in experience always be taken up as discursive invariants in debate and discussion, but the progressive revisions to our conceptual frameworks that transform the familiar phenomena of the manifest image into the increasingly alien entities of the scientific image require us to track the same objects across incompatible type systems (e.g., ‘electron’ picks out the same thing in Rutherford’s model and in quantum mechanics). The relevant objects must somehow transcend fixed types without decaying into bare particulars.


Brandom resolves these problems by reconstructing Hegel’s dialogical conception of reason. He doesn’t just cash out our responsibility to objects in terms of responsibilities to justify our assertions about them in response to challenges from others, but demonstrates how representational vocabulary such as ‘of’ and ‘about’ enables us to say what we’re already doing here, namely, tracking divergences between our perspectives on the inferential relations that underpin the justificatory game.

Put differently, we can use such terms to stipulate that we are talking about the same things (e.g., ‘You claim of Clark Kent that he is Superman’) even as we disagree about the implications of our assertions about them (e.g., ‘If Clark Kent is exposed to Kryptonite, he will suffer no adverse effects’). The key insight of this account is that, while we need some common understanding of our terms in order to meaningfully disagree, this common understanding can be dynamically negotiated on the fly, rather than being imposed by a fixed framework of analytic judgments. The relevant objects can be treated as locally invariant across I-thou interactions, rather than globally invariant across I-we relations.


Contra Brandom, I don’t think this means we should abandon the idea of analyticity tout court, not least because we obviously construct shared conceptual frameworks with common standards (e.g., mathematical theories, physical models, legal systems, etc.), but rather that there is an interactive space between such frameworks within which revision and rearticulation takes place. It’s our ability not just to negotiate divergences between separate interlocutors, but to navigate changes within a single perspective that makes conceptual evolution possible (e.g., we can stipulate that Rutherford’s model and its successors represent the same thing, even when their mathematics are incompatible).

This is why Hegel’s dialogical conception of reason is also an essentially historical one. If we follow Mark Wilson’s account of the inherently patchwork character of our conceptual frameworks, we can take this insight even further. It’s not just that the types of objects, properties, and relations that compose the manifest image are at odds with those belonging to the scientific image, but that there can be ontological rifts internal to the scientific image itself — places where ill-fitting mathematical models must be manually stitched together along their edges by stipulating co-reference but restricting the resulting flow of information (e.g., macro-scale mechanics and micro-scale materials science). Here the dialogue internal to a single perspective persists as a way of pragmatically managing rather than progressively overcoming inconsistencies.
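This stitching operation can itself be sketched computationally. In the toy Python example below (the models, quantities, and thresholds are all invented for illustration; nothing here comes from Wilson or the interview), two ill-fitting models of a bridge are joined by stipulating that a single boundary quantity co-refers across them, while every other quantity stays local to its own patch, restricting the flow of information between the two.

```python
# Two ill-fitting models of the same object, each consistent
# only within its own patch of the conceptual landscape.

def macro_mechanics(load_n: float) -> dict:
    """Macro-scale beam model: treats the bridge as a continuum.
    (Illustrative constants: stiffness and cross-section.)"""
    return {"deflection_m": load_n / 2.0e8,
            "stress_pa": load_n / 0.05}


def micro_materials(stress_pa: float) -> dict:
    """Micro-scale model: treats the material as a grain structure
    with an (illustrative) fracture threshold."""
    return {"crack_risk": stress_pa > 2.5e8}


def stitched(load_n: float) -> dict:
    """The stitch: only 'stress_pa' is stipulated to co-refer and
    allowed across the boundary; each model's other, mutually
    incompatible quantities remain local to its patch."""
    macro = macro_mechanics(load_n)
    micro = micro_materials(macro["stress_pa"])
    return {"deflection_m": macro["deflection_m"],
            "crack_risk": micro["crack_risk"]}
```

The point of restricting the interface to one shared quantity is that the inconsistencies between the two ontologies are managed at the seam rather than resolved into a single unified model.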


How does all this relate to Girard? Girard’s aim has long been to show how types emerge naturally from more fundamental computational dynamics. His first attempt, ludics, does this by building an abstract model of dialogical interaction inspired by game semantics but derived from the intrinsic geometry of linear sequent calculus. Instead of starting with predefined atoms, connectives, inference rules, and the well-formed proofs that are constructed from them, it begins with a more general space of ‘designs’ that includes all argumentative strategies, good and bad, and then recovers types, connectives, and inference rules from the manner in which opposing strategies interact with one another. These dialogical interactions can perform a range of computations, and are expressive enough to reconstruct Martin-Löf type theory and thereby mathematical types (and their objects).

The parallel between this and Brandom’s account of GOGAR is that well-behaved interactions between designs are those in which one side ultimately concedes, thereby ‘agreeing to disagree’. This makes competition depend upon a deeper form of cooperation, or, from another angle, it makes the possibility of meaningful disagreement depend upon a deeper agreement in linguistic behaviour. Girard’s second attempt, transcendental syntax, is based on a more general Turing-complete computational framework (stellar resolution), widening the notion of interaction in a way that promises to unify dialogical and environmental interaction, while treating types as bundles of interactive tests. This potentially offers more direct computational purchase upon empirical types (and their objects). However, the details are exceptionally complex, and I can’t say anything more concrete for the moment.


Returning to the question of the subject then, what does it mean to treat it as an ‘emergent stability’ in this context? I began by noting that the ‘I’ is an index of the process of integrating representations into a single perspective, but we are now in a position to see that this process can internalise dialogical interaction. I’ve already proposed that an individual needs to be able to relate to their own past and potential future perspectives in the same way they relate to those of others, effectively playing devil’s advocate against themselves. But this way of framing things suggests that there’s always a neat distinction between real subject and imagined advocate, when we generally identify with whichever position wins the argument. Even this picture is still too limited, as it suggests a sequence of successive splits and mergers, when they might as easily operate concurrently. Girard’s dynamic computational picture of dialogical interaction suggests as much.


Brandom has argued that intentional relations are structured by incompatibility relations viewed in two ways: objects cannot bear incompatible properties, while subjects should not endorse incompatible judgments about them. The patchwork picture of representation I sketched above relaxes these constraints on objects to some extent, allowing that the same object (e.g., a bridge under load) might be viewed from overlapping perspectives (e.g., models covering different scales) whose minor inconsistencies are locally managed. But we can equally relax the constraints upon subjects, seeing the familiar overarching subject as stitched together from overlapping perspectives not just on the way the world is, but on how it should be.

These subjects are in a persistent state of concurrent interaction, making for a messier, indefinite process of integration in which some inconsistency is inevitable. This is the patchwork picture of selfhood that I try to develop in “On Containing Multitudes”, in which a globally coherent self (e.g., Pete) can be stitched together from a variety of locally consistent personae (e.g., the philosopher, the boyfriend, the cook, etc.). This shows that the capacity for integration that underpins the responsibility for judgement can be looser and less centralised than Kant may have imagined.


Yet we can relax these constraints further still. I noted above that we can use this model of individual agency in order to think about collective agency, but we can also pull apart theoretical subjectivity and practical agency in a way that lets us think about collective subjects that are not thereby collective agents, let alone selves. This gives us another way of looking at the open-ended questions I have identified with Ideas, and their incarnation in concrete social processes of ongoing inquiry.

Here the distributed discursive interaction between those who are committed to the Idea of literature can be seen to constitute a collective subject correlated with its problematic object — an abstract role that any of us can slip into insofar as we’re willing to undertake a certain limited range of responsibilities, whose concrete evolution emerges as a form of socially mediated self-interaction. This is more or less equivalent to Badiou’s impersonal conception of the subject in terms of fidelity to a truth, the finite support of an infinite truth-procedure, though I would need to go deeper into his work in order to properly differentiate my view from his. Needless to say, we are traversing similar Platonic trajectories.



TO BE CONTINUED…


Andreas Palfinger & Aysin Bahar Sahin: Mutation Methods, 2026 [SOURCE].




REFERENCES



Brassier, R. (2007). Nihil Unbound: Enlightenment and Extinction. Palgrave Macmillan.


Hester, H. & Stronge, W. (2025). Post-work: What It Is, Why It Matters and How We Get There. Bloomsbury.


Srnicek, N. & Williams, A. (2015). Inventing the Future: Postcapitalism and a World Without Work. Verso.


Wolfendale, P. (2026). The Revenge of Reason. Urbanomic.
