Over at An Und Für Sich, Joshua Ramey weighs in with his concerns about the Accelerationist movement. Like everything Ramey writes, the passion is sustained and the rhetoric sincere. You’ll need a coffee for this one.
I have to admit that over the last couple of weeks, I’ve written and drafted multiple posts (one about 2,000 words long) about my problems with Accelerationism, but I ditched them, primarily because: a.) above all my gripes, I actually quite like these guys and admire their ambition (by its own standards, Accelerationism is a fantastic, ambitiously demented project); b.) with one conference and a handful of essays behind it, it isn’t very charitable to deny a movement room to breathe; c.) others have blogged far superior comments and critiques on Accelerationism than I could ever muster; and d.) with spare time becoming less and less available (for various reasons), the last thing I need is a drawn-out ‘speaking past each other’ blog conversation. Yet even as I begin to type this, my fetishistic disavowal is potent – let’s bear these points in mind (especially the first two).
So with regards to Joshua’s piece (and to Wolfendale’s reply) – I’ve commented HERE. However, I feel I should mention one or two additional things about the deployment of computation in the Accelerationist project (especially as Ramey kindly mentions my Liverpool paper from last year: there, I argued that computation’s absence within Continental Philosophy’s commentary on the undecidable is indicative of its anti-realism and anti-constructivism).
The basic aim of Accelerationism, as I see it, is to reignite the Laplacean project of rational self-understanding in a post-Laplacean world of complexity – which is fine in one sense. In terms of its overall philosophical scope, it’s not that much different from Brandom’s attempt to reconcile genealogical naturalism with Hegelian rationalism (hence Wolfendale’s membership). Yet the moral and ethical leftist imperative which Accelerationism seeks to call its own is to literally commandeer late capitalism’s technology, establish a political literacy within complex systems, and accelerate revolutionary change within them. For the most part, it’s rather hard to disagree with this (especially the literacy part) – yet for good reasons, I think the rub emerges with point 21.
“We declare that only a Promethean politics of maximal mastery over society and its environment is capable of either dealing with global problems or achieving victory over capital. This mastery must be distinguished from that beloved of thinkers of the original Enlightenment. The clockwork universe of Laplace, so easily mastered given sufficient information, is long gone from the agenda of serious scientific understanding. But this is not to align ourselves with the tired residue of postmodernity, decrying mastery as proto-fascistic or authority as innately illegitimate. Instead we propose that the problems besetting our planet and our species oblige us to refurbish mastery in a newly complex guise; whilst we cannot predict the precise result of our actions, we can determine probabilistically likely ranges of outcomes. What must be coupled to such complex systems analysis is a new form of action: improvisatory and capable of executing a design through a practice which works with the contingencies it discovers only in the course of its acting, in a politics of geosocial artistry and cunning rationality. A form of abductive experimentation that seeks the best means to act in a complex world.”
Now my broad position is to advocate computationalism, not as a Landian in-humanism, but as a universal pancomputationalism; one that eschews universal knowledge. Simply put, every autonomous entity – whether natural, cognitive, technological or bureaucratic – is of course procedural, but not through an absolute project of rationalist self-knowledge, where we can know or hunt for one ‘natural’ iterative set of compressed rules whose configurable operations give us some sort of Hegelian discursive cocktail of tyranny. Instead we need a pancomputationalism which offers a pragmatic account not just of function, but also of its relationship to anti-function – uncomputability (why else would my blog be called Algorithm and Contingency?).
In other words, what happens to computation when we realise the failure of epistemic computational reason? It shouldn’t be too hard to ask, considering that computation was discovered through the failure to epistemologically decide all of mathematics: Turing’s universal machine emerged from his negative answer to Hilbert’s Entscheidungsproblem. This is often the problem with the discourse of evaluating computational reason: it is so infatuated with the epistemic pragmatics of cognitive mapping that the underlying pragmatic reality of hacks, bugs and exploits that will impede its journey is swatted aside with annoyance – or brought together under the free play of collective reason anyway. It only offers a forced choice on the level of knowledge: either uncomputability remains at the borders of human knowledge, or it can be known. Well no – not if one can maintain that any process of computation, cognitive or technical, has a concrete, in-built form of contingency within it, which is required for it to function (and not the other Hegelian way around, as the inferential normative story would have it). On which side of the facile nature/culture divide would they have computation subsist?
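That founding failure can be made concrete with Turing’s diagonal argument: suppose a total halting decider existed, and a contradiction follows. A minimal Python sketch (the `halts` and `diagonal` names are mine, and `halts` is hypothetical – that impossibility is the whole point):

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) terminates.
    Turing's argument shows no such total procedure can exist."""
    raise NotImplementedError("no total halting decider exists")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:  # oracle says it halts, so loop forever
            pass
    # oracle says it loops, so halt immediately

# Feeding diagonal to itself yields the contradiction:
# if diagonal(diagonal) halts, it loops; if it loops, it halts.
```

Computation in general survives this perfectly well; it is only the dream of a decision procedure over all computation that dies.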
And this is the problem with Accelerationism’s relationship to contingency, which Ramey nails so well – if you want a scenario where collective, rational human self-mastery has largely failed because of contingent, uncomputable problems, look no further than computational commerce and neo-liberalism. Late capitalism thrives on, and yet despises, the fact that the architecture of a universal flexible machine cannot execute a procedure which fully decides whether to deny access to hidden files or malicious viruses. Closed machines are likewise designed to implement a probable range of outcomes which attempt to aid functional competence, yet this hasn’t made computation any more secure.
Every complex system must have enough complexity to be both functional and ineffably fallible. In fact, the very feature which makes computational systems so complex (Turing completeness) cuts off epistemic self-mastery at the knees – because once the threshold of complexity is breached, there simply is no systematic mastery available to fully comprehend a system’s functional consequences, whether the case at hand is a problem of mathematical or classical logic, or a new software update meant to stop security exploits. Protocols and CPUs are so complex and interdependent that systematic intervention is never rational qua functional reason. Once computational power is unleashed, its contingencies cannot be overcome, at least not permanently – not because self-mastery doesn’t exist or cannot be progressive, but because this power forever remains at an ontological discontinuity.
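The threshold is lower than it sounds. My illustration, not the manifesto’s: the Collatz iteration below is five lines of elementary arithmetic, it has terminated for every input ever tested, and yet whether it halts for all n remains an open problem – a complete behavioural analysis of even this fragment outruns current mathematics:

```python
def collatz_steps(n):
    """Iterate the Collatz map (n -> n/2 if even, else 3n + 1)
    until reaching 1, counting the steps taken. Empirically this
    halts for every n tried; proving it halts for ALL n is an
    open problem, so full 'mastery' of this loop is unavailable."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# e.g. collatz_steps(27) -> 111: functional in every observed case,
# yet with no guarantee of function in general.
```

That is the sense in which function and contingency are built into the same procedure: the program works, and its working cannot be totalised into knowledge.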
Short version: don’t mistake function for complexity; they are not the same. If it won’t work for computation, it isn’t going to work for cognitive progress.