So about a week ago, Ben Woodard kindly posted some notes on Reza Negarestani’s talk “Abducting the Outside” at the Miguel Abreu Gallery in NYC. Reza recently published the first set of notes on his blog.
As with anything that Negarestani writes, trying to synopsise any part of it cleanly gets you nowhere. So one should set about paraphrasing points of interest and move on from there. Obviously these comments are based on notes rather than published material, so they are second-hand at best.
Negarestani’s new project intends to systemically (dis)orient the project of rationalism and the knowledge of reason into something that is entirely unreasonable, a mode of new knowledge which will “accelerate the dislocating and renegotiating power of the modern system of knowledge by which the human is humiliated at each and every turn.” In other words, a “genuine project of inhumanism.” To get there, Negarestani must first expose the ontological ‘deep access of the concept’, find out what exactly constitutes its genesis, and understand how the epistemological icing of reasoned knowledge is sustained and endorsed. Only then can modern systems of knowledge be re-routed to rediscover a new methodology of accelerationism.
As far as Negarestani is concerned, getting there requires a tripartite critique. First, he wishes to critique Nick Land’s philosophy of inhumanism conditioned by a machinic computational efficacy (Negarestani’s post is predominantly about this, which is why I’m commenting on it). Second, he will critique the ultra-normative rationalist project of pragmatic inference processes (Brassier, Brandom, etc.), and lastly the “axiomatic decelerationists or variants of classical Marxism that I charged with local myopia,” although this last wasn’t covered.
So Negarestani’s attack on computation and ‘the regime of the discrete’ is where it starts, and right away we notice a problem as far as the notes are concerned. (As for commenting on Nick Land’s work, I quite simply haven’t read enough of it, so I can’t comment from that angle.)
“We started to work on the genesis of the machinic efficacy by way of examining its deep roots in certain forms of metaphysics and revolutions in mathematics and logics, namely, Fregean logicism (derived from a Newtonian metaphysics of the absolute normativity) and Hilbertian formalism (derived from Laplacian classical determinism and his famous conjecture). Both of these constitute the foundations of the works of Shannon and Turing. We examined how digitalization is a form of classical determinism via the idea of approximation-preservation or digital rounding (a form of perturbation-preserving system) that makes the machine effective.”
Except that Frege’s logicism and Hilbert’s formalism don’t constitute the foundations of the works of Shannon, and especially not Turing – at least not their total foundation. It’s not quite right to simply drag the philosophical goals of Hilbert’s formalist project into Turing’s work without first understanding how subversive Turing’s 1936 paper ‘On Computable Numbers’ was, not just technically but also philosophically.
Negarestani is correct in asserting that Hilbert’s absolutist formalist project is ultimately derived from a Laplacian classical determinism (this he gets from the Italian logician Giuseppe Longo) – in other words, that the laws of thought could reduce the phenomena of the world to a finite set of absolute rules, which can explain everything, everywhere. For both Hilbert and Laplace, understanding equals compression. Consider Laplace’s famous topic of interest, the solar system: one does not need to observe its behaviour to predict its future configurations – one simply reduces it to an iterative rule, plugs in a query and expects a result. Hilbert’s astonishing project was to formalise all of mathematics into this type of system, where every mathematical and logical query plugged into it would produce a result: a fully deterministic, predictable mapping of knowledge and matter. The mind could conceive an infinite combination of formal symbol strings, and these were the primary modus operandi of the finite rule applying to the phenomena in question. This is the rationalist impetus that feeds into every question concerning the computability or un-computability of procedural processes.
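The Laplacian picture can be put in a few lines of code: a deterministic iterative rule replaces observation entirely. The toy ‘planet’ and its fixed angular step below are my own invention for illustration, not anything in Laplace or Hilbert:

```python
# A toy of Laplacian determinism: a fixed iterative rule stands in for
# observation. Given a state and a rule, any future configuration is
# obtained by pure iteration -- understanding as compression.

def iterate(rule, state, steps):
    """Apply a deterministic rule `steps` times to an initial state."""
    for _ in range(steps):
        state = rule(state)
    return state

# A crude 'solar system': a planet advancing a fixed angle per time step.
advance = lambda angle: (angle + 30) % 360  # 30 degrees per step

# 'Plug in a query and expect a result': the position after 100 steps,
# with no observation of any intervening configuration required.
print(iterate(advance, 0, 100))  # 100 * 30 mod 360 = 120
```

The point of the sketch is only that prediction here is exhausted by rule-application: nothing about the system needs to be watched.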
The problem is that Negarestani’s critique fails to understand that Turing’s 1936 paper, which introduced the theoretical basis of modern computing, already subverts Hilbert’s absolutist logic. Negarestani simply takes Hilbert’s project wholesale, and thus his critique of computing is basically the same as a critique of the deterministic knowledge of formalism. The functional basis of computing emerged from the failed attempt to formalise all of mathematics, but this should not be mapped onto the limits of modern knowledge.
“The discrete (causal) and the iterative (conceptual) regimes of the computational dynamics are responsible for the machinic efficacy but this efficacy mutilates the intelligible and prevents genuine conditions for the mobilization of the inhumanist drive of knowledge, namely, encounters with different conceptions of time and space, normative improvisation, processing of information on the basis of the generic space that parametrizes them, contingent epistemic mediations and understanding of the landscape of knowledge as interweavings of continuity and contingency. All of these effectuate as irreversible renegotiations and dislocations of the human sphere. Capitalism’s investment in the so-called machinic efficacy and global digitalization, in this sense, is completely in line with its axiomatic preservation of the local ambit of thought and restricted knowledge generation (to degenerate the true scope of the global further and further).
“Universe as a computer is a cheap metaphor at best, and at worst steps backward in the project of knowledge. The idea that the limits of computation (and accordingly, machinic efficacy) demarcate the limits of knowledge (and hence, a project of inhumanism conditioned by the modern system of knowledge) is the epitome of myopia.”
This makes it sound like I’m knocking at an open door. Negarestani critiques computing for the very reason that it impedes knowledge (inhuman or otherwise), and I’m in favour of endorsing it for the very same reason: that it impedes knowledge. But there are important reasons why, and examining the conflation of Hilbert’s absolutist project with Turing’s proofs exposes why this is the case.
Turing’s 1936 paper ‘On Computable Numbers’ did more than just transform the intrinsic computational problems of Hilbert’s system into a technical affair, as Negarestani assumes: Turing also undermined its philosophical, absolutist consequences. Negarestani is right that Turing did attempt to show how a Hilbertian finitist psychology underscored computational thinking, and how it could be decomposed into an independent machine. But he neglects to mention that Turing did so by explicitly stating that his ‘Turing’ machines could operate without any link or reliance on rational thought or knowledge at all. This, in effect, is the point of computers: they don’t solely exist as pockets or regimes of knowledge primarily associated with the laws of thought – they do things outside of them. Quoting the French logician Jean Lassègue:
‘Turing was reversing Hilbert’s philosophical axiom: it was the written symbols that generated states of mind and not the other way round. Therefore, the mental act was secondary in comparison […] what was at stake was only the mapping of a discrete set of symbols with a set of behaviours in a computing machinery and not the “reality” of some states of mind that were only postulated.’ (Lassègue 2009:158)
This doesn’t fully support my claims, of course, as Lassègue also conflates ‘mind’ with the written symbols of execution, but he is helpful in articulating Turing’s break with Hilbert: mind does not dictate the limits of machines, it’s the other way round – and a set of behaviours which take place in a CPU without reliance on thought to generate them explains the autonomy of these processes. They take place outside of one’s knowledge, not through it or because of it.
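Lassègue’s ‘mapping of a discrete set of symbols with a set of behaviours’ can be made concrete. The interpreter below is a minimal sketch of my own, not Turing’s 1936 formalism: the machine’s entire behaviour is exhausted by a lookup table, with no postulated states of mind anywhere in the loop:

```python
# A minimal Turing-machine interpreter: a pure mapping from (state, symbol)
# to (symbol, move, state). Nothing here models thought; the behaviour is
# exhausted by the transition table.

def run(table, tape, state="start", head=0):
    tape = dict(enumerate(tape))          # sparse tape; blank cell = "_"
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip("_")

# Example table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, "1011"))  # -> 0100
```

Swap in a different table and the same mindless loop exhibits a different ‘behaviour’ – which is all that autonomy from thought requires.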
Another claim which Negarestani makes is that computing can’t deliver new or non-classical modes of reasoning which might ‘preserve or mitigate ignorance’. Turing’s halting problem is used to suggest that “there is no algorithmic way to say ‘I am ignorant’ or ‘I don’t know’. Computational algorithms work with the truth-preservation kernel of classical logic.” This is a half-reading at best – the halting problem is simply the failure of a Turing machine, universal or otherwise, to fully decide on the output of another Turing machine. It’s about paradox, not ignorance. It’s about the ontological lack of total knowledge in independent processes, not a set of procedures to be rejected in part because it cannot bestow inhuman ignorance. Sure, if one already has in mind that computers are ‘meant to know’ in the first place, one is bound to belabour the halting problem under these conditions. But the halting problem is about the recognition that an independent system will never decide on an input – not whether it ‘knows’ it.
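For what it’s worth, the structure of the argument can be sketched directly. The decider `halts` below is hypothetical by construction – the diagonal program does the opposite of whatever `halts` would predict about it, so no total implementation can exist. The contradiction is one of self-reference, not of a machine declaring ignorance:

```python
# Sketch of the diagonal argument behind the halting problem. The decider
# `halts` is assumed, not implementable -- that assumption is what the
# theorem refutes.

def halts(program, arg):
    """Hypothetical total decider: True iff program(arg) halts."""
    raise NotImplementedError("no such total function exists")

def diagonal(program):
    # Do the opposite of whatever `halts` predicts about a program
    # applied to its own source.
    if halts(program, program):
        while True:          # loop forever if predicted to halt
            pass
    return                   # halt if predicted to loop

# diagonal(diagonal) contradicts any answer `halts` could give: if it
# predicts halting, diagonal loops; if it predicts looping, diagonal halts.
```

Nowhere in this construction does the machine ‘say I don’t know’ – the decider fails by contradiction, which is the distinction the paragraph above is pressing.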
So whilst I agree with Negarestani that conceiving computation as a global project is, at best, a demented one, I can’t endorse the argument that it should be wholly rejected because it limits new modes of knowledge. There are new modes of interrogation within computing that don’t rely on the regime of universal knowledge (in fact, computing’s inception is in effect a result of that regime’s failure).