Latest Furtherfield offering HERE. I should say that, despite my concluding comments, I really enjoyed Stern’s book – and you should definitely seek it out.
I’m at the Aesthetics in the 21st Century conference – I gave my paper yesterday: a haggard link between Turing and Fried that hopefully made sense (not that most of the stuff I come out with makes sense). Along with meeting the rest of the Speculations team (minus Fabio), and Paul and Jamie at continent, I took part in a joint panel on para-academia, which dealt with the current state of things.
Maybe I’ll do a write up or something like that – the paper itself will appear somewhere…..
Anyway – never mind that – the conference has what is clearly the best table of books I’ve ever seen. Speculative Realist naysayers can’t complain now – look at this.
My guess is that they’re building on the same expropriated network (free and open) they used for the R15N project to create a miscommunication platform for video and messaging networking. If you use the latest versions of Chrome and Opera, video seems to function too. Plus it seems they’re building in some sort of instant translation system.
Yeah I’ll be there… No link as such, but it’s for a workshop entitled Participatory Materialities. I’ll be talking about how undecidability actually operates in software.
It’s nice to think that I’ll be spending the Queen’s Jubilee talking about issues which are fundamentally against the Queen’s Jubilee. Here’s the blurb.
“The continual development of a networked pervasive computing culture raises important questions about mediation, materiality and participation that integrate aesthetic, design, and HCI concerns. It becomes evident that questions around how we perceive software interfaces, and how they in turn structure our participation, are not just questions of functionality but also of perception, understanding, creativity and the potential for participation. The compatibility and incompatibility of interfaces is not just an important functional question but also a cultural and political discussion, and these dimensions are furthermore interrelated. This workshop will aim at developing a cross-disciplinary language for addressing such issues in contemporary computing culture.
Complex mediation structures in new and evolving fields of IT use liquefy the distinction between tool and material. Tools turn into materials, and objects of interest become instrumental mediators in the process of developing innovative use of software and in reshaping software itself. By changing perspective from triadic subject-instrument-object mediation to complex mediational structures, an analytical sensitivity for the evolving material qualities of software in use is opened.
This materiality may be addressed at many levels: as the tangible materiality of software established through physical interaction devices and through the simulated materiality of the graphical user interface; as a mouldable materiality of software with changeable and reconfigurable functionality; or as materiality realised when somebody uses the software, when it is interpreted and interacted with – from the low level of code and algorithms, through the interface, its metaphors and interaction, to the domain-specific products resulting from use. The processes and products of software become sensuous form with aesthetic meaning. Metaphor may be the historic master trope of the direct manipulation interface, while other tropes, such as metonymy, may better accommodate appropriation and participation in the broader sense of cultural and aesthetic engagement beyond the pre-planned.
The concept of participation in IT-development was coined in the early trade union cooperation projects of the seventies and eighties, when focus was on co-determination in technological change at the workplace. By broadening the scope to include cultural and aesthetic perspectives, the concept of participation can be re-actualized.
While, historically, the focus of design methods and intervention was on co-determination, theoretical and empirical work pointed out that use qualities are constituted in use, and that unanticipated use occurs in most situations of IT use. Subsequently, that led to work on tailoring and end-user programming, and to a theoretical interest in appropriation. Still, we seem to lack frameworks for understanding and designing technical substrates accommodating fluid appropriation, extending into the collectively mouldable. Thus, in enabling participation beyond the strategic level of work arrangements, and beyond the level of DIY concepts like the Dynabook, we suggest that ideas of digital materiality, based on complex mediation and alternative tropes, could be a fruitful starting point.
Materiality and participation could be addressed in a wide range of fields and applications, extending beyond the workplace, including electronic music composition, live coding, remix, mash-up, locative media, urban computing and networks where it is difficult or politically charged to define the participants, the goals and the limits of the systems.
On this background, we would like to discuss the relations between materiality, appropriation, and participation.
Keywords for contributions include: software creativity, materiality, appropriation, participation, instrumentality, representation, code, artefacts, tools, infrastructures, software reception, interactivity, cultural computing, open soft/hardware”
So what’s Actor Recursion Theory then?
Actor Recursion Theory is a name I’ve given to a specific intervention outlined in my dissertation. It’s not fully developed yet, and as such it presents my haphazard state of mind during a PhD thesis. It reflects an opinion, but I leave plenty of room for others.
The name is influenced by Latour and Callon’s Actor Network-Theory, and it upholds most of the methodologies that accompany it, but with one crucial difference: in Actor Network-Theory, the actions of discrete actors can be traced by a synchronic heterogeneous network of relations. In Actor Recursion Theory, actors can be traced, and furthermore executed, by diachronic recursion and the repetition of finite rules, which have the capacity to refer to themselves and act on other recursive actors. Recursion is most commonly used in mathematics and computer science, where it defines a procedure in which a function is applied within its own definition. To put it another way, it is the process a procedure must go through in its executant state when one of the steps of the procedural rule involves repeating the procedure itself.
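A minimal sketch of the kind of self-applying rule I mean – the factorial is the textbook case of a procedure where one of the steps is the procedure itself, applied to a smaller input:

```python
def factorial(n):
    # A finite rule that refers to itself: the procedure repeats
    # itself on a smaller input until it reaches the base case.
    if n == 0:
        return 1  # base case: the rule stops referring to itself
    return n * factorial(n - 1)  # recursive case

print(factorial(5))  # 120
```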
Every theoretical and pragmatic hallmark of Actor Network Theory is carried over: local production, translation, alliance, the naturalisation of culture, the equality between humans and non-humans (the principle of generalised symmetry), hybrids, the agency of the actors producing time and space, the concreteness of actors in specific situations, etc.; all with the exception that [ART] actors are constructed and enter into relation through the central mechanism of recursion as the mode of execution and translation between actors. As you can imagine, this may change the dynamics of what an ‘actor’ is constituted by and what it can do. To say that an actor operates by recursion fundamentally revises these hallmarks.
Another important thing: Actor Recursion Theory aims neither to totally replace Actor Network Theory nor to subsume it – it has a different intervening application linked to its primary concern. It simply provides a different way of understanding the pragmatic capabilities of creating and producing an aesthetic piece of work (‘work’ in the ANT sense of translation is also carried over). Recursiveness is not just tied to forms of computing and computational art, but to many types of art production. How do actors express themselves aesthetically?
Any other reasons for coining that term?
Well, one major part of it happened to rest on the acronym of Actor Network-Theory (ANT). I’ve always quite liked its allusive metaphorical capabilities. As Latour once stated, “ANT” is “perfectly fit for a blind, myopic, workaholic, trail-sniffing, and collective traveler” (Reassembling the Social: An Introduction to Actor-Network-Theory [2005: p.9]); not to mention that an ‘ant’ constitutes a discrete unit of action in the dynamic network of an ant farm, which we can trace through observation and testing. A perfect encapsulated fit all round.
Well, the acronym for Actor Recursion Theory is “ART” – and this nicely encapsulates the generative and enumerative aesthetic capabilities of how an actor ‘acts’ in its recursive iteration. Plus on the unlikely chance it got accepted as a major field of research for science and technology studies, perhaps computer science studies, academics would have to always-already deal with ART in its recursive deployment…
In fact, this bemusing play with a movement’s acronym (OOO notwithstanding) is also recursive. It comes under the rather unimaginative name of recursive acronyms, or ‘metacronyms’. These are acronyms which refer back to themselves (usually humorously) within their own expansion. For example, CAVE [Cave Automatic Virtual Environment], or PHP [officially ‘PHP: Hypertext Preprocessor’, though it originally stood for Personal Home Page]. In this sense, as a preliminary discipline, Actor Recursion Theory can only refer to itself as ‘art’ and remain independently self-referring, every time someone wants to use it (if it ever needed to be used in the future and in other fields).
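The classic example is GNU (“GNU’s Not Unix”), whose expansion reproduces the acronym it expands. A throwaway expander – my own illustration, not any real tool – makes the self-reference visible: each pass of expansion hands the acronym back to the next pass:

```python
def expand(acronym, expansions, depth):
    # A recursive acronym contains itself in its own expansion,
    # so expanding it once re-produces the acronym to expand again.
    for _ in range(depth):
        acronym = acronym.replace("GNU", expansions["GNU"], 1)
    return acronym

expansions = {"GNU": "GNU's Not Unix"}
print(expand("GNU", expansions, 2))  # GNU's Not Unix's Not Unix
```

The expansion never bottoms out: unlike the factorial, a recursive acronym has no base case, which is exactly the joke.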
But surely “ART” is epistemologically undefinable, unlike an “ANT” which we can define easily?
True. But Graham Harman’s critical reading of Latour has given us some ground for understanding that something as simple as an ‘ant’ is just as undefinable and complex as ‘art’, whilst also remaining specific, autonomous, discrete and concrete. Whilst this may seem like a retrograde step for aesthetics, I take it as a necessary conjecture in suggesting that aesthetics has only just begun to realise its capacity for change in the concrete specificity of determinate things. Aesthetics is no longer confined to sign games, creative malaise or the hopeless drudgery of insular commercialisation within a market trend – instead it has the capacity to lurk in-between real-world structures of all kinds, whether humans are beholden and bewitched by it or not. We just need a theory that not only ferrets it out, but can understand why some works of art have greater qualities than others.
So how does ART ‘Recursion’ contrast with ANT ‘Networks’ in light of aesthetics?
For some reason, there has always been some ambiguity as to what Latour means by a ‘network’. We’re used to conceiving of a network as some massive, equivalent ‘syncing’ web of collective communication, which flows together across vast global stretches of nodes, servers and lines. But this would undermine the difference between a network and an actor (for Latour a subway, or ‘the Internet’, are actors, as are their components). As Latour states:
“Recent technologies often have the character of a network, that is, of exclusively related yet very distant elements with the circulation between nodes being made compulsory through a set of rigorous paths giving to a few nodes a strategic character. Nothing is more intensely connected, more distant, more compulsory and more strategically organized than a computer network. Such is not however the basic metaphor of an actor-network.”
A network, in the ANT sense, is a collection of heterogeneous dynamic interactions between discrete units of all kinds, at all levels, where new translations can emerge. The network is more than a system; it is a historical collection of meetings, translations and gatherings where action has taken place in one place only. It is the actions which define the network, not the other way round through a representation of patterns. Staring at and theorising about the network only produces a translation of it, not least because the thing ‘doing the staring’ is acting upon it through its own stare. Networks need to be explained through the participation of actors, and are not to be used to explain away the participation of actors. To this end, Latour has earned his following by justifying an approach which analyses the pragmatic empirical networks that contingently allow a specific event to emerge at a particular time, and the alliances peculiar to that bunch of actants.
However, when one wants to use this theory of dynamic networks to discuss a current or future aesthetic event, object, thing or production, we suddenly come unstuck. For one thing, actors and networks are a very good toolkit for understanding and analysing the history of ‘what’s come before’ (this is a criticism which Graham Harman has made elsewhere). An actor network theory of art history is perfectly credible (in particular, see Michael Zell), but the problem comes with justifying and differentiating where the necessary aesthetic action must take place. Specifying the networks which have brought actors together in one place can, in principle, explain specific alliances, demystify artistic genius (much as ANT has tried to do with scientific geniuses) and make non-linear the typical linearity of ‘Western art history’. But can ANT supply a toolkit which provides the means to understand expression itself, and the constructive configurations which may or may not show it? Can ANT explain certain capacities for the future production of art?
There are a few theorists who have attempted to understand the production of aesthetics through ANT, not least through a revised form of substance in OOO, where aesthetics can be located between the withdrawal and sincerity of objects. But it all depends on how one views the aesthetic existence of an ‘artwork’ and the observer’s reception of that artwork. The explanation of this gap takes on many forms. Many trends of art theory (Bourriaud 2002; Groys 2010) have repeatedly dropped the idea of a discrete work, of determined aesthetic quality in-itself, in favour of discursive indeterminate relations, usually occurring in-between human communities and power structures (although ANT isn’t guilty of these two factors, given its rightful refusal to separate natural actors from cultural ones).
Clearly it would be mindless and moronic to suggest that Actor Network Theorists encourage the destruction of aesthetic quality, if one simply concentrates on the networks surrounding and composing it. But the question is a simple one: what are the operational requirements for a unified entity to forge an aesthetic effect in the first place, if actors are neither constructed by nor contingent on human observers? If aesthetics can be shown to be a historically contingent relationship between two actors, contingent and traceable on a network, then why bother to create art at all? How would one distinguish an artwork under such terms, if not human ones?
In the realm of ART, then, we seek to supplement the contingent, stochastic exposition of the network with the necessary execution of automatic recursion. Here, we are suggesting that an artwork is a mechanical and durable repetition of a finite actor which is executed over and over again, across one piece or a collection of pieces. The durability of an entity is traced not by the networks of its neighbouring localised environment, but by the execution of its acting state as a rule repeated over and over within a formal system, irrespective of that system. It is the actor which defines the recursive action. Recursive actors can be non-linear, yet have the bizarre behaviour of being intrinsically auto-determined despite the impact of any contingent environmental influences.
So the SimCity developers have just released a video showing off the simulation behaviour of the new SimCity reboot coming next year. It’s quite rare for any developer to show a demonstration like this at an early stage, irrespective of graphical ability, so they must be fairly proud of their engine.
The GlassBox engine is a major overhaul of the previous SimCity games and goes some way towards establishing itself (in my opinion) as some sort of functioning procedural unit ontology, worthy of something Bogost might have founded in Unit Operations. The first thing that jumps out (to me anyway) is the dependency on ‘agents’ in the simulation of the game. Agents can be any discrete thing: a person, a car, a unit of water or a pulse of power travelling across a path to a destination. Furthermore, each building (which the developers call a simulation unit) is constructed out of further independent unit agents – boxes, coal, workers – which enter and exit the building and travel to other parts of the city. Each agent triggers further simulation rules as it reaches another simulation unit, but not in transit.
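The pattern described in the video can be sketched roughly as follows. This is my own toy rendering of the idea, with invented names – not EA’s actual GlassBox API: discrete agents carry a payload between simulation units, and rules fire only on arrival, never in transit.

```python
class SimulationUnit:
    """A building in the toy sense above: it holds stock and reacts to agents."""
    def __init__(self, name):
        self.name = name
        self.stock = 0

    def on_arrival(self, agent):
        # Simulation rules trigger only when an agent reaches a unit.
        self.stock += agent.payload

class Agent:
    """A discrete thing (coal, a worker, a pulse of power) in motion."""
    def __init__(self, payload, destination):
        self.payload = payload
        self.destination = destination

    def travel(self):
        # Nothing happens mid-journey; the arrival event does all the work.
        self.destination.on_arrival(self)

factory = SimulationUnit("factory")
for _ in range(3):
    Agent(payload=1, destination=factory).travel()
print(factory.stock)  # 3
```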
It wouldn’t be fair to do an expository analysis of a game which isn’t out till next year. There are of course many academic sources linking the ontology of Latourian ANT to video games like SimCity; the parallels are obvious (and what academic interested in procedural technology doesn’t love SimCity?). But rummaging around in my thoughts, it seems to me that this new simulation has more in common with Bogost’s ontology of unit operations: the one-to-one representation of a unit’s workings and transformations; the procedural configurations that take place according to set rules. This reboot seems to be explicitly coded towards a Bogostian ontological understanding.
Replying to my last post on Chaitin, Noah Horwitz (who is working very closely with Badiou and Wolfram, as am I) comments on the potential similarities and differences between our positions. It’s an important conversation, so it’s worth elevating to its own post, as it were. His original comment is here, but I’ve directly copied what Horwitz thinks regarding OOO and computation.
We are also different in that I think OOO is essentially a dead-end and misguided approach. Objects cannot be sets of irreducible complex bit strings in the OOO sense. I do not think OOO can even admit the notion of the bit. OOO is transposing Husserlian intentional objects onto being itself in an act of reification (with the caveat that they ‘withdraw’–which just means their unity is not perceivable and only intended ultimately– and perceive each other–although there is no phenomenological analysis of the analogy from human perception that could flesh out such a claim in the way Husserl gives such an analysis in Cartesian Meditations relative to subjectivity). If an OOO object were a set of non-compressible bits, then the object itself would be the computation of those bits. That means one has a formula that captures the object itself as such. The conception of the thing and the thing would be the same. It’s then just a matter, à la Wolfram, to see how that computation plays out. The bit I argue involves the existence of the void, numbers, pure differentiality, etc. Admitting that things are bit strings means admitting that at bottom there is an ‘atom’ and that atom is relationality in itself. It’s not surprising that OOO has nothing to my knowledge to say about numbers as objects or the nothing/void as non-object.
It’s a fairly decent criticism, I think, especially in light of the subject he’s talking about (computation and theology). From this statement alone (I’ve not read the book), there are at the very least three things going on here; let’s list them.
1.) Defining information
I do not think OOO can even admit the notion of the bit.
I think OOO can admit the notion of the bit, but it lies in a different ontology from the recursive formalism from which the bit is normally derived – ‘objects’ in OOO are just indifferent discrete finite units in my mind, so there is a clear link there. It depends on which OOO theorist you wish to refute, but I’d imagine the same reply would surface: there is no co-determination between mind and bit, where the two are basic equivalencies of thought and Being. This is something OOO strictly denies, whilst Horwitz, who I know is also committed to Badiou, would probably follow it (I’m presuming, not stating directly). I would personally uphold the notion that information can only be related to by input and output; my own position (which I’m tentatively calling Actor Recursion Theory) suggests that the recursive procedures, built on symbolic logic and physical computational infrastructure (of all actual kinds, past, present and future), are themselves real – not the bits they calculate or spit out.
The only possible link refuting this is the uncomputable real numbers, which perhaps favour a different mode of withdrawal.
2.) Defining computational equivalence in an object oriented ontology.
If an OOO object were a set of non-compressible bits, then the object itself would be the computation of those bits. That means one has a formula that captures the object itself as such. The conception of the thing and the thing would be the same. It’s then just a matter, à la Wolfram, to see how that computation plays out.
I disagree. This isn’t about the ontological execution of non-compressible bits, because seeing how a given program plays out or executes is the OOO unknowable – the epistemological act par excellence. One may have the formula, the rules, the axioms, the symbolic language, the number of N bits – but the actual work executed by the thing is not the thing in the mind; it is the computation executing it ontologically in its own specific actual execution. One cannot have any direct knowledge of what these determinate rules will offer (à la Hume), other than by repeating the rules again and again under the same initial conditions – and even then the information produced is still structurally random. The key insight here is the discovery that no algorithm has the capability or computational sophistication to always-already reduce the output to its formula, whether that algorithm is another program or a human mind.
This is where Chaitin’s Algorithmic Information Theory and Wolfram’s mining of the computational universe meet – although there is one key difference between them: Chaitin’s level of complexity is vastly different from Wolfram’s. Chaitin’s theorems start out from Universal Turing Machines and get more complex from there, while Wolfram believes that no entity can reach a higher complexity than a Universal Turing Machine. This is where Wolfram’s ‘Principle of Computational Equivalence’ draws its power: by setting a maximum limit on the functions available to an entity, it makes entities equal by default of computational sophistication. Again, I’d argue that this chimes with the OOO statement that every computational-executant entity is on an equal footing and apprehends in a similar manner, albeit my own take on Wolfram is different from Harman’s Husserlian influence.
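Wolfram’s standard exhibit for this irreducibility is rule 30, an elementary cellular automaton whose one-line update rule (new cell = left XOR (centre OR right)) produces a centre column that looks random. A rough sketch – my own throwaway implementation, not Wolfram’s code – shows that the only way to know the pattern is to run the rule:

```python
def rule30_rows(steps):
    # Elementary cellular automaton rule 30, started from a single
    # black cell; cells are stored sparsely as {position: 0/1}.
    cells = {0: 1}
    rows = []
    for t in range(steps):
        rows.append(dict(cells))
        # new cell = left XOR (centre OR right)
        cells = {i: cells.get(i - 1, 0) ^ (cells.get(i, 0) | cells.get(i + 1, 0))
                 for i in range(-t - 1, t + 2)}
    return rows

# The centre column already refuses any obvious pattern:
print([row.get(0, 0) for row in rule30_rows(4)])  # [1, 1, 0, 1]
```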
3.) Badiou’s anti-constructivism
The bit I argue involves the existence of the void, numbers, pure differentiality, etc. Admitting that things are bit strings means admitting that at bottom there is an ‘atom’ and that atom is relationality in itself. It’s not surprising that OOO has nothing to my knowledge to say about numbers as objects or the nothing/void as non-object.
This, I feel, is where the crucial difference lies, and it has to do with a particular viewpoint (which Badiou has unfortunately started) that conflates any philosophical discussion involving pure mathematics (and hence computation) with anti-constructivism. Stating that OOO has nothing to say about numbers or the void in set theory depends on which side of mathematical analysis you reside: the constructible or the immanent. I don’t think OOO has anything to say on numbers and the void as non-object, mainly because it resides in a mereology which submits to a constructive ontology. This is the Leibniz relation.
Before you actually start to take Badiou seriously in his endeavour – the argument that ZF set theory discourse explains what is expressible ontologically – you have to realise what Badiou’s philosophical intention is. Bluntly put, his intention is to privilege an efflorescent, immanent conception of number and set theory which is always-already unbounded in presentation. In so doing, Badiou despises and ridicules any discussion involving the construction of numbers outside of presentation, in the form of rules and general syntax. Why? There are a ton of reasons, but two will suffice for this post.
The first reason is to do away with the importance of language and grammar in mathematical construction, because (understandably) he despises the Wittgensteinian idea that numbers are constructed in a post-structural or analytic sense – for this destroys the potential for generic truth. The second reason is that constructible numbers don’t fit into his anti-Heideggerian schema of setting up an eventual politics for deciding upon the undecidable, because they submit a line of praxis whereby:
‘it is precisely around the exclusion of the indiscernible, the indeterminate, the un-predicable, that the orientation of constructivist thought is built. If all difference is attributed on the basis of language and not on the basis of being, presented in-difference is impossible.’ (Being and Event, p. 319: Meditation 30, on Leibniz)
But Badiou doesn’t hold a monopoly on the undecidable just for subjective truth procedures. Just because constructible thought has different goals concerning the political mode of thinking (in particular the ‘state’ succeeding the situation, and the basis of artificial symbolic language), it doesn’t mean it can be surpassed by ridiculing Leibniz and/or ignoring the vast historical work of mathematicians working with the concept of computers, where surprises do occur in recursive rules. Turing repeatedly stated this (for reasons derived from Hilbert’s formal axiomatic systems, no less!). Badiou seems to think that submitting to a constructible universe posits a static, decidable, discernible mode of thinking which fails to measure up to the novelty of militancy. Ask any computer scientist or programmer worth their salt if this is the case; I’m sure you won’t get a positive answer back. Constructible recursive rules do produce invention and novelty, and they do so without being reducible to thought or counting.
In my opinion, I’ve seen no evidence (apart from his very early essays) of any attempt by Badiou to actually take computation seriously – and this is probably why: Badiou does not like the idea that a computer can actually execute logical statements equivalent to human thought. But this can’t be ignored, because formalism and ZF theory rely on recursion. Axioms often don’t work out the way mathematicians want them to.
Tim’s post just reminded me to write something on Gregory Chaitin. I’ve been writing about this area a fair bit in my thesis, in light of his relationship to Wolfram’s research. Often enough, if you highlight differences between two thinkers in one specific field, you can transport or unpack those differences in a completely different field. And the impact of computer science needs to have a greater efficacy in media theory and art production than it does at present.
Now in my eyes, Chaitin is less a mathematician or a programmer or a scientist than a philosopher who happens to do mathematics, programming and science. How many philosophers would write and publish a book with a bundled copy of their proofs coded in LISP? (See The Limits of Mathematics; LISP is a computer language rooted in the lambda calculus, much loved by mathematicians.) I would also say the same of Wolfram, but Wolfram probably thinks of himself as unlike anyone who has ever come before, and anyway, he is far more of a rationalist than Chaitin. Chaitin is fully on the side of contingency; Wolfram is on the Leibnizian, rationalist side (though I don’t adhere to that side of Wolfram personally, as we will see).
The reason Chaitin is a philosopher is that he aims for a type of metaphysical commitment one rarely sees in the field of pure maths. That’s the biggest reason why mathematicians have never liked what he is doing and why his work is controversial: he exposes something that can never fit into the long-term projects of the field – a field which fully believes that mathematical theorems of reason are achievable by finding stable axioms and deducible theorems.
The easiest way to explain Chaitin’s result is to explain what Gödel and Turing did before him. Gödel completely destroyed Hilbert’s second problem by showing that you can use formal axiomatic systems and Gödel numbering via the primes to encode a paradoxical statement which undermines, within the systems themselves, the completeness and consistency desired of those systems.
Alan Turing kind of made it worse – not by making the maths any more paradoxical, but by first showing that you can convert objective formal axiomatic systems of logic into equivalent computational statements, the mechanical programs of which produce decidable or undecidable outputs based on recursion and programmable procedure. You can then use these theoretical computers to show that there can never be any general-purpose algorithm which will tell you whether a given program will halt or not (Kleene later defined it as halting, not Turing). Turing’s proof was a reductio ad absurdum: if there were such an algorithm (a totalizable algorithm, similar to the impossibility of non-totalizable sets), you could check all programs in size order and check whether they obey the rules of the system. Checking whether a particular program halts is the easiest thing ever: you just need the patience to wait for it to halt, or you choose to give up. But a general algorithm that computes this for you cannot exist.
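The shape of the reductio can be sketched in a few lines of Python. The `halts` function below is just a stub – Turing’s argument is precisely that *no* implementation, however clever, can be correct, because the diagonal program does the opposite of whatever the oracle predicts about it:

```python
def halts(f, x):
    # A purported halting oracle. This stub always answers False;
    # the diagonal construction below defeats any possible version.
    return False

def diagonal(f):
    # Do the opposite of whatever the oracle predicts about f run on itself.
    if halts(f, f):
        while True:
            pass  # loop forever, contradicting a True prediction
    return "halted"  # halt, contradicting a False prediction

# Our stub predicted diagonal(diagonal) would not halt — but it does,
# so the oracle was wrong. Any other oracle fails symmetrically.
print(diagonal(diagonal))  # halted
```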
So as well as discovering computation, Turing also discovered the fundamental limit of computation at the same time. And Turing made it worse, by transforming this metamathematical paradox into a type of physical law within objective mechanical procedure. We all know what happened to the status of the computer after that.
Well, Chaitin (as I understand it) makes this even worse. His two incompleteness theorems go into this area in more depth. He transports the situation again, but this time into randomness and probability theory.
The basic theorem is this: instead of looking at one program, put all possible programs in a box and blindly pick one out. What is the probability that this program will halt? One can express this probability as a real, definable number, but Chaitin chooses to express it as an infinite binary (base-2) expansion, native to information theory and relative to Borel’s paradox: an infinite string of ‘0’s and ‘1’s, where each independent bit is like an independent fair toss of a coin at each stage – either 1 or 0.
So whatever this real number is, it has a definite value, just like π or the number 45298, and since it is in binary notation the number must be greater than 0 and less than 1. It is known as Chaitin’s constant, and because there are infinitely many possible programs, Chaitin just uses the notation Ω.
Ω can be defined as an infinite sum, and each N-bit program that halts contributes precisely 1/2^N to that sum. Basically, each N-bit program which halts adds a 1 to the Nth bit in the binary expansion of Ω (carrying where necessary). Add up the contributions of all the programs that halt, and you get the specific, precise value of Ω. This makes it sound as if it were a computable real number, but it isn’t. Ω is perfectly well defined, and it is a specific number, but it is impossible to compute in its entirety. In fact, Chaitin’s proof shows that Ω is irreducible and maximally unknowable: it cannot be derived in any finite mathematical theory, no matter what the computer language. The output cannot be condensed into anything simpler than it is. The point being that the first N digits of Ω cannot be computed using a program shorter than N bits – that’s the irreducible part.
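The arithmetic of those contributions is easy to make concrete, even though Ω itself is not computable. A toy illustration only (a real Ω requires a prefix-free universal machine, and the program lengths here are invented): if one 2-bit program and two 3-bit programs were known to halt, their contributions would sum as follows.

```python
# Lengths (in bits) of some hypothetical halting programs.
halting_lengths = [2, 3, 3]

# Each N-bit halting program contributes 1/2^N to Ω, so known
# halting programs only ever give a lower bound on the true value.
omega_lower_bound = sum(2.0 ** -n for n in halting_lengths)
print(omega_lower_bound)  # 1/4 + 1/8 + 1/8 = 0.5
```

Discovering more halting programs pushes the bound up; the uncomputability lies in never knowing when the non-halting remainder is exhausted.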
In information terms, it is structurally random, with little to no structure, and no rational way of reducing it to something shorter than it is. Mathematics therefore is, at base, irreducibly complex and random according to Chaitin. Ω is a place where mathematics has no structure and no reducible reason – in short, some mathematical truths are true for no reason. As Brassier argued (and he used this to critique Badiou’s subject), it’s almost like an incoherent noise with no structure to it at all, out of which only a narrow layer of normal rational mathematics can be recovered.
I favour Wolfram’s undecidable approach, which doesn’t get caught up in probability theory, but at a price: one must adhere to deterministic pseudo-complexity, albeit a pseudo-complexity which is more pragmatic. Unlike Chaitin, Wolfram thinks that there is one fundamental program to explain all complexity, which I don’t follow at all, for the reason that Turing discovered. However, that doesn’t mean we have an either/or situation, with rational truth on one side and irreducible complexity on the other. One can simply state the OOO wager (and this would be my wager) that the recursive execution of rules from axiomatic systems is irreducible in itself, with no undermining or overmining. And this, somewhat unexpectedly, is where creative art has something important to add.
Most of you have probably read Jussi Parikka’s latest piece posing some Object Oriented Questions about OOO HERE; the comments are well worth a read, if only for the usual can of worms OOO opens in the blogosphere. Paul Caplan replied HERE and Levi HERE, and Graham has just replied with THESE TWO posts. But rather than repeat other responses, I thought I should offer my own specific thoughts on why I find OOO to be particularly vital (not in that way) at this moment in time, and more specifically within my own research. That way the efficacy of OOO can be explained, rather than pandering to commentators who haven’t read the material or who purposely intend to debunk for debunking’s sake.
That said, objections to and questions about OOO need to happen; you can’t just write blog posts and sit at conferences acting as if you’ve ‘seen the light’, whilst waiting for everyone else to get it. There have been some solid criticisms of OOO (depending on which variant you wish to have a pop at); Shaviro on Harman has usually been the most consistent, and I think we can add Jussi’s questions here as well.
Other criticisms, though, miss the target. Some conflate OOO with Latourian ANT (in OOO, relations are on an equal footing, but do not constitute a substantial, withdrawn discrete entity). Some ridicule it as an ‘anything goes’ flat ontology, which is patently false: whilst Ian Bogost and Levi subscribe to an indiscriminate flat ontology, Tim Morton and Graham subscribe to a two-fold ontology of real and sensual entities. And lastly, I’ve heard over and over again from a multitude of sources that OOO is politically moribund and that a relational ‘evental’ ontology is far superior, which I think is a lazy swipe (on that note, here again is another ‘axis split’ within the fourfold of OOO: Graham and Ian aren’t particularly bothered about an emancipatory use for OOO, whilst Tim and Levi clearly see some potential).
The two other criticisms I’ve heard don’t amount to much. A lot of people have told me that they don’t like it because it’s ‘trendy’ – I don’t really get that, if I’m honest; that’s NME music journalism, an overly self-aware, debunking mentality that amounts to no more than shouting “sell out!”. The other issue is that some academics (particularly those in media theory, if I’m honest) just plain don’t like the word ‘object’, for largely superficial semantic reasons. Personally speaking, I’ve found Bogost’s use of the words ‘discrete’ and ‘unit’ to be better at explaining what’s going on.
So what does OOO offer my field (the computational arts) that I find helpful and genuinely new? Well, it returns to a question which has been removed from the arts: the autonomy of the artwork itself. This is the most important question, I feel, the only question worth mulling over, not just aesthetically but also politically. For the most part, the idea of an artwork having any form of autonomy has been flatly rejected for the last 50 years or so. The philosophical tradition which accompanies this question is never about the work itself, but about the human social encounter with the work, and where (if at all) autonomy is located in such an encounter. If you move this encounter to the level of the non-human, then you basically have relationalism.
So this is about materialism, but to critique materialism you have to pick which one; there are many to choose from, of course: dialectical materialism and the ‘blind spot’, the virtual pre-individual ‘process’ materialism, the Marxist materialism of social ties, and so on.
One of the things that I point out in my thesis is that most of these ‘materialisms’ can be compressed into a form of artistic production which relies on a relational, contingent encounter (which in the contemporary European tradition of art theory can actually be traced back to Althusser). This mode of production has completely taken over any dialogue concerning the arts: people construct artworks, so artworks are for cultural participation; and the outcomes fail at autonomy again and again, because materialists aren’t interested in the thing, but are absorbed with the critical, recalcitrant encounter.
Clearly we are now seeing a resurgence of media theorists talking about the materiality of what they’re looking at (Sean Cubitt and Jussi particularly), but my feeling is that the efficacy of understanding non-human materiality immediately dispenses with the autonomy of the work itself. The trans-individual primacy of the encounter is to blame. Any ‘thingly’ power of an individual unit cannot be sustained, because materialist theory is consistently engaged with the contingency of that thing and the fact that it can always be other according to who or what encounters it in some sort of contextual, discursive network (here, I realise, I might conflict with Levi). The import of early computation into European (particularly Zagreb) art practice of the 60s/70s had a more than decisive hand in these matters.
OOO has been instrumental in rethinking the ontological autonomy of discrete units. My own particular way of thinking computational autonomy is through recursion and algorithmic behaviour. Usually this too is treated as a contingent ‘process’ arising from some pre-individual potential, but Bogost’s work has given more than enough ground to discredit this, and to rightfully posit these procedures as actual operational procedures, rather than explaining their autonomy away as a vaguely virtual process of event or flux. The test, then, is to argue that non-computational processes are themselves formalistic procedures, different only by degree.
A recursive thing does not emerge from its environment, because the environment itself is jam-packed full of recursive things. Contingency is never exterior to the thing; the thing produces the contingent relation.
God, I need to get this thesis finished.
There were some great responses to my post on hacking and OOO last week. Tim Richardson responded HERE and Nathan Gale posted his response two days ago HERE. I’m writing this in the middle of a thesis write-up session, so I’ll have to be brief and numbingly inarticulate.
So let’s go through it a bit. Here’s Tim:
“What Jackson seems to be getting at (for my skewed eyes) is that code is always/already contingent, that code is already rhetoric. If so, then (per Aristotle) I would read his two points to indicate that (1) coding is deliberative rhetoric only insofar as it is a structure generated toward a goal (a final cause; it isn’t created ex nihilo) and (2) that our encounter with the output of such code is forensic. Maybe hacking is forensic rhetoric? Taken together, maybe this is the coding version of Lacan’s future anterior?
BTW, when Jackson writes “Gale” after quoting Harman, is that an accident? That would make more sense, since Harman is guilty of the same authorization problem (someone uses this broken hammer, since it’s deliberate) and more reduction or simplification.”
*R.J – it wasn’t really an accident, but I’ll explain that in a bit*
and Gale’s take on it:
“What I am talking about here is a sort of operation that works on potentiality and contingency. Such an operation isn’t interested in predicating unitary objects or reducing them to their parts or qualities, but is instead focused on uncovering (in an ontological sense, rather than an epistemological one) the unknown, subterranean object. In other words, an operation whose final cause is allusion. If such an operation could exist, then this is what I’m suggesting hacking (and maybe object-oriented rhetoric) might be considered. Wouldn’t this also be in agreement with Jackson’s two points about code: 1) that code is already contingent and 2) the output of code can only be experienced and not known?”
“The only problem I see here, though, is that it does bring up questions about language. For like code, isn’t language just as contingent and unknowable in its outcome? And if so, is something like deconstruction already a type of language-hacking? This is where I think it’s important, like Richardson points out, to move beyond thinking of hacking as directly related to code and see it as possible in other material relations.”
I like the tone, but I disagree with the result – and it’s almost as if I don’t want to disagree, because I really like what’s being said; I’m not a debunker. This is an open question, so my disagreement here is preliminary as such, and it’s derived from something I’ve hypothesised while wrestling with the conclusions of my thesis.
First off – I can understand why Gale and Richardson suggest that I think code is always-already contingent; I kind of alluded to that. I am never going to be an expert in rhetoric like Nathan and Tim, so I can’t really talk about that side of things.
Here’s the thing: the execution of code is weird. I’m interested in what Gale and Richardson mean when they posit (from my post) that code or a program is contingent. For something to be contingent (and I’m following Meillassoux on this definition), it has to have the capacity to be other than what it is. Code-as-written and code-as-executed are two very different modes of engagement; one is simply an input, the other a deterministic output. The experience of this, as I have stated, is unknowable to the observer who tracks the results.
In my thesis, I ask what happens between the input and the output. Does nothing happen, in the immanent sense: is it just what it is, without further concealment? Or does something transcendent happen, and if so, where does the concealed execution go? This sort of question is usually fielded by the ‘black box’ treatment, usually via Wiener’s cybernetic theory and Latour’s treatment of black boxes in science. You treat the execution as a fact, in the sense that one need not, or even cannot, ‘know’ how something functions or executes, so long as it does so properly without any tinkering or hacking. But for Latour, black boxes are always openable, and science does it regularly. Right now, the black box which is the speed of light as a constant universal limit is being opened up by numerous scientists studying or debunking the speed of neutrinos: who knows what other black boxes will be forged from this? For Harman, of course, every real object is like a permanent black box – never to be opened up by any other black box, forever more.
The execution of code is weird in this way, because the logics and operations of programs are perfectly understandable as computations even when they have been heavily encapsulated (e.g. the proprietary software inherent to Apple’s iOS products). But the fact that the output of an algorithm is unknowable does not mean it is contingent, that is, that it can be other. I believe that the inputs and outputs are contingent, but the execution of the algorithm is completely necessary.
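A trivial sketch of this split (a hypothetical example of mine, not anything from the posts above): the Collatz procedure is fully deterministic in execution, its input is contingent, and its behaviour is unknowable in advance – whether it even halts for every input is famously an open question.

```python
def collatz_steps(n):
    """Count the steps the Collatz procedure takes to reach 1.
    The execution is entirely necessary; only the input is contingent."""
    steps = 0
    while n != 1:
        # the rule itself admits no variation: halve if even, 3n+1 if odd
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Re-running the same input can never 'be otherwise'...
assert collatz_steps(27) == collatz_steps(27)
# ...yet no one has proven the loop terminates for every possible input.
print(collatz_steps(27))
```

The point is that nothing about the run is random or contingent once the input is fixed; the only way to learn the output is to let the execution happen.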
A concrete example is in order, and it’s one that I think supports these criticisms of contingency. Because I’m now finalising and beginning to polish my thesis, I can briefly discuss some of its conclusions. And although it is in the field of computation and the arts, the outcomes apply to this discussion.
Firstly, I do not hold that contingent processes are the main source for explaining reality. I’m totally with OOO on this. That is to say (and this is my main complaint about process and pre-individualist philosophy), I reject the idea that there is an exterior process which explains away the necessary logics of entities through some sort of flow or contingent randomisation. My wager, if one wants to call it that, is a rigorously deterministic one.
For me, I’m starting to think that the execution of rules lies at the heart of ontology.
So here’s one of the many algorithms studied in Stephen Wolfram’s A New Kind of Science (2002). It’s part of the field of Wolfram-numbered elementary cellular automata (Wolfram loves his persona so much, he named them after himself), and it’s called Rule 30.
The rule is very simple. Each new cell looks at the three cells above it: it becomes black when the left cell differs from the OR of the centre and right cells, that is, when the neighbourhood is 100, 011, 010 or 001. In all other cases the corresponding cell will be white. (The eight outcomes, read off as 00011110, are the binary digits of 30, hence the name.)
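The rule above can be sketched in a few lines of Python (my own toy implementation, not Wolfram’s code):

```python
def rule30_step(row):
    """One step of Rule 30: new cell = left XOR (centre OR right).
    The row grows by one cell on each side per step (background is white/0)."""
    padded = [0, 0] + row + [0, 0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def evolve(steps):
    row = [1]              # single black cell as the initial condition
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

# Print the first few rows as a centred triangle of # (black) and . (white)
rows = evolve(4)
width = len(rows[-1])
for row in rows:
    pad = (width - len(row)) // 2
    print('.' * pad + ''.join('#' if c else '.' for c in row) + '.' * pad)
```

Running this reproduces the opening rows of the familiar Rule 30 triangle; nothing in the code is hidden, yet the only way to get row N is to execute all N steps.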
Can you hack this piece? No, not really: the rule is right there; the program hides nothing to hack as such. But when the rule is executed, by mind or by computer, it is the execution of the rule which hides – and this is the only explanation (which is vaguely OOO-ish) I can provide for the following phenomena.
Now, you can have many other starting conditions than one pixel, but Wolfram keeps it simple at the start of the book and allows for one single cell. And the pattern determined by this procedure (without any further input) refutes any complete human knowledge of it. The gif below shows a few hundred steps;
A thousand steps;
The left side is clearly decidable, and thus human knowledge of it is granted some truth. But the right side of the pattern is completely random, even though the historical structure of the output is produced by a thoroughly deterministic recursive procedure. OK, so what if you execute the same rule with different collections of pixels in the initial condition? Surprisingly, not much actually changes.
The calculated results seem to ‘us’ completely random and undecidably complex, yet there is no unpredictable shade of vitalist ‘life’ hiding away in the execution, no theological import which indexes a higher cause. And what’s even worse than these benchmarks: there seems to be no ‘exterior contingent process’ hidden inside the program, no guiding hand of an absolute diachronic process that explains its operations, no random perturbations that contingently disrupt the necessary procedure offhand (there are only different procedures).
There is no contingent ‘potential to be other’ because it is deterministic.
Other computer scientists, like Peter Gács, have proven that certain elementary cellular automata which can simulate other cellular automata are not hindered by random perturbations. You have to let the rule execute, because human knowledge and the best human capabilities of computation cannot tell you anything more about it than its simple execution. In Heideggerian terms: you have to let the thing thing.
It’s what Wolfram calls ‘intrinsic randomness generation’, and Rule 30 is the smallest pseudo-random generator rule capable of producing it. Every kind of mathematical formula has been thrown at this rule in an effort to produce some compressible, rational explanation, but none has been found. Nothing on the right side repeats; it is structurally random in every way, and cannot be compressed or reduced. Why does such a determined, understandable rule seem so complex? Because the rule itself is what does the work, not the product of human engineering.
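This intrinsic randomness can be sampled directly: the centre column of Rule 30 run from a single black cell is the bit stream Wolfram has used as a pseudo-random source. A minimal sketch (my own self-contained toy, restating the step rule):

```python
def rule30_step(row):
    # Rule 30: new cell = left XOR (centre OR right); row grows one cell per side
    padded = [0, 0] + row + [0, 0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def centre_column_bits(steps):
    """Bits of the centre column of Rule 30 from a single black cell --
    cheap and deterministic to compute, yet statistically random-looking."""
    row, bits = [1], []
    for _ in range(steps):
        bits.append(row[len(row) // 2])   # centre cell of the current row
        row = rule30_step(row)
    return bits

print(centre_column_bits(16))
```

No randomness goes in anywhere; the ‘noise’ that comes out is entirely the work of the rule’s own execution.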