Non-Modernist Formalism: Form instead of Material

So as far as conferences go, this year has been quite weird.

Last year I attended roughly nine or ten events, in about eight or nine places – this year it has been about eight events in three places, cramming four in March and four again in October. Although it is certainly efficient (it left me with an entire summer to write and write), I wouldn’t recommend the workload involved.

One of the side effects is that you forget stuff in-between the events, and that’s a problem: the whole point of this blog is to publish stuff, conversations, etc., in aid of reminding myself – in writing up the thesis and subsequent papers – how and what I was thinking. So with that in mind, this post is simply an update on a talk Graham gave to AIAS, Aarhus, a couple of weeks ago (to which I was the respondent), solidified by talking to Tim, who keynoted at Performing Objects, Falmouth, last week. I can’t cover everything that went on in Aarhus; suffice to say that both Graham’s paper and my response will be published in the Nordic Journal of Aesthetics next year.

One of my worries is that, quite often, ANT and New Materialism are bundled into various streams of OOO literature, and usually employed to denote some sort of change in affairs. And that’s fine in one sense: not every paper needs to provide a prior disclaimer distinguishing these positions for the sake of signalling a change in a disparate discipline such as ecology, architecture or the arts. But often enough there are deep differences in how OOO or a new materialist framework might apply those ideas in practice, and moreover, their close proximity might blur different practices irrevocably. This is not simply a framing issue with regard to terminology.

Nowhere is this change of practice felt more than in the difference between form and material in the visual arts. And this is important, because in my eyes OOO is not a materialism, or a return to the material, but the pluralistic endorsement of substantial and/or actual form. There are clear similarities between OOO realism and the materialist approach, not least their rejection of the transcendent privilege of the ‘human’, and the surprising, incomplete production of non-human things – but there are important differences too. This is especially pertinent in light of Levi’s repeated moves to reorient OOO towards emergent physical objects or units of matter, rather than primordial discrete form.

Nowhere was this clearer than in Graham’s paper, ‘Materialism is not the Solution’, in which he distances OOO from materialism (specifically Bennett’s, Levi’s and Garcia’s) through a number of arguments. ‘Matter’ is rejected for something like ‘formalism’, but clearly this ‘return to form’ (excuse the pun) is entirely different from the crusty methods of formal analysis, historically replaced by a mechanistic understanding of organisation. Even deploying terms like ‘mechanism’ can be largely insufficient for describing the types of materialism espoused in new materialism, so one needs to approach this tentatively.

As many others have noticed, materialism is now an utterly confused term, entirely beholden to any trendy concept going. Matter can be applied to almost anything, if you work at it hard enough: cognition, strings, history, the Real, capitalism, process, dust, the social, time (even ‘deep’ time), field-systems, relations, praxis, context, networks, movement, duration, embodiment and (the most frustrating one, used indiscriminately in the arts) encounter.

But it doesn’t matter what sort of ‘matter’ is deployed in materialism: its deployment is always against form. For Graham, philosophy has historically managed ‘matter’ into two areas: it is either some ultimate ‘stuff’ or physical ‘structure’ into which all derivative forms can be broken down, or else matter lies in the absolute formlessness of primordial emergence, which spits out derivative forms within its endless differentiating movement. Graham calls this second one the ‘amorphous reservoir’ of matter, focusing on Bennett’s indeterminate wholeness or a throbbing, pulsating movement of matter-energy. I prefer to call it an invisible framework.

The invisible framework of relations is one, and yet forms are many. Why aren’t invisible frameworks plural? Because they are shapeless. Anything which purports to hold durability and autonomy fails to approach ‘matter’ and only leans more towards form. Why should an invisibly grounded framework approach a material excess of movement? Because it is invisible? Or because it purports to be the impersonal holistic framework which explains movement and change?

Maybe, then, it might be useful to think of all these materialisms as productive outcomes: that there are so many types buttresses Harman’s realist position. If we are faced with multiple materials to choose from, then OOO starts to get its teeth into rejecting the primacy of any one type of material. And that’s not to reject these terms outright, but to account for why they become an issue. Why are these abstractions used to account for the changes of things, rather than being abstractions that result from things causing change?

And these are the Leibnizian problems which OOO challenges: how does materialism account for entities which aren’t grounded in the formless apeiron from the very start? Why are discrete regions of ontology not left as discrete regions? Why is the invisible structure of materialism simply asserted rather than accounted for? Why should the indeterminate wholeness be formless anyway?

How does a materialism account for a form’s durable, independent basis that is nevertheless not reducible to a physical, natural structure? Why is it incapable of offering a better account of the status of immaterial things, rather than eliminating them outright in favour of the material? In other words, how do we approach excess? In materialism, there is always an undermined (substance) or overmined (correlated) abstract formless excess which gives rise to forms; but in OOO, the abstract excess itself is always formed by the substantial discrete thing.

———-

Moving quickly: how might one begin to approach form, or formal analysis, in artistic practice, without being mired in the historical rejection of the morphological (I’m thinking here of Joseph Kosuth’s Art After Philosophy)? How might contemporary art be understood, not by explaining its production through an impersonal, invisible framework, but through its own form, and its experimentations with that form?

One way of approaching this is to ask what a non-modernist formalism might look like. It might not look like anything. It might even be utterly impossible. And if it were conceivable, how might it adopt certain OOO-ish features? How might it fail in doing so? What would formalism look like in art praxis if the separation of culture/nature were applied: that is to say, with the removal of modernist teleological commitments?

I’ll finish by sketching out some very heuristic points (I’m writing this largely ad hoc, as I’m heading out the door):

  • Non-modernist formalism may not, in any method, claim that forms can be known or self-mastered.
  • Formalism is about the tension and incompleteness between autonomous forms, as well as the forms themselves.
  • It might not privilege a purity of form, nor an allocation of the artist (or critic) as a sole bearer or attainer of that form. Thus form is different to ‘structure’, and should not be tainted or overwritten with structuralism, as it was in the 40s.
  • Formalism shall no longer be mired in the ‘explore the nature of the universe’ twinge. Instead, non-modernist formalism might approach the exploration of things, not nature.
  • If there is no pure form, then form exists as a tension within the existence of the determinate thing. There is no ‘form’, only ‘the forms’.
  • There is no difference between a thing’s inner, essential, ‘actual’ form and its ‘significant form’ (contra Fry)
  • Forms are not platonic, but their constructed effect appears deceptive and will remain so. The work may not depend on the output, or the result, but neither is materialism smuggled in through the back door with unhelpful pangs of ‘process’ – instead forms are just banal configurations unified into a durable unit of cause.
  • Forms are timeless, not in the old-fashioned sense of ‘grace’, or whatever: instead they are timeless because the abstraction arrived at by encountering them produces time, as opposed to time producing forms.
  • Non-modernist formalism might not dispense with artist intentionality, or the social production pertaining to it, but then again, it does not privilege it either: not in the form of speech or performance, nor of politics. All forms are relevant, are built from many different types, and require different approaches in different configurations. If praxis is only reducible to an invisible framework of material, then it remains unclear how change and rupture are attained in the first place.
  • Forms are discrete and non-relational, but aesthetic value is not.
  • There is no predetermined path using forms, only ruptures in various disciplines, and those ruptures are products of the forms themselves.
  • The appearance of forms is never literal, only metaphorical.
  • Following Greenberg, the role of artists is to ‘test’ certain forms and find out what is essential/non-essential, but in the absence of privileging human intentionality, this is a decentralised understanding of ‘testing’ – form-to-form, object-to-object. Like some scientists, artists test things in order to be surprised by forms, not to ‘know’ them.
  • The relationship between beholder and object is a temporary absorption between two or more forms, forming a new actual object or performance.
  • The role of ideas within forms is perhaps to mark the translations and transcriptions between different forms and the decisions and judgements that take place. An artist can trans’form’ something according to a principled ideal, but equally, an inanimate form can trans’form’ the artist and beholder in turn. Ideals, concepts and decisions must be accounted for in this realism/formalism, and not drained to the dregs with matter (to be speculative: how does a realism account for ideas without becoming idealism?).


Bad at Sports: “What is there?”, An Introduction to OOO and art

Starting from today, I’ll be writing a regular blogpost/article/piece on OOO for the Chicago-based arts blog ‘Bad at Sports’.

The first text is already online. It is the first part of an introduction to OOO and its possible relation to art and aesthetics – called ‘What is there?’.

Release the objects

Most of you know the news – but for those unaware, Levi Bryant’s long-awaited book The Democracy of Objects has now been released on the Open Humanities Press website.

You can read it directly on the website HTML stylee – or if you’re like me and abhor reading complex texts on a screen, you can wait for the paper copy or a Kindle-friendly PDF.

Response to Levi

So Levi wrote THIS lengthy but necessary reply to my last post. In summary, Levi noted that Ranciere’s equality of the human spectator is wholly contingent on human criticism, and acknowledged that Ranciere’s aesthetic leanings could easily apply to non-human actors (which I completely agree with). Here’s Levi, who nails it as articulately as always:

“As a consequence, the claim that politics is restricted to the human is itself both a way of begging the question and itself an operation of a particular police order that counts in a particular way. Yet thinkers such as Jane Bennett and Bruno Latour have shown us how nonhumans can be understood to speak. This opens the possibility of a form of politics where nonhumans participate.”

However, Levi disagrees with my second criticism, that Ranciere upholds an aesthetic intention entirely dependent on relations.

“Discussing the way in which spectators might relate to an artwork is entirely different than claiming that art works are nothing but what they are for spectators. Ranciere’s thesis is that art is one way in which the order of the police or the distribution of the sensible can be contested and reconfigured.”

Just to clarify here: I am entirely on the side of the argument that artworks are different from what they are for spectators; however, from what I’ve read of Ranciere thus far, I don’t see him defending this notion whatsoever (clearly I may have misunderstood what I have already read, and I accept there may be conflicting arguments in publications I haven’t read yet).

It’s important here to clarify Levi’s views on the structure of the artwork, which are clearly different from Ranciere’s, because Levi is already ahead of the game in acknowledging that artworks are entities in their own right, irreducible to their relations. This means that Levi is, to some extent, already saying something vastly different to Ranciere, by exploring the way artworks are independent and have an impact on other entities. Because Ranciere views artworks as contingent on human meaning, he can only ever admit the presupposition that the work in question isn’t really an artwork in the discrete sense that Levi affords.

Also, the fact that Ranciere says art is one way of contesting the naturalness of the sensible is not a deal breaker in boxing off aesthetics as a different method of doing politics, or of instilling the rupture of a regime. He’s already emphasised the democratic nature of aesthetic art by noting that ‘anything’ is a potential theme. This can only delay the question of distinguishing artwork from non-artwork, by appealing to an unpredictable rupture in the existing order/situation, or a divorce of things from their prescribed functions, rather than a direct confrontation with a discrete entity. I’m not insisting that the entity in question must be this or that, or exhibited by virtue of this or that (I’m not like Fried in that way!), but I am dissatisfied that no distinction is made between an object and an artwork. Ranciere is not interested in entities; he is interested in emancipated aesthetic experiences separate from the order of labour. He is not interested in the aesthetic depth of objects, but in workers who become aesthetic spectators, who then go on to perturb the given distribution of the sensible to which they are a priori fitted. If there is any aesthetic break given, then it is not something that operates on an object-to-object level, but a contextual affair. If this isn’t made explicit in The Politics of Aesthetics, then it certainly is in The Emancipated Spectator.

Like I say, this isn’t a criticism of Levi, who advocates an entirely different premise – that, presumably, objects can also disrupt regimes of attraction and forge new modes of relation. Still, the problem of distinguishing artwork from non-artwork always exists in the background, not just for Levi, but for OOO and any philosophy of aesthetics in general.

The truth is, any philosopher who chooses to discuss the structure of the artwork post-Duchamp has to account for the crisis that Duchamp opened up. When Duchamp exhibited Bottle Rack (Égouttoir or Porte-bouteilles or Hérisson) in 1914, he knew that he wasn’t exhibiting a discrete artwork, but a simple object. The aesthetic point came from the anxious, rupturing situation the artwork forced open – and it is because of this that the artwork became, not object, but relation. It was the web of relations that Duchamp made explicit, not the object. Like Ranciere, Duchamp also knew that the artwork invited the spectator to contribute to the creative act; the artwork could never be autonomous anymore. The artist does not intend to do anything anymore, but rather creates indeterminacy (and this is done precisely by affording the spectator individual meaning in the first place).


Therefore OOO and politics

There’s been a ‘discussion flurry’ on the topic of OOO and politics. Intra Being has a decent summation of it, riffing on Morton’s posts (HERE) and Levi’s posts (HERE) and (HERE). Can an egalitarian position be articulated in OOO? Can a restoration of democratic values be reworked through the plurality of object behaviour? I, for one, sense that the scope should not be focused entirely on units, but on the situations inherent to OOO that emphasise determinacy over indeterminacy.

The first thing about OOO which strikes me as politically useful is that it is an altogether different type of determinate metaphysics. Whilst Levi’s meditations on Ranciere’s political schema are noteworthy and promising, the outcomes of Ranciere’s manoeuvres (particularly in his very current and influential visual aesthetic criticism: see The Emancipated Spectator and The Politics of Aesthetics) are not only contingent on human behaviour (which Levi notes) but are also contingent on a wholly relational, indeterminate system. This isn’t directed at Levi; it’s an aesthetic gripe with Ranciere, who actively mingles with relational aesthetics. For those who aren’t familiar, relational aesthetics is the term given to a contemporary form of aesthetics which exists (and is critiqued) as an active, relational, ‘hands on’ form of art production. There is no work in the ‘traditional’ form of painting, sculpture, video or performance; the work is entirely contingent on audience participation in the form of communities, actions, happenings and dramatisation.

Ranciere conflates relational aesthetics and relational politics in an attempt not only to democratise the spectator, but also simultaneously to dismantle what he calls the a priori ‘distribution of the sensible’ (i.e. the dominant structure of the police that keeps the sensible in order). As with politics, Ranciere tracks the notion (both historically and critically) that art makes new communities and emancipatory situations: which is all very well and good, but I sincerely detest the idea that the artwork is nothing but relations between communities – a dominant ontological strategy in contemporary aesthetics that OOO manages to healthily dispatch.

Getting back to the politics question, I will rehearse an argument from a previous paper [HERE -PDF] (which is likely to become a book chapter – more on that when it is confirmed). I think the correct road here is to analyse political thinkers who have critiqued the metaphysical form of politics as insidious. Heidegger’s critique of onto-theology is a starting example here, of course, but the thinkers who are particularly good at analysing the stance of metaphysical politics are Gianni Vattimo and the relatively unknown Austrian media theorist Wolfgang Sützl. For both of these thinkers, the operation of a dominating political regime works by negating exterior perspectives, in so far as the metaphysical stance focuses solely on one entity as an explanation for everything else.

In fact, Sützl is noteworthy here, for he suggests that negating exterior perspectives is the main preoccupation behind the Western infatuation with ‘security’ post 9/11, alongside a rigorous update of Heidegger’s critique of technology [he argues this in the essay Languages of Surprise (2009), which is a must read, PDF HERE]. And as a media theorist, he also suggests (following Agamben) that the metaphysical security stance (one that secures presence) plays into the hands of everyday mediation in general: police troops measure and ‘secure’ dissident public demonstrations, governments (such as Mubarak’s) perpetuate ‘states of emergency’ to secure criminals in exceptional circumstances, security software secures malware code and suspicious keywords into a state of ordering. Politics doesn’t have to be a straightforward “take down capitalism at all costs” kind of situation, which the humanities seem to be particularly adept at repeating (that’s not to say it shouldn’t be like that); it could be as simple as questioning the reasons behind the increasing presence-ing of security.

But the key element behind OOO, especially Harman’s variant, is the commitment to a metaphysical stance that does not negate exterior perspectives whatsoever. In fact, it is the total opposite; OOO actually exemplifies exterior perspectives and seeks them out, wherever they may be.

What does this mean? You get something similar to Latour’s compositionist manifesto [PDF] – and in particular, something similar to his ethical quip that one should search for universality without presuming that universality exists in the first place. From my own perspective, I think that if an OOO politics is worthy of the task, it should track the heterogeneous composition of determinate regimes like security, but it should also be aware that some serious work is needed if one wants something like democratic justice. Like anything in this world, composing something tangible takes an enormous amount of work, even if it’s for evil. Evil structures have an essence, along with gooseberries, milk and nail varnish. One could take up Sützl’s challenge by searching for security structures in the world of objects – boards securing windows during a hurricane, chains securing dogs outside shops, the sun securing planets in their orbits. A true OOO politics would find issues such as security to be fairly ubiquitous in ontology.

And crucially, one should stress that political regimes, or even the ‘distribution of the sensible’, take an enormous amount of work to maintain. There is nothing more alien than thinking that a regime just exists as a grounded thing. More and more, the correct approach to this – and I relate this to the composition of artworks as well – is not to compose a political situation by way of indeterminacy, but precisely the opposite: one should compose a determinate unitary entity of execution and submit to it fully; and not only that, it should ward off any indeterminate outlook.



Sandcastles and sand piles: Levi on Entropy

HERE’S an illuminating post by Levi on Entropy, OOO, societies and systems.

“Entropy, rather, is a measure of the order present in a system. A high entropy system is a system in which there is equal probability that an element will be located anywhere in the system. Such a system is characterized by having a high degree of information. This entails that such systems have low message value. By contrast, a low entropy system (what I call an “object”) is a system in which, given the position of any particular element, the position of all the other elements is readily determinable. Such systems have high message value due to the low probability of their elements being organized in this way.”

I’m loving the way Levi continues to effortlessly map entropy onto social and political theory using von Foerster’s work – very promising indeed – and these are the right sort of questions to be asking, rather than the usual forms of ‘locating agency’ somewhere vaguely vague.

My only question would be for Levi to offer a more detailed explanation of a high entropy system, as I gather from the paragraph above that he wants to distinguish low entropy systems ‘as’ objects (thus, ordered systems) from high entropy systems.

An example is needed (and yes, BBC fans, I know this is an example from the popular TV physicist geek of the moment, Brian Cox – but it’s a good example nonetheless).

Take a pile of sand. It can be said that this object has a high degree of entropy. Why? As Levi states, its elements have an equal probability of being located anywhere in its system. In other words, as the infamous ‘arrow of time’ moves from one period to the next, many different types of forces can impact on the pile of sand, and it will retain the same structure. Or to put it another way, the billion or so sand particles in the pile can be rearranged in many different ways and still retain the basic structure: a pile… of sand.

Now take a sandcastle. (Obviously it’s not an ideal example – you wouldn’t ‘find’ a sandcastle in the middle of the desert without presuming someone had made it (unlike, say, a rock or a pebble) – but from the standpoint of the second law of thermodynamics, the point still stands.) Let’s say the sandcastle contains exactly the same number of sand particles (the billion or so); this, as Levi would accurately deduce, is a low entropy system and hence an object. Why? It is ordered and determined into one structure alone. Its organisation allows it to proceed in a consistent pattern over time. However, it is still subject to the same types of forces as the pile of sand, and as such, the ordering structure of the sandcastle will eventually break down into a pile of sand. Low entropy will always, always progress to high entropy.

And bear in mind, this has nothing to do with ‘wanting’ to be disordered, or wanting to be more ‘disorderly’ in a closed system, as Levi correctly points out. This is simply a matter of probability and organisation. High entropy systems are still thoroughly subject to forces and laws; it’s just that when their components and structures are moved around within the system, the overall structure of the system remains the same.
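To make the probability point concrete, here is a minimal Python sketch of the pile/castle contrast – purely my own illustration, not anything from Levi's post, with an arbitrary toy grid and grain count – treating 'entropy' simply as a count, in bits, of the admissible microstates:

```python
# A toy illustration of the pile-versus-castle point above. Entropy here is
# just the logarithm of how many arrangements still count as "the same" thing.
# Grid size and grain count are arbitrary assumptions for illustration.
from math import comb, log2

CELLS = 16 * 16    # hypothetical grid of possible grain positions
GRAINS = 64        # hypothetical number of sand grains

# Pile of sand: any placement of the grains still reads as a pile,
# so every combination of positions is an admissible microstate.
pile_microstates = comb(CELLS, GRAINS)

# Sandcastle: only one exact placement preserves the castle's structure.
castle_microstates = 1

print(f"pile:   {log2(pile_microstates):.0f} bits of entropy")
print(f"castle: {log2(castle_microstates):.0f} bits of entropy")
# Shuffle the grains and the pile remains a pile; almost any shuffle
# destroys the castle. Low entropy is the improbable, ordered case.
```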

So here’s my question: is the pile of sand an object or not?

If it is, then fair enough; maybe Levi could say that it still retains some low degree of entropy, such that it will contain some orderly behaviour. But then he must offer some ontological explanation of what a high entropy system actually is, even if it’s still an object.

If it isn’t, then that’s fair enough as well; but within that there still resides a bigger issue, and it depends on how superficial Levi is expecting entropy to be.

This is why I’m putting forward the question, and I’ve mentioned it before on Levi’s blog: scientists predict that the whole universe will in time become one universal system of high entropy (the heat death of the universe – although there’s nothing particularly ‘hot’ about it). After every single supernova has occurred, after every single star has become a red giant and then a white dwarf, after every single smaller star has become a black dwarf, after every single body of mass has come and gone into existence, the universe will become, like a pile of sand, a high entropy system. It will basically be a single space of protons, all subject to the laws of physics. Simply wave after wave of protons, never forming any mass ever again (this will roughly take the same number of years to occur as there are atoms in the entire universe).

It seems that the second law of thermodynamics does not care for objects, unless a wave of protons is indeed one object.

So how would this affect OOO? From my own standpoint, the harder, near-impossible position would be to dispute entropy altogether (why not? Meillassoux’s “contingency” seems to!!); however, this requires one hell of an explanation – and yes, Wolfram is perhaps the only reference I can rely on at this precise moment. You can’t just dispute the laws of thermodynamics lightly.

Irreducible Determinate Entities: Wolfram, Agency and Complexity

WARNING: this is a very long post, but also the beginnings of my position. So tread carefully.

Yesterday, Levi and Tim posted intriguing accounts of theorising OOO agency, without descending into individualist freedom or determined holism.

So here is the problem, as they both see it politically:

“Arguing for agency within individuals risks reiterating a neo-liberal right-wing stance of sovereign independence within objects.”

However…

“The leftist alternative would be to suggest that individuals are only determined by their circumstances and their cultural relations, (the Symbolic realm speaks through us). There is no individualistic agency as such: what we would position as individuals are merely determined through a relational juxtaposition of accidents.”

The former places agency at the individual level, whilst the latter undermines it in favour of immanent relations. Levi’s answer is to state that objects are individuals, but agency can only arise when an object’s circumstances are strategically modified.

“If the holist is so loathe to grant one inch to individuals, then this is because the stakes are incredibly high. In granting an inch to the individual, the holist is opening the door to a set of policies that authorise ignoring circumstances, relations, and the negative feedback loops operative in the social field.”

Instead of theorising an individual as wholly unable to do anything but reproduce the relations of which it is always-already a constituent, Levi argues that OOO can highlight modifications that perturb and transform those constitutive relations in new directions. And furthermore – as Graham indicates – agency is defined by the choices one is dealt, rather than by the false, impossible, unlimited choices one cannot make. In short, individuals are determinate, but they are not thoroughly determinate; this is where Levi’s use of topology comes into play.

Tim’s answer is that objects are not individuals. One can understand why he would argue for this, considering his work on Hyperobjects, which aren’t particularly individualistic. They are more like contingent, brooding, real intrinsic powers with real effects that permeate through you and act on you. You cannot ‘see’ Hyperobjects any more than you can see ‘society’, but nonetheless there they are – you’ve gotta deal with them. Agency is imperative.

So, as you’d expect, my contribution is to bring computation into play. What can we learn here? Let’s bring Stephen Wolfram’s A New Kind of Science (2002) into this debate (because I haven’t done so for a while), and hopefully this can lead into some preliminary notions that describe my own position. Bear with me, we’ll get there. Most won’t be familiar with it, so I’ll introduce his thesis.

As NKS is an empirical, systematic investigation of Cellular Automata (CA), we must understand what these programs constitute. To start with, they are essentially simple input-output machines running abstract rules. The simplest type of CA is a one-dimensional algorithm with two possible states per cell. On each line, a rule must calculate whether each cell is black or white according to the three cells above it, so there are 8 possible input patterns in one rule, and 256 possible rules in total (this is known as the Wolfram code – all hail our ruler, etc.).

A typical CA starts with one pixel on a line (the simplest input), and executes a given rule to generate the next line and then the line after that, producing a visible output over a given number of generations.
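To make the mechanics concrete, here is a minimal Python sketch of such an elementary CA. The grid width, generation count and wrap-around edges are my own illustrative assumptions rather than anything from NKS; the rule decoding follows the standard Wolfram numbering described above:

```python
# A minimal sketch of a one-dimensional, two-state cellular automaton whose
# cells are determined by the three cells above them (standard Wolfram coding).
def run_ca(rule_number, generations=30, width=61):
    # Decode the rule: bit i of the rule number is the output for the
    # three-cell neighbourhood whose binary value is i (8 patterns in all).
    rule = [(rule_number >> i) & 1 for i in range(8)]

    # Simplest input: a single black cell in the middle of the first line.
    row = [0] * width
    row[width // 2] = 1

    for _ in range(generations):
        print("".join("#" if cell else "." for cell in row))
        # Each new cell looks at the cell above-left, above and above-right
        # (wrapping at the edges) and reads its state off the rule table.
        row = [
            rule[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
            for i in range(width)
        ]

if __name__ == "__main__":
    run_ca(30)   # Rule 30: fully determined, yet the output looks random
```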

Wolfram’s initial reason for looking into CA was to start looking at computational causality itself, rather than going through the typical framework of engineering or mathematics. When one defines fields of study like ‘pan-computationalism’ or ‘The Computational Universe’, they do not necessarily form a basis for the metaphysical statement ‘all is computation’, and do not necessarily foreground an intentional study into proving that the universe operates ‘as a computer does’. Instead, these empirical studies enumerate all possible rules within computation itself and study their output. This has more to do with understanding all possible computational systems than with understanding full-blown ontological reality. Nonetheless, this does not rule out the idea that the output of computational models replicates structures found in complexity. As Ian pointed out in Unit Operations, Wolfram’s studies provide important leverage for the notion that similar principles underscore ontological structures in philosophy and computation. There cannot be one rule for computers and another for ontology.

In this case, Wolfram’s study began with very simple programs, against the intuition that if one were to devise programs that could model possible complexity in nature, one would have to start with complex inputs and complex rules, à la Chaos Theory.

So to take Wolfram’s example of Rule 22: a simple collection of 8 variables like this creates a somewhat nested, fractal output, replicating similar patterns at 30 generations or a million generations. Other rules fizzle out, or fail to get started at all.

However, Wolfram discovered that not all CA behave this way. Rule 30 (which has become rather infamous in computer science) operates erratically in comparison, with no discernible pattern on the whole, nor any total replication. And so it goes on for each discrete time interval. (Notice that the rule format is exactly the same; there is merely one change in the rule table, the output for the fourth neighbourhood from the right.) There is no randomised input process: it starts with just one black pixel.
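Running the sketch above on both rules (same assumptions as before) shows the contrast the original images made visible:

```python
run_ca(22)   # nested, self-similar triangles: the same motif at every scale
run_ca(30)   # no discernible repetition, despite being just as determined
```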

There are three important things to keep in mind here:

1.) All 256 of these rules are thoroughly determined, as are all programs; i.e. Rule 30 does not change its randomness each time – the algorithm calculates the same structure every time. It’s just random. How can something be intrinsically random?

2.) Complexity and randomness do not emerge as such; they are intrinsic to simple and complex rules without any original disturbances or stochastic perturbations. Wolfram calls this the principle of computational equivalence – basically, complexity is not rare. Everything from very simple systems like CA right up to the human brain, heat, the Law, etc. is computationally equivalent, each just as sophisticated as the other, without any teleological purpose. Complexity isn’t special, especially not for conscious activity; it’s a threshold often reached by simple rules.

3.) One cannot reduce CA complexity to these simple rules, even though they are determined. In fact, Wolfram thinks this is impossible. You cannot fit programs to behaviour; you can’t predict what a program will do unless you run it. Simple rules just do their own thing.

NKS is Wolfram taking this discovery and running away with it like a dog with a steak. For him, irreducibility is a new way of actually doing science. Wolfram has claimed that simple programs can model complex structures: universal Turing machines, prime numbers, leaf patterns, snowflake patterns, differential equations commonly used in physics, time, space, the entire universe and – like the physicist that he is – a possible theory of everything. (One of the more bewildering statements Wolfram comes out with is that axiom systems in mathematics and quantum theory only favour structures that are reducible – they don’t tend to look at rules that are undecidable.)

But let’s return to the individualistic vs. determined issue. We don’t want to afford agency to individuals, but we aren’t ready to say that individuals are totally determined by the wholeness of systematic holism.

My own position is to abandon any hope of looking between them. In Rule 30 there is a clear tension between the simple rules and the output of the system, even though one causes the other. In that case, why does the output appear random and complex? My position takes on Wolfram’s principles, but I do not adopt the suggestion that everything can be determined into one simple rule. There are only multiplicities of rules.

The reason is that we are a discrete system looking at the output of another equivalent discrete system encapsulated. An algorithm’s overall actual behaviour is determined, but we are unable to describe it by reasonable laws. It is irreducible.

This is my ‘work-in-progress’ position regarding OOO, and whilst I predominantly think it has consequences for theorising artworks, the same is true for philosophy as well. The overall behaviour of a unitary procedural system is not agency as such, nor does it just ‘appear’ from somewhere else, like outside the algorithm or from an explicit source of randomness. Whilst it is the case that systems operate in this way where televisions, rock molecules and APIs are concerned, I am more interested in contingent operational systems: political systems like the law, and security, for example. But also, ideas and thoughts are algorithmic and executant; so are fictional characters, cups, YouTube and catbeds. Companies have determined rules; artworks do too. Everything just executes, forced to encapsulate each other forevermore.

Indeed, what about programs that take external inputs and perform calculations on them? As Wolfram muses:

“But this is not to say that everything that goes on in our brains has an intrinsic origin. Indeed, as a practical matter what usually seems to happen is that we receive external input that leads to some train of thought which continues for a while, but then dies out until we get more input. And often the actual form of this train of thought is influenced by memory we have developed from inputs in the past–making it not necessarily repeatable even with exactly the same input.

But it seems likely that the individual steps in each train of thought follow quite definite underlying rules. And the crucial point is then that I suspect that the computation performed by applying these rules is often sophisticated enough to be computationally irreducible–with the result that it must intrinsically produce behaviour that seems to us free of obvious laws.” (NKS p753)

That’s the point – agency ‘seems’ free to us: it isn’t, not for a second. We can ‘experience’ freedom perfectly well as encapsulation, even when the underlying rules say otherwise. To do so requires an almost psychological account of what it is to be a determinate program, but I’ll reveal more ideas on that at some point.

What is the difference between absorbing and securing?

A lot of blog activity has developed around Levi’s claims regarding objects, properties and relations. (DC has a great summary here)

One can’t judge till The Democracy of Objects comes out (which is out soon I gather), but he gives us a flavour of what to expect in terms of autopoietic and allopoietic machines.

I’m really excited to see how Bryant fuses Luhmann’s work with Onticology, as Luhmann is a notoriously complex read. I told one of my PhD supervisors that I’m planning to delve into autopoietic theory, to which I got the warning of ‘Don’t go there’ and ‘Prepare to buy some ripped library books’. Luckily I found the curious work of Søren Brier (who has conceived the rather ominous study of ‘cybersemiotics’: a combination (and interpretation) of Luhmann’s theory of social communication with Peirce’s pragmatist semiotics) as a welcome introduction to Luhmann.

As far as I’ve understood Luhmann, the essence of systems lies in social communications that are intrinsic to the form of the system itself. A good example here is ‘The Law’ (save the Lacanian meaning). What exactly is outside the law? Nothing. Nothing is outside the law, according to the law itself. The Law is the Law and so it should be, otherwise the law is undermined. The Law systematically bolsters its own law by making a distinction between lawful and unlawful. If I’ve read Levi correctly, the ‘operational’ aspect of the law perpetually continues the Law as a unit or object: clambering on and on, devouring event after event to continue itself anew. Anything contingent that emerges outside the law or within the law is translated into a distinction between lawful and unlawful. Autopoietic systems operate on the law of binaries.

One could add the structure of security here as well. After all, what does security do other than secure? As Wolfgang Sützl wonderfully pointed out in The Languages of Surprise (2009) (DATA browser 4) – still criminally ignored – the aim of security is to secure itself. As McKenzie Wark put it, security is basically the securing of itself – or the security of security. Again the same distinction applies – you are either a friend or an enemy. You are either a threat or you are secured (or what Sützl calls empty talk or telling silence – security neutralises language).

So when Mark Crosby commented with this:

“It’s not WHOLLY a function of the ‘external events themselves’, but neither is it WHOLLY a function of the ‘internal organization of the object’. It’s a function of BOTH, and the feedback obtained only by their interaction. So, I can see no way to correctly say that they never ‘touch’ each other.”

…and Archive Fire agreed, they were both missing the point. The relationship between the internal communication of an object and the external encroachment (of its own events) is not a reciprocal one. The system can only determine this external event by the very procedures it incorporates ‘to do’ the determining. Or, as Tim Morton puts it succinctly, ‘We cannot un-know what we know’. For my part, it’s worth pointing out that – for me – objects do not emerge as such; they are executant.

Mubarak’s regime was one of security; an endless one. A perpetual, endless regime of securing its people and making threats manifest, until they became a threat. Sützl often remarks that security resembles metaphysics purely in its negation of exterior perspectives. Like an autopoietic machine, it endlessly repeats itself anew, fuelled by manifesting threats. (He never uses autopoietic theory, opting instead for Heidegger’s critique of technology and Vattimo’s critique of metaphysics – actually it’s a bit more complex than that: it’s to do with the independence of metaphysics and the exceptional independence that grounds security.)

As an added point – and hopefully people will see this in an upcoming work of mine – OOO provides a helpful incentive to analyse the structure of the Law, and especially security, as independent, non-relational substance. What does security do aside from securing other objects such as people, guns, bullets, tanks, software systems, macro updaters and property? It does so either by producing them and securing them, or by making external objects manifest (or present) and securing them. Security secures… that’s it. Security is not emergent either; it is an executant system operating on always-already-existing executant systems. There is no flow in security – only intentional firewalls.

Securing objects is a much bigger task than security itself, and crucially it is not as systematic as security would like to think. It is subject to dissidence. I doubt Levi has taken this to where my mind wants it to go, but that’s my perspective.

This is where the real democracy of objects kicks in, I would gather. OOO offers a metaphysics that does not negate exterior perspectives but affirms them.

The only issue I have with Levi’s project, then (and it is more an upcoming gripe between differing positions than an issue), is whether he can proceed with autopoietic machines knowing that every object is intrinsically hard-wired to be self-centred. Security must be analysed as a structure equivalent to rocks and tables, no doubt, but my thoughts are leading to an alternative to the securing of parts and properties.

And this is where the difference between the absorption of objects and the securing of objects emerges. As the terms would denote, there is a strategic difference between the two. One is an aesthetic term, the other isn’t so much, although they can be and are misconstrued. Objects can be absorbed into the operative mechanism of bigger objects; the rather more insidious alternative is for objects to secure objects into their functioning.

On Agency: Response to Roden’s Reply on my Remarks

I’ve only just picked up on one of Roden’s comments over at Enemy Industry (his own blog) about my remarks on algorithmic artworks such as Every Icon.

“I feel I need to respond to Robert’s remark on algorithmic art at greater length. Discreteness – unless I’ve read him wrong – doesn’t seem to entail complete ontological separation between things. It only entails that things have discrete and independent dispositions by virtue of their causal powers – including their computational powers. Thus the fact that a java program being run in a particular System S is disposed to iterate a loop once started without stopping conditions is a fact about S which obtains as long as nothing outside of S tinkers with its computational structure. It doesn’t entail S’s inaccessibility to relations as such.”

That is a fair shout, actually. I’m not even sure myself how far discreteness goes in the ontological separation of things (hence a work in progress). One can easily find agency and discreteness in Latour and Whitehead, for example, but their entities are nothing but relations, in the sense that they are only real if they perturb or translate.

The issue, of course, is how you account for that agency. For an object to act, and furthermore to act on other objects, something needs to be expressed that was not expressed before. Endorsing a total relational system risks deferring agency to the pre-individual, or to the whole-before-the-parts strategy.

The private, withdrawn object manoeuvre deals with this by arguing that a real object is always more, ontologically speaking – a black hole of sheer formal agency, untouched and undisturbed.

This is the crux of the matter. To put it another way, objects are clearly contingent on many factors to ontologically exist, both externally and internally. But if you reduce an object down to ‘just’ those contingent factors, that does not seem to resolve the crux.

Here’s the issue then – the object is one of three things: wholly contingent on its relations (systems), somewhat or partly contingent on them (my position, I think), or not contingent on them whatsoever (Graham’s position, I think). There could be a case for a ‘slight’ undermining of objects – perhaps being ‘partly’ contingent on relations is Graham and Levi’s position; I’m not sure on this point.

Agency is important here, of course. Agency denotes having the choice to act and understanding when you are unable to act. And it’s in this sense that OOO has a huge role to play in discourses and social constructivism. Levi, more than anyone, has challenged the idea that human agency, never mind object agency, is entirely governed by its relations. Badiou, Agamben, Zizek and Ranciere, of course, are pathologically obliged to find agency literally anywhere – outside of mathematical ontology, the Real of absolute negation, acting by ‘pure means’, the spectre-like subject ‘without’ an object. If the capitalised ideological subject is always-already reciprocally governed by relational networks of power, then agency becomes a real problem, discrete or not.

But here’s where I believe algorithmic artworks, and algorithms in general, play a crucial role. An algorithm is a strange thing indeed. You take one or more elements – CPU, graphics accelerator, hard drive, RAM, L2 caches, bus speeds – and an execution can be performed that hopefully produces a calculable end result. Note that the end result is not always achievable. The cake may fail to rise in the oven, or even burn; the revolution may be halted by a lack of people and by the military; the Earth will disintegrate before Every Icon’s algorithm is finished counting.

All of these factors are contingent on the algorithmic procedure, because the algorithmic procedure is contingent on certain factors. The issue then is: can it be said that an algorithm has agency? Does it possess something inherent in its form that is inaccessible to its contingent relations?

My answer is yes, but it is a limited yes: a paradoxical tension between its contingent vicissitudes and its vigorous execution. The next step then is to discover what this vigorous execution actually is, without sounding disconcertingly vague.

One example to think about. If an algorithmic calculation has a bug that causes it to malfunction, and it is modified so as to achieve the end result and execute properly, is it a different algorithm?

Code and the non-ubiquity of emergence

It’s interesting how certain blog-post themes rally around your thoughts and congeal into one blog post of your own. So here’s one on emergence, OOO and computation.

“The argument for a dialectics of software art runs in parallel to the way in which source code is ready for action. A program, like any programme of action, is conceived in advance of its execution, and holds the potential to act even when not executed. Similarly there may be a delay between what is known and what is acted upon, where practice leads to the development of theory (which in turn leads to the development of practice) and so on.”

Geoff Cox, The Dialectics of Software Art, p161.

David Roden has a great post on his worries about emergence in reference to flat ontology. Following Bedau, he defines emergence as:

‘an emergent phenomenon cannot be predicted from its initial conditions (e.g. existence and microdynamics of precursor populations) short of running a simulation with relevantly similar properties’

Roden is right to suggest that, ‘unless emergence can explain how complexes derive their powers from their parts without being reducible to their aggregate behaviour, it is of little value to a flat ontology.’ No quibbles there.

(A couple of OOO caveats before we continue: firstly, in so far as Graham has argued for his position as distinct from Levi’s or Tim’s, Harman’s OOO is not a flat ontology, so we can outright dismiss the idea that sensual objects have the same type of emergence as real objects.)

In reference to DeLanda’s distinction between ‘aggregates’ (whose behaviour is the result of the individual behaviours of parts) and ‘assemblages’ (whose behaviour is not deducible from their parts), Roden notes an almost ‘totalitarian’ suspicion of assemblages, in so far as a flat ontology has to account for the ontological equality of all entities. How can one account for the independence of all entities that command their parts into one discrete unit? He does not mention OOO objects per se, but it is a common thread of enquiry that needs to be addressed, of course.

He has a valid point, but it works both ways. Yes, there is the issue of where this extra ‘real’ emergent property comes from, independent of its parts. But it’s not as simple as criticising the holistic premise. I too intensely dislike the whole-above-the-parts approach: it fails to deal with the problem, and risks positing parts that are completely redundant.

But one of the more interesting suggestions from OOO is an opposite approach – the parts are more than the sum of the whole. One of the more interesting implications that Graham puts forward in Guerrilla Metaphysics is the suggestion that an object is real when it has an inner effect rather than an outer one. Hence Graham can deduce from this the bewildering principle that an object can be real even if it does not affect anything – the parts may be visible, or at least enter into relations with other things, while the upper layer is completely dormant. In short – there could be objects that do not take part in emergence. (Graham’s recent post on the case for OOO carefully explains the reasons for this.)

Computational aesthetics can both add to and learn from this intervention – that emergence is not something to be refuted, but that emergence is indifferent. As Levi puts it:

‘…among the more interesting consequences that follow from this premise is that relations must be forged. Insofar as relations are external to their terms, insofar as objects are independent of their relations, where relations exists we can begin with the assurance that these relations were built or constructed. That is, things must be brought together.[…] However, while humans are often among the agencies that build or construct relations, there are all sorts of other collectives where entities are drawn together without any human involvement whatsoever’

Relations are only relational as a cause. Thus the capacity for relations can be held back (I’m not willing to use the word ‘potential’ here just yet) until an object is capable of converting it into a manifest relation.

This is bolstered by computational code in fact, and it’s easily verifiable, as Geoff Cox argues in his thesis (just published last year). There is a huge, long history of computation and emergence, such that simple rules can produce irreducibly complex behaviours (the Game of Life, etc.), with which most will be familiar.

However not all code is emergent or generative.

Take John F. Simon Jr’s Every Icon (1996), for instance: a simple algorithm that takes a 32 by 32 grid and counts every configuration from the top left to the bottom right (the speculative artwork par excellence, considering it takes approximately 10 trillion years to complete). As I explained in my paper last year, the algorithm does not actually generate anything. There isn’t any ‘potential’, in fact; there is nothing there to generate in order for it to be classed as generative.

In computer lingo, the algorithm is enumerable, which simply means it counts, and does so according to how fast the processor is. Nothing is generated; instead, results come in the form of counting. Nothing ‘new’ actually emerges, just actualised calculations. One could add that this is an aggregate of parts coming together to form a pseudo-whole, but I’m not sure that’s the case. The algorithm continues to count and count until it reaches its final configuration.
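Here is a minimal Python sketch of that enumerable logic – my own reconstruction for illustration, not Simon's actual code – in which the 32 by 32 grid is read as nothing more than a 1024-bit counter:

```python
# Every Icon's grid read as a counter: configuration n is simply the binary
# expansion of n laid out across the 32 x 32 grid. Nothing is generated,
# only enumerated. (Illustrative reconstruction, not the artwork's code.)
GRID = 32

def icon(n):
    """Return the n-th configuration as rows of 0s and 1s, top-left first."""
    bits = [(n >> i) & 1 for i in range(GRID * GRID)]
    return [bits[row * GRID:(row + 1) * GRID] for row in range(GRID)]

# The whole piece is just this loop, run for 2**1024 steps.
n = 0
for _ in range(5):                   # show the first few configurations
    print(icon(n)[0][:8], "...")     # first eight cells of the top row
    n += 1
```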

One of the more impressive features of Cox’s thesis is the foregrounding of code itself as inherent to software art and software in general. For him, there is a tension – a dialectical one – between the construction of code and its execution. Or, in his terms, ‘between what exists and what is possible’ (161). The execution of code can have massive consequences from a relatively small input, or it may not. What Cox finds fascinating is the aesthetic ability of code not to work or execute. Code can reject the system it is part of and in some cases cause havoc within it. In short – like an object, code is an inherent contradiction. Algorithmic code can, in itself, encapsulate relations until it is executed, and even when executed it may not cause havoc. It depends on the intrinsic structure of the code.

In the wider field of aesthetics then, the speculative enquiry of artworks resides in how relations are forged or made present. Where did they come from – how did they emerge? Emergence is not a pre-given.