I’ve mentioned determinacy a couple of times on this blog, especially in reference to computation, Cellular Automata and OOO. I think it’s worth explicating a couple of points in relation to determinacy. Indeed, from my own perspective it’s clear that the issue of transformation and topology will iron out some key differences, certainly between Graham and Levi. Tim and Ian may have offered their opinions on this at some point, and probably have, either directly or indirectly. (I’ve probably missed Tim’s – he posts about 5 points of intellectual direction a night and it’s hard to keep up.)
Most of you who have read my posts and subsequent work know that I have no time for an ontology of stochasticism. Unpredictability is an epistemological problem that fails to grasp the discrete underlying executant rules. The infinity of algorithmic behaviour, intrinsic to itself, is, for me, all there is. The rules are thoroughly determined and fixed, and any random behaviour occurs as an encapsulated phenomenon, but an important one nonetheless (otherwise the aesthetics of algorithmic art would not be possible). This is not new in computer science, nor complexity theory, nor even the cognitive sciences (free will is an illusion, etc.).
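To make the point concrete: the sketch below is my own illustrative example (the generator and its constants are assumptions, nothing from the argument depends on them). A pseudo-random stream looks stochastic, but every value is fixed in advance by the update rule and the seed – randomness encapsulated inside thoroughly determined rules.

```python
# Illustrative sketch: a linear congruential generator (LCG).
# The output "looks" random, but the rule and the seed determine
# every value; re-running with the same seed reproduces the stream.

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random values from a classic LCG."""
    out = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m  # the entire "randomness" is this fixed rule
        out.append(x)
    return out

# Same rule, same seed: identical behaviour, every time.
assert lcg(42, 5) == lcg(42, 5)
```

Change the seed and you get a different stream, but never an undetermined one.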
As Ladyman and Ross mention in Every Thing Must Go (where they herald it as an example of ‘real patterns’), John Conway’s Game of Life is an example of a deterministic system that sheds light on simulating emergence and evolutionary patterns and processes. (My own criticism of L&R is that they place far too much emphasis on the output-patterns directly before them, rather than on the execution that gives birth to them – in short, they place reality on the side of inputs and outputs, and not on the intrinsic execution itself.)
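For readers who haven’t played with it, the Life rules fit in a few lines. This is a minimal sketch under my own representation choice (a set of live cell coordinates): the rules are completely fixed, yet executing them yields the emergent patterns L&R lean on, such as the period-2 “blinker”.

```python
# Minimal sketch of Conway's Game of Life.
# A deterministic system: the same set of live cells always
# produces the same successor, however complex the patterns look.
from collections import Counter

def step(live):
    """Apply the Life rules once to a set of live (x, y) cells."""
    # Count the live neighbours of every cell touching a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next tick iff it has 3 live neighbours,
    # or 2 live neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The "blinker": three cells in a row, oscillating with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

The oscillation is a pattern that exists nowhere except in the execution of the rules – which is exactly where I’d locate the reality L&R attribute to the output-patterns.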
Hence, my view is anti-topological, and marks a distance from Levi’s conception of what an object is (although I dare say we have similar views on contingency). The bigger problem is how you account for agency. Levi holds that objects have a transformative power inherent in their virtual proper being, which can be teased out through indirect perturbations, resulting in novel local manifestations.
Politically speaking, Levi has an important operational observation: if you push object x with perturbation y, you get a local manifestation as output, emergent from the object itself. But I think ontology is more determined than this. Here’s Levi:
“we should not assume that entities only produce new local manifestations as a result of perturbations from the outside. We should hold open the possibility that there are many entities that can undergo all sorts of transformations at the level of local manifestations – all the way up to and including becoming new entities – through the result of their own internal dynamisms sans external stimuli.”
The OOO part of Levi’s topology is the Luhmann-inspired insight that unitary structures and systems only interpret perturbations through the mechanisms inherent to their functioning. Someone who is anxious about losing their hair will inevitably be perturbed by contingent interactions with people who have a full head of hair. Presumably a financial corporation would interpret contingent perturbations only where money is lost, not human suffering, localist issues or ecological catastrophes.
And here’s my key difference: I hold no such concept of topological transformation, but determinate parameters. It’s actually anti-Meillassouxian more than anything else. Algorithms can be contingent, but not always; more specifically, algorithms can be contingent on other algorithms. There is no ‘in-itself contingency’; rather, contingency is an actualised forceful procedure that either halts algorithms and/or changes their inputs and rules. This is the crux: if you perturb a procedure (say, by shutting a computer down), does that make the procedure non-determinable?
Just because you can halt the algorithm’s functioning, this does not make it random. Rules are rigid because they are rules. If you want to change the behaviour, you change the rules or the starting inputs. If you have changed the rules, then you have constructed a brand new executant procedure, which is always static, fixed and intrinsic to that set of procedures. If you change the starting conditions, then you have not changed the executant algorithm as such; you have merely changed its starting conditions.
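The rules-versus-starting-conditions distinction can be shown with a one-dimensional cellular automaton. This is my own illustrative sketch (the rule numbering follows Wolfram’s convention for elementary CAs; the wrapped grid is an assumption for brevity). Changing the rule number constructs a different executant procedure; changing the initial row leaves the procedure intact; and in either case the entire run is fixed by the pair (rule, initial row).

```python
# Illustrative sketch: a 1-D elementary cellular automaton on a
# wrapped row of cells. Rule numbers follow Wolfram's convention:
# bit (a*4 + b*2 + c) of `rule` gives the successor of neighbourhood (a, b, c).

def run(rule, cells, steps):
    """Evolve a tuple of 0/1 cells for `steps` ticks under `rule` (0-255)."""
    table = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    history = [cells]
    for _ in range(steps):
        n = len(cells)
        cells = tuple(table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                      for i in range(n))
        history.append(cells)
    return history

start = (0, 0, 0, 1, 0, 0, 0)
# Same rule, same starting conditions: the run is reproduced exactly.
assert run(110, start, 5) == run(110, start, 5)
# A different rule number is a brand new executant procedure:
assert run(110, start, 5) != run(30, start, 5)
```

Halting the loop early, or pulling the plug, interrupts the execution, but it does nothing to the table of rules: restart it and the same run unfolds.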