Tim’s post just reminded me to write something on Gregory Chaitin. I’ve been writing about this area a fair bit in my thesis, in light of his relationship to Wolfram’s research. Often enough, if you highlight the differences between two thinkers in one specific field, you can transport or unpack those differences in a completely different field. And computer science ought to have a far greater impact on media theory and art production than it currently does.
Now in my eyes, Chaitin is less a mathematician, programmer or scientist than a philosopher who happens to do mathematics, programming and science. How many philosophers would write and publish a book with a bundled copy of their proof coded in LISP? (see The Limits of Mathematics; LISP is a symbolic programming language with deep roots in mathematical logic). I would say the same for Wolfram, but Wolfram probably thinks of himself as unlike anyone who has ever come before, and in any case he is far more of a rationalist than Chaitin. Chaitin is fully on the side of contingency; Wolfram is on the Leibnizian, rationalist side (though I don’t follow Wolfram there, as we will see).
The reason Chaitin is a philosopher is that he aims for a type of metaphysical commitment one rarely sees in pure mathematics. That’s the biggest reason mathematicians have never liked what he is doing, and why his work is controversial: he exposes something that can never fit into the long-term projects of the field, a field which still believes that mathematical truth is achievable by fixing stable axioms and deducing theorems from them.
The easiest way to explain Chaitin’s result is to explain what Gödel and Turing did before him. Gödel demolished Hilbert’s programme (specifically the second problem, on the consistency of arithmetic) by showing that, using formal axiomatic systems and Gödel numbering via the primes, you can encode a paradoxical statement which undermines, from within those systems themselves, the completeness and consistency they were designed to secure.
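To make the numbering trick concrete, here is a toy sketch in Python. It shows the encoding only (integer codes standing in for logical symbols), not Gödel’s full construction, and the function names are mine:

```python
# A toy Gödel numbering: encode a sequence of symbol codes as one
# integer by using each code as the exponent of successive primes.
# Codes must be >= 1 so that decoding knows where the sequence ends.

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine at toy scale)."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(codes):
    """e.g. [1, 3, 2] -> 2**1 * 3**3 * 5**2 = 1350."""
    g = 1
    for p, c in zip(primes(), codes):
        g *= p ** c
    return g

def godel_decode(g):
    """Read the codes back off as prime exponents."""
    codes = []
    for p in primes():
        if g == 1:
            return codes
        e = 0
        while g % p == 0:
            g, e = g // p, e + 1
        codes.append(e)

print(godel_number([1, 3, 2]))   # 1350
print(godel_decode(1350))        # [1, 3, 2]
```

The point of the trick is that statements about formulas become statements about numbers, so a system of arithmetic can be made to talk about itself.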
Alan Turing kind of made it worse, not by making the mathematics any more paradoxical, but by first showing that you can convert formal axiomatic systems of logic into equivalent computational statements: mechanical programs which produce decidable or undecidable outputs, based on recursion and programmable procedure. You can then use these theoretical computers to show that there can never be any general-purpose algorithm which tells you whether a given program will halt or not (it was Kleene, not Turing, who later framed this in terms of halting). Turing’s proof was a reductio ad absurdum: if such a totalising algorithm existed (an impossibility akin to that of totalised sets), you could check all programs in size order and determine whether they obey the rules of the system. Confirming that a particular program halts is easy in principle: run it and wait. The trouble is the other direction, for if it never halts, no amount of waiting will tell you so. A general algorithm that decides this for you cannot exist.
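The shape of that reductio fits in a few lines of Python. This is a sketch built on an impossible assumption: the halts() oracle is hypothetical, and its impossibility is precisely the theorem.

```python
# A sketch of Turing's reductio, assuming a hypothetical halts() oracle.

def halts(program, program_input):
    """Hypothetical oracle: True iff program(program_input) halts."""
    raise NotImplementedError("Turing: no such total algorithm exists")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about a
    program fed its own source."""
    if halts(program, program):
        while True:          # oracle said "halts", so loop forever
            pass
    return "halted"          # oracle said "loops", so halt at once

# Now consider paradox(paradox). If halts(paradox, paradox) returned
# True, paradox(paradox) would loop forever; if it returned False,
# paradox(paradox) would halt. Either answer refutes the oracle,
# so halts() cannot be implemented.
```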
So in discovering computation, Turing also discovered its fundamental limit at the same time. And he made it worse by transforming this metamathematical paradox into a type of physical law within objective mechanical procedure. We all know what happened to the status of the computer after that.
Well, Chaitin (as I understand it) makes this even worse. His two incompleteness theorems go into this area in more depth. He transports the situation again, but this time into randomness and probability theory.
The basic theorem is this: instead of looking at one program, put all possible programs in a box and blindly pick one out. What is the probability that this program will halt? One can express this probability as a definable real number, but Chaitin chooses to express it in an infinite binary, base-2 expansion native to information theory and related to Borel’s paradox: an infinite expansion between 0 and 1, where each independent bit is a fair coin toss at each stage, either 1 or 0. (Strictly, the programs must be self-delimiting, so that this probability is well defined.)
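Sketched in Python, the picture looks like this; the is_complete_program predicate is a placeholder for whichever self-delimiting machine one fixes:

```python
import random

# The picture behind the halting probability, under one big assumption:
# a prefix-free (self-delimiting) machine whose syntax tells us when a
# program is complete. With such a machine, "blindly picking a program"
# is exactly bit-by-bit coin tossing.

def draw_random_program(is_complete_program, max_bits=64):
    bits = ""
    while len(bits) < max_bits:
        bits += random.choice("01")        # one fair coin toss per bit
        if is_complete_program(bits):      # the machine ends the program
            return bits
    return None                            # give up (sketch only)

# Omega is then nothing more exotic than the probability that a
# program drawn this way goes on to halt.
```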
So whatever this real number is, it has a definite value, just like π or the number 45298, and since it is a halting probability written in binary notation, it must be greater than 0 and less than 1. It is known as Chaitin’s constant and, since it depends on infinitely many possibilities, Chaitin simply writes it as Ω.
Ω can be defined as an infinite sum, and each N-bit program that halts contributes precisely 1/2^N to it; roughly speaking, each halting N-bit program adds a 1 to the Nth bit of the binary expansion of Ω (carries aside). Add up the contributions for all programs that halt and you get the specific, precise value of Ω. This makes it sound as if it were a computable real number, but it isn’t. Ω is perfectly well defined and it is a specific number, yet it is impossible to compute in its entirety. In fact, Chaitin’s proof shows that Ω is irreducible and maximally unknowable: it cannot be derived from any finite mathematical theory, no matter what the computer language. Its expansion cannot be condensed into anything simpler than itself. The point being that the first N digits of Ω cannot be computed by any program much shorter than N bits; that is the irreducible part.
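One can even sketch how Ω is approached from below, though never reached. Everything here is relative to an assumed toy machine: runs_within is a hypothetical stand-in for simulating some fixed prefix-free universal machine, and each machine has its own Ω.

```python
from itertools import product

# Approaching Omega from below by dovetailing over programs.
# Prefix-freeness (the Kraft inequality) is what keeps the sum of
# 2**-len(p) over halting programs p at or below 1.

def runs_within(program_bits, steps):
    """Hypothetical: True if the toy machine halts on this program
    within the given number of steps."""
    raise NotImplementedError("supply a toy machine here")

def omega_lower_bound(max_len, max_steps):
    """Credit 2**-n for every n-bit program seen to halt. The result
    is a monotone lower bound on Omega, but no computable function
    can say how close it is: that is the unknowability."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            if runs_within("".join(bits), max_steps):
                total += 2.0 ** -n
    return total
```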
In information terms, Ω is algorithmically random: it has no structure to exploit and no rational way of being reduced to anything shorter than itself. Mathematics, therefore, is at base irreducibly complex and random according to Chaitin. Ω is the place where mathematics has no structure and no reducible reason; in short, some mathematical truths are true for no reason. As Brassier argued (using it to critique Badiou’s subject), it is almost an incoherent noise with no structure at all, out of which only a narrow layer of normal rational mathematics can be found.
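A crude way to feel this in practice is to use an ordinary compressor as a stand-in for Kolmogorov complexity (which is itself uncomputable); this says nothing about Ω directly, only about what irreducibility looks like:

```python
import os
import zlib

# A compressor can certify structure (by producing a short output)
# but can never certify randomness; that asymmetry is the heart of
# the point. zlib here is a crude proxy for true algorithmic complexity.

def compression_ratio(data: bytes) -> float:
    """Compressed length relative to original length."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"01" * 4096        # an obvious pattern, 8192 bytes
random_ish = os.urandom(8192)    # typical random bytes

print(compression_ratio(structured))   # tiny: the pattern is reducible
print(compression_ratio(random_ish))   # near (or above) 1: irreducible
```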
I favour Wolfram’s undecidable approach, which doesn’t get caught up in probability theory, but at a price, for one must adhere to a deterministic pseudo-complexity, albeit a pseudo-complexity which is more pragmatic. Unlike Chaitin, Wolfram thinks that there is one fundamental program to explain all complexity, which I don’t follow at all, for the reason that Turing discovered. However, that leaves us with an either/or situation: rational truth on one side, irreducible complexity on the other. Against this, one can simply state the OOO wager (and this would be my wager) that the recursive execution of rules from axiomatic systems is irreducible in itself, with no undermining or overmining. And this, somewhat unexpectedly, is where creative art has something important to add.