Interpreting the Undecidable: Turing’s Weird Machines in an Ecology of Distrust.

Pleased to announce two things:

1.) The first major refresh of this blog’s appearance since its inception (circa 2010). Long overdue, of course, and it now works on mobiles – because internet. It’s only a quick theme refresh, as I don’t have time to modify the menus to my liking.

2.) I’ll be presenting at ‘Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles’ in the Gala Theatre, Durham, 26-27 February 2015. A wonderful event to be sure, and great to catch up with old and new friends. You can book HERE, but from what I can see nearly all the tickets have gone.

See below for the abstract blurb. This paper will, perhaps, be the clearest exposition thus far of my dissertation’s position with regard to the ontological workings of computation and algorithms.

“In the physical world, engineering is based on a solid understanding of unsolvable problems: problems rendered unsolvable by fundamental laws, which no amount of ingenuity, hard work or funding can overcome. However, academic studies of digital culture, code, sociality, media infrastructure, platform eco-systems and machine learning have been conspicuously slow to acknowledge the role of computer science and its own fundamental unsolvable problems. The best known, Alan Turing’s so-called “Halting Problem” (Turing 1936; 1954) and Rice’s theorem (Rice 1953; Davis 1982), are as old as computer science itself. Together they state that there is no general and effective algorithm to decide whether a given program terminates, or whether it has any other non-trivial property.
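The impossibility claimed here is Turing’s diagonal argument, which can be sketched in a few lines of Python. The code below is my own illustration, not something from the paper: given any candidate halting decider, one can construct a “spoiler” program that does the opposite of whatever the decider predicts about it, so the prediction is always wrong.

```python
def naive_never_halts(program):
    """A candidate halting decider (purely illustrative): it simply
    guesses that no program ever halts. Any candidate decider, however
    clever, is defeated by the same construction below."""
    return False

def make_spoiler(halts):
    """Turing's diagonal construction: build a program that does the
    opposite of whatever the decider predicts about it."""
    def spoiler():
        if halts(spoiler):
            while True:       # predicted to halt -> loop forever
                pass
        return "halted"       # predicted to loop -> halt immediately
    return spoiler

spoiler = make_spoiler(naive_never_halts)
print(naive_never_halts(spoiler))  # False: decider predicts "loops forever"
print(spoiler())                   # "halted": the prediction is wrong
```

Had the candidate decider answered `True` instead, the spoiler would loop forever, again contradicting it; since this works against every possible decider, no general halting algorithm can exist.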

Public perception and Silicon Valley-led liberalism still associate computation and big data with some degree of utopian magic that can significantly improve any human endeavour when applied with sufficient enthusiasm and gusto: yet symbolic manipulations are subject to natural limitations which are just as significant and salient as physical ones. To what extent do these limitations matter in light of security, machine learning and the quantification of data, and what impact do they have on cognition, knowledge and criticality?

This paper will argue that such problems have fundamental yet insightful consequences for research investigating the relationships underscoring cognition and computation: none more so than the work of N. Katherine Hayles. What such unsolvable problems add to the literature is no longer an examination of the transformative hybrid capabilities of computation and cognition, but an issue of exploiting their trust.

The fulcrum upon which this issue pivots concerns trusting code, or more specifically trusting the inputs into code, and how life with computation and internet infrastructures has become a precarious struggle: defending against exploitable bugs, securing hidden information, and protecting private data within “leaky” programs (Chun 2013). Against the usual view of information as inert data waiting to be interpreted by algorithmic analysis and untapped modes of cognitive knowledge, there is the much closer realisation that information is already deceptive and ungovernable, containing latent functionality which, given the appropriate input, might be hostile. This is what exploit researchers term a weird machine (Flake 2011).

This inverts the typical view of code operating on input data: it instead resembles input data that operates on code. As such, it brings into view the unsolvable limitations of interpretation in-between cognition and code, irrespective of their similarities, differences, entanglements and properties.”
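The weird-machine inversion can be made concrete with a toy example. The Python sketch below is purely illustrative and mine, not the paper’s: a naive parser copies input bytes into a fixed-size buffer without a bounds check, so input longer than the buffer overwrites an adjacent “mode” cell. Benign input is just data; crafted input operates on the program’s state.

```python
def parse(data):
    """A toy record parser with a classic missing bounds check
    (an illustrative assumption, not a real exploit). It copies input
    bytes into an 8-cell buffer that sits directly before a 'mode' flag
    in the same memory list."""
    memory = [0] * 8 + ["normal"]   # 8-cell buffer, then the mode flag
    for i, byte in enumerate(data):
        memory[i] = byte            # no bounds check: bytes 8+ spill over
    return memory[8]                # the mode the program now runs in

print(parse(b"hello"))       # "normal": benign input fits the buffer
print(parse(b"AAAAAAAA!"))   # 33: the ninth byte overwrote the mode flag
```

Here the “latent functionality” is the spillover itself: the parser was written to read data, but its memory layout gives sufficiently long input the power to reprogram how the rest of the code behaves, which is the sense in which input data operates on code.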
