Why Aren’t We Reading Turing?
It’s a testament to Turing’s fascination with nearly everything that, 76 years since his first major paper, there’s still so much to write about his work. Expect more events and glimpses into these projects: neuro-computational studies into the functional basis of cognition; the ever-forward march towards genuine artificial intelligence; new methods of simulating the complexity of biological forms, nearly 60 years after Turing’s paper on the chemical basis of morphogenesis (indeed, this area of complexity theory is now an established field of major research); the slippery mathematical discoveries which define what can and cannot be computed; and, not forgetting, key historical developments in cryptography, perhaps the field for which Turing is most respected. Moreover, Turing wasn’t just one of the greatest mathematicians of the 20th century, but also one of the greatest creative engineers: someone who wasn’t afraid of putting his ideas into automation, through the negotiation of materials.
So for the positivist sciences and technological engineers, Turing’s curiosity continues to bequeath low-hanging research fruit ready to be picked. But what of the arts and humanities? How have they contributed to the Turing centenary? How are they influenced by Turing?
Indeed, the influence is always deeper than is actually realized. Clearly, in analytic philosophy, where cognitive science is dominant, Turing’s legacy continues to elevate epistemological and ethical questions about the nature of the human mind. Many offer high bids on the future foundation of a computable brain, reducing cognitive understanding to its putative evolved material constituents (see Daniel C. Dennett’s excellent article in The Atlantic); others completely vilify the idea that consciousness could ever be fully engineered; others speculate (naturally).
In the arts, a special Alan Turing centenary committee has been set up to fund art projects which deal with Turing’s legacy in various ways. The “Intuition and Ingenuity” exhibition in particular features work by Ernest Edmonds amongst others. In a more direct link, the mathematician Robert Soare has even argued for the existence of a pseudo-cryptic ‘Turing Renaissance’, establishing a link between classical recursion theory and the classical art of the Italian High Renaissance (which has more than a potent ‘Da Vinci Code’ whiff about it).
Some academics have chosen to focus on the biographical details of Turing’s own short life. In particular, Homay King (and I only found out about this in conversation last week) has offered key insights into Turing’s research methods using Queer Theory: not simply because Turing was openly gay and speculated on thought experiments involving gender, but because Turing wanted to understand the peculiar sociabilities of ambivalent miscommunication. Evan Selinger has written an article on how social scientists are using the “Turing test” to observe and understand deception in social groups. Influenced by these same deceptive qualities, Prajwal Ciryam traces the test’s influence on the study of psychopathy and on predicting when a psychiatric patient should be released. Here, difficult questions emerge, inspired by the difficult questions Turing himself raised, even if these fields wildly diverge from his own.
But here’s a question: could the arts and humanities move further into Turing’s legacy than admirable commentary on specific biographical revelations of his life, or questions which probe cultural ambiguity?
What will actually convince the humanities and the sciences to read Turing’s work attentively? Forgive the disingenuous, patronizing structure of the question; I clearly realize that people have read Turing’s work, especially his most famous and readable article, Computing Machinery and Intelligence, published in 1950. But think back: have you actually read it?
Written off the back of a report done for the National Physical Laboratory in 1948-1949, Turing’s succinct article famously speculated on machinic intelligence, as well as launching the infamous Turing Test (the moniker “Turing Test” is a bit of a misnomer; Turing himself called it an imitation game). And yet it strikes me as absurd that more hasn’t been written on how strange, bizarre, hysterical, paradoxically informal and wonderfully written this text is, as a text, especially since it is a text which is often considered ‘scientific’ and ‘academic’ as opposed to ‘mere literature’.
In a 2008 article which, again, no-one seems to have read (‘Powers of the Facsimile: A Turing Test on Science and Literature’, written for an edited collection on the novelist Richard Powers), Bruno Latour makes this strategic point about the ambivalent qualities of Turing’s 1950 article. For him, Turing’s text contains not clear analytic rigour but “metaphors, tropes, anecdotes, asides and self description”; features of what he calls “matters of concern”. Turing considered the idea of machines having intelligence through the “most bizarre, kitschy, baroque text ever submitted to a scholarly journal”. Latour even jokes that Alan Sokal would no doubt have interpreted it as a hoax. It’s no wonder that Turing’s friend, the logician Robin Gandy, said of the essay that “it was intended not so much as a penetrating contribution to philosophy but as propaganda […] He wrote this paper – unlike his mathematical papers – quickly and with enjoyment. I can remember him reading aloud to me some of the passages – always with a smile, sometimes with a giggle.”
“The problem,” Latour states, “is that people never read.” Well, by ‘reading’ Latour means that the analytic sciences didn’t really pay attention to what Turing is saying. Even worse, some simply ignore the deliberate ambiguity of Turing’s text by reading through it as a transparent window onto whether the mind is computational in nature, or whether machines can actually think (a question which, in the same article and subsequent lectures, Turing considered “too meaningless to deserve discussion”, as well as stating that his aim wasn’t to give a definition of thinking).
This, for Latour, calls for a different type of Turing test: can one distinguish between a Richard Powers novel of realist literature (an exposition of which is given in the first half of the paper) and the scientific realism of Turing? If one is sufficiently fooled, then Latour “will have at least indicated that matters of concern might be best accessible through the joint inventions of literature and science.” And it’s here that the humanities and the arts should claim Turing as a kindred spirit, for Latour’s test terminates any significant difference between “an important scientific text and an important novel” precisely insofar as one cannot distinguish which is “dealing with demonstrations while the other deals with rhetoric, one dealing with proofs while the other deals with stories […]” Turing had the nerve to slice and dice allusive, eloquent and creative persuasion together with an empirical understanding of electrical computation. I doubt the various scientific proponents of the Turing centenary would take this significance very seriously, but I’m convinced artists and literary scholars would.
So this raises a question, which should resonate loud and clear: what happens when we treat Turing even more seriously, not just as a mathematician, engineer or scientist, but also as a cryptic, creative thinker and writer? What happens if we follow his words and not just his conclusions (although the conclusions are certainly worth following, even if not extensively followed by others)?
One element which stands out time and time again is how much Turing was influenced by the reality of unproved conjecture, rather than designating reality through deductive proof. This isn’t a quirky coincidence. Turing’s negative solution to Hilbert’s decision problem in his equally famous 1936 paper On Computable Numbers – the paper which conceived the computer as an abstract mathematical concept – showed that there is an intrinsic paradox at the heart of automation: in general, it is impossible to systematically predict what a given program will do before you execute it. So in Turing’s 1950 article, we shouldn’t be at all surprised when he espouses the following point:
“The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.” Computing Machinery and Intelligence, p.49.
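The undecidability behind this can be made concrete with a toy sketch. The example below is my own illustration, not anything from Turing’s papers: it uses the Collatz iteration, a process for which no known closed form predicts the outcome of a given starting value, so the only way to learn what the process does is to run it; the same flavour of irreducibility that Turing proved in 1936 holds for programs in general.

```python
def collatz_steps(n):
    """Iterate the Collatz map (n -> n/2 if even, 3n+1 if odd) until 1.

    There is no known shortcut that predicts the step count from n:
    in practice you must execute the process to find out, a small
    echo of Turing's 1936 result that a machine's behaviour cannot,
    in general, be decided in advance of running it.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Neighbouring starting points behave wildly differently:
print(collatz_steps(26))  # 10 steps
print(collatz_steps(27))  # 111 steps
```

That two adjacent inputs diverge so sharply is exactly the sort of surprise Turing describes below: the rules are fully deterministic, yet the outcome resists anticipation.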
Another point of semblance is how uninfluenced Turing was by the work of his peers. Max Newman once said of him that it was a “defect of his qualities that he found it hard to use the work of others, preferring to work things out for himself.” I very much doubt it was a defect. To consider it so probably undermines what was so particular and unique to Turing’s creativity. Turing wasn’t persuaded by interpretations of computing machines other than his own personal tinkering, and this leads to some important insights worthy of aesthetic concern.
For instance, in a surprisingly frank passage, Turing lays bare how little knowledge he had when it came to speculating on the machines he himself had constructed.
“A better variant of the objection says that a machine can never “take us by surprise.” This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks. […] These admissions lay me open to lectures on the subject of my vicious ways, but do not throw any doubt on my credibility when I testify to the surprises I experience.” Computing Machinery and Intelligence, p.56-57.
If Turing – the godfather of universal computation – had little knowledge of the output of his own discovery, what does this say about us now? (What does it say about Google’s own recent tribute?) Turing’s famous article lays bare a different interpretation: he certainly didn’t envisage computers as simple communication tools for human whims; rather, he indirectly exposed automatons with agency. And what the hell is an automaton?
His attempt to define what this agency is was brutally cut short. However, it’s clear from his work on morphogenesis (which Turing began at the same time as his theoretical forays into mechanical thought) that some preliminary form of complexity theory was the way forward. It certainly undermines the positivist wing of artificial intelligence research, which invests in a predictable form of knowledge where machine intelligence can only be ‘known in advance’ when it passes for sentience. Do real-world formal language systems already contain the type of surprising ambivalence which Turing noticed? Furthermore, do machines experience this level of ambivalence between themselves? This is a question which the transparency of analytic rigour and scientific provability would dismiss as undisciplined. Turing’s own words suggest otherwise.
In a subsequent lecture, broadcast on BBC Radio in May 1951 and called “Can Digital Computers Think?”, Turing elaborates on this further using some exquisite metaphors and analogies:
“It is not difficult to design machines whose behavior appears quite random to anyone who does not know the details of their construction. Naturally enough the inclusion of this random element, whichever technique is used, does not solve our main problem, how to programme a machine to imitate a brain, or as we might say more briefly, if less accurately, to think. But it gives us some indication of what the process will be like. We must not always expect to know what the computer is going to do. We should be pleased when the machine surprises us, in rather the same way as one is pleased when a pupil does something which he had not been explicitly taught to do.”
We have here, in the pangs of its birth, a self-confessed weird quality to the construction of computing in its preliminary stages. It’s this other, notably absurd quality to the agency of computing which the humanities (and especially the digital arts) could re-appropriate for their own unpredictable ends.
And to some extent this is hardly new territory for practicing artists, who began working with early structural computing systems from the late 1960s and their surprising reality of deterministic rules. I’m sure many artists and programmers, especially those who constantly negotiate their way through complex compilers, stacks, libraries, code, bugs, glitches and information loss, can testify to the same surprises that Turing experienced. For it was Turing who originally suggested that writing and mechanism were almost synonymous.
While I’m sure no aficionado of the Turing centenary would refute this particular conjecture of ours, they wouldn’t admit to a conflict between the rhetorical primacy of Turing’s text and its dominant, subsequent scientific uptake. For while the sciences of predictability continue to claim Turing’s legacy as their own, they leave open the chance for the humanities to account for the unpredictable qualities of that legacy which they ignore. I look forward to the next 100 years.
Robert Jackson is an MPhil/PhD student at Lancaster University.
His thesis concerns algorithmic artworks, art formalism and speculative realist ontologies, looking at digital artworks which operate as configurable units rather than networked systems, and which attain an irreducible autonomy capable of aesthetics, rather than being reduced to tools that communicate in human culture. The principal intervention which bolsters this understanding is the foregrounding of the phenomenon of undecidability: the effect produced when an algorithmic system or machine can never reliably give a ‘yes’ or ‘no’ answer to a viewer, a user, or even to other algorithmic systems.
The thesis title is: Algorithm, Contingency and the Non-Human: The Aesthetics of Undecidability in Computational Art.