Consciousness, Chinese Rooms and the emergent property of Treyr

When I started researching for Monastery, I found myself dipping back into areas I hadn’t thought about since sixth form.  In the end I chose English Literature as my university course but, for a long time in my teens, I was intent on studying Artificial Intelligence (AI).  I was one of those indecisive A-Level students who had selected both Maths and English.  One of my teachers suggested that I look into philosophy. Taking his advice, I read Russell’s Problems of Philosophy.  It was short (which is good for a teenager) and I thought I could understand the questions being posed. Then I started reading History of Western Philosophy and didn’t last two chapters. In hindsight, I know I didn’t really ‘get’ philosophy.  I like to think I’m catching up now, mind.

I’m not worried that I never went down the path to a philosophy degree, for I wasn’t ready and I would have hated it.  There’s still a little bit of regret, though, for not following the AI track.  I chose English because I repeatedly got good marks in it. After all, we can’t fight the principle of the path of least resistance all the time. Plus, the real reason was that I’d always wanted to be a writer.

Researching AI for the book meant I inevitably came across the philosopher John Searle’s thought experiment: ‘The Chinese Room’.  Here’s a link to it, so I won’t explain it in detail but, in summary, Searle imagines a man in a sealed room being handed slips of paper covered in Chinese characters and, by following a rule book or some other system of instructions, assembling Chinese characters in reply, then pushing them back out.  From outside the room, it looks like he understands Chinese.  From the inside, we know that he’s just following instructions.
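
(A quick aside for the programmers among you: the ‘just following instructions’ point is easy to see in code.  Below is a toy sketch of my own, not anything from Searle himself, in which the whole ‘room’ is just a lookup table and the operator applies it mechanically.  Every symbol and rule in it is invented purely for the illustration.)

```python
# A toy 'Chinese Room': the operator follows a rule book (a lookup table)
# and produces plausible-looking replies without understanding a word.
# The symbols and rules here are invented purely for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # rule: if this squiggle arrives, hand back that squiggle
    "今天天气怎么样？": "天气很好。",
}

def operator(slip_of_paper: str) -> str:
    """Mechanically match the incoming symbols against the rule book."""
    return RULE_BOOK.get(slip_of_paper, "对不起，我不明白。")  # a default 'I don't understand' reply

if __name__ == "__main__":
    # From outside the room the reply looks fluent; inside, it is pure syntax.
    print(operator("你好吗？"))
```

From the outside the replies look fluent; on the inside there is nothing but symbol-shuffling, which is exactly Searle’s worry.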

This thought experiment was Searle’s response to a strand of AI that believes it is possible to produce a machine that can think and understand like a human being (and could be tested using the ‘Turing Test’).  Searle calls this ‘Strong AI’, as opposed to ‘Weak AI’, which is concerned with developing useful tools for pattern recognition and suchlike, and with using a computational approach to better understand the human mind.  His key point regarding the impossibility of Strong AI is that computers deal only in ‘syntax’ (they manipulate symbols according to their ‘code’) but human understanding is all about ‘semantics’ (we grasp the ‘meaning’).

Of course, there have been endless responses to the Chinese Room, and counter-responses, and counter-counter-responses.  However, perhaps because it’s quite an elegant idea, or perhaps because of complex reasons beyond our current understanding of academic debate, it still keeps cropping up.  Having read widely on the Chinese Room, I think the thought experiment itself is flawed and not reflective of how an AI might actually work.  Even so, I do believe that an AI would never have consciousness in the sense that human beings understand it.  Simply put, it would not be human, so it couldn’t.  You could respond (and some have) that this is the point of the ‘A’ in AI, but that just takes us into Weak AI territory.

For Monastery, I was in a quandary.  I knew I wanted this to be ‘hard sci-fi’, in the sense that it doesn’t break any significant rules of physics.  So humanity has hit the limits of the light-engines, can’t escape the solar system, hasn’t rigged up a wormhole and hasn’t developed time travel.  But it has developed (or allowed to evolve?) these superior machine intelligences. And these machine intelligences need to be exciting, to send a little shiver down your spine, like a HAL or a Wintermute.

Since reading all these cognitive science, AI and philosophy articles and books, I’ve come to one conclusion: each author will vehemently argue a position based on what they learned in their mid-twenties and their mid-twenties alone.  So I’m going to adopt the same rule.  I was studying systems thinking at that time.  So, from a systems thinking perspective (which, incidentally, is the most coherent response to the Chinese Room), consciousness is nothing more than an emergent property of the mind system – just one of those things that occurs when you have an appropriate number of neurons, in an appropriate configuration, with appropriate properties.  (You can see where I got the name from.)
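
(If you want a concrete picture of what ‘emergent property’ means here, the stock example, which isn’t from the book or from my reading list but is simply the standard one, is Conway’s Game of Life.  Each cell obeys a single trivial local rule, yet a ‘glider’ that travels across the grid appears at the level of the whole system.  A minimal sketch:)

```python
# The stock illustration of emergence: Conway's Game of Life.
# Each cell obeys one trivial local rule, yet a "glider" (a shape that
# travels diagonally across the grid) emerges at the level of the system.

from collections import Counter

def step(live: set) -> set:
    """Apply the local birth/survival rule to produce the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has exactly 3 live neighbours,
    # or 2 live neighbours and is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic glider
    for generation in range(8):
        print(generation, sorted(glider))
        glider = step(glider)
```

No individual cell ‘knows’ anything about moving; the drifting glider only exists when you look at the whole configuration at once, which is roughly what the systems thinking view says about neurons and consciousness.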

So this got me thinking: what emergent properties might a Configuration have?  And that’s where the concept of treyr came in. It’s a difficult concept to explain, in the same way that explaining human consciousness to a machine would be difficult, but we can make a broad approximation.  In essence, treyr is the phenomenon whereby a Configuration experiences multiple near futures (and even a few distant futures) all at the same time, as a result of its continually cascading models of reality. I like to think this keeps the AI element of the book interesting, while not upsetting Searle too much (and he’s a VERY grumpy man, so I was relieved about that).
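
(One very loose way to read ‘continually cascading models of reality’ in code, and this is only my analogy rather than anything canonical about Configurations, is a model that branches out from the present state into every candidate future at once instead of committing to a single one.  The state, transition rule and horizons below are all invented for the illustration.)

```python
# A loose analogy for treyr (my own reading, nothing canonical): from one
# present state, a toy model branches into every candidate future at once
# and "holds" them all, rather than committing to a single trajectory.
# The state, transition rule and horizons are invented for illustration.

from itertools import product

def cascade(state: float, horizon: int, branches=(-1.0, 0.0, 1.0)):
    """Enumerate every trajectory of length `horizon` under a toy drift model."""
    futures = []
    for moves in product(branches, repeat=horizon):
        trajectory, s = [], state
        for m in moves:
            s = s + m              # toy dynamics: the state drifts by the chosen amount
            trajectory.append(s)
        futures.append(trajectory)
    return futures

if __name__ == "__main__":
    near = cascade(state=0.0, horizon=2)     # the many near futures...
    distant = cascade(state=0.0, horizon=5)  # ...and a far larger set of distant ones
    print(len(near), "near futures and", len(distant), "distant futures, all held at once")
```

A real Configuration would be doing something unimaginably richer, of course, but the shape of the idea, holding the whole fan of futures at once, is the part I wanted to capture.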

I’m still reading about AI, Philosophy of Mind and emergentism.  Keep checking back – I’ve got a few more of these posts still to write, I think …
