AKRI

Intelligence:
Steve Grand "Machines Like Us"

The following material is derived from the keynote speech given by Steve Grand OBE at the AKRI's Biennial Seminar on 17 October 2002. Steve is the Director of Cyberlife Research Ltd and has developed a robot orangutan called Lucy. For more information about Steve and Lucy visit his website at http://www.cyberlife-research.com

This talk is available free to download as an MP3 audio file (18 MB).


Transcript - Transcribed by Kate Rickard.

Good morning, nice to meet you all.

As I understand it, the AI component of AKRI comes out of knowledge-based systems and relatively traditional machine intelligence stuff, and John asked me to talk about how we might be able to achieve human-like intelligence; not necessarily human-level intelligence, but human-like intelligence. And since I'm supposed to be setting the keynote, I guess what I'm supposed to do is stand here with my hand on my heart like George Bush and say that knowledge-based systems are at the forefront of creating human-like intelligence. Not that George Bush knows much about human-like intelligence.

Unfortunately, I don't believe a word of it, and the kind of work I do is ideologically more or less diametrically opposed to traditional AI and conventional machine intelligence approaches. So I couldn't quite figure out what I should do about that, and I've decided fundamentally to just stand up here and slag off traditional AI without a moment's thought, and then when my talk's finished I'll just leg it before anyone comes up and lynches me, alright? Having said that, though, I'd better start by drawing your attention to the distinction between soft AI and hard AI. I'm interested in hard AI, which means I'm interested in making machines that are genuinely intelligent. Any of you who are involved in AI are probably more interested in soft AI, which doesn't necessarily have anything to do with intelligence at all. That's normally the definition of soft AI: enabling machines to perform tasks which, when performed by human beings, require intelligence. Now, the reason that doesn't have much to do with intelligence is the same reason we don't call the telephone artificial shouting. We invented the telephone to save us having to shout at each other, but we don't call telecommunications artificial shouting because telephones don't shout.

Now, I'm not knocking soft AI. It's wonderful. It takes a lot of my intelligence to add up my shopping bill, so I'm very grateful for the invention of the pocket calculator to do it for me. You may not remember, but some years ago calculating machines were regarded as AI; it's no accident that we used to call computers giant electronic brains, or that most of the metaphors in computing come from psychology. The relationship between doing things like calculations and what the brain does was always supposed to be quite closely connected. I like spell checkers. It takes intelligence for me to check my spellings, so I love the fact that someone's gone out and invented a spell checker to do the work for me. Probably the epitome of this kind of soft AI is the chess-playing computer, which can save me all the tedious bother of having to play chess. It can play it for me and I don't have to worry about it, because I hate chess. But there's a bit of a danger involved, because as well as taking intelligence for me to check my spellings and to play chess, it takes intelligence for me to stand on one leg. And I know it takes intelligence, because if I stop thinking about it I fall over. But a telegraph pole can stand on one leg much better than me, and it's not normally held that telegraph poles are more intelligent than I am. So it doesn't follow that in soft AI, solving a problem humans require intelligence for necessarily involves intelligence itself. This is my kind of let-out clause, so I don't offend anybody when I start slagging off soft AI. Really, I think you should probably call soft AI "intelligence-saving devices", by analogy with labour-saving devices.

Anyway, what went wrong with AI? Well, it's all this guy's fault, and I presume most of you recognise Alan Turing. He was probably the one person qualified to expect that by now nobody would be making me stand up here and talk about how we were going to achieve human-like intelligence, because we would have done it. Because 52 years ago he made this prediction: "By the end of the century one will be able to speak of machines thinking without expecting to be contradicted." Well, the end of the century came and went a couple of years ago, and coincidentally the time ran out for Turing's prediction at the exact second when computers were destined to tell us how stupid they were, on account of the fact that they couldn't add up the date properly. So it all went horribly wrong, and around the turn of the century I started thinking about why. I considered it carefully and came up with the following considered opinion: "It's because we AI researchers are all narrow-minded, domineering, chess-playing computer nerds."

Problem solved. That isn't really meant to be quite the insult it sounds; it's really a kind of mnemonic for some of the primary reasons why machine intelligence hasn't succeeded very well over the past fifty years. Narrow-mindedness, for example, really refers to Turing's original conception of a thinking machine, which is what he intended with his paper on computable numbers, even though really it was a pure mathematics paper. It was a machine that does one thing after another; a very serial, very narrow kind of machine that takes one step after another. And that's what we appear to be like as human beings: we seem to be reasoning, taking one step after another. But the reasons for that aren't quite as sophisticated as they sound. The main reason we appear to be a serial machine is that we have a narrow-bandwidth body. You can't look in two directions at once. Even when I'm drunk I find I can't walk in two directions at once, so I have to make one decision after another. But inside me are a hundred thousand million neurons, each of which is acting independently of the others, none of which knows of the existence of the others or what part it plays in the whole scheme. And it's only really the fact that they all have to agree to make a decision with this one body that makes me appear to be a coherent, linear thing.

So that was a kind of fundamental mistake in many ways. The domineering mistake really comes out of the fact that AI grew up just after the war, in a very top-down, hierarchical, command-and-control society, and theoretically AI followed suit. There are various problems with top-down approaches and command-and-control structures. Probably the most apparent one is that the buck has to stop somewhere. When you write computer code you generally think in terms of main routines calling subroutines, which call subordinate routines, and so on, from the top down. There's far too much machismo amongst computer scientists, which means the decision-making process has to ripple back up, and there's always got to be some kind of command structure at the top making the central decisions that tell all these subroutines what to do. The problem is that the buck has to stop somewhere, and although it's supposed to stop at the top level in the code, in practice where it usually stops is with the programmer. Which means that an awful lot of the stuff that counts as AI is actually the embedded intelligence of a human being rather than the autonomous intelligence of the software.

Chess! I don't know what it is about chess; that was Turing's fault as well. He was the guy who first proposed chess as a test problem for AI, and fundamentally he chose it because he needed a game that the computer could play with a teletype as its output and punch cards as its input. That was the main reason he picked it, but for 40 of the last 50 years chess has been the classic problem for AI, and it doesn't have much to do with intelligence at all.

Here's a thought experiment: take one super, hyper-intelligent chess computer like Deep Blue, say, and take one domestic rabbit. This is what happens when you ask a rabbit to play chess: they're not very good at it. The Queen's opening gambit gets them every time. So on that basis, chess computers are far more intelligent than rabbits. But if you swap the experiment around and try throwing them both into a bucket of water, it strikes me that the one who's really the most intelligent is the one who figures out how not to drown. Now, I've tried this experiment loads of times and chess computers just don't get it. It's cost me a fortune.

Now, I'm not sure what this proves. I guess it proves a few things. One is that intelligence is grounded in survival; that's why we're intelligent, and it's very hard to make things intelligent unless you keep that grounding in survival. If you take that away, then what's the purpose? Why would you want to be intelligent? Why should a machine want to translate French into Chinese? If you haven't got that motivation underneath it, it's very hard to make real intelligence work.

Another thing to draw from it, I guess, is that intelligence is about general-purposeness. It's always possible to make a very specialised machine that can do a task as well as or better than a human being, but that doesn't necessarily make the machine intelligent. What makes us intelligent is the fact that we can do more or less everything equally badly. We can figure out how to get out of a bucket of water; so can a rabbit. So it seems to me that really rabbits are far more intelligent than chess-playing computers, and chess was a bit of a dead end in many respects.

This is the one that really gets me: the computer. Somehow or other, the digital computer, which was primarily intended to be a machine that could think, seems to have been the biggest handicap to AI for the past 50 years. As proof of this, here are two lab robots. They're typical lab robots: three-wheeled machines with simple sensors. They can do more or less the same things; they can navigate around mazes without bumping into stuff, and they can find their way back to their charging stations and recharge themselves when the batteries run low. Very clever stuff, and I'm not knocking it; it's impressive. It's a very difficult set of problems. But there are two slight differences between these robots. Dopey over there was built in 1999 and contains a microprocessor with maybe two million transistors on board. Amy over here was built in 1949 and is controlled by two valves. She was built by Grey Walter, one of the earliest cyberneticians. So it strikes me that computer power has risen by maybe a factor of a million or more over the last 50 years, and the state of the art in robotics has not moved on at all, frankly. That's really odd. I mean, why? It beats me; I don't have a good answer for that. In many ways I think it's rather like all these people who go out and buy the best hiking gear and sticks and stuff and then never stray more than 100 yards from their Range Rover. Having all that extra power has made us lazy, and we don't think about the problems properly. Grey Walter was doing far better in the '40s.

There was also this correlation between the computer and the brain, which is very misleading. The computer, the "definite method", the algorithm that Turing first proposed, was supposed to be a replication of the process of thought, and so the computer was based around the psychological ideas of the time. But then of course the computer was such a brilliant success that everyone started basing their theories of psychology around the computer, and their theories of brain function around the computer, and the two got wrapped up in an endless loop where they just kissed each other's butts, essentially. Neither brain theory nor computer theory moved on much. So that's my slagging off of traditional AI. I don't mean to be rude to Turing, because he was a real genius, and actually he had three brilliant ideas about computation. Very few people have heard of two of them, because the first idea was such a stonking success that it eclipsed the rest. The first idea, of course, was the concept of an organised machine, the definite method he talked about in 1936 that led to the computer. Computers are serial processing machines: they think in terms of procedures, one step after another, doing things, verbs. They are very precise most of the time, and you think in terms of top-down control processes.

But he had two other ideas that were radically opposed to those. One of them he called unorganised machines, and you can just about see behind there his 1948 paper called "Intelligent Machinery", which talks about unorganised machines. These days we call unorganised machines neural networks. He also did some work on self-organising machines, which is some theories about how chemistry can take a purely symmetrical, spherical egg cell and differentiate it, making it into an asymmetrical, structured embryo; how you can break the symmetry of something that's pure and produce complexity out of simplicity. The biologists know about that work; the Turing effect in embryology is understood and used a bit, but it isn't used anywhere else particularly. As for the unorganised machines: well, neural networks did take off not all that long after, but Turing's own version didn't actually get published in his lifetime. He wrote it in 1948 and it was published in 1964 or something like that, after he was dead. The reason for that was actually Charles Darwin. Not the original Charles Darwin, but his grandson, who was head of the National Physical Laboratory at the time. He said that Turing's paper "Intelligent Machinery" was a schoolboy essay and couldn't be published. I've often wondered how the world would have been different if "On Computable Numbers" had been called a schoolboy essay and not been published, and "Intelligent Machinery" had been published instead. I think everything we do would be radically different now.

So I actually like those ideas. That's the kind of work I do: self-organising, unorganised machines. Nasty, messy, relational, parallel stuff. And I don't really like this machine, except that I'm very happy to use it to simulate those other machines. It just turns out that Turing's original conception of a universal machine allows you to use that universal machine to become any machine you like, within reason, and even to be whole populations of other machines. So you can simulate other kinds of material inside a computer, which you can then use to make intelligence out of. Which is a radically different approach from using the computer itself to be intelligent.

Now, I've been mucking about with that kind of approach since the late '70s, and nobody took the blindest bit of notice, which is understandable because I'm not an academic; I was just working on my own and nobody knew who I was. But between 1992 and 1996 I wrote a nine-month project (it overran a little) which was a computer game called Creatures, and that sort of brought me into the public eye. It's not exactly human-like intelligence; it's more a nasty, cute, Disney-esque kind of intelligence, but it's an exemplar of these messy, bottom-up, distributed forms of intelligence.

These little creatures here are playing ball with each other, and that was the first time I thought I'd really got the hang of this, because the computer isn't programmed to play ball and I didn't know where it had come from. I nearly fell off my chair when I saw them bouncing the ball to each other. In fact, it's not as impressive as it seemed: it's really a kind of parallel play, like very young children do, where they just happen to get some benefit out of playing together in the sand pit but they're not actually playing co-operatively.

But the important point with the game is that these creatures don't exist in the computer. I mean, graphically they do: there is centralised code to produce the image of them moving across the screen and articulate their limbs and stuff. But nowhere in the code are there any instructions for how to behave like a living thing. There's only code for how to behave like a whole bunch of non-living things: nerve cells, enzymes, chemical receptors and genes. The genes are then used to collect all these other virtual structures together into big networks, producing the physiology and the brain of the creatures. And it's the interaction between all of those dumb things that produces the intelligent behaviour of the creatures. I don't know what that proves. I guess it proves it is possible to take maybe five thousand different, stupid components and put them together in such a way that they behave like a coherent, intelligent entity. I do think they count as intelligent, because they learn. None of their behaviour is programmed into them; they have to learn it all for themselves. They have to find out what's good to eat, how to chat up another creature and reproduce with it, and therefore pass their genes on to the next generation. The genes specify the whole structure of the creature, so the system is capable of evolving. So it's real intelligence in the sense that I've let go; they're completely autonomous entities, their own lives are in their own hands, and the lives of their species are in their own hands.
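As a rough illustration of that principle, here is a minimal Python sketch (an editorial illustration only; the actual Creatures engine is far richer): a "genome" of wiring genes assembles dumb neurons into a network, and whatever behaviour appears lives only in their interaction.

    # Toy illustration of the Creatures idea: no component knows about
    # behaviour; a genome merely wires dumb parts together. Hypothetical
    # sketch, not the actual Creatures engine.
    import random

    class Neuron:
        """A dumb leaky unit: sums weighted inputs and decays."""
        def __init__(self):
            self.activation = 0.0
            self.inputs = []  # list of (source Neuron, weight) pairs

        def step(self):
            total = sum(src.activation * w for src, w in self.inputs)
            self.activation = 0.8 * self.activation + 0.2 * total

    def build_creature(genome, n_neurons=20):
        """The genome is just a list of (src, dst, weight) wiring genes."""
        neurons = [Neuron() for _ in range(n_neurons)]
        for src, dst, weight in genome:
            neurons[dst].inputs.append((neurons[src], weight))
        return neurons

    random.seed(1)
    genome = [(random.randrange(20), random.randrange(20),
               random.uniform(-1.0, 1.0)) for _ in range(60)]
    creature = build_creature(genome)

    creature[0].activation = 1.0    # a sensory "ping" from the world
    for _ in range(10):             # let the dumb parts interact
        for n in creature:
            n.step()
    print(creature[-1].activation)  # read a "motor" neuron: emergent output

Nothing in that code says "creature"; scale the same trick up to thousands of components, add chemistry, learning and evolution, and you get something like one of these creatures.

I could tell you loads of anecdotes about the creatures, but I just want to show you one thing because I think it's amusing.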

It turned out to be a commercial success; it made a lot of money for everyone except me, and became a kind of cult. At one stage there were 400 websites on the internet devoted to the game. Many of them were adoption agencies like this one, which is where I get my creatures from, on account of the fact that I'm hopeless at bringing them up; they all die on me. These adoption agencies are set up just to spread the gene pool, to put other interesting creatures out on the web for other people to pick up and use and breed from and so on, but there are quite a lot of problem children out there on the web: either creatures that have some genetic problem, which means they don't develop properly and need a lot of looking after, or in some cases battered Norns (the creatures are called Norns). The main reason there are a lot of battered Norns out there is a guy called Anti-Norn, who's in the US Navy, in San Diego I think, who set up a website devoted to torturing my creatures, finding ways to make their lives miserable. I'm just showing you this because when we produced Creatures 2 and sent it out to the States, we put a little soft-toy Norn in every pack as a marketing gimmick. Anti-Norn got hold of a bunch of these and set up a webcam so we could tune in and watch him torture them. So I just thought I'd show you some of the results. Is he sick or what? Actually, I have a lot of sympathy for Anti-Norn, because he makes people think; he stirs it up. He's had death threats, because people think his life is less important than the lives of those stupid little Disney-esque creatures.

Creatures was quite successful. It is difficult to make very complex entities like that, which are autonomous and can learn and develop and so on, but I was quite disappointed with what I'd done, and after I'd finished the game I started thinking about what was missing. People were asking me whether these creatures were alive, and whether they were conscious, and I would say, "Yes, they're alive." I think it's worthwhile arguing the case that they are real things, despite the fact that they're just computer code, and that in some sense they're alive. But I always say that they're definitely not conscious. And I started thinking about why they're definitely not conscious, and one of the things that struck me was that they don't have any kind of mental life. They have complex neural networks of a thousand neurons with internal states and stuff, but all these internal states are trapped in a sensory-motor loop. They're like insects: the environment changes, that changes the signals in their sensory systems, that changes the signals flowing through the neurons in their brains, that changes the muscles, the muscles then change the environment, and so it goes round. That's what it's like to be an insect. But it's not what it's like to be a human being, because we can step away from that. We've got a kind of second loop where we can manipulate things inside a world that isn't the real environment. We've got a world inside our heads, which means we can imagine stuff, or daydream, or rehearse what we're going to say to somebody, or make plans and so on. The more I thought about that, the more I thought it was crucially important: something that evolved in the mammalian line at least, that is crucial to consciousness and crucial to the way we think, and therefore could be crucial to AI.
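The distinction can be pictured as two loops, sketched below (an editorial illustration under toy assumptions, not Grand's architecture): the insect-like agent is locked into the outer sensory-motor loop, while the mammal-like agent can also run candidate actions through an inner model of the world before committing to one.

    # Sketch of the "two loops" idea: an insect-like agent is trapped in
    # the outer sensory-motor loop; a mammal-like agent can also cycle
    # through an inner model, i.e. imagine. Illustrative only.

    def act(percept):
        """A fixed sensory-motor mapping, e.g. retreat from a stimulus."""
        return -0.5 * percept

    def world(action, state):
        """The real environment responding to an action."""
        return state + action

    def internal_model(action, state):
        """A learned approximation of world(); here assumed perfect."""
        return state + action

    def insect(state, steps=5):
        # Environment -> senses -> brain -> muscles -> environment,
        # round and round: no inner life at all.
        for _ in range(steps):
            state = world(act(state), state)
        return state

    def mammal(state, steps=5):
        # Rehearse candidate actions against the inner model first, then
        # commit only the best one to the real world.
        for _ in range(steps):
            candidates = [-1.0, 0.0, 1.0]
            best = min(candidates,
                       key=lambda a: abs(internal_model(a, state)))
            state = world(best, state)
        return state

    print(insect(4.0), mammal(4.0))  # the "imaginer" settles on zero faster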

That got me interested in the cerebral cortex, which led to my other main interest, the self-organisation of cortex. The thing about the cerebral cortex, which is the pinky-grey rind on the outside of the biggest part of the brain, is that it's a general-purpose machine. It starts out with more or less the same six-layered structure everywhere, and yet by the time we are born and have been around for a while, different bits of it have become specialised to do certain tasks. In the human being there are at least a hundred different maps in the cortex that do different things, from isolating the boundaries of a visual object, to controlling the muscles, to planning what to have for lunch. Any number of these specialised things grow out of this unspecialised structure, and it does it spontaneously. So I got very interested in that.

A few years ago I got squeezed off the board of my company and kicked out into the rain, so I decided I would devote myself to these two questions:

"How does the brain self-organise?" and "Where does mental life come from?" And are those two concepts related? That set me off building this robot called Lucy. Here's Lucy, or at least this is the prototype of Lucy. I'll switch her on, but she'll just sort of twitch like a headless chicken, on account of the fact that her brain is in Somerset on my PC, and it's in pieces as well. I chose Lucy as my test bed for trying to understand these two big questions.

I've always made whole creatures. That's one difference between the kind of AI I do and the kind that is generally done: I make whole creatures rather than isolated subsystems, and there are various reasons for that. One is that it's the integration of the parts that actually makes intelligence. I'm not even sure it makes any sense to be intelligent unless you are a whole creature; it's a property of organisms, not organs. There's also stuff that you need a whole creature in order to learn. I'm looking for a very unified, central principle that lies behind the cerebral cortex, and you can't really understand < Lucy whirrs >, oh shut up. You can't really understand that unified thing unless you look at all the disparate versions it becomes. < Lucy makes random noises > I'll switch her off 'cos she's going to distract me. By looking at vision, and audition, and motor control, and planning, and all these different things, I'm trying to find the common core that lies behind them, and I'm starting to get somewhere.

There's also the fact that learning often requires multiple modalities. We learn to see depth because of binocularity, and perspective, and a whole bunch of other things in the visual system, but the only way we can actually calibrate our visual system and figure out what depth means is by virtue of the fact that we've got arms and we can reach out and touch stuff. And we can only reach out and touch stuff because we've got depth vision. So you have to have arms to understand visual depth, and you have to have a visual system to understand how to reach out and touch things; it's the integration between those modalities that's important. It's very important that she's a real robot, because it's the environment that actually generates most of the information that self-organises the system. It's also important that she's biologically realistic; I'll tell you about that in a minute. She's a baby, mainly due to the fact that it took Albert Einstein four years to learn to tie his shoelaces. All of the high-level stuff we do as adult human beings depends on the low-level stuff we learnt when we were kids, and the real sensory-motor stuff we learnt when we were born, and the embryological stuff we learnt before we were born, building up this huge hierarchy. So it's really important to learn to drop things and bang into stuff and figure out that this arm belongs to you and mummy doesn't. All of these little things underlie the high-level stuff.

I'm taking a structural approach because fundamentally I'm a biologist and I understand structures, but there are also important reasons. I think one of the reasons AI has gone wrong in many cases is that we tend to think in terms of abstract information-processing structures. Abstractions are fine, but they can be misleading. If you asked a bunch of AI scientists to blindfold themselves and build an elephant, one of them would go up and feel the elephant's legs and say, "Oh yeah, it's like the trunk of a tree," and they'd cut down four trees and stick them in a square. Then someone else would say, "The body of an elephant is like a barrel," and stick a barrel on top of the four trunks. Sooner or later you'd end up with something they were satisfied with as an elephant, but I'm not sure it would satisfy another elephant. So there's a real danger, if you're going to make abstractions, that you might miss the central principles: you might abstract away the peripheral stuff but fail to keep the important stuff.

Then there's top-down versus bottom-up, and I was slagging off top-down control systems earlier on. Most of the people in my branch of AI slag off top-down control systems a great deal more than I do, and believe that only bottom-up systems work. But because I'm interested in imagination and imagery, it seems to me you have to have both, and the brain actually organises itself out of a tension between top-down and bottom-up control.

I'll just give you a whiz through, because this isn't supposed to be a technical talk, of what's inside Lucy, so you've got a feel for what goes on in there. Hardware-wise she's got a bunch of home-built 16-bit computers, none of which do anything to do with her brain or thought processes. What they are there for is to turn nice, neat, precise technology into nasty, messy biology, because one of my assumptions is that biology does what it does for a good reason.

Most machine vision work assumes you start with a fixed TV camera and a grey-scale image, or a colour image, evenly distributed, with even resolution across the image. Well, the retina of the eye isn't remotely like that. For a start it's mobile: it moves around very quickly, and its ability to move around very quickly is crucial to vision, because if you stop it doing that you can't see. Within a second of freezing the eye you cease to be able to see anything. It's not even resolution either: it's extremely acute in the middle one degree, then extremely fuzzy towards the edges. Now, it might just be that the optic nerve is already thick enough, and if you made it high resolution all the way across, the optic nerve would be so stiff it wouldn't bend; but of course there may be other reasons, things to do with how we see that rely upon that fact. The signals that come out of the retina don't bear any resemblance to a grey-scale image. They don't look a bit like a real picture at all; you wouldn't even be able to recognise things. They're highly convolved, smeared out across very large numbers of neurons. They don't measure grey levels; they measure contrast ratios and movement and all sorts of other stuff. It seems to me that might be important, because the brain evolved to fit the eyes it was given, and the eye evolved to fit the brain it was given. So if you're trying to understand the brain, you need the same kind of data format. The same applies to muscles, and the same applies to hearing and to voice, which is why she just makes this horrible bleating noise. If I was smart I would have given her a speech synthesiser and let someone else take the strain. But if I gave her a speech synthesiser, her brain would have to talk ASCII code, and human brains don't talk in ASCII code; they talk in terms of muscle movements, in-breaths and glottal stops, tightening vocal cords and stuff. So I had to program a model of the vocal tract into Lucy: quite sophisticated digital signal processing. All very clever stuff, and I'm very proud of it, and what actually comes out of it at the end is this horrible bleating sound.
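As a rough sketch of that difference (an editorial illustration, not Lucy's actual front end), a toy retina might sample densely only in a central fovea and report centre-surround contrast ratios instead of raw grey levels:

    # Toy foveated retina: dense sampling in the central "fovea", sparse
    # towards the periphery, and centre-surround contrast rather than raw
    # grey levels. A sketch of the idea only.
    import numpy as np

    def retina(image, n_samples=64):
        h, w = image.shape
        cy, cx = h // 2, w // 2
        outputs = []
        for i in range(n_samples):
            # Sample radius grows quadratically, so half the samples land
            # inside a small central region; the rest thin out outwards.
            r = (i / n_samples) ** 2 * (min(h, w) / 2 - 2)
            theta = i * 2.399963          # golden-angle spiral of samples
            y = int(cy + r * np.sin(theta))
            x = int(cx + r * np.cos(theta))
            centre = image[y, x]
            surround = image[max(y - 1, 0):y + 2,
                             max(x - 1, 0):x + 2].mean()
            # Report contrast relative to the surround, not grey level.
            outputs.append((centre - surround) / (surround + 1e-6))
        return np.array(outputs)

    img = np.random.rand(100, 100)
    print(retina(img).shape)              # (64,) contrast signals, not pixels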

You don't want to know all this, but I'm just going to whiz through what's inside Lucy's brain so that you get a feel for what it looks like, fundamentally, because it doesn't look like computer code. I mean, it is computer code, but you can't think of it in those terms; you have to think of it in terms of biology and structure. These blobs are layers of neurons inside a deep part of the brain called the Superior Colliculus, which is what controls eye movements and head movements in us. When we see a bit of movement over here, we automatically turn our head and our eyes towards it. We can't help it, and that's because it's a reflex controlled by the Colliculus. In frogs, that's the top of their visual system: everything a frog does is controlled by the optic tectum, which became the Superior Colliculus in us. So predator and prey are differentiated by the size and speed of movement; frogs can't actually see anything at all when it's staying still. I suspect even mating behaviour is probably controlled by this same structure: it detects another frog and does the necessary stuff.
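A crude way to picture what such a map does (an editorial sketch, far simpler than any real collicular model): motion energy feeds a topographic map, and the peak of activity pulls the gaze towards it.

    # Crude sketch of collicular orienting: frame-difference "motion
    # energy" fills a topographic map; the eye turns toward its peak.
    import numpy as np

    def orient(prev_frame, frame):
        motion = np.abs(frame - prev_frame)   # where did something move?
        y, x = np.unravel_index(np.argmax(motion), motion.shape)
        h, w = frame.shape
        # An orienting "saccade": offset of the peak from centre of gaze.
        return y - h // 2, x - w // 2

    a = np.zeros((9, 9))
    b = np.zeros((9, 9))
    b[2, 7] = 1.0                 # a fly twitches up and to the right
    print(orient(a, b))           # -> (-2, 3): turn towards it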

For human beings it's at the bottom of the visual system rather than the top, but it's still there and it's still very important. You can see these big blobs of nerve activity on here. This is a model of what's probably going on in the biology. If you do any neural network stuff, or you know people who do neural networks, you'll know that neural networks never look like that; it's a very different kind of nerve activity going on in there.

So that's the very bottom of the visual system. The next layer up is in the cerebral cortex and is called V1. We know where it is, and we sort of know what it does, but we haven't got the faintest idea why it does it. We know that it segments images, finds edges and measures the orientations of the edges in an image. And down here, you can barely see it, is a small fraction of Lucy's V1 in her cortex. That's how it starts off when she's born, with all the nerve cells having fibres that are spread out more or less evenly. By the time she's been looking at straight lines moving around on a computer screen for a while, it ends up structured like that, so that some of the neurons have linear receptive fields like that, and some of them like that, and like that. The receptive fields cluster together until you can recognise the angle of orientation of an edge at every point in the visual field.
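Rules of the kind he goes on to describe are typically some variant of competitive Hebbian learning. The sketch below shows that generic mechanism (an editorial assumption; the talk doesn't specify Lucy's actual rule): units compete for oriented input patches, and each winner drags its weights towards what it just saw, so receptive fields gradually cluster by orientation.

    # Minimal competitive-Hebbian self-organisation: show an oriented
    # edge, let the best-matching unit win, and move the winner's weights
    # towards the input. Generic sketch, not Lucy's actual learning rule.
    import numpy as np

    rng = np.random.default_rng(0)

    def oriented_patch(angle, size=8):
        """An edge at the given orientation in a size x size patch."""
        y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2
        return np.tanh(x * np.cos(angle) + y * np.sin(angle)).ravel()

    n_units = 16
    W = rng.normal(0.0, 0.1, (n_units, 64))     # unstructured at "birth"

    for _ in range(5000):
        patch = oriented_patch(rng.uniform(0, np.pi))
        winner = np.argmax(W @ patch)            # competition between units
        W[winner] += 0.05 * (patch - W[winner])  # winner moves towards input

    # Each row of W is now an oriented receptive field; report the angle
    # each unit prefers.
    probes = np.linspace(0, np.pi, 12, endpoint=False)
    prefs = [np.degrees(probes[int(np.argmax([w @ oriented_patch(a)
                                              for a in probes]))]) for w in W]
    print(np.round(prefs))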

All of that self-organises just due to a very simple set of rules inside the neurons. We know that biologically; we know more or less how it happens, and it works just fine in Lucy. The question is why it happens. We don't know. We assume it's the start of a visual pipeline, the first stage in processing vision, but that actually doesn't make a lot of sense in many respects, so I have my own theories about that, and this is the next stage in the visual system in Lucy's brain.

My basic assumption is that this first stage in the visual system is not abstracting edges just because edges are an important part of recognising something; it is actually designed to control Lucy's eye movements, to bring her eye onto the middle of a boundary, and in so doing remove some of the variation between images, which simplifies the problem of recognition. So my work with Lucy is starting to suggest that the first stage in visual processing is also primarily a motor thing: it's actually designed to do something, rather than just extract information.

Now I have a model of the next bit, V2, the medial temporal part of the cortex, that extracts the next bit of information, and you end up having transformed something you see into something that's represented in object coordinate space instead of visual coordinate space. Now it doesn't matter which way up it is, or how far away it is, or where it is in your visual field; it still looks the same. And that's an important part of the visual process. But you don't want to know why or how!

This is another part of her brain, which handles the servoing of muscle movements. Despite the messy motors, she has quite complex, biologically plausible musculature that she has to throw around. This is a servo mechanism built out of a neural network that can control her muscles and learn how to coordinate them, and it looks similar to the Superior Colliculus that I started with. So now these things are starting to look quite similar to each other, and I feel like I'm starting to get at the essence of what's going on inside the cortex. But I haven't got there yet. I have learnt a load of stuff; like I say, you don't want to know, but I've got a whole bunch of theories and hypotheses and speculations and discoveries about what's going on in there. I suppose to sum it up I'd say that the brain doesn't work the way we think it does. Our traditional view of the brain is just completely arse about face, and it's a much more remarkable machine in all sorts of respects. But you don't really want to know the details of that.
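In spirit, a muscle servo is a feedback loop like the toy below (an editorial sketch with a hand-tuned gain; by contrast, the neural version described above has to learn its own gains and coordination):

    # Toy muscle servo: drive a muscle towards a target length from the
    # error signal, through sluggish muscle dynamics. Hand-tuned sketch;
    # Lucy's neural servo learns this coordination for itself.

    def run_servo(target, gain=0.2, steps=60):
        length, velocity = 0.0, 0.0
        for _ in range(steps):
            error = target - length            # "stretch receptor" signal
            drive = gain * error               # motor neuron output
            velocity = 0.7 * velocity + drive  # muscle responds sluggishly
            length += velocity
        return length

    print(run_servo(1.0))                      # settles close to the target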

I'll finish on the bottom line, which is: what does all this achieve when it comes to human-like intelligence? Fundamentally, what I've got is eight computers, upwards of 50 thousand complex biological neurons arranged in an extremely complex neural network, very probably the most complex neural network ever made, and I've got a robot that can point at a banana. That is it; that is what she can do. On a good day, when everything's working, I can hold up an apple and a banana anywhere in her visual field, wave them around at any angle, and she can say "the banana's that one". Which is actually quite impressive; it's a difficult task, visual invariance is a difficult task, but it's not exactly the most elegant solution to the problem. If you were going to do traditional, soft AI, what you would do is program the video camera to scan the image, work out which half of the visual field was yellowest, and move the arm in that direction. Maybe half a dozen lines of code.
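For contrast, that "half a dozen lines" soft-AI version really is about that long. A sketch, assuming the camera delivers an RGB array with values in [0, 1]:

    # The soft-AI banana pointer, roughly as described in the talk: find
    # which half of the visual field is yellowest and move that way.
    import numpy as np

    def point_at_banana(rgb):                # rgb: H x W x 3 array in [0, 1]
        yellow = rgb[..., 0] * rgb[..., 1] * (1.0 - rgb[..., 2])
        left, right = np.hsplit(yellow, 2)   # split the visual field in two
        return "left" if left.sum() > right.sum() else "right"

    img = np.zeros((4, 8, 3))
    img[:, 6, :] = [1.0, 0.9, 0.1]           # a banana-ish yellow blob
    print(point_at_banana(img))              # -> "right"

No invariance, no learning, no understanding; but for this one task it works.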

I guess the final conclusion is: if you want human-like intelligence, then I think this is the way to go about it. However, it's going to take a while. Even if I knew all the answers today, even if I had the new kinds of hardware (because computers are not appropriate), even if I had everything, it would still take Lucy four years before she could tie her own shoelaces, and judging by my son, eighteen years before she could tidy her own bedroom. So I guess if you want human-like intelligence, you have to wait a bit. If you want real solutions to real problems now, despite all my slagging off, what you need is traditional AI!

I'll stop there. If you want to follow Lucy's progress at any time, there's her website (http://www.cyberlife-research.com).

She keeps her diary on it. I haven't updated it since last May, but I will update it soon, and you can see how she gets on.

Thanks.

About Steve's new project, Grandroids

Videos showing the initial designs for Grandroids

  • https://www.youtube.com/watch?v=9XLJuDV03Mw
  • https://www.youtube.com/watch?v=PZnw6aYYG68
  • https://www.youtube.com/watch?v=rKRpxj5sqg8
  • https://www.youtube.com/watch?v=rRem2V3YnNI