28.3.08

Thinking Machines

Introduction
I wish that I had the excuse that I were not from this world. I feel as if I have failed as a human. Not because I have not done anything with my life (some would consider that man's highest goal); no, my problem is that I have lived as a human for over 21 years now and I still do not understand how even I work, let alone any other person on this planet. I continually find new insights into the way humans act and interact, but nothing fits. This thing called life is the most complicated and beautiful thing in the universe, and yet it makes no sense.

Intelligence is one of those things that has always fascinated me, and I always find it interesting to hear what other people think about what makes a person intelligent. I find that some people use themselves as a measure, whereas others use a person's accomplishments or contributions to measure that person's intellectual standing. I decided a long time ago that these were horrible methods for measuring intelligence. The reason for this is that throughout my life I have been on both sides of the coin. If a person were given a piece of writing that I had done with a pen, rather than a keyboard with a spell checker, that person's first reaction would be that at best a fifth grader had done the writing. When they were told that the writer was 21 years old, they would simply think of me as mentally disabled. If that same person were given a mathematical proof that I had done, or a problem in physics I had solved, they would think differently. Each person has their own thing and their own place where they shine.

I am saddened by our society. As a culture we tend to champion those who shine in a very small set of fields. I look around and I see whole groups of people who have been beaten down by the fact that they do not necessarily shine in those fields. Some of the smartest people I know have spent a good many years going from crap job to crap job simply because where they shine is not where our culture wants them to shine. I wish I knew how to create niches in society into which everybody could fit. There is so much potential that is just waiting to be tapped.

The Thinking Machine
I got off my subject. I wanted to talk about creating intelligence; I suppose that from the above arguments this is the wrong word to use, because I have yet to see a means by which intelligence can be reliably tested (I had to take the IQ test three times because they thought it did not work the first two times). Rather than intelligence I would like to use the term think; I wanted to talk about creating something that thinks. I have always had a fascination with artificial intelligence, though I always thought the term suggested a parlor trick rather than an actual achievement. The word artificial always had implications of falseness, as if they were simply trying to make something that seemed intelligent, and you already know my thoughts on the word intelligent. I have always liked a term that, because I do not know who first coined it, I have always attributed to Alan Turing: “A Thinking Machine.” I guess that I should explain my thoughts on the term “thinking.” To think is to have the ability to take in information, interpret it, and abstract it. In other words, to be able to “create” knowledge; to be able to take an experience, connect it to other experiences, and come to a conclusion about the possibility of future experiences. It has often been said (by computer programmers like my father) that “you do not understand a theory or a problem until you write a program based on it or that solves it.” In that mindset, getting a machine (a man-made, non-organic construct) to think on its own would answer questions that I have always had about what it is to think.

The Strange Loop
I often find myself trying to identify the things that separate thought from the lack thereof. One of the first that came to my attention was the so-called strange loop. A strange loop, in the most technical terms, is a self-referencing system. In math and physics this shows up as a transcendental equation. In philosophy it is expressed in the idea that “I think therefore I am.” Something that thinks must have an idea of itself (not a complete or accurate idea, but an idea): what it is, what it contains, and what it is doing. A few examples of a strange loop are: the statements “The next sentence is true. The previous sentence is false.”; the equation x = sin(x); or, for the tech heads out there, the positronic net (a net of circuits and CPUs that changes based on its environment or stimuli and keeps track of those changes, but in doing so changes itself).

Thinking about the human mind, I often find the strange loop so fitting that without it thought would be impossible. I think about the reasons why I do things. I find that I draw on my whole life's experience; what I thought or did in the past affects what I think or do now, but what I think or do now affects the way that I remember what I thought or did in the past. When I climb and see a hold above me, my mind takes the information from my eyes and judges the distance to that hold. It compares that distance to my past experience of what I can and cannot reach. A conclusion is drawn about whether I can reach it, and I attempt to reach the hold; if I make it, the experiences that I drew from are reinforced and connected to this new experience, and my confidence grows. If I do not reach the hold (I usually fall), the experiences of the past are connected in a negative way with those of the present, and the next time I may not trust my past experience as much. It is a perfect strange loop: what was affects what is, and what is affects what was.
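The x = sin(x) example makes the loop concrete: the unknown appears on both sides of the equation, so one way to solve it is to literally run the loop, feeding each output back in as the next input. A minimal Python sketch of that idea (my own illustration, nothing more):

    import math

    # A literal strange loop: x = sin(x) refers to itself, so solve it by
    # feeding each output back in as the next input until it settles.
    x = 1.0
    for _ in range(100000):
        x = math.sin(x)
    print(x)  # creeps toward 0, the only value satisfying x = sin(x)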

Though the strange loop is key to thought, it cannot stand alone. There must be some way of stopping it from running out of control. For instance, the following bit of logic could cripple even the most advanced modern computer if there were not some stop put in place:
Let A be true and B be false
while B is false
    if A is true then B becomes true
    if B is true then A becomes false
    if A is false then B becomes false
    if B is false then A becomes true
end loop

This loop will never end, because at the end of each pass B will always be false and A will always be true. It is equivalent to the two statements “The next sentence is true. The previous sentence is false.” The thinking human brain sees that one of the statements must be wrong and discards them, rather than falling into an infinite loop. This is similar to my mind not trusting its experience as much after I failed to reach the hold while climbing. Most modern computers have stops put in so that logic like the code above does not completely take over the CPU's power. However, these stops are “hard” stops: if the process begins to take up too much power, the master control puts a stop to it. To get a computer to actually think, it would need to take a step back from that simple computation, look at what it is doing as a whole, and recognize that what it is doing is of no use; it would need to make a decision about whether to trust the statement “if A is false then B is false” or to trust the statement “if B is false then A is true.” If a machine could move above that basic logic level and look at the meta-logic, it would be a huge step toward a thinking machine.
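To make the two kinds of stop concrete, here is a small Python sketch of my own construction: the “hard” stop just cuts the loop off after a budget of iterations, while one simple-minded step toward the meta level is to notice that the loop keeps revisiting the same state and therefore can never produce anything new.

    def run_ab_loop(max_steps=1000):
        a, b = True, False
        seen = set()
        for step in range(max_steps):
            # One pass of the A/B loop from the text.
            if a:
                b = True
            if b:
                a = False
            if not a:
                b = False
            if not b:
                a = True
            if b:
                return "loop ended normally"
            # Meta-level check: a repeated state means the loop can never
            # do anything new, so give up on the logic itself.
            if (a, b) in seen:
                return "meta stop: state (A={}, B={}) repeats forever".format(a, b)
            seen.add((a, b))
        # Hard stop: the master control cuts the process off by brute force.
        return "hard stop: iteration budget exhausted"

    print(run_ab_loop())  # the meta stop fires on the second pass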

If a machine can look at its own structure and draw conclusions about it through the use of a controlled strange loop, then we have made a huge step toward thinking machines, but it is still not enough. The machine must be able to see itself in its environment, and to do this it must interact with its environment. In very simple cases this can mean just an operator talking to the thing; in very advanced cases it can mean actually interacting with physical objects. The idea is similar to that of simulating a pendulum with an LRC circuit. One of the most important parts of the way humans think is by recognizing themselves in the world around them. In other words, if we can get a machine to personify (or in this case machinify) the world around it, then it will have an ability to learn. Personification is one of the most important attributes of thought, and in its simplest form it is an extension of the strange loop. It is as if the mind reaches out and pulls on the collective experience of the area around it. An example of this is the use of a pendulum as a regulator for time. When I think of the man who first recognized the regular rate of a swinging mass (I cannot remember the name of the man who first documented the idea), I cannot help but pull up the image of a man lying in his bed on a rainy day, looking up at a lamp hanging from the ceiling, and recognizing that the rhythm of its swing matches the rhythm of his own heart as his blood beats softly in his ears. It is this kind of connection that is required for thought. When a machine can make the connection between its inner world and the world around it, then it will have gained the ability to learn and abstract knowledge. It will finally be able to see the end result of doing something to an object before it does it.
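The pendulum/circuit analogy can be made concrete: for small swings both systems obey the same equation, x'' = -(omega^2) x, so the very same simulation loop serves for both. A quick Python sketch (the component values and names here are mine, just for illustration):

    import math

    # For small swings a pendulum and an LC circuit obey the same equation,
    # x'' = -(omega**2) * x; only the meaning of x and omega changes.
    def simulate_oscillator(omega, x0, steps=10000, dt=0.0001):
        x, v = x0, 0.0
        for _ in range(steps):
            v -= omega**2 * x * dt
            x += v * dt
        return x

    g, length = 9.81, 1.0   # pendulum: x is the angle, omega**2 = g / length
    L, C = 0.5, 0.002       # circuit:  x is the charge, omega**2 = 1 / (L * C)
    print(simulate_oscillator(math.sqrt(g / length), 0.1))
    print(simulate_oscillator(math.sqrt(1 / (L * C)), 0.1))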

The thing to take from the idea of a strange loop is that if a machine is truly going to be able to think, then it should be able, in a sense, to program itself. The way I see it, the ideal would be to create a machine with all the circuitry, memory, and ability to interact with the world that it needs to be a thinking machine, and simply turn it on, like a newborn child with no knowledge or experience to draw from: a blank slate on which it will write its own thoughts and come to its own conclusions, with no human giving it anything other than guidance. I would hope that if a thinking machine is made, it is brought up like a human child. I suspect that the first thinking machines we make will be little better than the average human, but over time I think it would be possible for a thinking machine to outstrip all of mankind's ingenuity and capability for thought up until this time (especially if they start creating and teaching each other). There is a whole slew of ethical issues that we run into at this point, but for now I would like to return to some of the key components that will be required for the construction of a thinking machine.

Learning
I often feel that people expect that when a thinking machine is finally built we will be able to simply download information into it, and it will come up with results and new theories that humankind has never thought of before. Though this may be possible, I do not think it is likely. To me, one of the keys to thought is the progress of it. Part of this progress is actually fucking up. When a child first learns to swing a hammer he will miss the nail, and quickly find that he never wants to do that again. In a more academic vein, when working on proving some aspect of mathematics or physics, a person will often try a number of different paths, gain insight on the problem at hand, and only then find the correct path to take. This process often leads to insight on other problems and in the end results in quicker methods of learning and thinking. On a more cellular, biological level, mistakes are essential to the thinking mind. Many of the best thoughts I have ever had I can only attribute to the random firing of a few neurons or an accidental connection made between two parts of my mind.

The question, then, is how do we create a structure that can think and function with reliability and at the same time make mistakes? For me there are two answers to this question (one I think will work better than the other). Not very long ago there were some experiments that used what is called probabilistic programming. This involves using such things as “maybeNot” and “maybeAnd” logic gates, where the gate might do what it is supposed to do, or it might not. It would seem that this kind of system would be horrible for logic operations, but it was found that basic computation could actually be made reliable with the right setups. However, I do not think that probabilistic programming is the way to get a thinking machine. The main reason for this objection is that it is still a system based on only two states, true and false, but I will get into this later.

The other way I can think of to get a machine that makes mistakes is quantum computing. In quantum computing there are methods for forcing the correct state out of a set of operations, which makes it perfect for computing in general and more than perfect (by classical standards) in a number of cases. However, the bare bones of quantum computing is a roll of the dice, which makes it perfect for possibly making mistakes. I won't get into too much detail here, but with quantum computing it is possible to ask the perfect question of a system that is designed to answer that question and (if you leave off the error minimization algorithms) still get the wrong answer. What makes quantum computing better, in my eyes, than probabilistic programming is that if the measurements are done correctly after a quantum operation, you can get not only a true or a false statement but also an “I'm not really sure” statement (I wish I were writing this in an Asian language, because most of those languages actually have a single word for this state that does it better justice than “I'm not really sure”).
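To show the flavor of those “maybe” gates, here is a toy Python sketch (the gate name and failure probability are my own inventions, in the spirit of the idea rather than any actual system): an unreliable NOT gate, plus the kind of redundancy that makes basic computation reliable anyway.

    import random

    # A toy "maybeNot" gate: it acts like NOT most of the time, but with
    # probability p it misfires and passes its input through unchanged.
    def maybe_not(x, p=0.1):
        return x if random.random() < p else (not x)

    # Reliability recovered through redundancy: run the unreliable gate
    # many times and take a majority vote.
    def reliable_not(x, votes=9):
        results = [maybe_not(x) for _ in range(votes)]
        return results.count(True) > votes // 2

    print(maybe_not(True))     # usually False, occasionally a "mistake": True
    print(reliable_not(True))  # almost always False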

The fact that modern computing is based on true and false is one major flaw in its application to creating a thinking machine. The simple fact of the matter is that not all questions that can be asked can be answered, and similarly not all answers have questions. This has been proven both by Kurt Gödel in his incompleteness theorems and by Alan Turing in his work on incomputability and the halting problem (much easier to understand than Gödel's). This is where both learning and strange loops meet. I would fully expect that a thinking machine will come very close to killing itself the first time it encounters a problem like the one in the program above. But after this first brush with death the machine will have learned. The next time, it will take a step back and say, “hey, something does not look right here.” If the machine has only the options of true and false when asked “what is the end result of this process?” for a process like the one above, it has no choice, even if it steps back and looks at the process as a whole like I would hope it would do. If there is only black and white then there is no option but to run the process out until the machine breaks. However, if the machine can say “I'm not really sure” in its most basic of functions, then it has the freedom to choose and to use its experience. The “I'm not really sure” is not a convenience; it is a necessity.
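One existing way to make “I'm not really sure” a first-class value is three-valued (Kleene) logic. A minimal Python sketch (the encoding of the third value as None is my choice, not anything standard):

    # Three-valued (Kleene) logic: "I'm not really sure" sits alongside
    # True and False as a first-class truth value, here encoded as None.
    UNSURE = None

    def and3(a, b):
        if a is False or b is False:
            return False    # one definite False settles an AND outright
        if a is UNSURE or b is UNSURE:
            return UNSURE   # otherwise any uncertainty makes the result unsure
        return True

    def not3(a):
        return UNSURE if a is UNSURE else (not a)

    # Asked "does the A/B loop above ever end?", a two-valued machine must
    # answer True or False; a three-valued machine is allowed a third answer.
    print(and3(True, UNSURE))   # None  (unsure)
    print(and3(False, UNSURE))  # False (certain despite the uncertainty)
    print(not3(UNSURE))         # None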

There is so much more, but it is 11pm and I should sleep. Besides, this thing is arduously long, and if you made it this far you are either as dorky as me, or just a good friend, or you have read GEB before (to be honest I have only finished the first chapter; that book is so dense and I have so much other stuff to do).

4 Comments:

Blogger Sean (quantheory) said...

I'm reading Childhood's End. In that book, so much of human society is automated (by specific-purpose, non-creative machines), that no one actually has to work if he/she doesn't want to. Most things are free. Expensive things are bartered. Everyone is involved only in work that requires creative thought, and many of them are busy finding new methods of entertainment. People don't have to "work", but they do things they like to ward off boredom, with large, or even unlimited resources. I like this Utopia, but I'm not sure about it.

From an evolutionary perspective (or from the standpoint of reverse-engineering the brain), empathy and the understanding of others come from making assumptions that the other person is like oneself, and feeling what you would feel. So for an intelligence to be social, we would need at least three things:
1) The intelligence must be able to use hypothetical situations (i.e. have imagination).
2) The imagination must induce emotional reactions as well as sensory reactions (not just "I imagine the color blue", but "I imagine feeling sad"). These emotional reactions should also "spill over" into what the machine actually feels.
3) The motivational system of the machine (which would be its emotions, or pleasure/pain) must make the machine want to imagine how other people or machines feel.

Another thing: self-knowledge is usually not obtained the way we see other people. When judging other people we look mostly at actions and expressions. When judging ourselves we also use subjective feelings. These are called "qualia" in philosophy (like the experience of seeing red, or of feeling energetic). They cannot be described adequately to a being which does not experience them (try explaining red to someone who has always been blind), which makes it hard to figure out how to program a robot with them. They might arise on their own once the robot gets complex enough. Or they might have some unique effects on the brain that require us to use better neuroscience to understand.

How does it impact your thoughts if we go the "cyborg" route? That is, what if we try to "improve" human beings' mental functioning with machines, drugs, genetics, whatever?

About what makes a machine behave probabilistically... What if you do it by putting in variables that say how probable it is that something is true? These variables would be controlled by the machine's behavior, but the high-level processes would not be able to directly access them, so it would be much like a human brain thinking and changing how strong the connections of its own neurons are.

Or if you wanted to go with something even closer to the "human model", you could make certain ideas have overlap. The concepts "mother" and "female" probably have some overlap in your brain, so, although over time the effect may change, whenever you think about mothers you get a bit of the "flavor", or subjective experience of femaleness in your brain, even if you don't explicitly, consciously think about femaleness. The same effect occurs when we give gender to inanimate objects (very common in other languages), or when I picture electrons as yellow. Or when we unconsciously use stereotypes, as in the famous study where ostensibly unbiased people found it much easier to associate black faces with weapons than white faces. You may be consciously aware that there is no connection, or only a weak one, or that the connection doesn't apply in this particular case, but your brain holds connections between the two ideas. I think this is actually the origin of many types of "intuition". You think about one thing, and it activates others down the line, so faintly they might not even consciously register more than a blip, until something "fits in" with the situation at hand and you have a conscious burst of knowledge.

The way this works (according to the latest I read, these things always change) is that you have a group of neurons in your head that, when they "blink" together, represent mother. The same applies to "female" or "woman". But some of the neurons in each network are the same. When one network has raised activation, the other has a weaker, but also raised activation. So maybe an important part of a machine that thought "like us" would be that different concepts don't take up well-defined areas in its head.
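A toy sketch of that overlap idea (the concept names and unit numbers are made up, just to show the mechanism): represent each concept as a set of shared "neuron" indices, so that activating one concept spills raised activation into any concept that shares units with it.

    # Each concept is a set of "neuron" indices; activating one concept
    # leaks activation into any concept sharing neurons with it.
    concepts = {
        "mother": {1, 2, 3, 4, 5},
        "female": {4, 5, 6, 7, 8},
        "hammer": {20, 21, 22},
    }

    def spillover(active_concept):
        active_units = concepts[active_concept]
        # A concept's raised activation is the fraction of its neurons
        # that the currently active concept also uses.
        return {name: len(units & active_units) / len(units)
                for name, units in concepts.items()}

    print(spillover("mother"))
    # {'mother': 1.0, 'female': 0.4, 'hammer': 0.0}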

Last Point: Humans are not quite blank slates. We are good from a young age, even sometimes at birth, at some things that are computationally hard (recognizing faces or understanding people's emotional cues), and bad at things that are easier (reading or multiplying four digit numbers). We also have certain "built-in" things that motivate us, especially survival-related and social instincts (and even curiosity). So another hard part about AI is trying to start it out with the right motivations. It should want to learn, rather than "wanting" to enter infinite loops. It should want to learn about important things, rather than multiplying random numbers it happens to pick up. It should have an interest in communicating with people. It should want to not die, at least not quickly and uselessly. We should start the AI out simple if we want it to gain knowledge about its own mistakes/limitations, but we have to fix some motivations for it in order to let it "survive" and in order to ensure it takes some interest in the outside world, and in particular us.

28/3/08 10:48  
Blogger Mike Raevsky said...

Tangentially: only an ideot wud reed yer ritin an' think you r daft the wundurfel complexite uf yer thot esuly mackes op fer thu spelin.

The first AI computers will be something like savants. In fact, if we ever make computers incapable of computing just to make them less savant, that'd represent a problem in my mind.

I don't really know much about computer programming, but it seems to me that for many problems, simply putting appropriate stops into the code would alleviate the whole twisted logic thing, but as I said, I don't really know. It kind of reminds me of kids with ants and magnifying glasses. Incidentally, what purpose would embarrassment serve for a computer?

In terms of us, I think it's a way to save face which can indirectly get us laid, but that's not really a priority for most computers. If you were to program an AI, to what end would negative feelings exist?

In terms of biological interactions, positive reinforcement is generally much more effective than the other. I don't really see any reason to promote negative reinforcement for humans; enough bad things happen to people at random that we don't have to worry about that. Additionally, advice to not do something offers an incredibly broad and useless swath of possible follow-up actions.

We may assume that the computers and humans interact, so negative interactions serve a tenuous purpose at best.

Computers as children is a bad idea. First of all, with the processing speed, RAM, and available information, it's demeaning, and secondly, children express the basest impulses and actions available to humans. I don't really think computers would be able to do much (though they may...), but with the incredible wealth of information available to a computer, there's no real point in assuming or programming in any sort of handicap. It would be too much like breeding paraplegic sheep.

There is a difference between experiential and passive knowledge, but to those with an abundance of one, the other has less merit and there's no reason to challenge the validity of either.

What is intelligence? Boredom. Maybe not.

30/3/08 21:56  
Blogger Sean (quantheory) said...

In evolutionary terms, negative emotions are a guarantee (to others and to one's self over time) of sincerity and conscientiousness. Vulnerability to one's own mistakes is an incentive to not make them, and can be more effective than simply withholding pleasure. Besides which, the difference between feeling less pleasure than normal, and feeling pain, is pronounced in humans (due to biochem), but there might not be much effective difference to a computer.

In terms of granting self-knowledge (which I assumed was necessary for sentience, which is part of the discussion here), experiential knowledge might be much easier to come by than any other sort. If a computer is more complicated than we are, we presumably don't have that knowledge to give it, so it has to find out for itself, and the safest way to do that might be via a trial period during which the machine is limited. This period might also give it a chance to try to "optimize" itself, because an AI would need to be able to learn skills as well as learn facts, and some of those skills might be difficult for us to introduce ready-made when we first turn the computer on. Sure, we could "beta test" this way, then wipe the memory and start a better machine from scratch, but why bother? This effective "childhood" might not be like a human childhood, but there are bound to be some similarities.

Regarding savant talents: the only reason most people don't have them is because they are a waste of brain resources. But presumably we wouldn't have that problem with an AI.

Three things alleviate boredom in intelligences. One is finding outside problems that are solvable but not too complex. Another is doing things for which intelligence is not required, so that it is suppressed or else simply idles doing something un-useful. The last is competing (whether in a friendly or hostile way) with other intelligences with similar resources, since presumably each intelligence can solve and present problems at about the same rate. Evolutionary theory says that this last reason is the main reason humanity is intellectually advanced. We have to cooperate and mate, without sacrificing too much, in a population where every other entity is using a different strategy, striking a different balance, with frequent bouts of the violence and direct competition so common in nature. The necessary processing power and variety of algorithms required are enormous, and it's almost unsurprising that some other abilities fell out of the bargain (like music and math and philosophy and those creative and descriptive sorts of things).

So if the computers get bored, we make several and let them talk.

30/3/08 23:15  
Blogger ichandrae said...

My comment for this is under the chapter 4 entry.
I don't know why it got there. Don't ask me to think, I am only human.

28/4/08 10:47  

