What is Artificial Intelligence?
By D.D. Wells.
In order to promote Artificial Intelligence, it is now being marketed as “deep learning”. Does the introduction of this relatively new phrase mean to suggest that humans learn only superficially? Deep learning will make a whole new world: great job losses, new jobs in creating AI, whole new perceptions. People have made estimates of the job losses due to AI. For example, Carl Benedikt Frey and Michael Osborne of Oxford University published an article stating that 47% of jobs in America were at high risk of being “substituted by computer capital” in the near future. Let’s look at the basis of these predictions, because the people who make them haven’t tried to tell us exactly what AI is and how it differs from human intelligence.
In any process, information flows in various ways. There are basically three levels of information flow involved in any kind of process. The first covers the order of magnitude involved in processes such as data storage, emailing, word processing, texting, Googling, and buying online. These processes involve relatively small information flows and could, in theory, be carried out with mechanical calculators. However, since the vast majority of uses for most computers involve these processes, it is appropriate to call this the computational level.
The second is the perceptual level. But we must distinguish between the two halves of the perceptual process. The first half involves the information content of what is coming to our sense organs -- photons for sight, pressure waves for sound, molecules in the case of taste and smell. The second half involves the mind creating a mental representation using this information -- a visual picture in the case of sight, and tastes and smells for gustatory and olfactory perceptions.
We could call the third level the intentionality level. To elucidate this level, we must distinguish manipulations, skills, and talents. Manipulations are motor abilities involving information processing that is probably at the perceptual level; they include the motor dexterity involved in certain kinds of art, crafts, music and athletics. Manipulation is simply mechanically going through the motions of a performance. Skills are creative ways to manipulate. A talent is an innate ability to easily develop a skill: there must be innate wiring in the brain that enables a talented person to develop the skill of improvising in jazz or cartooning in art. A robot may be technically able to manipulate the holes of a clarinet, rendering a difficult series of notes. However, the ability to creatively improvise in jazz while manipulating a clarinet, or to interpret in the case of classical music, is a skill aided by an innate talent.
It certainly seems that AI will be able to compose songs as good as what popular music has produced in the last 50 years. And it certainly will be able to create art as good as what’s been passed off as modern art. However, will AI ever be able to creatively compose a song like Stardust, improvise like Benny Goodman, or compose a Beethoven symphony? An article in the February 2018 issue of Scientific American discussed the threat to music from AI. As with a lot of questionable articles these days, the writer doesn’t mention the most important point, which is that the robot first needs to be trained on examples that humans pick out as melodious. To do this for a symphony or a painting would involve the robot churning out billions of possibilities that humans would have to sort through. It is easier for a human to compose the symphony or paint the painting in the first place. In general, regarding aesthetics, the computer may eventually match certain human capabilities, but only if it were guided by human aesthetic judgment. But then, should we attribute the results to the robot or to the human?
The second half of the perceptual process involves what is called consciousness or awareness. Many events considered conscious are called qualia. When light comes into our eyes as we look out over the world, the resulting brain activity causes a mental picture to appear in the mind. This picture is a quale. Other forms of qualia are tastes, smells, feelings and emotions. The view in this paper is that the mind creates the detail of qualia, a process honed by millions of years of evolutionary history.
It is pretty certain that robots don’t create qualia. After all, it appears that lower animals like fish and reptiles don’t have the brain circuitry to create qualia, and anything organic is immensely more complex than a robot. However, people are notoriously behavioristic. Perhaps in the distant future, a robot could be made that would move continuously, talk, and even have warm skin like Arnold Schwarzenegger in The Terminator. That might convince a lot of people that the robot is experiencing tastes and smells. But scientists would probably know better.
The famous Nobel laureate Roger Penrose addresses the consciousness problem. Penrose is one of the few people who appreciate the two fundamental questions of the universe: the puzzle that the big bang seems to contradict the second law of thermodynamics, and the problem of how mentality, including consciousness, arises from information coming to our physical sense organs and activity in a physical brain. How does physical plus physical equal mental?
First, the nature of the relationship between what’s out in the world and what’s in the mind is very controversial. Most philosophers, like John Searle, take the view that there are identifiable things out in the world, which the mind perceives and of which it then creates a mental picture. So, for example, Searle specifically says that before there were humans to perceive them, there were nevertheless mountains and rivers on Earth. But is this really true?
There is a hierarchy of worlds starting with the world of galaxies, down to the world of solar systems, down to features of planets like Searle’s mountains and rivers, down to the chemical world of atoms and molecules, down to the quantum mechanical world of quarks, and then to the speculative world of strings and fields. We don’t know how many worlds there are below these. The premise of this article is that an observer of any size could look around and, on the basis of information coming to his or her sensory organs from whatever is in the vicinity, would create perceptual entities comparable in size to the observer. So, for example, humans on Earth, receiving information from whatever is out in our world, create perceptual objects like Searle’s mountains and rivers. Through hypothesizing, scientists at a given level could create models of laws and entities for supra-levels above them and causal micro-levels below. But entities and events at those levels would not be created by the process of perception but by scientific hypothesizing.
In some ways, this view could remind us of the philosophy of Bishop Berkeley, who said things don’t exist if there is no perceiver. I think Berkeley was right in this respect. But he also went along with the conventional view that things did exist without human observers. His explanation was that there was a God who would be peering at objects to keep them in existence even when there were no human observers. This is how God came to exist for Berkeley; after all, he was a Christian Bishop. Of course, this view gave God a lot to look at. But what Berkeley didn’t point out, because he didn’t think of it, is that entities created by God's perceptual processes on a given day would depend on the size God assumed on that day. Of course, we could allow Berkeley's God to grow or shrink to any size. If He assumed a certain size, His perceptual processes would create atoms and molecules. At a much larger size, He would see the solar system with planets and moons. In any event, Berkeley needs to be revived.
Humans happen to be of a size such that their perceptual processes create Searle’s mountains and rivers. After many mistaken flat-earth accounts, scientists finally postulated a supra-world consisting of a spherical earth revolving around a blazing sun. We make pictures of these entities so that they become part of our perceptual world. But would observers the size of the solar system then perceive solar entities similar to the entities we have postulated for that level? We’ll never know.
Going downward, science gives us pictures of entities in our causal world of atoms and molecules, with little electron balls revolving around a nucleus. But scientists tell us these are no better than cartoons. Just as humans have created models of the supra-world of solar systems, we might wonder what models observers of a size to perceive atoms would make of the next-level supra-world above them. I suppose they might correlate types of atoms and molecules to come up with edges and 3-dimensional objects. But we don’t know how close their supra-world objects would be to our mountains and rivers. The bottom line is that Searle is assuming observers the size of humans to see mountains and rivers. Before humans came along, perceptual and scientific worlds would depend on the size of the observer, who might not perceive mountains and rivers as we see them.
We have said that the mind creates perceptual objects like a glass of water. Let’s elaborate. When we view a glass of water, light photons bounce off whatever is out in the world and into our eyes. It is the mind that creates the picture of the water in the glass. And we say the picture of the glass is in the mind as a form of consciousness. A robot could make a physical picture – even a camera can do that (although even this would have to be interpreted by human observers). But where is the computer’s mental picture?
As we mentioned, animals below mammals, like fish and reptiles, are probably not conscious and do not experience qualia. We will use the term ‘detection’ to denote the relationship between these animals and arrangements of molecules out in the world. A fish doesn’t create a picture of an obstacle in its path. It detects the obstacle so as to swim around it. Photons come into its eyes and go directly to motor areas of the brain, which run the muscles so as to avoid the obstacle. The fish doesn’t have the brain organs to be able to create a quale – a mental picture.
It's hard for us to imagine how we would avoid a bad smell if we didn’t have a bad sensation in our minds. We say we’re avoiding the smell because of the bad qualia. If an animal couldn’t experience qualia, why would it avoid the smell? Well, lower animals are wired by instinct to avoid injury by simply reacting to patterns of photons by themselves in place of vision, or reacting to molecules by themselves in the case of smells and tastes. Detection is sufficient to tell these animals how to navigate the world. But mere detection also explains why they can easily be fooled.
Among mammals, we still have the question of how and why the brain causes a conscious mental picture or quale in the mind. Penrose starts off analyzing the consciousness hierarchy at a level called understanding. But I would think that this is the least complex form of consciousness. Not only that, but it’s a very special kind of understanding: mathematical understanding, which he rescues with the famous Gödel argument. Briefly, Gödel showed that humans can see the truth of certain number theory sentences that cannot be proven within number theory. This shows that humans must be using more than computational algorithms to ascertain these truths. This would open the door for the idea that humans can create qualia.
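To give the shape of that argument, here is the textbook schematic of a Gödel sentence (a standard summary, not Penrose’s own formulation). For a consistent formal system F that contains basic arithmetic, one can construct a sentence G that, through an arithmetical coding of provability, in effect says of itself:

    G  <->  "G is not provable in F"

If F is consistent, then F cannot prove G. But by reflecting on that very fact, we judge G to be true, and Penrose’s point is that this act of seeing G to be true is not a derivation inside F, nor inside any algorithm equivalent to F.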
However, even if we accept his arguments about mathematical truths, what about the 99.99% of people who never have occasion to engage number theory? I think number theory is in a world of its own. In mathematical logic, truth is a matter of ascertaining the facts, while proof refers to what can be deduced within number theory. However, for real-world events, where do we locate truth and proof? For truth, sentences about real-world events would be deemed true if they experimentally corresponded to the facts. The question then is, are people using algorithms to determine the truth of statements describing real-world events? If a robot could do it, this would suggest that humans might be using algorithms to determine the truth of such sentences.
Let’s suppose the robot could be programmed to parse the world into our familiar objects and events. Then, by looking at falling objects, I think the robot could come up with Newton’s laws. And since these laws would be factual, describing reality, they would be deemed true. But where does the concept of proof come in? I don’t see how the Gödel argument would apply: there is no theory in which to prove these propositions. If we insist on using the term ‘proof’, it seems to me that determining truth would be the proof of the corresponding sentences. If the cat is on the mat, wouldn’t the robot be able to “see” the totality of this event and so determine the truth of the corresponding proposition? What could proof possibly involve in addition to seeing factual truths?
But determining the truth of propositions that involve human emotions, intentions and understanding is an altogether different matter. Penrose believes that no algorithm could make such human judgments. But he doesn’t stress that judgments and understanding are about meaning-laden events in the world, which is what intentionality is all about. Intentionality is the reaching out to objects in the world and conferring meaning on them.
There seems to be a hierarchy of conscious events in terms of complexity. Qualia are at the top of this hierarchy, with mathematical understanding below them. So, if Penrose can explain qualia, his methods may be able to explain less complex mentality. In this regard, his very important work, explained in his books, claims that qualia stem from a non-computable quantum mechanical process in the microtubules of the brain. But he admits he doesn’t know what this process is, and he doesn’t speculate as to how much information-processing capacity qualia require. The question is, why is it non-computable? Is it because it’s something like chaos theory; or because a Turing machine simulation would not stop; or because it requires infinite-capacity information processing? Penrose doesn’t say. Or perhaps it isn’t non-computable at all. After all, this non-computability rests on the non-computability of mathematical understanding which, as we have pointed out, may not be relevant to perceptual processes of real-world events. All his illustrations of non-computability are clever and convincing, such as determining whether a certain set of polyominoes will tile the plane; but these examples are again mathematical in nature and very far removed from real-world perception.
We might look at the evolution of consciousness to try to understand and explain it. Is there a role for meaning in the evolution of consciousness? And how much information capacity is required for consciousness and for intentionality? At this point, we should define ‘meaning’. In my book, The Inner Brain, Conceptualization and the System Mind, I explained that any concept has a surrounding meaning cloud. Of course, every individual would have a different meaning cloud. Let’s look at the personal meaning cloud you might have for the concept HORSE. This cloud would contain all the mental items in your mind having anything to do with horses. It would consist of all the memories you have of experiences with horses, all the mental images you could conjure up involving horses doing a variety of activities, and all the beliefs and knowledge you have explicitly or implicitly about horses. There could be an infinite number of such items, although we can’t bring most of them to mind. And each of us would have a different list. (The concept of infinity is very problematic. In my opinion, a trillion trillion, 10 to the 24th power, for all practical purposes can be considered infinity. That is more than the number of zeros and ones in all the digital devices in the entire world.)
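As a quick check of the arithmetic in the parenthetical remark (the comparison with the world’s digital devices is the author’s estimate and is not verified here), a minimal Python sketch:

    # A trillion is 10^12, so a trillion trillion is 10^12 * 10^12 = 10^24.
    trillion = 10 ** 12
    assert trillion * trillion == 10 ** 24
    print(trillion * trillion)  # 1000000000000000000000000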
Intentionality, then, is the ability to have meaning clouds for all the concepts you are aware of. Thus, the ability to do intentionality would require an infinite information-processing capacity, which the human brain seems able to accommodate. This infinite capacity rests on the fact that synapses seem to vary continuously, enabling an infinite number of states. An added note: if language, including semantics, involved only finite state processes, we could never get beyond syntax, which is all computers can do. We can sum up with the aphorism that semantics is simply infinite state syntax.
This is illustrated by John Searle’s famous Chinese room argument. He imagines a room containing devices that look at the syntax of an incoming sentence and, on the basis of syntax alone, come up with an appropriate response to it. Searle says that on the basis of syntactical manipulations alone, the Chinese room would never understand Chinese, because you could always invent a new statement that would confound the room and produce an inappropriate response. However, if, whenever this happened, you could add to the store of syntactical information so as to respond appropriately to the new statement, you would have, in effect, a constructivist definition of infinity, and the Chinese room would be deemed to understand Chinese. Its semantics would rely on its infinite state syntax.
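A minimal sketch in Python may make the point vivid (this is my own toy illustration of a purely syntactic responder, not Searle’s thought experiment itself; the sentences and replies are invented):

    # A toy "Chinese room": a lookup table from input strings to canned replies.
    # It manipulates strings by their form alone and attaches no meaning to them.
    responses = {
        "ni hao": "ni hao",
        "ni chi le ma": "chi le, xie xie",
    }

    def room(sentence):
        # Pure syntax: match the string and return the stored reply, or fail.
        return responses.get(sentence, "???")

    def patch(sentence, reply):
        # Each time the room is confounded, one more rule is added by hand.
        responses[sentence] = reply

    print(room("ni hao"))            # an "appropriate" reply
    print(room("jin tian leng ma"))  # "???" -- a new sentence confounds the room
    patch("jin tian leng ma", "you dian leng")
    print(room("jin tian leng ma"))  # now it responds appropriately

The interesting case is the limit: a table that could always be extended to handle the next confounding sentence is what the constructivist definition of infinity amounts to, and on the view taken here its ‘semantics’ would just be unbounded, infinite state syntax.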
An example came up when IBM’s program Deep Blue beat Garry Kasparov, the world’s chess champion at the time. I felt sorry for Kasparov because, in a sense, he had lost to the computer. However, he might not have felt as bad if he had realized that the computer didn’t know it had won, because the computer doesn’t know what winning means. In fact, the computer isn’t really playing chess in the intentional sense, because the computer doesn’t know what ‘playing’ means. Humans say the computer won because we anthropomorphize the activity of the computer. Besides, we now know that the human is playing on the basis of abstract gestalts, which are of a higher order of complexity, whereas the computer is simply analyzing and evaluating moves under an algorithmic rating system, as sketched below. We might say the computer is intelligent, although if pressed we would probably say it was a kind of artificial intelligence.
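To make “analyzing and evaluating moves under an algorithmic rating system” concrete, here is a generic minimax sketch of the kind of search chess programs perform (a simplified illustration, not Deep Blue’s actual code; legal_moves, apply and evaluate are hypothetical placeholders):

    # Generic minimax: rate positions with a numeric evaluation function and
    # pick the line with the best guaranteed score. Nothing here "knows" what
    # winning means; the procedure only compares numbers.
    def minimax(position, depth, maximizing, legal_moves, apply, evaluate):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # e.g. material balance as a number
        scores = [
            minimax(apply(position, m), depth - 1, not maximizing,
                    legal_moves, apply, evaluate)
            for m in moves
        ]
        return max(scores) if maximizing else min(scores)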
These examples show that computers with finite state capacity cannot accommodate meaning, which implies that meaning is infinite state. Alan Turing, in his 1936 paper, conceived of an abstract computer containing an infinite tape and a few simple operations. He showed that his Turing machine could do anything a modern computer can do. It is interesting to note in this regard that Turing's machine might be able to do intentionality if it could use its infinite tape. However, a machine attempting to carry out such a putative computation would never stop. And when a Turing machine is running a computation that doesn’t stop, you can’t know that it won’t stop, so you’ll never see a demonstration of intentionality in a machine.
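A minimal Turing machine simulator (my own toy sketch, not Turing’s formulation) makes the halting point concrete: you can step the machine as long as you like, but if it has not yet stopped, nothing in the simulation tells you whether it ever will:

    # A tiny Turing machine: the table maps (state, symbol) to
    # (new_symbol, move, new_state). Simulation is easy; deciding in advance
    # whether an arbitrary machine will halt is, in general, impossible.
    def run(table, tape, state="start", halt="halt", max_steps=1000):
        cells = dict(enumerate(tape))  # sparse tape over the integers
        head = 0
        for _ in range(max_steps):
            if state == halt:
                return "halted"
            symbol = cells.get(head, "_")  # "_" means a blank cell
            new_symbol, move, state = table[(state, symbol)]
            cells[head] = new_symbol
            head += 1 if move == "R" else -1
        return "still running"  # after max_steps we simply don't know

    # A machine that scans right over 1s and halts at the first blank.
    table = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run(table, "111"))  # "halted"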
Searle comes to the same conclusion about meaning. First, he says that the brain is a computer, mainly because everything is a computer. But then he adds that what the brain can do that a computer can’t is intentionality. He doesn’t follow up on this insight to ask why. This paper takes two positions in this regard: first, that intentionality and the meaning it involves are an infinite state matter; and second, that it is only because the brain is not a finite state device that it can do intentionality. Also, in my book, it is shown that infinite state capacity and its accompanying intentionality are the foundation for the brain’s ability to support intelligence, creativity and consciousness itself.
Getting back to information capacity, how much is required for consciousness? To answer this, let’s start with how and why consciousness evolved in the first place. Animals below mammals probably have very little consciousness and very little intelligence, and most objects and events have little meaning to them. These creatures can survive in a non-perceptual world of mere photon reception and molecular detection, which they are hard-wired to respond to without knowing the meaning of any of it. They don’t need consciousness and probably don’t have it.
So, why did consciousness evolve in mammals? The suggestion here is that consciousness emerged as a vehicle for free will, and free will became a necessity in the increasingly complex world of mammals. As an aside, in my book I explained that the existence of free will rests on the idea that meaning structures in the brain create a total field that represents the entire person and can resolve the choice among equivalent brain impulse paths. And since this field is the result of enormous information-content meaning structures, orders of magnitude above the perceptual level, we will never be able to make models and predictions of it.
In order for free will to be exercised, there must be a workspace where a number of meaning-laden concepts are held in view simultaneously to be mixed and matched. This would clearly give evolving creatures a wide array of combinations and permutations for dealing with an increasingly complex world. Possibly the ability to simultaneously hold together meaning-laden concepts in such a workspace is what consciousness is. And since such concepts require infinite capacity, so would such a conscious workspace. This view implies that the advantage of free will was a necessary concomitant of the evolution of consciousness. But how were the physical conditions for consciousness possible? Penrose has, in effect, two explanations, and it isn’t clear how they are related.
First, he has his quantum mechanical process in the microtubules of the brain. But then he has a second explanation, which is that if you get lots of neurons to cohere in a quantum mechanical way, they oscillate in a kind of harmony. I take it that coherence is the product of entanglement. And this produces consciousness. But meaning structures involving huge numbers of neurons interconnected in an infinite variety of ways would be a perfect basis for such coherence to take place. My claim is that it was only when intentionality became possible, because of the infinite information capacity of the mammalian brain, that large numbers of interconnected neurons could begin to cohere and create the oscillations that are the basis for consciousness. Also, since interconnectivity in a brain rises exponentially with brain volume, this would suggest that the degree of consciousness would increase by some power of brain volume; and this seems to be the case.
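One way to make the scaling intuition concrete (a combinatorial illustration of my own, not a neuroscientific model): among N neurons there are N(N-1)/2 possible pairwise connections, and the number of distinct wiring patterns over those connections is 2 raised to that power, which grows explosively as N increases:

    # Possible pairwise connections among N neurons, and the count of distinct
    # wiring patterns over them. (Combinatorics only, not a model of cortex.)
    for n in (10, 100, 1000):
        pairs = n * (n - 1) // 2
        print(f"N={n}: {pairs} possible connections, 2^{pairs} wiring patterns")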
Our question as to the relationship between consciousness and intentionality might be elucidated by the third leg of our mind/body stool: intelligence. Besides, this paper is about artificial intelligence, and we will be better able to define AI if we have an elaborated concept of human intelligence. First, we might point out that intelligence also seems to require intentionality. I think computers might score well on the usual IQ tests; computers can manipulate geometric figures and have enormous vocabularies. However, many researchers in the field of intelligence have pointed out that there is more to intelligence than what is measured by IQ tests.
The view here is that in many areas of life, we ascribe intelligence on the basis of understanding meaning. If a general reads the actions of an opposing general in a war and can accurately judge what those actions mean, his effective responses will be deemed intelligent. A CEO who judges the meaning of a competitor's behavior and calls for an effective course of action to counter it is called intelligent. In sports we hear announcers referring to basketball or football IQ. This refers to a player’s ability to accurately judge the meaning of a defensive position and respond effectively. Thus, intelligence involves intentionality.
Now, even though intentionality and free will may have been necessary for consciousness to evolve, the question arises as to whether it had to evolve. Well, with existing capacity and an increasingly complex world to deal with, consciousness would enable advanced mammals to use free will to plan and cope in this world. Looking at the lower animals, we see that their level of consciousness seems only to minimally accommodate what little meaning and intelligence they need to survive. This might be near zero in fish and reptiles.
As we go up the phylogenic scale, we see among mammals that consciousness becomes more articulated as they evolve increasing capacity to accommodate meaning and to display intelligence. We don’t think of mice as having a great deal of free will, but we do grant dogs and cats a limited degree of free will. That’s the way we talk. Of course, many scientists would dispute the whole idea of free will, especially at lower levels of complexity. But how do they prove this? G. E. Moore and Wittgenstein would say that how we talk creates some degree of reality, and our ordinary talk is soaked in words for choices and freedom of action.
We can consider the evolution of computers the same way that we looked at the evolution of consciousness. This leads to the main question of this paper: what is Artificial Intelligence, and how does it differ from human intelligence? What has caused a stir in the AI field is that computer chips have reached a level of information processing comparable to the perceptual level. Computers can now recognize faces, judge moods, diagnose diseases, read x-rays, and distinguish any category of object. But all of this involves only finite information capacity. However, nobody in the AI field even talks about the capacity required to accommodate skills involving creativity, intelligence involving intentionality, and consciousness involving sensations and emotions. If these capabilities require an infinite state device to accommodate them, this explains why only the brains of advanced creatures like mammals have these capabilities. It also explains why, after chip capacity has reached levels of complexity undreamed of 50 years ago, there is still no intentionality and so no consciousness in the computer, as the chess example shows. If this is true, Artificial Intelligence is no more intelligence than an artificial rose is a rose.
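As a cartoon of what perceptual-level recognition amounts to in a machine (a minimal sketch of my own, not how production face- or x-ray-recognition systems are built), the machine reduces an input to a finite vector of numbers and compares it against stored examples; nothing in the process attaches meaning to the label it outputs:

    import math

    # A toy nearest-neighbour recognizer: inputs are finite feature vectors,
    # and "recognition" is just a distance comparison against stored examples.
    examples = [
        ((0.9, 0.1, 0.2), "cat"),
        ((0.1, 0.8, 0.7), "dog"),
    ]

    def classify(features):
        nearest = min(examples, key=lambda ex: math.dist(ex[0], features))
        return nearest[1]

    print(classify((0.85, 0.2, 0.25)))  # "cat"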
As to the future of AI, I think its effects have been greatly overblown. Yes, perhaps in the near future there will be robotic bricklayers, plumbers, and other trades requiring only perceptual-level skills, and driverless cars already exist. However, jobs that involve an array of emotions like empathy and understanding cannot be replaced by finite state devices like robots. Jobs in teaching, personnel, administration, counseling, human resources, social work, law enforcement, and other such fields will probably never be replaced.
The situation is even more acute when we turn to healthcare. As we continue to commit genocide on our genes, healthcare will become an increasingly greater part of the economy. In my book, I predicted that by the end of the century healthcare will account for three fourths of the economy. And by its very nature, healthcare requires doctors and nurses whose relationships with patients involve great amounts of understanding and meaning.
The only areas in limbo are those involving aesthetics. We don’t know if computers will ever be able to design a great piece of architecture or compose a great jazz tune, let alone a Beethoven symphony. In our ever-shrinking aesthetic world, computer outputs may very well be good enough for most people. Another important area of human activity is justice. Will robots ever replace jurors in a criminal trial? Most likely not. In my opinion, the reason is that the robot cannot cope with things like intentions and motivations, which involve meanings. Already, courts have excluded machine lie detectors from jury trials.
Summarizing: as Penrose would admit, we don’t know the nature of his quantum mechanical process that enables qualia, and so we can’t be sure whether it is non-computable. However, the claim in this paper is that consciousness and its various aspects are an end-product, and that any non-computability characterizes the process that produces them. The first step was the evolution of an infinite state mammalian brain to accommodate meanings. This is the foundation for consciousness, which evolved as a workspace enabling the free-will mixing and matching of meaningful concepts. Then, the high information-content meaning structures of the brain made possible the quantum mechanical coherence of large numbers of neurons involved in those meaning structures and caused the oscillations that made qualia possible. As we see on the phylogenic scale, qualia are the last step in the evolutionary tree. Since computers are finite state devices, intentionality and all that flows from it will never be possible in them. Thus, what separates human intelligence from artificial intelligence is intentionality and the ability to accommodate meanings.