the Turing test

Are sensations necessary for intelligence?
Let us see if we can remove sensation from the picture.
But before that, let me state that this is not a perpetuation of some philosophical debates like the explanatory gap, the philosophical zombie, or the hard problem of consciousness (each has its own Wikipedia article).

I really do not care whether materialism/physicalism is true or false, or whether mind is fundamentally different from matter. I find these kinds of problems very useful for young philosophers to sharpen their milk teeth on, but no more than that. So, if you are new to philosophy, those topics can be very interesting for you, but do not expect any metaphysical enlightenment from my writings.
We start with our visual field of 250x250, or whatever dimensions you prefer. Since we have no sensations to account for, all we have are neural codes of some sort (chemical, electrical) in the retina, which are somehow "translated" into fewer neural codes in the optic nerve. These "optic" codes will have to make it possible for the brain to do everything we know it can do: dance the jig, play chess, fall in love, invent an atomic bomb.
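As a toy illustration of such a "translation" (my own sketch, not part of the argument itself; the pooling scheme and all numbers are arbitrary assumptions), a 250x250 retinal array can be reduced to a handful of optic codes by averaging patches:

```python
# A toy "retina to optic nerve" compression (hypothetical numbers):
# a 250x250 grid of activation values is reduced to a 10x10 grid of
# "optic codes" by averaging 25x25 patches. Nothing here depends on
# sensation -- the codes are just numbers.

def retina_to_optic_codes(retina, patch=25):
    """Average-pool a square retina into a much smaller grid of codes."""
    n = len(retina)
    codes = []
    for r in range(0, n, patch):
        row = []
        for c in range(0, n, patch):
            total = sum(retina[r + i][c + j]
                        for i in range(patch)
                        for j in range(patch))
            row.append(total / (patch * patch))
        codes.append(row)
    return codes

retina = [[(r + c) % 2 for c in range(250)] for r in range(250)]
codes = retina_to_optic_codes(retina)
print(len(codes), len(codes[0]))  # 10 10 -- 62500 values become 100 codes
```

Any compression scheme would do here; the only point is that far fewer codes leave the eye than enter it.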
Those optic codes alone cannot, of course, do all that, but we can imagine the same process going on for auditory stimuli, proprioceptive feedback during actions, and all those other cool things you can imagine for the brain. Yes, even falling in love is to be without feeling and sensation. Ned Block, who is vehemently opposed to this idea, loves to talk about orgasms: what is an orgasm without all the sensations and feelings?
Remember, we are not saying that those feelings and sensations do not exist, just that they are of no consequence for the functioning of the brain, and that we could therefore construct a robot with a brain and learning abilities similar to our own, MINUS SENSATION.
Let us go back to our robot vision.
Evidently, the idea that the optic array could be made part of a bigger array remains valid, even if there are no sensations coded for in this array. The same goes, all things being equal, for all the other input organs and motor functions. The whole would not look much different from any neural network, with or without a body.
What will this robot be able to see?
If its brain is complex enough, and it must be if this argumentation is to have any meaning, then it will be able to do what every animal and human can do: recognize and categorize objects.
And then what? Store them in a database? To what purpose? As Agent Smith would say, "it is purpose that defines us". So our robot needs one or more purposes. I hear the proponents of the elimination of sensation already crying victory, but that would be a little premature, if the time for it ever comes.
Life, as we know it, is the result of billions of years of evolution, so we must grant the same time to our robot and ask ourselves: would it have been possible for evolution to create purpose in beings without sensation? Such a question is not easily answered without taking a stand on the very issue we are discussing. To deny this possibility would be unfair and unprovable. So, let us assume that our robot, which evolution could have created with flesh and bones, or with any other materials it would have deemed fit, has a survival instinct.
Therefore
1) ROBOT NEEDS SURVIVAL INSTINCT

But that is not enough: all the instinct in the world will not help you if you keep forgetting how to survive.

2) ROBOT NEEDS MEMORY

No groundbreaking discoveries so far; let us go on.

Does it not need some kind of intelligence, be it very rudimentary? After all, the evolution argument could be put to use here also. Okay, but where shall we start?
Maybe the survival instinct has to be more fleshed out. Let us start with the idea that the robot will try to avoid any damage to itself.
We do not need the complementary instinct, which states that the robot should seek that which is beneficial to its constitution; it is, after all, included in the package: not doing something can be damaging to its health, and is as such to be avoided.
Let me remark that the positive formulation of a survival instinct would be much less effective. Seeking that which is beneficial to you does not make you see the dangers in obtaining it.
So we have now:
1) ROBOT NEEDS SURVIVAL INSTINCT. AVOIDS DAMAGE.

Recognizing and categorizing objects:
That demands a complex perceptual apparatus that appeared only very late in the history of life on Earth. The first systems were probably very primitive and entailed no more than reacting to light by moving towards it or away from it.
(see the Wikipedia article on Braitenberg vehicles)
As Braitenberg showed, such organisms do not even need a memory to achieve complex behaviors.
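A minimal sketch of such a vehicle (my own toy, collapsed to one dimension; the sensor model and constants are arbitrary assumptions): two sensors drive motion directly, and crossing or not crossing the wiring is the whole difference between approach and flight.

```python
# A one-dimensional caricature of a Braitenberg vehicle. Two light
# sensors drive motion directly: crossed wiring approaches the light
# source, uncrossed wiring flees it. No memory, no representation --
# just wiring.

def step(vehicle_x, light_x, crossed=True):
    """One time step: sensor activation falls off with distance to the light."""
    left_sensor = 1.0 / (1.0 + abs(light_x - (vehicle_x - 0.1)))
    right_sensor = 1.0 / (1.0 + abs(light_x - (vehicle_x + 0.1)))
    if crossed:   # left sensor drives right wheel: turn toward the light
        velocity = right_sensor - left_sensor
    else:         # straight wiring: move away from the light
        velocity = left_sensor - right_sensor
    return vehicle_x + velocity

x = 0.0
for _ in range(50):
    x = step(x, light_x=5.0, crossed=True)
print(round(x, 2))  # the vehicle has crawled toward the light at x = 5.0
```

The behavior looks goal-directed, yet there is no goal anywhere in the mechanism, which is exactly Braitenberg's point.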

Even before such distal systems, which react to objects from a distance, there are even more primitive organisms that react only to direct contact. Often, the act of moving is also the act of eating, or of engulfing the prey.
(see the Wikipedia article on pseudopodia)
Such primitive organisms need no cognition or memory to survive. And their fast reproduction cycle is centered not on the individual but on the species itself. If one individual dies, there is no one to mourn its passing.
 
We can say that, at the least, primitive organisms do not need sensation to function properly, that is, in a compatible environment. In fact, they do not even need a survival instinct.
(That does not mean they have no sensations at all; this is purely theoretical.)
What about more complex systems? How do they come into being?

First, there is the number of receptors within the same organ. More receptors mean a greater discrimination of details, and the possibility of identifying and categorizing objects, which would not be possible with single receptors all reacting to the same stimulus. For vision, we would need receptors that react to different wavelengths, just as is the case with animals and humans.
Let us then imagine such a primitive organism, like the amoeba, with, next to its pseudopodia, eyes that are as complex as the human eye. Let us also assume that the difference in body and "brain" constitution is limited to the necessary neurons in the optic nerve and its connections to the moving and feeding mechanisms. In other words, instead of needing to make direct contact with its prey to recognize it and feed upon it (or run away from it), our super-amoeba can now learn to recognize it from a distance.
What does it need for that?
Our amoeba, let us call her Amy, can only do 3 things:
- move forward,
- move backward
- feed (and excrete, which can also be used to move backward or forward).

We are back to our visual arrays: the small one, the visual scene, and the big one, the storage in memory of the different combinations of activated optic neurons.
Amy has to survive long enough to build up enough experience in order to survive even longer. She could of course pass on what she has learned to her offspring, creating different generations with different memories. Genetic theory states that this is impossible, but we will do as the Tortoise and have it written down before moving further, ignoring this impossibility altogether.

Amy can now associate different combinations of stimuli with different actions. Some will make her move forward to feed, others will make her run away. But how can she know when to run away?
Either the encounter is lethal, and that is the end of the story.
Or she survives the encounter, because she is at least as strong, and as lucky, as her adversary.
Suppose her adversary shows somewhere on its body what we would call a "red" mark.
Amy would then, most probably, avoid any organism that displayed a "red" characteristic. She has no way of evaluating the significance of "red" as anything more than a general sign of danger.
But how is it even possible for Amy to develop such an avoidance behavior? She cannot think "red is dangerous, let's move away!".
Her body/brain must develop a sign indicating danger and the behavior to adopt. Amy must be able to assess damage to her body and brand it as what we would call "undesirable". Which means that somehow, next to optic neurons, Amy will need "pain" neurons.
But Amoebas already have the ability to react to direct contact, so we do not need to create a whole new network. All we need is to make a connection between the "perception of red" and the appropriate behavior (move away). How can such a connection come to pass? And what would happen if Amy barely survived an encounter with an adversary with "blue" characteristics? And yellow? Or a combination of all the above? For each encounter, a new connection would have to be made.
Apparently it would be much easier if there were a central location which all "undesirable" stimuli could activate: the very principle of a pain center. That would make learning possible without the need for neural reshaping at each new event. But easier does not necessarily mean more true or more probable. Natural evolution is a slow process, and easy ways are self-evident only in hindsight. So maybe this "pain center" did evolve from numerous new connections being made each time necessity demanded it. And since its only characteristic is its connection to the move-away mechanism, all visual neurons will end up connected to that mechanism. The pain center will be the sum of all connections.
What we have now is a very simple construction: some "perceptual" combinations activate the avoidance behavior, with no flexibility whatsoever. "Red" means "move away", whatever the circumstances.
Let us see if we can make Amy a little bit more flexible.
Let us imagine that Amy, who had developed a "phobia" of any organism showing "red", finds herself unable to avoid such an organism. And because Amy is the heroine of this story, she not only survives, she wins the battle and consumes her opponent.
We have now the strange situation that "red" perceptions mean both avoid and feed upon.
The encounter with the last opponent was not an easy one, Amy did sustain damage to her body.
So the connection to the avoid mechanism cannot simply be erased. But the fact that Amy did win cannot be ignored either.
We do not need to grant Amy any rational consideration in the matter, just assume that an association between the feeding behavior and the "perceptual" neurons will be made. Here again, we can start with the connections already there but inert, or hope that evolution, in time, will take care of these connections. In both cases, the fact that the "feed" neurons and the "red" neurons are both activated at the same time would make some form of Hebbian learning more likely.
(see the Wikipedia article on Hebbian theory)
We now have a pain center and a feed center competing, both vying for the same behavior. But please, don't open your statistical toolbox yet.
We can imagine a simple chemical or mechanical process whereby, if confronted with two organisms, one of them not "flagged" as a danger, Amy will go for that one rather than for the "red" one. But if confronted with two "red" opponents, the non-red characteristics suddenly become more relevant than the red ones. Or even the particular red configuration of each organism.
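That choice rule can be caricatured in a few lines (a hedged sketch; the trait names, weights, and function names are all my own assumptions, not anything from the argument itself):

```python
# A caricature of the choice mechanism: traits wired to the move-away
# mechanism penalize a target, all other traits add a little appeal,
# and Amy simply takes whichever candidate scores best.

DANGER_TRAITS = {"red"}   # traits connected to the avoid mechanism

def preference(traits):
    """Danger connections lower preference; other traits raise it slightly."""
    penalty = sum(1.0 for t in traits if t in DANGER_TRAITS)
    appeal = sum(0.1 for t in traits if t not in DANGER_TRAITS)
    return appeal - penalty

def choose(a, b):
    return a if preference(a) >= preference(b) else b

# One red, one not: Amy goes for the unflagged one.
assert choose({"red", "small"}, {"green", "small"}) == {"green", "small"}
# Both red: the non-red characteristics suddenly decide.
assert choose({"red", "small"}, {"red", "big", "slow"}) == {"red", "big", "slow"}
```

No statistics needed: when both candidates carry the same danger flag, the penalty cancels out and the remaining traits tip the balance on their own.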
Amy has achieved a degree of flexibility she did not have before.
And we still did not need any appeal to sensation to explain her behavior.
 
(Allow me, when talking about perceptions or sensations as we would experience them, like "red", to leave out the quotes from now on, but please remember that we are talking about a robot or robot-like organism.)
Amy is now capable of choosing her prey, or identifying an enemy, based on distinctive traits.
All we needed for that were 2 principles:
1) perceptual neurons that are linked to motor and feeding functions, as well as to sensors on the body to identify damage. At the beginning, as in the amoeba, there need not be any distinction between those three functions.
2) The hebbian principle, or a version of it: "what fires together wires together".
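The second principle fits in a few lines of code (an illustrative toy of mine, not a biological model; the unit names and learning rate are arbitrary): every pair of units that fire in the same event gets a stronger connection.

```python
# "What fires together wires together", in its simplest possible form:
# whenever two units are active in the same event, the weight of the
# connection between them grows a little.

def hebbian_update(weights, active_units, rate=0.1):
    """Strengthen every pairwise connection among co-active units."""
    for a in active_units:
        for b in active_units:
            if a != b:
                weights[(a, b)] = weights.get((a, b), 0.0) + rate
    return weights

w = {}
# Amy wins a fight and feeds: "red" and "feed" neurons fire together.
for _ in range(5):
    hebbian_update(w, {"red", "feed"})
print(round(w[("red", "feed")], 2))  # 0.5 after five co-activations
```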

What Amy cannot do is choose between two targets of equal desirability or danger. A situation very familiar to botters. Her brain does not permit any solution, so it will be caught in an endless loop.
We must realize that such a situation would be extremely rare in real life, where complexity is even greater than in a game world. So the necessity to solve it is not really overwhelming.
More important would be the integration of other input modalities, like sound or smell (which, in real evolution, is much more primitive than the other sensory modalities).
If we keep the same functions, moving and feeding and reacting to damage, then the end of the evolutionary path for Amy would be very close.
Especially feeding is important here. As long as Amy keeps the same diet, there will be no dramatic change in her behavior. She will learn in time to distinguish between friend and foe, edible/inedible, dangerous/innocuous, and all possible combinations of those different categories. She will also learn to rely on all kind of signals, not only visual. All in all, her behavior will be as complex, or as simple as many animals on earth.

We can also easily imagine that circumstances would create cooperation behavior. If Amy finds herself close to an enemy she would normally avoid, and if that enemy has sustained any damage attacking another amoeba, its smell could trigger the approach and feed behavior in Amy. This could, in fact, save the other amoeba from a fateful end.
What will then be the relationship of our new couple? Assuming they do not fight for the spoils (they have no reason to, the way we have wired them so far), new visual, sound and smell combinations will be stored in memory. The presence of another amoeba in a fight with a common enemy could trigger the move and feed behavior even if the enemy is not wounded. We would then have a "feeding frenzy" behavior, very different from the normal, more prudent feeding behavior.
That could end up with just the sight of an enemy and a nearby amoeba being enough to trigger this frenzy.
The end of this episode could be that Amy, whenever seeing another amoeba, will keep close to it because of the associations with food. Fear or desire for protection would play no role, for the simple reason that we have not endowed our amoeba with these feelings.
Now, besides feeding frenzy, we are witness to another form of behavior: grouping, flocking, swarming, etc.
And that, as far as I can see, would be the end of Amy's evolutionary path.
Our preliminary conclusion would be: even reasonably complex organisms can learn without sensation.
But their evolutionary path seems to be a short one. That is something that needs more reflection.
 
Could Amy avoid this evolutionary dead-end?
It would be, I suppose, easier for her to become independent of her group if she got isolated from it. The memories would still be there, but the necessity to act upon them would be gone.
But once again, we are confronted with the limited number of factors that determine her behavior and learning opportunities. We can change her diet, give her fins, or even feet or wings; the same kinds of behavior would repeat again and again.
A more versatile body would mean more chances of survival, and more variety in behavior, but certainly not more intelligence.
How can we increase her intelligence?
Jean-Paul Sartre, in one of his books, writes of someone standing at the window and watching himself cross the street.
The direct link we have between input, memory and action is, I think, the reason for Amy's stagnation.
Through this direct link, Amy has learned to take into account characteristics of organisms which did not in themselves indicate danger. Instead of reacting to simple stimuli (red, green or yellow marks), she started to react to complex ones (bigger parts or even the whole of the body of preys and predators, or even shapes and shadows).
What we now need is a decoupling of stimuli/input and actions/output, to get even more flexibility.
How can we achieve that? We can, of course, play god and design it like we want it. But first, we would like to know how such a decoupling could come to pass in nature.
Let us say that we keep the links as they are, but we stop the current, as it were, just before it activates the appropriate actions.
We could have a virtual copy of the moving and feeding mechanisms, and any stimulus would end up in the copy first.
If nothing else happens in this copy, that would only create an extra step.
What if we made it a real world simulation?
Amy gets a stimulus, the stimulus activates the simulation, and only the winning virtual action gets to pass to the following phase, and become the real action in the real world.
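A sketch of that gating step (all names, associations, and scores below are hypothetical placeholders of mine): the stimulus activates the virtual copy, and only the winning virtual action reaches the real world.

```python
# Stimulus -> simulation -> action gating: the stimulus first activates
# virtual actions in a copy of the action mechanisms, and only the
# winner is executed in the real world.

ASSOCIATIONS = {     # learned stimulus -> list of (action, remembered outcome)
    "red":   [("avoid", 1.0), ("feed", 0.5)],   # red has meant both by now
    "green": [("feed", 0.8)],
}

def simulate(stimulus):
    """Run every associated action virtually, scored by remembered outcome."""
    scored = {}
    for action, outcome in ASSOCIATIONS.get(stimulus, []):
        scored[action] = scored.get(action, 0.0) + outcome
    return scored

def act(stimulus):
    scored = simulate(stimulus)
    if not scored:
        return "ignore"
    return max(scored, key=scored.get)   # only the winning virtual action passes

print(act("red"))    # avoid -- the stronger memory wins the virtual contest
print(act("green"))  # feed
```

Note that the simulation adds nothing the memory did not already contain; it only decides which of the competing connections gets to drive the body.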
There is no way we can make this copy more complex than the original, unless of course we are ready to wave our magic wand around, which we are not allowed to do. So, at least at the start of its evolution, the copy will be just that.
But does that make Amy smarter, or only slower?
The only possible gain would be if some "deliberation" took place in the simulation. But what would be the advantage compared to what is already happening in memory in case of conflicting stimuli? For instance when a possible prey shows a red mark and other traits of easy prey at the same time?
The only advantage that I can see is that, in the deliberation, not only the perceptual stimuli and their respective actions are taken into account, but also what happens after a choice has been made. Let us not forget that the bigger the memory, the more connections there are, and the longer it takes to have all existing connections activated. These things take time, even in an efficient neural network. And organisms need to make fast decisions if they want to survive. By involving the actions, those parts of memory that are related to the events following the decision are also taken into account in the final decision. So the simulation, even if it is a simple copy of the action mechanisms, allows a more efficient use of the memory resources.
The fact that we did not need to add anything extraneous to the organism is, I think, a sign that we are on the right track.
Also, note that this will not immediately make Amy smarter. Her behavior in familiar situations will not necessarily change. But the possibility of delaying an action until its consequences have been taken into account can mean a big leap forward.
Still, without external, additional influences, Amy may become smarter than comparable organisms, but the difference would certainly not be enough to make her an equal of humans.
But at least, we have not been compelled into using sensation. I wonder how long this will last.
 
I think I made a serious logical or conceptual mistake. When I assumed that the number of neurons in Amy's retina would be drastically reduced in the optic nerve, just as is the case with humans, I forgot an essential ingredient. Amy has no (color) sensations. So the question is, under which heading could different retinal codes be combined? I honestly do not see any possibility, unless of course the mathematical and statistical toolboxes are opened and magically applied to Amy's brain.
Remember that Amy started as a primitive organism, and unless we can justify the existence of those formulas, we have no right to use any of them here.
Each retinal neuron is said to be sensitive to a certain wavelength. In animals and humans, individual neurons are combined to produce sensations, when they do not do so themselves.
It is like a digital camera without the firmware and software. Potential images can be taken, but unless they are translated into colors humans can see and interpret, they remain a meaningless array of (ranges of) electrical values.
Even the idea of a pixel loses its meaning, since it is built up from the sensations of Red, Green and Blue.

Does that mean that Amy cannot exist? Certainly not. But we will have to make the number of optic neurons equal to that of the retinal neurons. It will also demand more of our imagination to understand how this array of electrical values can be used to identify and recognize objects. More than ever, the use of human language, laden as it is with sensation, must be understood in a metaphorical manner.
The advantage of such a difference between Amy and us, is that it will, eventually, hopefully, help us better understand the role of sensation in animal and human perception and cognition.
 
Since I do not know as yet how to proceed further with Amy - our relationship has reached a critical point, so maybe a time out will do us both some good - allow me to draw one conclusion from my mistake.
Computer vision, as it is now researched and used, makes use of human principles that rely heavily on color, and therefore on sensations. But computers cannot have sensations, which means we are saddling our computers with concepts that cannot have any meaning for them. The least I can say is that this is very inefficient.
At worst, it could mean that at least some of the difficulties in this area of research stem from this use of inappropriate concepts and tools. Maybe we should rethink computer vision and free it from anthropomorphism.
 
Many species, when they have attained the evolutionary plateau Amy has reached, have usually learned one trick, and one trick alone.
A few examples:
- breaking clams on a stone while you are floating on your back,
- throwing turtles or other hard-shelled creatures from a height.
- using a stick to get to ants or insects (used by birds as well as apes),
- breaking nuts with a stone,
- washing sand off your food,
- using an empty shell as a mobile home and protection on the road,
- ...
I am sure you have seen at least one of those on National Geographic Channel or Animal Planet.
Could Amy learn such a trick also? What would it take for her to do that? Do we need another evolutionary miracle, or can we make do with what we already have?
There is only one principle involved: the use of a tool.
How do you learn that?
First, like I said before, it really does not matter whether you have prehensile thumbs, or wings or claws. Also, even fish seem to adorn their nests to attract potential mates. And like birds with their beaks, they have only their mouth to manipulate objects.
Specific body shapes, or parts, are not a prerequisite to new forms of behavior or even, probably, intelligence.

In "The Act of Creation", Arthur Koestler, in his particular blend of Lamarckian evolution, Bergsonian vitalism and Jungian mysticism, did manage to come up with a very powerful argument against Darwinism: it would be very difficult to use vanilla evolution theory, or its genetic variation, to explain the appearance of such behavior. Unluckily, he did not offer any useful clues himself as to how it could have happened.
I do not have a solution either, but this kind of behavior does make the general question more concrete: why are humans so much smarter than animals?
If they could learn one trick, why not more?
But first we have to answer a previous question: how did they learn the first trick?

edit:
How come not all species are vegetarians? Easy, there is not enough grass for everybody. And the more rabbits, the more foxes, until there are not enough rabbits left for all foxes, and they start dying out. Giving a chance again to the rabbits.
The cheetah and the gazelle is another standard fairy tale told by evolutionists of all calibers.
But how come cheetahs do not do what other animals do: team up to get their prey and protect their young?
For that matter, why do gazelles not do the same as bulls and elephants, and also team up to protect their young? That would be the smart thing to do after all. If others have learned it, why didn't they?
Well, maybe there are no heroes among gazelles because they are, individually, too weak. That is not the case with bulls or elephants. Each individual alone is a match for a lion or other predator. So, they do not really team up, do they? They just act according to their own possibilities.
This explanation sounds foolproof, until we see images of a mother gazelle attacking a lion and making it step back in fear of her horns and hoofs. There is nothing weak in a gazelle, and certainly not in a bunch of them.
And imagine rabbits teaming up against a fox. The poor guy would not stand a chance in hell!
Predators, across all species, would then of course reconsider the benefits of cooperation. And we would get a new arms race. But why don't we? Why is cooperation such an isolated behavior, except among swarm species?
Once again, it seems that once a species has learned one trick (for instance, speed for the gazelle and cheetah), there is no room left for other possibilities.
 
How can Amy learn how to use a tool?
A tool is, in principle, nothing more than an "enhanced" organ. Instead of using a part of your body directly, you grab an external object, and make it a part of your body.
Such an explanation would seem very plausible when using a stick, for instance. There does not seem to be a great difference between stretching your arm (or neck) and lengthening your reach with a stick. It is less obvious in the case of indirect use, like what some mammals do:
- dive for a clam AND a stone,
- swim to the surface,
- lie on your back,
- lay the stone on your belly,
- pound on the stone with the clam until it breaks.
Let us take the easy cases first.
Let us imagine that Amy is a robot: how would we incorporate such an ability and keep it plausible in evolutionary terms?

Somehow, the external object must be "seen" (remember that I won't be using quotation marks all the time) as a possible enhancement to a body part. The fact that it is not factually a part of the body has to be overlooked somehow.
This is not in itself a big hurdle. If Amy ever sees insects running along a stick, she might, by chance, grab the stick instead of the insect. Since she does not want to eat the stick, she will release it and grab the fleeing insects. The association between stick and insects has been made. And also between grabbing and releasing the stick.
This is where it gets tough. How do we get Amy to stick the stick into a hole and have insects attach themselves to it?
She has to see the stick as an enhancement to her own reach. But that is exactly what we are trying to explain!
Ever seen an ape lord grabbing a branch and thumping the floor with it to impress its constituents? It never uses it to beat on them, only to make noise. I find this quite peculiar.
It has no problem with beating on them with its bare paws. So the branch is only useful to make noise and stir up dust. The same ape will use a stick to catch ants, and it, or someone related to it, will throw stones at intruders.
This ape has no trouble seeing the branch or stick as an enhancement to its body. Before asking ourselves why it does not do more with it, let us first analyze this ability in a little more detail.
In each example, the intent of the animal seems central. It wants to get the insects, it wants to impress other apes, and it wants to frighten intruders.
So the problem is not seeing the tool as an enhancement to one's body, but seeing it as a tool, independently of the use it can have in specific situations.
As soon as the intent has been realized, the external object ceases to be an enhancement to the body, and becomes a burden to be released immediately.
But "intent" is a mental term, and as such, we cannot apply it to Amy. This analysis certainly sounds promising, but it is also laden with booby traps. We must proceed cautiously.
 
We already had something like intent at Amy's creation. Didn't we assume that she needed a survival instinct?
But we also talked about Braitenberg's vehicles, which did not need such an instinct, or any cognitive or emotional abilities.
In fact, our Amy, until now, represents nothing more than a sophisticated version of Braitenberg's devices. And that may be just the problem right there.
The link between action (moving, feeding) and memory has been loosened, and that has given Amy an added flexibility and versatility, but it has not really added to her intelligence.
What if Amy had all the intelligence she could ever need right there? What if there was nothing to add to her intelligence? This certainly does not mean that she has learned everything there is to learn. We already know that this is not true. But what if she is already intelligent enough to learn whatever can possibly be learned? What if she is missing something other than intelligence?
Before we give free rein to our imagination, maybe we should trace our steps back and examine them carefully.
 
So far, all we needed was (taking into consideration only vision as a distal input modality):
- a retina,
- an optic nerve (in Amy's case, both were the same),
- damage receptors, being the same as touch receptors,
- moving and feeding receptors/actuators,
- a decoupling between those actuators and the input and damage receptors,
- a version of the Hebbian principle: "what fires together, wires together".

I gave no explanation as to how the decoupling came into being. I did what any evolutionist would do: I needed it, so I let evolution create it. This is certainly a weak link in the argumentation. And of course, if the Hebbian principle is rejected, then everything crumbles.

As far as Amy's intelligence is concerned, it is solely based on that last principle: the input arrays, connected to the damage, feed and move arrays, represent all possible combinations that could ever determine Amy's behavior. Chance and experience will determine which possible combinations become real connections.
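One way to read that claim (my own formalization, with made-up unit names): the possible combinations are all input-action pairs, and chance and experience realize only a subset of them as actual connections.

```python
# The space of possible connections is the set of all input-action
# pairs; experience has wired up only some of them.

import itertools

INPUT_UNITS = ["red", "green", "touch"]
ACTION_UNITS = ["move", "feed", "avoid"]

possible = set(itertools.product(INPUT_UNITS, ACTION_UNITS))
print(len(possible))  # 9 potential connections

# Chance and experience have realized only a subset:
experienced = [("red", "avoid"), ("green", "feed"), ("touch", "feed")]
real = {pair for pair in experienced if pair in possible}
print(len(real))  # 3 actual connections
```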
Let us assume that all possible combinations have been realized. Amy would be a robot with an incredible amount of experience and memory. Let us now turn Amy into the mammal mentioned previously. How would she look at the stone on her tummy?
Could she start looking at it as a tool, independently of the specific use she has learned to recognize?
Suppose one day, she goes diving and finds clams, but no stone. They have all been buried under the mud, or taken away by the current. Even if we assume that she will start digging around in the hope of finding one, there is no reason to think that this experience would in any way change her behavior in a drastic manner. That would only be the case if, once she has found a stone, and eaten her fill, she decides to keep the stone for the next time.
Such a behavior would not be very surprising, after all hoarding is a well known phenomenon.
Still, there must be something preventing all animals from displaying such an essential behavior. Essential, because, once again, it would un-link the stone from immediate usage, and turn it into a tool to be kept until needed.
What, then, is the difference between tool keeping and hoarding? I am not aware of any species that keeps tools, and certainly none that keeps more than one. There must therefore be a powerful, if subtle, distinction between the two behaviors.
The question becomes therefore: can Amy decide to keep the stone?


edit: Sea otter - Wikipedia, the free encyclopedia is quite interesting, as are some of the references given. I found the fact that sea otters have pouches where they keep fish and stones fascinating. It makes tool keeping, in the case of sea otters, a trivial possibility.

edit 2: http://www.otterproject.org/wp-cont...er_foraging_and_feeding_behaviors_for_MMC.pdf is a literature review and contains many details concerning the behavior of the sea otter. I could not find out whether otters have really attained the stage of tool keeping, or are stuck, even with their pouches, like all other animals, in opportunistic tool use.
 
Let me state that any organism capable of hoarding should be capable of tool keeping. After all, the principle seems the same: keep something until you need it. The difference is that hoarders do not usually keep their treasures on themselves, but bury them somewhere, or otherwise put them out of reach.
Could that be an essential difference? The fact that some animals, like the sea otter, have pouches where they could keep their tools without hampering their movements should make tool keeping a more common phenomenon than it is. The fact that otters, who apparently can keep their tools, do not display any hoarding behavior makes the relationship even murkier.

Let us rephrase the question in terms easily applicable to a robot. Is "keeping the stone" one of the possible combinations offered by Amy's brain? If it is, we have our answer right there, otherwise, we will have to keep looking.
We have now a very concrete problem, and we should be able to solve it unambiguously.

All combinations are based on the link between memory and the feed and move mechanisms. So the stone will certainly be there somewhere.
Suppose Amy has just finished feeding, with the help of a stone, and she releases or throws the stone away. The only way to inhibit such an action would be if not all combinations were determined by the immediate need to move and/or feed. And that is our problem right there. We have constructed Amy in such a way that she can only react to the environment. We have not even given her a reason to move or feed. After all, hunger is definitely a sensation.
We can remedy that by making her react to internal stimuli that would be functionally equivalent to hunger, but that would certainly not solve our problem of immediacy.
Amy must display (aspects of) hunger behavior while not hungry!
That would be possible if we had a copy of those internal stimuli, linked to the copy of the move and feed mechanisms, and of course, to the rest of the memory.
Virtual hunger would let her keep the stone, because she needs it to still this virtual hunger.
We now have a new problem, or maybe new opportunities: virtual processes that must result in real actions: the keeping of the stone. Her virtual hunger must certainly not result in virtual movement and feeding behavior!
Her lack of real hunger must somehow inhibit the virtual feeding behavior and allow only the keeping of the stone. Such a pattern is by definition possible, but it is certainly not necessary that it be so.
Still, this analysis would seem to show that, even without sensation, Amy could learn to take the future into consideration. But for that, we need a new evolutionary miracle. The fact that it is of the same kind as the first one, does not make it any less "magical".
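The inhibition pattern just described can be rendered as a simple gate. This is a toy sketch with invented names, not a claim about how such a circuit would actually be realized: real hunger drives real feeding, while virtual hunger is allowed through only when real hunger is absent, and then it produces nothing but the tool-keeping action.

```python
def amy_actions(real_hunger, virtual_hunger, has_stone):
    """Sketch of the gating described above: real hunger triggers real
    feeding; virtual hunger may fire only when real hunger is absent,
    and then it must not trigger feeding, only keeping the tool."""
    actions = []
    if real_hunger:
        actions.append("seek food and feed")   # immediate, real behavior
    elif virtual_hunger and has_stone:
        actions.append("keep the stone")       # future-directed behavior
    return actions

print(amy_actions(real_hunger=True, virtual_hunger=False, has_stone=True))
print(amy_actions(real_hunger=False, virtual_hunger=True, has_stone=True))
```

The `elif` is the whole point: without that asymmetry, virtual hunger would simply trigger (virtual) feeding, and the stone would still be thrown away.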
 
We now have:
- a copy of the external stimuli (in the form of memory),
- a copy of the internal stimuli,
- a copy of the feed and move mechanisms.
I suppose that is the end of the line. There is nothing left to be copied. If we need something else, and we do for greater intelligence, we will have to find an internal justification for it. And I, for my part, can think of none.
Sensationless Amy can go no further!
We can of course appeal to Evolution and pray that it will bring about a change that would make further development possible. But it will have to be something that is not already present in Amy as we know her. We need a radical genetic mutation! A so-called evolutionary leap.
Saltation (biology) - Wikipedia, the free encyclopedia
Punctuated equilibrium - Wikipedia, the free encyclopedia

Since I completely lack the expertise to say anything meaningful in this regard, I propose first to see whether, by reintroducing sensation, we can break the evolutionary deadlock. There will always be time later to ask ourselves what it would take to break it without abandoning the sensationless hypothesis.
 
To analyze the role sensation can have in the evolution of intelligence, we will have to follow the same path as with Sensationless Amy.
That means we start with a primitive version where the link between stimuli and behavior is direct.
The fact that Sensitive Amy does have sensations has at least one advantage:
the number of neurons from the retina to the optic nerve can be drastically reduced. The sensitive version is definitely more efficient at the start.
It will be obvious that this Amy (I will leave out the "sensitive" from now on, unless needed) will need some kind of memory sooner or later.
How about the copy of internal stimuli? It can probably be left out, since we now have a direct link between internal and external stimuli on one hand, and sensations on the other.
No copy of internal stimuli.

By the same token, we could assume that the need for a copy of the move and feed mechanisms would no longer be present. The hunger sensation, and other internal, proprioceptive sensations, could fulfill the same functions the copy of internal stimuli did in Sensationless Amy.

The magic that we expect from evolution would be, so far, nothing more than the existence of sensation.

But we are, I am afraid, getting ahead of ourselves. We were supposed to analyze the first stage, that of a direct link between stimuli and behavior.
It would seem that, with or without sensation, in the case of a direct link the possibilities for both Amys are quite the same. It is a mechanical process whereby stimulus S produces reaction R. Sensation might as well be left out of the equation. Still, since we have already assumed its existence, we will be able to use it as soon as circumstances allow.

We therefore need to break this direct link, if it ever existed, and connect external and internal stimuli not to the action apparatus, but to the sensations.
Amy's second stage, or maybe the initial one, means that the direct link has been replaced by a Stimulus-Sensation-Action pattern.
Still, all we seem to have done is replace the copy with sensations. The logic, though, remains the same:
Stimulus - Copy (of Stimulus) or Sensation - Action.
So, unless we can somehow show the added value of sensation, we have not made much progress.
Furthermore, it would be misleading to speak of sensations without being more specific.

Which sensations are we allocating to Amy?
- visual and other perceptual sensations,
- touch, pain,
- hunger,
- sexual drive for reproduction (which we shall not use in this analysis),
- proprioceptive sensations (of moving limbs).
Those are the sensations that we could grant a primitive organism like Amy without too much controversy.

I had already alluded to the issue of tool keeping with Sensationless Amy. Here, instead of general internal stimuli, we have one special kind: hunger. What we said about "hunger" and "virtual hunger" can be said here without the quotation marks.
It would appear we do need a copy of the sensations anyway. That is, if we want to break the direct link between sensation and behavior, like we did with the direct link between internal stimuli and behavior in the previous analysis.
 
Once we have an indirect link between sensation and behavior, the same sensation (hunger) can give rise to different behaviors. The body chemistry can change without the need to change the neural connections. For instance, a change of diet because of a genetic mutation will easily be integrated in the existing connections.
To be accurate, this argument is only valid if hunger does more than just trigger the move and feed mechanisms. It must also identify the object of hunger. We could parody the famous Husserlian dictum "consciousness is consciousness of something" with "hunger is hunger of something".
It seemed so obvious in the analysis of Sensationless Amy that I did not mention it. But here it is necessary to do so, if we want to appreciate the added flexibility that sensations bring to Amy.
In Sensationless Amy, a "need" for element e is connected to a specific trigger, while in Sensitive Amy, any element needed is connected to a general sensation of hunger. We could say that in the first case we are speaking of a specific "hunger", while in the second, a general hunger is meant.
The move and feed mechanisms in Sensationless Amy will only be activated by specific triggers "known" beforehand. In Sensitive Amy, the mechanisms can be triggered whatever the needed elements have become during the organism's evolution. The sensation of hunger will have to adapt to the new elements that trigger it, but everything that comes after it can remain the same, except for changes dependent on experience. Another way of catching prey, for instance, may have to be developed, but that is something that could also be forced by a change of environment, or a mutation within the prey itself.
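As a toy illustration of this contrast (all element names are invented for the example): Sensationless Amy's feeding is wired to specific triggers known beforehand, while Sensitive Amy routes every needed element through one general hunger sensation, so a diet change only updates the trigger list and leaves everything downstream untouched.

```python
# Sensationless Amy: each food element is hard-wired to the feed mechanism.
SPECIFIC_TRIGGERS = {"clam": "feed", "crab": "feed"}   # "known" beforehand

def sensationless_react(element):
    # An element outside the wired set produces no behavior at all.
    return SPECIFIC_TRIGGERS.get(element)

# Sensitive Amy: elements map to one general sensation, which maps to behavior.
HUNGER_OBJECTS = {"clam", "crab"}

def sensitive_react(element, hungry=True):
    # Only the object list needs updating after, say, a diet change;
    # the hunger -> feed link itself never changes.
    if hungry and element in HUNGER_OBJECTS:
        return "feed"
    return None

# A diet change (new prey after a mutation) is a one-line update here...
HUNGER_OBJECTS.add("sea urchin")
print(sensitive_react("sea urchin"))       # feed
# ...while Sensationless Amy would need a new dedicated trigger wired in.
print(sensationless_react("sea urchin"))   # None
```

The point of the sketch is where the change lands: in the trigger list that feeds the general sensation, not in the connections downstream of it.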
 
The flexibility Sensitive Amy has attained is not so different from the flexibility that Sensationless Amy had reached herself. In both cases, it is an optimal use of existing possibilities, within the same kind of boundaries: a copy of the external stimuli in one case, of the sensations in the other.
We cannot really say that one Amy is smarter than the other. Their "mental" capabilities seem to match up. And even if we could pinpoint a difference, it certainly would not be a huge one.
The existence of sensations seems to make adaptation to genetic mutations less drastic, but whatever one Amy can learn, the other Amy can learn also.

Let us go on with the analysis, and hope something will come up that will help us understand the difference between the two Amys.


Does Amy need a copy of the external stimuli and of the action mechanisms?
I would say she does, because otherwise, she would be completely dependent on the external stimuli for any possible behavior.
But wait! We did not grant Sensationless Amy a copy of her external stimuli. At least, not in a way she could control them internally.
Memory is written whenever an experience happens, and that memory is then used by Amy (either of the two) in her "deliberations". But nowhere did we mention "virtual external stimuli", like we did with internal stimuli and actions.
What would happen if we added that to our list of copies?
More than ever, both Amys would emancipate themselves from the immediacy of external stimuli. Instead of having to wait for an external event, they can "imagine" it, and act accordingly. They can take initiatives that would have been impossible without the virtual external world "in their head".
How is that different from memory associations and the actions that follow from them? Don't they make this extra copy redundant?
I suppose they would, if these associations could happen freely, independently of external events. That is in fact all any copy does: it frees all kinds of internal data from external or internal processes (like the bodily need for nutrients).
A copy therefore need not be taken literally. It represents a different way for the brain to access its data.

I hate to be a party pooper, but I am afraid that even this essential addition would not be sufficient to make our Amys as smart as humans.
Let me rephrase it: both Amys are as smart as they could ever get. In fact, they are as smart as humans, if we take into consideration what their limitations are. Humans with the same set of sensations would not be any smarter.

We have the following arrays that are not part of the external world, but constitute what an organism can "think" and do:
- sensations,
- actions.
You can look at it as the maximal number of possibilities that different arrays of different structures can offer. The number of actions is limited more by the purposes behind them than by the body itself. Humans are the perfect example of this phenomenon: they can fly even higher, faster and longer than birds!
By using virtual copies of those arrays, organisms can enhance their existing capabilities, but only to the limit of all possible connections of both arrays.
If you want to change this limit, you will need to add to the number of sensations rather than to the number of actions. In the latter case, you will have more flexible behavior, but within the same "mental" boundaries. Whereas, if you add to the number of sensations, you drastically change the relationship of the organism to the world, and the possibilities it has to change it.
Of course, sensation is here used as a generic term for bodily sensations, feelings and emotions.
So, however strange it might seem, the only way for Sensitive Amy to get smarter than her twin sister, is to get more "sensitive", more emotional. And the only way for Sensationless Amy to keep up, would be to emulate her.
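The "limit of all possible connections" can be given a toy number (the array sizes are arbitrary, chosen only for illustration): with S sensation units and A action units there are S × A possible pairwise links, each either present or absent, hence 2^(S×A) possible wirings.

```python
def wiring_ceiling(sensations, actions):
    """Toy count of the 'mental' ceiling: each sensation-action pair
    can be connected or not, giving 2 ** (S * A) possible wirings."""
    return 2 ** (sensations * actions)

print(wiring_ceiling(3, 4))   # 4096 possible wiring patterns
print(wiring_ceiling(4, 4))   # one extra sensation unit: 65536 patterns
```

Note that the raw count is symmetric in sensations and actions; the argument above is that only new sensations change what those wirings can be about, not merely how many of them there are.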
 
I do not have a concrete example yet of the influence of a larger sensation array on the possibilities for an organism to change the world. But I would like to present a caveat, a warning:
Adding to the array of sensations which are directly linked to external events would not really broaden the horizon of Amy. Becoming a bat (What Is it Like to Be a Bat? - Wikipedia, the free encyclopedia) might be an exhilarating experience that would open all kinds of possibilities, but it would not change Amy in a fundamental way. In this case, it would mean a body change, and a new form of sensitivity to sound.
What would change Amy's relationship to the world, and to herself, would be sensations caused not by external but by internal events. This is also an evolutionary leap that cannot be explained in terms of the past. It makes sensations themselves an object of evolution. Instead of wasting time on how that could be possible, I will try to find concrete examples of "primitive" sensations that gave rise to complex ones, like sexual drive, reproduction, and love for the offspring and partner. I do not consider this a very convincing example, because such feelings are easily observable among animals that certainly do not have the mental capabilities of humans. So I will have to come up with better examples.

edit: caring certainly does not explain the difference between animals and humans, but it could maybe explain the difference in sophistication between different species. It would take more knowledge of comparative ethology (animal behavior) than I possess to prove the plausibility of such an argument.
 
When I first started reading up on animal intelligence and consciousness, some 10 years ago, there was hardly anything on the Internet. So I read a few books and any free article I could get my hands on. Except for Clever Hans, the only thing worth remembering, for me, was that animals were much smarter than people thought, but still nowhere near as smart as people.
Now, reading up again on the subject
Animal cognition - Wikipedia, the free encyclopedia
Bird intelligence - Wikipedia, the free encyclopedia,
my layman's conclusion still stands. Still, I despaired of ever being able to find a good example that would explain why sensation, in the general meaning I use, is necessary for any leap in intelligence.
See, my reconstruction of the Amys is purely abstract and theoretical, not to say very sketchy. I have no way of determining in which phase of evolution a real animal is, relative to the Amys, which is quite a conundrum.
I thought I would have to invent those examples, and hoped by reading up, that I would somehow find some inspiration.
Once in bed, it hit me that the answer had been staring me in the face all along, in every animal experiment I had ever read about.
Animal psychologists often make the distinction between "native" intelligence, as shown in the wild, and "trained" intelligence, as shown in experiments, where the human handler makes sure the animals are able to fulfill the prerequisites of the test. Animal psychologists have a tendency to look down on the second form of intelligence, thinking that only the wild one is really indicative.
I realized that animals are always acting naturally. The experimental settings, as strange as they might seem in comparison to the natural habitat, are nothing but an (extreme) change of the environment, one in which they now have to survive. The fact that this environment is human controlled does not make it any different from an environment controlled by Mother Nature or Evolution. In both cases, the animals have to accept it as it is and cope with it.
The distinction between native and in-experiment intelligence loses therefore much of its relevance.
The second, and the more important point for our subject, is this:
In almost every experiment, the animals solve a puzzle which they would never have been confronted with in nature. But they only do it out of the desire to get their reward! Any of the examples in almost any experiment is proof of my claim: animals are intelligent enough; what is holding them back is not an ounce of extra brain matter, or a few smart algorithms, but their own emotional setup. In that, they are no different from intelligent high-school or college dropouts who just did not have the proper motivation to go on.
I still have to look more closely at some examples, especially those tests the animals failed, to see if this is the whole of the story, but I am sure it is at least a big part of it.
 
There was a time when animals were thought to be controlled solely by instincts. Nowadays the view is much more nuanced, and the distinction between animal and human is becoming increasingly difficult to pinpoint. Tool use,
Tool use by animals - Wikipedia, the free encyclopedia,
once thought to be exclusively human, has turned out to be a widespread phenomenon among species.
But instinct has not disappeared from the picture; it has only been replaced by a more subtle concept, instinctual drift.
If you google this expression, you will find a wiki stub, Instinctive drift - Wikipedia, the free encyclopedia, and also a classic article
Classics in the History of Psychology -- Breland & Breland (1961)
that gave birth to this expression.
If you read some of the examples of animals reverting to "instinctual" behavior, despite the presence of rewards to behave according to the wishes of the trainers, you will be reminded of a phase common to both Amys: the direct link between sensations and behaviors.
It would seem that not only the emotional makeup of the animal is determinant, but also the kind of connections its brain will allow. Which should not surprise us really.
I found the example of the hedgehog, described in the paragraph "Biological Constraints" of a link already mentioned, very enlightening. The hedgehog is supposed to learn to avoid an electric shock by going to another corner of the cage when it hears a bell. But all it does is roll up into a ball, which is not really effective.
Dennett, in one of his books, maybe Consciousness Explained, gives the example of a bug that happens to pass in the vicinity of an ant nest, triggering hostile behavior from the ants, which are resolved to defend their nest. The bug, whose instinct in times of danger is to search for dark holes to hide in, keeps trying to get into the nest, adding to the virulence of the attacks and resulting in the bug's death. For Dennett, who tries to minimize the importance of anything that cannot be put in terms of computer programs, this example looks like a gift sent by the gods.
For me, it is an invitation to ask whether that has anything to do with intelligence. Animal psychologists are now being warned and trained to take into consideration the instinctual makeup of the animal they are studying, but they do not really seem to pause to consider its meaning: what animals cannot learn probably tells us as much about how their brains work as what they do learn.
 