
the Turing test

Thought is certainly one of the most intriguing subjects in psychology and philosophy.
One of the most influential books on this subject since the last quarter of the previous century has been Fodor's The Language of Thought (1975).
Language of thought hypothesis - Wikipedia, the free encyclopedia
The Language of Thought Hypothesis (Stanford Encyclopedia of Philosophy)

To keep it short, Fodor considers thought as a language, mentalese, and as such it has all the characteristics of a language. Among others, it uses a finite vocabulary and a finite number of rules. It would take us too far to discuss Fodor's conception in detail, but this should be enough for our purpose. You will have noticed that such a view is perfectly compatible with the vision of the brain as a computer. Fodor was in fact a strong proponent of the classical view of computing. He was convinced that even neural networks had to use the same principles as traditional computers.
I completely agree with Fodor, but only if his analysis is said to refer to the "language" of thought, and not to thought itself. It seems to me that his argument that the expression of any thought can only be piecemeal, and not in a holistic, ineffable way, is right on track. Whether it is a work of art, a technological challenge or an abstract thought, once you start externalizing your thought, you have to do it sequentially, and according to rational rules.
Where he goes wrong, as far as I am concerned, is in his conclusion that this necessity shows the essential nature of thought. I am convinced that thought, just like perception, is a top-down process. A thought is nothing more than the effect of a number of neurons getting activated at the same time. It can, as such, determine a course of action without any need to dissect it beforehand. But there is no way to express it without making use of sequential actions, like producing sounds or writing a text.
Furthermore, the question remains what exactly thoughts are made of. I see two aspects to thoughts:
1) They are a combination of actual actions, sensations and emotions.
2) They are virtual actions or virtual sensations and emotions.
Virtual means something we all can easily understand: we are able to disengage motor functions from other mental functions. We can "imagine" that we are doing something, without really doing it. Or, as is the case with empathy, we can imagine feelings we do not actually have at that moment.
 
Mirror neurons (Mirror neuron - Wikipedia, the free encyclopedia) are a very intriguing phenomenon in neurology (neurons that get activated by performing an action, as well as by seeing the action performed by somebody else). They have been considered by many great names in the fields of psychology and neurology as the panacea for many neural issues. They also show how dangerous it is to attribute specific properties to neurons, especially if these properties function like magical keys that unlock mysteries of the brain. I was pleasantly surprised to discover an article that formulated much more clearly my vague objections against mirror neurons: http://else.econ.ucl.ac.uk/papers/uploaded/362.pdf, Heyes, C., Where do mirror neurons come from? Neurosci. Biobehav. Rev. (2009).
That makes me look very suspiciously at my "own" idea of neurons that react to different inclinations, lines, or curves. Maybe the way the brain works is not so easily characterized, and we need more radical concepts. One of these is that the sensitivity to lines and curves is not a characteristic of neurons, but itself the result of experience. A conclusion I would have no problem subscribing to.
 
My remarks about the Language of Thought are certainly not everything there is to say on a millennia-old subject! Without pretending to ever be able to exhaust the issue, let me add a few notes.
People very often say that "children are cruel". Which is of course only true if you judge them with your adult eyes. They have not yet had the time to learn and internalize moral codes of conduct, which, unless they have really lousy parents or are born with a congenital deficiency, they all learn in time. How do they do that?
For starters, let me say that children not only learn moral/ethical principles, they also learn rational rules. I will not get into the nature-nurture discussion, so I will just assert the following without any proof:
Human children have the innate ability to learn ethical and rational rules, but whether they do learn them is a nurture matter.
Allow me now to concentrate on the rational side of the equation and start with a very bold assertion:
Rationality is a form of emotionality.
I realize that this is quite an unorthodox view, and I also know that I do not have an exhaustive list of arguments to support it. So I will just start with what I currently have.
My assertion would seem easier to countenance if you consider that most so-called rational rules are in fact social rules or conventions. Others are obviously empirical rules that could only be learned through experience.
But what about logical rules?
Before I answer this question, or at least make a first attempt, let me show you why the rules just mentioned are in fact emotionally based.
Pascal, the French seventeenth-century mathematician and philosopher, was also known for his quote that "what is Truth on this side of the Pyrenees, is an Untruth on the other side" (I do not think that it is the exact formulation, but it is close enough).
But relativism is still just another form of rationalism, and Pascal certainly never doubted the rationality of thought, just the human capacity of finding Truth.
My aim goes beyond relativism. I think that social rules are first learned because we want our parents, and later, teachers and friends, to approve of us. This is not a rational consideration but a very deep emotional instinct. Love and approval mean security and comfort, and if love is not enough for you, then the combination should be able to do it.
Empirical rules will certainly not be based on love (alone). But nature is a harsh teacher who does not allow mistakes very often. Her motto would be: you break the game, you pay with pain. Something that is utterly undesirable to small children. And again, avoiding pain is certainly not a rational response at the outset.
I hope I have, at least, made my assertion plausible. Just let me catch my breath before I tackle the "hard problem of rationality".

edit: the right French quote is: "Vérité en deçà des Pyrénées, erreur au-delà."
 
In formal logic, the most important element is the so-called operator. It determines how to deal with a proposition in case of a unary operator like NOT, or how propositions relate to each other, like with OR, AND, IF...THEN, etc.
There is of course much more to say on the subject, but that will suffice for my purpose here.
One remark seems to push itself to the foreground: operators usually represent actions.
- do Not do this,
- do this AND that,
- do this OR that,
- IF you do this, THEN you must do this also.
Even if you replace "do" by "think", the action aspect continues to be present.
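To make the "operators as actions" reading concrete, here is a minimal sketch; the tiny "actions" and function names are my own inventions, purely for illustration:

```python
# A minimal sketch: logical operators read as instructions that combine actions.
# The "actions" here are just functions with side effects; names are illustrative only.

def wave():
    print("waving")

def bow():
    print("bowing")

def do_not(action):
    # "do NOT do this": suppress the action entirely
    def suppressed():
        pass
    return suppressed

def do_and(first, second):
    # "do this AND that": perform both, in sequence
    def both():
        first()
        second()
    return both

def do_or(first, second, prefer_first=True):
    # "do this OR that": perform one of the two
    def either():
        (first if prefer_first else second)()
    return either

def do_if_then(condition, consequence):
    # "IF you do this, THEN you must do this also"
    def conditional(did_condition: bool):
        if did_condition:
            condition()
            consequence()
    return conditional

do_and(wave, bow)()           # waving, then bowing
do_not(wave)()                # nothing happens: the action is suppressed
do_if_then(wave, bow)(True)   # waving, then bowing, because the condition was acted on
```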
"How To Do Things With Words" of 1955 by J. L. Austin - Wikipedia, the free encyclopedia is a very enlightening booklet which tells only half of the story. Austin gives examples like " I name this ship The queen Elizabeth", or "I take ... as my lawful wife", which are not just the utterance of words and ideas, but at the same time what he calls "speech acts". Actions in and out of themselves.
Here, you are supposed to do things with thoughts, and not just words.
Are the thoughts then not in the operators but in the propositions themselves? Logic is of course the relationships of propositions through operators, still, we are not talking of logic as a discipline, but of thoughts themselves.
There is also the case of the ancient syllogisms:
"All men are mortal
Socrates is a man
Socrates is mortal."
There does not seem to be any action here, does there?
We could easily reinterpret this classic as follows:
Take all men without exception.
If Socrates is among them, then he will also be mortal like all the others.
Such an interpretation would be very superficial and would bring no added clarity to the subject. But maybe it shows us the way.
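Just to make that reinterpretation concrete, here is a throwaway sketch of the syllogism as a membership check over a finite collection; the collection is of course invented:

```python
# Sketch: the syllogism read as an instruction over a finite collection.
# The collection ("all the men I am looking at right now") is invented for illustration.

men_in_view = {"Socrates", "Plato", "Aristotle"}   # take all men (in view) without exception

def conclude_mortal(name: str) -> bool:
    # If the name is among them, then he will also be mortal like all the others.
    return name in men_in_view

print(conclude_mortal("Socrates"))   # True
```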
"All" is neither a fact that we can perceive, nor does it represent any emotion or sensation as far as I can see. Even when the "all" refers to a finite group, it keeps its abstract nature. Apparently.
"All the men I am looking at right now are mortal.
Socrates is one of the men I am looking at right now.
Socrates is mortal."
Compare this with
"All the men i could ever look at...."
That looks suspiciously like a virtual perception to me, doesn't it? Also, all these considerations sound suspiciously like Hume's ideas. I read him a very long time ago, but that does not mean that his influence has disappeared from my thinking. I suppose I will have to reread him.
David Hume - Wikipedia, the free encyclopedia
 
A concept that keeps coming back in psychological theories of perception is that of "expectation". I will give an example that shows that such a combination of perception and expectation is not strange at all. An experiment with babies looked something like this: an object was shown to a baby, and then moved behind a screen. When the object reappeared, there was no significant reaction from the baby. Then 2 objects were shown, which were then moved behind the screen. Only one object reappeared. The babies sucked harder on their pacifiers, a sure sign that something was very wrong there. Obviously, the babies were expecting as many objects to reappear as they had seen move behind the screen. The question of course is whether the babies were expecting anything in the first case. After all, an expectation means that you already have a representation of what should happen before it does. I find that hard to believe of babies, even if their reaction in the second case clearly shows that they are surprised. But you can be surprised by something even if you had no previous expectations at all. So the assumption that perception is always accompanied by expectation is based on a fallacy. This is important because very often perception is coupled with other intellectual functions which turn it into an even more mysterious operation. Some see it as a form of deductive or statistical reasoning (Bayesian approaches to brain function - Wikipedia, the free encyclopedia), while others are convinced that perception cannot be explained without reference to action.
These conceptions are not only theoretical, they have very practical consequences. Rodney Brooks, for instance (Rodney Brooks - Wikipedia, the free encyclopedia), is a proponent of embodied cognition and an enactivist theory of vision, but he is also a builder of robots. He believes that cognition, and vision, are impossible without a body. A brain is not enough.
While I have great sympathy for such a combined approach, I think that the borders between perception and other functions can become very easily blurred.
One reason why this conceptual confusion, as I see it, is still very strong comes from the popularity of the What and How, or the What and Where, theory of vision (Two-streams hypothesis - Wikipedia, the free encyclopedia).
This theory is, once again, only plausible if the brain is some kind of computer with different modules, each with its own processing capabilities. Rather than hammer on my opposing view, I would like to remark that even among the proponents of the brain-as-a-computer, this theory has its detractors, who are hardly mentioned by Goldstein and only briefly referred to in the wiki.
There are 2 main arguments in favor of the theory that some parts of the brain process the identity of objects (the what stream), while others process the location of and possible actions on this object (the where and how stream).
First, experiments showing that optical illusions, which obviously affect the what stream, have no influence on the shape of the fingers when people are asked to grasp the objects concerned. The width of the grasp seems to be related to the real width of the object, and not the illusory one. This would strongly indicate the independence of both perceptual streams.
Let me state right away that I would have no idea how to account for the discrepancy shown in these experiments. If that is indeed the case, then I have gone seriously wrong somewhere. Luckily for me, the opinions of experts are divided on this issue. Still, that remains quite a hurdle I am not sure I know how to deal with.
The second argument stated by Goldstein, is the putative fact that different neurons (ganglion cells) are responsible for the what and the how/where stream.
This argument is not a problem as far as I am concerned. The idea that neurons have different sensitivities (or better yet, are linked to different sensations) is neither an argument for the computer model of the brain, nor against it.
I would be the last to advocate the isolated study of any brain function, still I feel compelled to warn against the easy amalgamation of functions under the header of vague processing models. And I promise to try and show how all this can be used in object recognition. After all, this is a botting forum.
 
I have had time to think about the problem of illusion (the what) and grasping (the how and where), and I think I know how to salvage my views. The wiki mentions 2 articles that disagree with the two-streams-hypothesis, but they are not publicly accessible. Since I refuse to pay protection money to the publishing mob to read a few pages, you will have to be satisfied with my own efforts.
The rod and frame illusion (https://www.google.nl/search?q=rod+...ANKmK0AXZ1IHwAw&ved=0CCkQsAQ&biw=1280&bih=590) is usually used for these experiments, but any other optical illusion where an object's real shape and size are incorrectly perceived, should do it. Optical illusion - Wikipedia, the free encyclopedia.
I think that focus is the key to this conundrum. The illusion is an overall effect; it can be shattered by focusing on only parts of the scene, or by covering part of it with a sheet of paper. When we try to grasp an object, we are concentrating on those parts of the object we want to grasp, and not on the whole object, nor on the overall scene. The illusion then stops having any effect on our perception. The fact that the illusion returns with a vengeance is not an argument per se for the two-streams hypothesis. As is often the case, optical illusions do not disappear just because we know that they are illusions (see the Ponzo illusion).
 
Let's see if we can distill some rules for object recognition, based on my analysis until now.
It will be obvious to everyone that we will not be using any mathematical or statistical rules.
By the same token, we will not try to find distinguishing features of objects in a scene prior to having identified at least the family/kind to which the object belongs.
Since we are not allowed any mathematical or statistical tricks, all that we can use is preexisting knowledge in the form of databases. Animals, and especially humans, take years before they are able to correctly identify, and certainly manipulate, objects. We do not have this luxury, so we will rely on databases instead.
Let me give you a concrete example.
Take a screenshot of your toon anywhere you want. Imagine the kind of "knowledge" your toon would need to learn its way around that area (a rough sketch follows the list below).
- vegetation
- topographical elements: mountains, reefs, lakes...
- kind of mobs
- ...
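A rough sketch of what such a database-driven approach could look like for one area; every entry is an invented example, the point being only that recognition consults stored knowledge instead of computing features mathematically:

```python
# Sketch of a hand-built knowledge base for one area; all entries are invented examples.
# Recognition would look things up here instead of deriving features mathematically.

area_knowledge = {
    "vegetation": ["pine tree", "shrub", "tall grass"],
    "topography": ["mountain", "reef", "lake", "road"],
    "mobs": ["wolf", "boar", "bandit"],
}

def classify(candidate: str) -> str | None:
    """Return the family/kind a named element belongs to, if we know it."""
    for family, members in area_knowledge.items():
        if candidate in members:
            return family
    return None

print(classify("wolf"))   # "mobs"
print(classify("lake"))   # "topography"
```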
Remember the first Gulf War, and Schwarzkopf boasting about the cruise missile that maneuvered between panicking civilians to get to its target? Maybe you still think that that was a show of American expertise in AI. I bet the missile was man-controlled, at least for that occasion. Almost 15 years later, and there is no known algorithm the military could blindly trust a 100 million dollar weapon to. Certainly not with the press watching.
Our case is less dramatic, nobody dies but our toon or our botting account.
But the lesson of all this is: there is no abstract algorithm for object and scene recognition.
To be more honest, I believe that the mathematical/statistical approach could eventually lead to a workable algorithm in the long run. But remember Borges' library? Such an algorithm would have to contend with an astronomical number of possibilities. And the only way such an algorithm could ever be useful would be to give it a huge memory of all its computations. And a way to relate those findings to the real world. Such an algorithm would in fact encompass almost the whole field of AI. We would then have the computer equivalent of a living super-brain, even though they would be completely different in their operations. Would it be intelligent? Who cares? Such an algorithm would be so powerful the question would be meaningless.
Of course, most approaches rely on the preprocessing of images by other algorithms or by humans, making our Super Algorithm a mere theoretical possibility.
Back to my analysis.
Since only holistic and top-down processes are allowed, features can only be identified as features of a specific object. In this regard, I would like to quote the father of the phenomenological movement, Edmund Husserl (Edmund Husserl - Wikipedia, the free encyclopedia): "Das Bewußtsein ist Bewußtsein von etwas." Consciousness is consciousness of something. (Cartesian Meditations).
That does not mean that we cannot recognize features that are common to many objects, only that they are first recognized as objects themselves, and only later, through association or other mental operations, as features belonging to one or more objects.
We have no trouble perceiving an angle, not as a feature of a triangle or any other geometrical shape, but as an angle in its own right. Even though we might recognize it as being part of a more encompassing shape.
It is obvious that such an approach must rely heavily on databases.
We also must not forget the inherent imprecision that accompanies any perception. Without it, we would be unable to recognize an object as a member of a family. OCR-like algorithms would probably be very helpful.
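To give a feel for what I mean by useful imprecision, here is a tiny sketch in the OCR spirit: compare a pattern against stored templates and accept the closest one as long as the mismatch stays within a tolerance. The grids and the threshold are invented for illustration only.

```python
# Sketch: tolerant template matching, OCR-style; templates and threshold are invented.

import numpy as np

templates = {
    "circle": np.array([[0, 1, 0],
                        [1, 0, 1],
                        [0, 1, 0]]),
    "cross":  np.array([[1, 0, 1],
                        [0, 1, 0],
                        [1, 0, 1]]),
}

def recognize(pattern: np.ndarray, tolerance: float = 0.25) -> str | None:
    """Return the closest template, provided the mismatch stays below the tolerance."""
    best_name, best_error = None, 1.0
    for name, template in templates.items():
        error = np.mean(pattern != template)   # fraction of mismatching cells
        if error < best_error:
            best_name, best_error = name, error
    return best_name if best_error <= tolerance else None

noisy_circle = np.array([[0, 1, 0],
                         [1, 1, 1],    # one "wrong" cell
                         [0, 1, 0]])
print(recognize(noisy_circle))   # "circle", despite the imperfection
```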
Another aspect that I have not broached yet, but which is essential to object recognition, is scene recognition. Let me get back to you on that.
 
So what about scene recognition?
Let me first state that I am not a proponent of Gestalt theory (Gestalt psychology - Wikipedia, the free encyclopedia) even though they seem to advocate a top-down approach also. They are too intellectualistic for my taste.
Instead I will use the question posed by Goldstein: "Why is it difficult to design a perceiving machine". The answer may lie in the paragraph that follows, but not the way the author meant it: The Stimulus on the Receptors Is Ambiguous.
He gives the example of different shapes that create the same image, not realizing that he is making a huge, and very dangerous (rich in consequences) assumption: that the visual system works like any physical optical system.
This short wiki gives a very clear definition of one problem that seems mysterious when we equate the visual system to a (mechanical) optical device.
Inverse problem in optics - Wikipedia, the free encyclopedia
I am not sure how to deal with this question. After all, it seems to me that we are able to distinguish between two objects easily, even if they have the same retinal image according to the theory. And if that is the case, assuming that the brain uses the same optical principles, with some unknown algorithms, is unnecessarily complex. I particularly advise reading the second reference as an illustrative example of computer-based models and the homunculus fallacy.
Perception viewed as an inverse problem.
If I am right, then we definitely have a problem as far as object and scene recognition is concerned. We have no idea how the vision system works, except by using it.
This is of course scientifically and philosophically unpalatable. But maybe all we need is a change of model, and instead of trying to solve imaginary problems created by the dominating models, we should go back to the original phenomenological inspiration: perception as a function of living beings, and not of machines.
Which does not mean that computer vision has no right to exist. On the contrary, it deserves its own domain, but we should not hastily draw conclusions from the workings of one and blindly apply them to the other. I think that the differences are more numerous than the similarities.

edit: Gestalt theory started with the assumption that optical illusions made any theory of perception based on sensations false. After all, if we are seeing things that are not there, there is no way to link the imaginary with concrete neurons. This is, I think, a very simplistic vision of sensations. Why could the illusory objects not be the combined effect of other sensations? The fact that we have no way to link those "unreal" objects to a retinal image is, as far as I am concerned, an additional indication that the very idea of a retinal image must be wrong.
 
The issue of scene recognition brings immediately to (my) mind the distinction between visual space, and the external, object space. Illusions show us that we cannot identify one with the other, even if that would work most of the time. A really interesting wiki is Visual space - Wikipedia, the free encyclopedia.
Once again, I will use Goldstein's textbook as a starting point. Let me first remark that the author has a very pragmatic approach to the problem. He uses different theories to explain different aspects of perception, without ever trying to show how they can work together to explain all aspects of vision. Vision is of course very complex, and is probably hardly explainable with a single encompassing theory, but still, it would have been nice to show how they articulate with each other.
He gives the example of an experiment in which a group of people are presented with a kitchen scene, then with drawings which are and are not related to a kitchen. Especially the drawing of a loaf of bread, which looks very much like that of a mailbox, seems to warrant the conclusion that our knowledge determines our perception, since the drawing of the bread is more easily recognized than the drawing of the mailbox.
Another, very interesting example, is that of "the multiple personalities of a blob", in which hazy, kind of out of focus pictures, all show the same "blob" as a part of different scenes. We automatically interpret the blob as being a "natural" part of each different scene. Those two examples are supposed to illustrate the idea that some kind of reasoning is involved in perception.
Like I said before, I really do not like this blurring of borders between functions, as they add more obscurity than they help clarify concepts.
The blob obviously changes its nature with each scene, but I would like to make two remarks.
First, unless we are focusing on details, we do not notice them. In fact, experiments have shown that, for instance, the text that somebody is writing can be changed by a program without the writer noticing it, even if he kept looking up at the screen while writing. (If I remember correctly, I read that in "Consciousness Explained" by Dennett).
There was even this experiment where somebody was building some Lego-like object, and another person kept changing parts when he was not looking. The builder just kept on building his "project" without realizing what was happening.
So, instead of attributing a change in function to the blob, we might as well assume that it was not fulfilling any function in the recognition of the scene.
Second, the first example shows that what we are experiencing at the moment can influence the speed with which we can switch tasks. If we are shown kitchen scenes, this will activate, more or less intensely, many neurons that are all related to the kitchen. That is known as "priming", even though the term is usually only used in experiments involving single neurons. I have also read somewhere of an experiment involving people going from one room to another. The findings suggested that tasks related to one room could more easily be forgotten when changing rooms, or even when standing on the threshold between the two! Something we have all experienced when going from the living room to the kitchen (or vice versa), and once there, wondering why we were there.
My final remark will be more general.
We can obviously reason, think and deliberate with ourselves about pictures. We are capable of drawing conclusions based on clues and understand the gist of a scene that had eluded us at first. We can even discover new visual aspects that we had missed before, because our logic told us that they should be present in the scene somewhere. In fact, the picture the author showed of the clutter on his desk, even if it were presented in another context, is a perfect illustration of the influence of knowledge on perception. He indicated that there was a pencil, easy to find, and glasses, which were hard to find. I had to magnify the text (pdf file) to 400% before I could recognize a part of a leg of his glasses. That shows that the reasoning does not have to be our own. What other people tell us influences what we can see.

But, first, this is not the normal way that perception takes place. We do not need to think about something before we see it. It would be really strange if that were the case.
Second, just like in the case of the ubiquitous blob, not all details are immediately relevant to the observer. The idea that the information in the real space should be duplicated in the visual space is, I think, a strong fallacy. It assumes, once again, that the eye functions like an optical device. In this case, a camera, with its objective registering capabilities.
So, yes, hints, thoughts and reasoning will show us what we could not see before. But that does not mean that my partial vision of the glasses was determined by the thought that I had. In fact, I was thinking of "glasses", literally, and not of the frame itself. I even thought that I saw them on the keyboard! What I finally discovered was completely different from what I had in mind. But once I saw it, I knew that those were the partially hidden glasses.
Here also, the magic word would be focus. Instead of relying on spontaneous perception, we appeal to our experiences to analyze a situation. But there is a difference between a visual scene as a percept, and the same scene as an object of inquiry. We can alternate between both functions, see and think about what we have seen. But thinking does not make us see, even if it helps us focus on a specific area.
 
A question, one of many, that I have no answer for yet, is the difference in intelligence between Man and Animal. All claims to the contrary have turned out to be hoaxes (Clever Hans - Wikipedia, the free encyclopedia) or have been refuted by experiments. Apes, who look most like us, learn no more than about a hundred words, and seem to consider them as special tools only to be used in experimental situations with humans in order to get treats.
That the shape of the body plays no exclusive role in the evolution of intelligence is shown by the recurrence of prehensile thumbs (often considered a milestone and a significant factor in evolution) or limbs in other species.
Thumb - Wikipedia, the free encyclopedia
Prehensility - Wikipedia, the free encyclopedia
Evolution of human intelligence - Wikipedia, the free encyclopedia.
One intellectual endeavor that claims to explain the difference in intelligence is evolution theory. I must say that I find it much more plausible than the idea of Creation in 7 days. But to say that evolution theory answers all questions is certainly a bridge too far.
I find it, to be honest, very difficult to take researchers seriously who rely on vague evolutionary principles to solve their problems. It seems to me to be the tool of lazy intellectuals. I think it was Jerry Fodor who said that evolution theory is always running behind the facts. I find this particularly evident in this article, written by the famous duo and spokesmen Cosmides and Tooby.
Evolutionary Psychology Primer by Leda Cosmides and John Tooby
Their second principle reads:
"Principle 2. Our neural circuits were designed by natural selection to solve problems that our ancestors faced during our species' evolutionary history."
In other words, we have the brains we have, because if we did not, we would not be what we are.
I do not intend to present here an exhaustive critique of evolutionary theory. For people interested, the debate between Steven Pinker (How the Mind Works - Wikipedia, the free encyclopedia) and Fodor (The Mind Doesn't Work That Way) is very interesting.
Suffice it to say that, for me, evolution theory is not the way to answer my question: how come we are much more intelligent than animals? Or, as a variation, what do animals lack that humans have? Maybe it is a soul, as religious people think, but since I have no expertise whatsoever in that area, you will certainly not mind if I look elsewhere for an answer.
This is, as far as I am concerned, an essential issue, because it would help us understand the brain more thoroughly.
 
Here is a possible answer:
The Human Brain in Numbers: A Linearly Scaled-up Primate Brain
In short, humans have, in absolute terms, more neurons than other species. What makes this article interesting is the fact that it shows that some criteria, like relative size, or ratio body mass/brain mass, which have always been used, are not very accurate.
I say "possible answer", because we still need a model that could equate more neurons with more intelligence. For a processor, the more transistors, the faster it can execute some operations, but it does not make one computer smarter than the other. Speed is of course not irrelevant, problems which cannot be solved in time are generally lethal in nature. But it does not explain the difference in cognitive abilities when the time factor is less critical.
 
I have learned something new today, and that is always nice. Until now I had always thought that neurons (with their axons) were all that mattered in the brain. That has also been the dominant view until now across all schools of thought. In White matter - Wikipedia, the free encyclopedia, the writer puts it as follows:
Using a computer network as an analogy, the gray matter can be thought of as the actual computers themselves, whereas the white matter represents the network cables connecting the computers together.
But apparently the sentiments are changing concerning an essential part of the white matter: the glial cells.
#62: Glia
Maybe my question is premature; maybe we still do not know enough about the brain to answer such a simple question: why are humans smarter than apes and mice?
But then, speculating is so much fun...
 
Memory is essential to almost all neural functions, and whether the brain is a computer or not, without some kind of memory you have no functioning brain.
According to the literature, memory comes in different forms (Memory - Wikipedia, the free encyclopedia). I will not discuss those different forms, and will speak of memory as if there were only one kind. Not because it is necessarily so, but purely out of convenience. But the fact that the computer model very often creates its own problems is also at work here. In Encoding (memory) - Wikipedia, the free encyclopedia the theoretical biases are particularly evident.
It seems so obvious to speak of encoding, but when we think about what we are dealing with, it becomes less and less obvious.
Let us take the example of a visual scene: how do you think the brain encodes visual elements? Or auditory? Or haptic? We all know that it is possible to code for these in a computer. But those codes only get their meaning when they are experienced by a living, perceiving subject, who then translates those electrical impulses into sensations. Is there a way to code sensations? If there is, then there would be a one-to-one correspondence between mental elements and neural elements. And if we find those codes, we will have solved the mind-body problem once and for all!
But even if we find this correspondence, there will still be the possibility that those neurons are only the triggers to those sensations. The same way a switch can turn the light on and off, without coding for the light in any way. If that is the case, then maybe the talk of only one kind of memory is not that far-fetched after all.
 
I have systematically spoken of "sensations", and not of the philosophically laden term "qualia" (quale is, btw, the singular, and not the plural form), for a simple reason. I believe, with no proof whatsoever, that the existence of sensations is undeniable. I also refuse to believe, again without any proof, that such an essential element of life could be of no consequence. If you read the immense literature on qualia (Qualia - Wikipedia, the free encyclopedia) you will realize that it is one of those metaphysical issues that can never be resolved one way or another. And I refuse to be drawn into that sterile discussion. I find it more constructive to show how theories that do not take sensations into account create their own problems, which are easily avoided when sensations are taken into consideration. The reverse is of course also true, and opponents of the use of sensations, in scientific or philosophical arguments, will also be able to point to problems created by this approach. That is the way of things, and no metaphysical debate will ever put an end to this state of affairs.
For those of you new to philosophy, I would recommend the discussions about Mary (Knowledge argument - Wikipedia, the free encyclopedia), and if you are still in possession of all your mental faculties, Searle's Chinese room (Chinese room - Wikipedia, the free encyclopedia) is really worth the time.
 
I think I have at least 5 different 3D sensations. (If you have read the previous post, then you know that this would be a good time to open a can of qualia).
When I look at a 3d scene
1- with both eyes,
2- with one eye.
When I look at a 2D picture of a 3D scene
3- with both eyes,
4- with one eye,
5- When I look at a 3D movie, or two 2D pictures, of a 3D scene, at the same time, with 3D glasses, or some other device.
The last example is supposed to cover all 5 situations. At least, that is what the textbooks would like us to believe.
 
There was a time when philosophy was easy to survey. Either you believed that Mind and Matter were one, whereby you could be a spiritualist/idealist (everything is Mind), or a materialist (everything is Matter). Or you believed that they were two distinct things, and you still had enough room to play and find out who was the strongest in school. In psychology, you could find the same distinction between those who considered mental phenomena obsolete and an obstacle to scientific progress (Behaviorism), and those who believed them an essential part of the psychological endeavor. Behaviorism, along with McCarthyism, was really almost a purely American phenomenon. It hardly had any influence on researchers in Europe and other countries outside the US. But in the USA, until the publication in 1957 of Syntactic Structures - Wikipedia, the free encyclopedia by Chomsky, being branded a mentalist was almost as bad as being accused of being a communist. That might explain the ideological virulence of today's American intellectuals. And the scholastic, never-ending debates, which are certainly fueled by the "Publish or Perish" mentality.
Intellectuals, usually university professors, in the old world, enjoy a job security that their American colleagues can only dream of. A European professor hardly needs to publish any research or show any results to keep his job, whereas his American colleague would probably be relegated to a back room, if his contract is renewed at all.
Since there are no evident content criteria to judge the quality of a publication, mostly quantitative criteria are used to determine the "quality" and standing of a faculty member. One of them is the number of times the work is cited in other publications. It is therefore essential for any academic to be on the reading list of his eminent colleagues. People who disagree with each other are, of course, more liable to quote the work of their opponents to pull it down. And writing a book just to say that you agree with everything you have quoted is not really a way to build a reputation. That is why we can observe an almost exclusively American intellectual phenomenon: the inflation of theories of the Mind, among others. It seems that every small detail is a juicy piece of meat worth fighting for by a group of ferocious predators.
One of these phenomena is the Representational theory of mind. The wiki writer (Mental representation - Wikipedia, the free encyclopedia) made a feeble attempt to show the way in this dense and impenetrable jungle, and very quickly referred to Mental Representation (Stanford Encyclopedia of Philosophy).
I so understand him! I have read many of the books referred to in both articles, and I am not (too) ashamed to say that I would probably fail any test or exam on the question. Especially on who said what, and why. So, unless you intend to write a thesis on the subject, do not even try. It may be hazardous to your health!
It was important for me to say this because it relates very closely to the nature of thoughts, and I have already, briefly, broached the subject. On which I will have more to say sooner or later, I am sure.
 
What is the relationship between the choreographic representation of a dance, and the dance itself? The answer to this question might show the existence of another false problem created by the computational theory of mind: the grounding problem, named after a 1990 article by Stevan Harnad.
Embodied cognition - Wikipedia, the free encyclopedia
Symbol grounding - Wikipedia, the free encyclopedia
The Symbol Grounding Problem - Cogprints.
If you have read the Chinese room, you will know that it concerns the fact that computer symbols, according to Searle, have no intrinsic meaning: they need to be interpreted by a mind to get their meaning. They are not grounded in the real world, like our symbols and words are.
Once again I have no intention of presenting an exhaustive view of the issue, but just to point to what I see as a fundamental flaw.
The fallacy of the grounding problem resides in the fact that the author, and all the thinkers I have read (about) who have approached the subject, base their arguments not on a real, existing (even if generic) device, but on the idea of a computer.
What is this symbol system they are talking about?
If you take a computer program, it will probably be written in a high-level language, which ultimately will be translated into a series of 1s and 0s. That is where the imagination, if certainly not the knowledge, of the researchers stops.
This series of 1s and 0s fulfills all the prerequisites of being an abstract symbol system, and as such, the grounding problem is very real.
But that is not the end of the line, the end being a series of actions triggered by electrical impulses. The result is therefore anything but symbolic. It is certainly grounded in the real world of electronic switches.
The individual actions, the turning on or off of an individual switch, are certainly not arbitrary. They are guided by the intentions of the programmer, who has learned to use them from the designer of the machine. Each action has a specific intention and meaning for the programmer, without which it would be no different from gibberish in comparison with words intentionally used. In fact, a computer program is no different from any other action undertaken with or without the use of specific tools. It gets its meaning, as all our actions do, from intentions. From the intentions of the machine designer, the programmer, and the users. It is very much like the choreographic representation of a dance. If you know the words you can do the moves.
So, yes, Searle, at the end of his experiment still does not understand Chinese. But a bilingual user would have no trouble following the whole process. In fact, it would have been the same if Searle had been watching a Chinese movie with English subtitles. He would still have no understanding of Chinese, but a perfect understanding of the movie nonetheless.
 
A very intriguing notion, as far as I am concerned, is that of working memory, especially when applied to vision.
Take our visual field: its contents are ever changing, even though the same receptors are involved over and over again. Where do these contents go, when they are not overwritten immediately (it is hard to suppose that most of them are not overwritten immediately)?
The problem is that, as far as I can see, all receptors are being used all the time. That is, they all receive light more or less intensely.

Okay, imagine a visual field of 250x250
All receptors are used, so just copying the information to another location will certainly free the receptors for the following stimuli/data.
What happens to the copy? And do we need one?
If the array is linked to a much bigger array, let us say all the arrays involved in vision, it will be possible to match the visual scene with different parts of the big array. Since it is a neural network by definition, there is no search algorithm involved; each neuron activates its counterpart, and all together they recreate the sensation (as a memory of that sensation, and not as the same original sensation).
But that is only meaningful if the copy is that of the sensations, and not of the activated neurons. Which appears to be the case anyway, since the number of neurons in the optic nerve (about one million) is just a fraction of the number of receptors.
Any action we undertake, any extra-visual sensation or emotion we undergo, will be linked not to the fleeting visual scene, but to all the locations in the brain where the elements of the scene have been stored. In fact, there is no reason for the visual elements to be stored more than once, since the links to the other dimensions of the visual experience (extra-visual sensations and emotions, actions) will be available immediately.
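Here is a very crude sketch of that idea, with a toy "visual field" and a list standing in for the bigger array; the sizes and the linking rule are my own illustrative choices, certainly not a claim about how the brain does it.

```python
# Crude sketch: a small visual field whose momentary content is linked into a larger
# associative store, so the fleeting scene does not need to be copied over and over.
# Sizes and the linking rule are illustrative choices only.

import numpy as np

FIELD = 250                                          # toy visual field of 250x250 "receptors"
visual_field = np.random.rand(FIELD, FIELD) > 0.5    # current pattern of activation

# The "big array": every stored experience is a set of links back to field positions,
# plus links to non-visual elements (actions, emotions) of the same experience.
stored_experiences = []

def store(scene: np.ndarray, context: dict) -> None:
    """Link the currently active positions to the rest of the experience."""
    active_positions = np.argwhere(scene)            # which receptors were "on"
    stored_experiences.append({"links": active_positions, "context": context})

def reactivate(scene: np.ndarray) -> dict | None:
    """Find the stored experience whose links best match the current activation."""
    best, best_overlap = None, 0
    for exp in stored_experiences:
        rows, cols = exp["links"][:, 0], exp["links"][:, 1]
        overlap = int(scene[rows, cols].sum())       # how many linked positions are active now
        if overlap > best_overlap:
            best, best_overlap = exp, overlap
    return best["context"] if best else None

store(visual_field, {"action": "grasp", "emotion": "interest"})
print(reactivate(visual_field))   # the linked context comes back, no copy of the scene needed
```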
Let us go back now to the visual scene and how an object in that scene can be extracted from the scene, to be stored and used again and again.

An object in a visual scene can never be more than the whole visual scene (in our case, 250x250). It will usually only be a part of it.
Let us first take the example where one single object fills the entire visual scene (a rough sketch follows this list).
1) receptors are "translated" into other neural codes/sensations
2) the location of each sensation relative to the others has to be part of the code (otherwise multiple objects would have the same configuration),
3) the new codes/sensations are stored in the brain.
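As a sketch of those three steps, with an invented "translation" and an arbitrary reference point standing in for the real thing:

```python
# Illustrative sketch of the three steps: translate receptor values into "sensations",
# keep each sensation's position relative to the others, and store the result.
# The translation table and the notion of "sensation" here are invented.

import numpy as np

def translate(receptors: np.ndarray) -> list[tuple[str, tuple[int, int]]]:
    """Steps 1 and 2: turn receptor values into labelled sensations with relative positions."""
    labels = {0: "dark", 1: "light"}
    origin = np.argwhere(receptors >= 0)[0]            # arbitrary reference point
    coded = []
    for (row, col), value in np.ndenumerate(receptors):
        relative = (row - origin[0], col - origin[1])  # position relative to the reference
        coded.append((labels[int(value)], relative))
    return coded

memory = []

def store(coded_object) -> None:
    """Step 3: keep the coded object for later matching."""
    memory.append(coded_object)

object_filling_field = np.array([[1, 1], [1, 0]])
store(translate(object_filling_field))
print(memory[0])
```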

Now let us imagine a new visual scene where the same object forms only a part of it.
We immediately realize that size will be a problem. In this second scene, the object uses far fewer receptors than in the first one. Will the "translations" then not differ from each other?
Size constancy seems to be the magic formula. But I have expressed my doubts about the one-to-one correspondence between visual space and object space. It would not be very credible on my part to change my mind whenever that is convenient for my analysis.
We are able to recognize an object as being the same from different distances, but also a bigger and smaller object as being similar (circles, squares...).
It is not difficult to explain the case of figures with straight lines. After all, as we have seen with vectors, the very fact that makes bottom-up recognition of objects impossible will make it possible for us to recognize a smaller square as being similar to a bigger square. The vectors and angles of the smaller square are identifiable as being part of, or the same as, those of the bigger square.
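A minimal sketch of that idea for straight-lined figures: describe a polygon by its turning angles and the relative lengths of its sides, which do not change when the figure is scaled. The representation is my own illustrative choice.

```python
# Sketch: a square described by its turning angles and relative side lengths is the
# "same" square at any size. The representation is an illustrative choice, not a model claim.

import math

def describe(vertices):
    """Return (turning angles, relative side lengths) of a closed polygon."""
    n = len(vertices)
    sides, angles = [], []
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        x2, y2 = vertices[(i + 2) % n]
        sides.append(math.hypot(x1 - x0, y1 - y0))
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        angles.append(round((a2 - a1) % (2 * math.pi), 6))
    total = sum(sides)
    relative = [round(s / total, 6) for s in sides]
    return angles, relative

small_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
big_square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(describe(small_square) == describe(big_square))   # True: same shape, different size
```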
It is more difficult for curves, since a smaller and bigger circle can, perceptually, have no points in common. We also cannot rely on geometrical knowledge (center, tangent), because all animals also recognize circles whatever their size.
In my post about squaring the circle, I advocated a "magical" solution: neurons that react to a range of stimuli. It is magical in the sense that it discharges us from the obligation of providing an explanation we do not have. I must admit that I do not have a better one yet.
Also, we are able to recognize objects even in different colors. In fact, we would have no trouble identifying a blue apple or a blue banana! Likewise, we have no trouble with black and white pictures and movies.
This presents fewer theoretical difficulties than curve recognition. After all, many animals, if not most, have monochromatic vision. It would therefore be understandable if the vestiges of this property of vision had been kept and improved upon during evolution. (This is not an explanation, but an invitation to provide one!)
Once we have neutralized color, all we have left are color-neutral neurons in a specific configuration. The only relevant fact would be their relative position to each other. Which can certainly not (always) be unique to a specific object. But that is in fact an advantage, this lesser precision makes categorizing objects much easier.
In the same way, shapes would be much easier to recognize without the color component. And since shape is part of the configuration of an object, and not something that we perceive independently, we are able to recognize sketches and caricatures of objects, persons or animals.

We now have every neuron forming an object accounted for, including their relative positions to each other. All that remains is the location of the object in the visual scene. After all, every teacher will tell you how convenient it is that students always sit at the same desks.
The question is whether retinotopy (neurons near each other on the retina are believed to be near each other in other parts of the brain) is the whole story. After all, the same object can be part of different visual scenes and at different locations. And since visual scenes also change constantly, we cannot assume that the same neurons will always indicate the position of our object. So location is more an aspect of scene recognition than of object recognition as such. Unless we consider the fact that by concentrating our attention on one object, we are making sure that it always falls on the same location on the retina. But that would suppose the same angle, distance, etc. Besides, we recognize objects without having to concentrate on them.
We have all objects inventoried in the scene (I left occlusion and lighting problems out of the equation), the question that remains is the empty space between them.
If you replace air with water, and set aside the distortion of vision under water, you will maybe agree with me that we do see empty space. Which we usually call perspective. A magical name that is supposed to be an explanation, but is not.
So I suppose we will have to make room for empty space as well. How? I have no idea.
 
We are now ready for the neural library of Babel, or neural Babel for short.

Imagine an array with all possible (visual) sensations. If we imagine all possible combinations, we will have all possible perceptual objects in all their variations in color, size, and viewpoint. A neural version of such a huge (but finite) series would be to have each sensation connected to all other sensations, and a way of turning sensations on and off, just like a light show.
Since we cannot rely on individual neurons to switch themselves or others on or off (they would need to know beforehand what the end result should be), we must find an activation and inhibition mechanism that makes sense.
And luckily, we have one! In fact, we have at least two! But both based on the same principle.
1) Specific sensations are linked to specific combinations of optic neurons, and to different parts of the brain. Each sensation is just the configuration of on and off neurons. Brains did not become complex devices until very late in history, and it is conceivable that primitive sensations each had their own neurons, and that when more complex sensations came to be, connections between the different neurons had to be established. Or vice versa. This is definitely a chicken-and-egg story. What that means for us is that there is no mechanism external to the sensation itself. This way we avoid the homunculus fallacy completely.
2) Whereas the first alternative presupposes the existence of a central location for each sensation, we can also imagine that, starting with the optic neurons, the different combinations are the different sensations, which are then linked together with other non-visual sensations, emotions, and actions, in different parts of the brain.
This alternative is less intuitive, but that does not make it any less plausible than the first one.
The analogy with Borges' library is that our brain contains all possible sensations, at least virtually; it only needs experience/actions to link them together.
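Here is a toy sketch of the second alternative: a small, fixed set of "sensation units" in which experience does nothing but strengthen the links between units that happen to be active together. The unit names and the linking rule are invented for illustration.

```python
# Toy sketch of the "neural Babel": a fixed set of sensation units, where experience
# does nothing but strengthen links between units that were active together.
# Unit names and the linking rule are invented for illustration.

from collections import defaultdict
from itertools import combinations

units = ["red", "round", "sweet", "grasp", "pleasure"]   # all "possible" sensations (toy scale)
links = defaultdict(int)                                 # connection strength between unit pairs

def experience(active: set[str]) -> None:
    """Co-activation strengthens the links between every pair of active units."""
    for a, b in combinations(sorted(active), 2):
        links[(a, b)] += 1

def recall(cue: str) -> list[str]:
    """Activating one unit brings up the units most strongly linked to it."""
    related = [(pair, strength) for pair, strength in links.items() if cue in pair]
    related.sort(key=lambda item: item[1], reverse=True)
    return [a if b == cue else b for (a, b), _ in related]

experience({"red", "round", "sweet"})                 # seeing and tasting an apple, say
experience({"red", "round", "grasp", "pleasure"})
print(recall("red"))                                  # "round" first: linked twice, the rest once
```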
 
The idea that rationality and emotionality are certainly not opposites, and even related, is not new. Antonio Damasio has made it one of his central themes.
Descartes' Error - Wikipedia, the free encyclopedia
Where we differ is that Damasio does not deny the existence of a Reason, and of rational processes, as opposed to emotional ones. He emphasizes that the latter are necessary to the functioning of the former. It is a dualist vision that he thinks is supported by evolutionary considerations, rational processes having come later in time.
I do not think that we can divide the mind so conveniently, however blurry the borders may be made, and however liberal the exchange between the different aspects. We can of course distinguish in our thinking between "rational" and "emotional" aspects, but my claim is that this is a figure of speech only. Even "pure" rational thought, like logic and mathematics, not only has its origins in primitive emotions, it is still a form of modern emotionality. We have learned, from nature and society, that some rules are better than others. We can, for some of those rules, clearly distinguish traditional and conventional aspects which can easily be exchanged for others without putting our "rationality" in danger. For logical rules, however, the distinction is much harder.
Take the Identity Rule A=A.
Or IF A=B AND B=C THEN A=C.
How can these rules be considered as emotional?
But what does rational mean in this context? (For a very humorous and enlightening take, see the article by Lewis Carroll, the author of Alice in Wonderland: What the Tortoise Said to Achilles - Wikipedia, the free encyclopedia.)
Unless we believe that we have some kind of transcendental faculty, like Kant believed (Critique of Pure Reason - Wikipedia, the free encyclopedia), how can these rules be innate and not learned?

edit: the whole text is here: What the Tortoise Said to Achilles - Wikisource, the free online library
And btw, Lewis Carroll was a logician before he published the children's stories he used to tell young Alice Liddell.
 