the Turing test

NPCs usually show a very interesting and helpful behavior: they turn to face you when you click on them. That means you can get multiple shots of an NPC where only the background differs. A perfect way to neutralize the background in the analysis of the scene.
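A minimal sketch of how that could be exploited, assuming two same-size screenshots already captured as nested lists of (r, g, b) tuples; the names and the tolerance value are mine, purely for illustration, not part of any bot API:

```python
# Minimal sketch: keep only the pixels that are (nearly) identical in two
# screenshots of the same NPC taken against different backgrounds.
# The stable pixels should mostly belong to the NPC, not the background.
# 'shot_a' / 'shot_b' are hypothetical nested lists of (r, g, b) tuples.

def stable_mask(shot_a, shot_b, tolerance=10):
    mask = []
    for row_a, row_b in zip(shot_a, shot_b):
        mask_row = []
        for (ra, ga, ba), (rb, gb, bb) in zip(row_a, row_b):
            same = (abs(ra - rb) <= tolerance and
                    abs(ga - gb) <= tolerance and
                    abs(ba - bb) <= tolerance)
            mask_row.append(same)
        mask.append(mask_row)
    return mask  # True where the two shots agree -> likely the NPC itself
```

Pixels that stay (almost) identical while the background changes are the ones worth feeding into the actual recognition step.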
 
Everyone agrees that mental functions are distributed all over the brain, and so it is with vision. Apparently that gives rise to many problems, one of which has gotten its own name: the binding problem. This is how Bruce Goldstein puts it:
"To solve the binding problem, the brain must combine information contained in neurons that are located in different places."
You really wonder how somebody so intelligent could say something so stupid! How could a part of the brain "combine information" from different parts of itself? I would have no problem with such a formulation if it were metaphorical, just a figure of speech. But what comes after shows that the author really means his words to be taken literally. He explains the "synchrony hypothesis", which states that the brain recognizes that different parts belong with each other when the neurons in those parts are firing synchronously. Again, a very strange claim that only makes sense with the model of the brain as a computer, with a processing unit on one side and memory on the other. The fact that the processing unit is not believed to be a central one, but a distributed one, is just a detail that does not change the equation. Let us think for a moment about how this could work.
When we look at, for instance, a picture of a woman with a dog in her lap, some (groups of) neurons concern the woman, others the dog. Those different neurons are distributed among a number of locations in the brain, which means, according to this model, that all those pieces of information somehow have to be brought together and processed. But how does the brain know which parts belong to the woman, and which to the dog? Well, the brain knows that because the neurons which concern the woman in the picture all fire in synchrony, as do the neurons concerning the dog. Because the two groups of neurons have different firing patterns, the brain has no problem distinguishing between them.
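As a toy illustration of the hypothesis being described here (not an endorsement of it), a sketch with made-up 0/1 spike trains over six time bins: neurons with identical firing patterns end up grouped together.

```python
# Toy illustration of the synchrony hypothesis as described above:
# neurons whose spike trains coincide are grouped together,
# e.g. "woman" neurons vs "dog" neurons. Spike trains are made up.

from collections import defaultdict

spike_trains = {
    "n1": (1, 0, 1, 0, 1, 0),   # fires in sync with n2 -> object A
    "n2": (1, 0, 1, 0, 1, 0),
    "n3": (0, 1, 0, 1, 0, 1),   # fires in sync with n4 -> object B
    "n4": (0, 1, 0, 1, 0, 1),
}

groups = defaultdict(list)
for name, train in spike_trains.items():
    groups[train].append(name)   # identical firing pattern = same group

for pattern, members in groups.items():
    print(pattern, "->", members)
```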
As far as I am concerned, this is in itself proof of the inanity of such an argument. But I will try to make it more explicit.
Either the different groups of neurons are linked together from the start, in which case their firing pattern is really irrelevant to their linking. Or they are not linked from the start, and then we have to ask ourselves how the brain could know which neurons are synchronized with each other, and which are not.
Again, either the brain knows that because the synchrony creates, somehow, by itself, a link to the supervising parts (how?!). Or the supervising parts, by themselves, somehow, detect the synchrony in different locations, which makes it possible to link those locations together. In both cases we have replaced the mystery of synchrony with the mystery of the creation of a link where there was none. Ockham's razor tells us that if we have to have a link between the different parts of the brain involved in the perception of an object, it would be much simpler to assume that those parts are linked from the start, rather than to create a mysterious entity that is then supposed to account for this link.
That brings us back to the first case: all the neurons concerning each individual object are linked from the start, and we do not need any supervising parts to explain the fact that the perception of an object is the effect of the firing of certain neurons and not others. What the author is trying to do is the reverse of the natural order: he starts from the perception of an object, sees that different neurons in different locations are activated, and asks himself how that is possible. It would be like a chemist who starts with water, notices that hydrogen and oxygen in a 2:1 ratio are involved, and then asks himself what, in those elements, could explain the existence of water. But if you take it the other way around, the way physics and chemistry do, and say that putting those elements together is the creation of water, then it becomes plausible to assert that the activation of those different groups of neurons is what explains the perception of this specific object.
edit: Goldstein is explaining the synchrony hypothesis; he himself is not entirely convinced that this theory is the only explanation of the binding problem. But he does not deny the existence of the binding problem.
 
Oooops, wrong thread, how the hell did I get into psychology 102? and based on context, looks like a copy paste course.


Long story short, a computer will never match the thinking capabilities of the human mind. Computers cannot assemble and interpret information as fast as the human mind. A computer can perform calculations faster, but calculations alone aren't enough. The human mind can fill in blanks, accurately make predictions, skip calculations entirely, guess accurately, and is basically designed to handle RNG situations. Mankind will never outperform millions of years of natural selection.
 
Oooops, wrong thread, how the hell did I get into psychology 102? and based on context, looks like a copy paste course.


Long story short, a computer will never match the thinking capabilities of the human mind. Computers cannot assemble and interpret information as fast as the human mind. A computer can perform calculations faster, but calculations alone aren't enough. The human mind can fill in blanks, accurately make predictions, skip calculation, and is basically designed to handle RNG situations. Mankind will never outperform millions of years of natural selection.
I was not going to answer your post, for the simple reason that I did not know what to say. After all, except for the insult (psychology 102, copy-and-paste course), all you had to offer were assertions based on your beliefs, with no arguments whatsoever. If you had followed this thread, you would know that it is really not that simple.
And as far as the copy-and-paste is concerned, I would really be interested in your sources. If there are articles or books out there that I have missed, and that say the same thing I am saying, I would really love to read them. Thank you.
 
Oooops, wrong thread, how the hell did I get into psychology 102? and based on context, looks like a copy paste course.
LOL seems we are scaring customers :)

Long story short, a computer will never match the thinking capabilities of the human mind. Computers cannot assemble and interpret information as fast as the human mind.
tell that to IBM and their project Watson :)
Watson (computer) - Wikipedia, the free encyclopedia

Anyway, we don't need to do nearly as much as IBM. All we need is: visual object recognition (not currently for HB, but in general, if we want to make/change the bot so it does not have to inject itself into the game process); pathfinding (the current pathfinding is decent, but it can be much faster and generate more detailed paths if we use GPU acceleration); and decision making and prediction (neural networks).
Since mobs and bosses (and dungeons/raids) are scripted, a lot of it can be predicted with a simple neural network (and a small number of "runs", since the script does not change).

For example, if we enter the goal "don't let any party member's health drop below 25%, and use as little mana as possible" and give the neural network access to the tank and the healer, it will learn in just a few tries at which points the boss in a dungeon (5-man) does big damage to the tank/group, and it will optimize which cooldowns (tank and healer) to use each time and at which millisecond during the encounter, and which heals to use each second to minimize mana expenditure while keeping all group members at 25% or more health.
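The post proposes a neural network; as a far simpler sketch of the same underlying point (a scripted fight repeats, so a few recorded runs already tell you when the big hits land), here is plain averaging over made-up run data, not a neural network:

```python
# Simplified sketch: a scripted fight repeats, so a handful of recorded
# runs is enough to predict when the big hits land and pre-plan cooldowns.
# Times are hypothetical seconds-into-the-encounter, not real game data.

observed_spikes = [          # big tank-damage moments seen in three runs
    [12.1, 45.3, 78.0],
    [11.8, 45.9, 77.6],
    [12.4, 44.8, 78.3],
]

# Average the spike times over the runs (the script barely varies).
predicted = [sum(times) / len(times) for times in zip(*observed_spikes)]

# Plan a defensive cooldown slightly before each predicted spike.
LEAD_TIME = 1.5  # seconds of reaction margin (assumed)
cooldown_plan = [round(t - LEAD_TIME, 1) for t in predicted]
print("predicted spikes:", [round(t, 1) for t in predicted])
print("use cooldowns at:", cooldown_plan)
```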

It can perform calculations faster, but calculations alone aren't enough. The human mind can fill in blanks, accurately make predictions, skip calculation, and is basically designed to handle RNG situations.
Neural networks with enough domain-specific training can also fill in the blanks, and can make predictions too, much more accurately, down to the level of 1 millisecond / 1 health point / 1 mana point. Handling RNG is also not a problem: a basic probability calculation with a desired probability factor can handle this pretty easily, again much more precisely than a human could. And the reaction time of a computer is one frame or less (less if we need/get the ability to poll the WoW state more than once per frame).
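A minimal sketch of that "desired probability factor" idea, with made-up numbers (the crit chance, swing count and threshold are assumptions for illustration, not game data):

```python
# Sketch: estimate how likely an unlucky streak is, and act only when
# that chance crosses a chosen threshold.

def chance_of_at_least_one(p_event, attempts):
    """Probability that an event with per-try probability p_event
    happens at least once in 'attempts' independent tries."""
    return 1.0 - (1.0 - p_event) ** attempts

P_CRIT = 0.15            # assumed chance that a single boss swing crits
SWINGS_BEFORE_NEXT_HEAL = 3
DESIRED_SAFETY = 0.30    # act if the risk exceeds 30%

risk = chance_of_at_least_one(P_CRIT, SWINGS_BEFORE_NEXT_HEAL)
if risk > DESIRED_SAFETY:
    print(f"risk {risk:.0%} > {DESIRED_SAFETY:.0%}: pre-cast a shield/heal")
else:
    print(f"risk {risk:.0%} is acceptable, save the mana")
```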

All this uses a lot of CPU power, BUT modern GPUs are perfectly suited for these easily parallelized tasks, and the newest generation has up to 5.6 TFLOPS of processing power (possibly up to 7.3 TFLOPS if you are into overclocking, or up to 10 TFLOPS if you find one of those very rare Radeon 7990 cards and overclock it to 1.2 GHz, which is easy to do with good water cooling since the chip does up to 1.3 GHz).
And all that is only one card: if you have 4 in your PC, that's 40 TFLOPS; if you get 8, that's 80 TFLOPS...

Of course, even one such GPU is overkill for what we need. I am just saying that if processing speed is the problem, we have a solution, though a midrange GPU probably has more power than we will ever need.
 
@Bloodmarks
I really like your optimism. I find the field of object recognition in a botting program very exciting. I am not sure, no, in fact, I am sure I have not gotten it right yet, but I will keep trying.
 
@Florida. Loved your clip, but its relevance eludes me.

@Bloodmarks. Thanks for the link. It is difficult to judge based on blog articles, I hope better documentation of this algorithm will show up soon.

I remember philosophizing one day with a cousin over life, the universe and everything. It was the time when 800x600 was considered a high-quality resolution for a monitor. I then had what I thought was a brilliant epiphany. I told him: imagine a screen of 800x600 pixels, and all the images that could be created with every possible combination of colors at every pixel. You would in fact have, in those images, everything that has ever been, or will ever be.
I found out a few days later that somebody else, the Argentinian Borges, had beaten me to the idea some 50 years before. But since they did not have computers at that time, he used the analogy of a library (he was in fact a librarian; see The Library of Babel - Wikipedia, the free encyclopedia), in which every possible content of a 400-page volume was to be found.
Object recognition is something like that: any image is just one of this astronomically large set of possible images. So, to remain tractable, algorithms have, one way or another, to limit the number of combinations before searching through the remaining possibilities.
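Just to give a sense of the scale, a quick back-of-the-envelope count, assuming 24-bit colour (that assumption is mine; monitors of that era may well have used fewer colours):

```python
# Rough count of "every possible 800x600 image", assuming 24-bit colour
# (16,777,216 possible values per pixel).
import math

pixels = 800 * 600
colours_per_pixel = 2 ** 24
digits = pixels * math.log10(colours_per_pixel)   # log10 of the total count
print(f"about 10^{digits:,.0f} distinct images")  # roughly 10^3,467,865
```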
Researchers know that as long as the processing capabilities are less than those of a (human) brain, any algorithm will be a palliative for this lack of power.
I think they are right and wrong at the same time. As long as they consider the brain as a computer, they will keep looking in the same direction, and confusing reaction speed with processing power.
Let me try to put it in practical terms.
The brain has billions of neurons (How Many Neurons Are in the Brain?), and every neuron has hundreds of connections with other neurons, so the number of possible connections is truly astronomical. If we could build a similar computing device, the quality of search algorithms would stop being critical. Almost any algorithm would do.
But what do I mean by reaction speed? Like I said a short while back, many groups of neurons are involved in the perception of a single object. And even though I do not believe the binding problem is a real one, there remains the mystery of how the brain assembles different sensations into unified objects.
In physics/chemistry, combining hydrogen and oxygen gives water; that is an empirical fact. In vision, red + green = yellow. Yellow is easily distinguishable from red and green, and we never think of its composition when we are looking at a yellow object. Why would the brain need to do that? If a green neuron and a red neuron are activated at the same time, we will have a yellow sensation. Let us take this argumentation and extrapolate a little. When looking at a multi-colored patch, many neurons are activated at the same time, giving us many color sensations. What almost all researchers presuppose is that those different sensations have to be processed individually, each neuron computing its own reaction based on the feedback it receives from other neurons.
What if there were no computing involved at all, and the fact that a neuron is activated were in itself meaningful?
That would mean that every configuration is unique as such. The melange of sensations when we look at one rendering of the letter B is similar to, but also different from, the melange we get from another rendering of B. We do not need to analyze the image into its components. The fact that the vertical side of the character is more pronounced in one case than in the other is certainly a distinctive, identifying trait. But talking about it and taking it into consideration means that we have already seen it. We could probably train animals to distinguish between these two images.
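A minimal sketch of the idea that the configuration itself is the meaning: no per-neuron computation, just a lookup from which units happen to be active to a sensation. The unit names and the table are mine, purely for illustration.

```python
# Sketch: which units are co-active IS the sensation; nothing is "computed".

sensation_of = {
    frozenset({"red_unit"}):               "red",
    frozenset({"green_unit"}):             "green",
    frozenset({"red_unit", "green_unit"}): "yellow",   # co-activation
}

def perceive(active_units):
    return sensation_of.get(frozenset(active_units), "unfamiliar pattern")

print(perceive({"red_unit", "green_unit"}))   # -> yellow
print(perceive({"green_unit"}))               # -> green
```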
This approach has often been called holistic. Gestalt theory is such a holistic approach. It would certainly advocate the idea that we "immediately" see both characters as distinct (which does not mean we have to be able to pinpoint the difference).
So far, nothing really original. But the question is: why is the brain able to look at images, or even at movies (25 images a second), and immediately grasp the gist of each scene?
Remember the idea of black boxes? What if, every time we look at an image or a scene, certain black boxes light up, while others are turned off, or remain off? And what if any such configuration is what we call the gist? That is not such a far-fetched idea if you consider that we think of an object as the sum of different features. It might also explain how difficult it can be to express some of our ideas: we get them holistically, but expressing them means that we have to translate those holistic configurations into a sequential series of sounds or letters. Artists know that, and they prefer to paint or draw rather than to speak about their inspirations. Brush strokes may also be sequential, but they are relatively coarser than sounds or characters.
But I wanted to be practical. I will try again.
Since we cannot have neural networks with the precision of a human brain, we will have to settle for the next best thing. Take an image of, let us say, 256x256 pixels or less. Let each pixel be connected to at least two values: one precise, and one a range of values.
We now have a precise translation of the image, which can be compared with other translations of other images, and we have a 256x256 array of value ranges. I realize that the number of elements to be compared is staggering. But what I am interested in is testing the theory first; its practicability comes later.
I think that is how the brain does it, except that it does not take into consideration all possible combinations, but only those that it already knows, more or less partially. Which would make it perfectly legitimate to reduce the search domain with the use of WoW databases.
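A minimal sketch of the "one precise value plus one range per pixel" representation, using greyscale values as stand-ins for real pixels; the range width is my assumption, not something specified above:

```python
RANGE = 16  # how tolerant the "coarse" value is (assumed)

def encode(image):
    """image: 2D list of 0-255 values -> list of (exact, (low, high))."""
    return [[(v, (max(0, v - RANGE), min(255, v + RANGE))) for v in row]
            for row in image]

def similarity(encoded_a, image_b):
    """Fraction of image_b's pixels that fall inside image_a's ranges."""
    hits = total = 0
    for row_a, row_b in zip(encoded_a, image_b):
        for (_, (low, high)), v in zip(row_a, row_b):
            hits += low <= v <= high
            total += 1
    return hits / total

a = [[10, 200], [30, 40]]
b = [[12, 190], [90, 41]]          # close on 3 of the 4 pixels
print(similarity(encode(a), b))    # -> 0.75
```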
 
Squaring the circle (Squaring the circle - Wikipedia, the free encyclopedia) is mathematically impossible. But it is very easy to do with XAML or any software program. All you need to do is adjust the angles and voilà, the circle/ellipse becomes a square, and vice versa. It is almost magic. The same magic the brain uses when recognizing a caricature where all the face components are drawn with squares or sharp angles instead of smooth curves, or when we recognize a sketchy drawing with simple lines as representing a human or an animal, as we do in cartoons and comic strips. Somehow, a right angle can represent a smooth curve, and how I think it is done should not surprise you anymore.
Imagine neurons that react to a vertical line with an inclination between 90 and 70 degrees.
Other neurons react to 80-60 degrees, and so forth.
This way, part of a circle will activate those neurons too, and voilà, you have squared the circle.
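A minimal sketch of those overlapping orientation bands; the band width and step are assumptions for illustration:

```python
# Overlapping orientation-tuned units: each unit accepts a band of angles,
# so the almost-vertical part of a circle fires the same units as the
# truly vertical side of a square ("squaring the circle").

def build_units(width=20, step=10):
    """Units covering 0-180 degrees in overlapping bands."""
    return [(low, low + width) for low in range(0, 180 - width + 1, step)]

def active_units(angle, units):
    return [u for u in units if u[0] <= angle <= u[1]]

units = build_units()
square_side = 90          # a perfectly vertical edge
circle_arc = 84           # a nearly vertical piece of a circle's outline

print(active_units(square_side, units))  # [(70, 90), (80, 100), (90, 110)]
print(active_units(circle_arc, units))   # overlaps with the same units
```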

edit: I am not sure a single neuron could react to a whole line, as many researchers seem to think. Maybe it takes more than one to oversee a line, but that is an empirical question that can easily be settled in time.
 
A (hardware) method that could be used for object recognition as I have presented it is anti-aliasing (AA).
Straight lines that are not purely horizontal or vertical have to be "filled up" to avoid jagged edges. Linking different shapes by similarity does look a little bit like that. The transition from a sharp angle to a smooth curve is definitely an AA operation, or at least partially so. I must confess that I do not know enough about the inner workings of a GPU to go beyond generalities.
 
Anti-aliasing is simple averaging of pixel values; different anti-aliasing types just change how many and which pixels are sampled (some types also reduce the effective resolution in the process).

The main effect is reduced sharpness and a reduced level of image detail.
I am not sure how that can help with an image recognition task? I was assuming you need as much detail as possible for that.
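As a tiny illustration of that "averaging" view, the simplest possible case, a 2x2 supersample box filter over a greyscale edge (real AA modes differ mainly in how many samples they take and where):

```python
def downsample_2x2(image):
    """image: 2D list of greyscale values with even width/height."""
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block // 4)   # average of the 2x2 block
        out.append(row)
    return out

# A hard black/white edge becomes softer grey steps:
edge = [[0, 0, 255, 255],
        [0, 0, 255, 255],
        [0, 255, 255, 255],
        [0, 255, 255, 255]]
print(downsample_2x2(edge))   # [[0, 255], [127, 255]]
```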
 
Anti-aliasing is simple averaging of pixel values; different anti-aliasing types just change how many and which pixels are sampled (some types also reduce the effective resolution in the process).

The main effect is reduced sharpness and a reduced level of image detail.
I am not sure how that can help with an image recognition task? I was assuming you need as much detail as possible for that.
Let me focus on the idea of recognizing different kinds of curves, even if some are rather angular. I looked up AA, and I must admit that it is probably quite a different operation. But some principles are common to both.
One is the already mentioned effect of smoothing angles into curves.
The second is the fact that, to be able to smooth the lines, AA has to recognize them first. And since, I suppose, that happens at the hardware level, it has to be very fast. Such an algorithm could be very useful.
But, once again, that is for me uncharted territory.
 
@Bloodmarks. I think I'd better leave the programming side to you and people who know what they are talking about.

I would like to make this general remark:
Vectors are very important in computer vision and, for this reason, in object recognition. That does not mean that they play any role in the brain.
Algorithms are, of necessity, sequential, even in parallel computing. The best analogy I can think of is a multi-pin lock where all the pins have to be pushed at the same time to open it. Imagine the brain as a very large collection of these locks. Sometimes you push the wrong pins, or only a part of what you are supposed to push, and then you find out that you have opened a lock other than the one intended.
You could of course have multiple processing units, each linked to a pin, each pushing its pin at the same time. But that would mean a dedicated line from each processing unit to each pin, and then we would really be speaking of a hardware network.
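A minimal sketch of the multi-pin lock analogy (the locks and pin numbers are made up): each lock opens only if exactly its whole pin pattern is pushed at once, and a partial push may open another lock instead.

```python
locks = {
    "memory_of_key_A": frozenset({1, 3, 5, 8}),
    "memory_of_key_B": frozenset({1, 3, 5}),      # a "smaller" lock
    "memory_of_key_C": frozenset({2, 4, 6, 7}),
}

def opened_by(pushed_pins):
    """Return the locks whose full pin pattern matches what was pushed."""
    pushed = frozenset(pushed_pins)
    return [name for name, pins in locks.items() if pins == pushed]

print(opened_by({1, 3, 5, 8}))   # -> ['memory_of_key_A']
print(opened_by({1, 3, 5}))      # missed pin 8 -> opens B instead
```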
The independence of software from hardware is an essential principle in computer science. It is the reasonable conviction that a problem that can be solved on one piece of hardware can be solved on any hardware, and therefore that an algorithm that can be run by a neural network should also be solvable on a classical (von Neumann) machine.
I personally think that the brain is the exception to this rule.
Let us take the idea of a vector: not only the direction but also the length is significant for distinguishing one vector from another. The length, on a classical computer, can only be computed sequentially. Even on a GPU, the sequential order remains.
We are now confronted with a modern version of Zeno's paradox (http://en.wikipedia.org/wiki/Zeno's_paradoxes): before the CPU/GPU can compute the length of a vector, it must compute half of its length, and then again half of that half...
But for a brain, each such intermediate result may already be an object in memory, and it is, from the start, a different object.
Computing intermediate lengths is therefore not possible, unless you believe that the brain somehow already knows the end result, and knows when to stop its computations and turn them into sensations. A classic homunculus fallacy (Homunculus argument - Wikipedia, the free encyclopedia).
The only way for the brain, it would seem, to perceive the length of a vector is to do so holistically, at once. It is a lock that can only open if all pins are pushed at the same time, which is the principle of any hardware key.
This only means that our program must not pretend to illustrate the way brains recognize objects, and that digital computers, and non-hardwired neural networks, need their own solutions. In their case, the programmer is the homunculus, and we can avoid the fallacy.
 
I forgot, in my enthusiasm, to mention that the idea of precise neurons and neurons that react to a range of stimuli is the ABC of vision: rods for light sensitivity, cones for acuity/precision. I would like to say, in my defense, that what I have in mind is quite different. When I speak of neurons with a wider range, I am not thinking of how sharp the vision of an object can be, since, in fact, we only see either a sharp angle or a smooth curve; we do not see both at the same time. But the image of a sharp angle and that of a smooth curve are connected with each other, which permits us to recognize, for instance, square fonts and smooth ones as representing the same characters.
Still, please bear in mind that my remarks will always be sketchy at best. I have no intention of replacing textbooks on vision, but only to express, whenever I feel the need, my own ideas on some parts of the subject.
 
I have another confession to make, or at least to repeat: even if you have read carefully everything I have written on the subject, you would still have no idea how the brain builds an object out of the signals it receives from the eye. I mean, the idea of neurons that are sensitive to variations of the same pattern, even if it is new (and I am certainly not sure about that), is in itself not enough to construct an object. Also, it is not obvious how this can be reconciled with the claim that objects are recognized or perceived holistically, at once. Textbooks, that of Goldstein included (which, by the way, is in its eighth edition, so it is probably used in many universities and colleges), offer no real help on this subject. A chapter that starts out explaining how features can be used to explain object perception degenerates very quickly into a discussion of neural plasticity and the fact that many parts of the brain are involved in perception.
As far as neuroplasticity is concerned, the whole thing could be interpreted very differently without hurting the facts: big parts of the (visual) cortex could easily be considered as simply the memory of past (visual) experiences. But the model of (groups of) neurons as processing units is too deeply ingrained for this interpretation to seem credible.
We could, for instance, consider the primary visual cortex as the first place where (some) visual experiences are stored, while the other parts of the brain take care of the circumstances surrounding these experiences. Nowhere would we need to refer to any mysterious computation of which nobody has ever given an example.
Just think about this: if we accept that the only fact relevant about a neuron is, in the last instance, whether it is activated or not, then all we need is a link between optic neurons and visual sensations. Once the link between, for instance, a certain group of neurons and a certain sensation of red has been established, we do not need the optical neurons anymore; we can use the neuron(s) coding for that sensation in any other part of the brain.
An extra argument for my model is that, according to the work of the Nobel-prize winners Hubel and Wiesel, the same features are found everywhere in the visual cortex: mostly bars of different orientations. If the brain is a computer, then that is quite an inefficient way of using global features. Whereas if the cortex is seen as mostly memory, then it would make perfect sense that the same global features keep showing up everywhere.
 
Sensations are, so I assume, created by activated neurons. But imagine that you are looking at a smooth white wall with even lighting. Is there a central location for the sensation of WHITE, or is this sensation also distributed in the brain? The first possibility would certainly be preferable, as it would be easier to comprehend: push this button and you see WHITE, push another one and you see BLUE. A direct connection between a sensation and a location in the brain would certainly not solve the mystery of sensations, but it would make it more tractable. The alternative, that the same sensation can be present at different locations in the brain, would make it much more difficult to understand the link between matter (neurons) and the mental (sensations).
In the first case, the mental could easily be compared to empirical events like the creation of water from 2H + O. Not something we can explain either, but certainly something that we can easily live with, even as philosophers. The second case is quite a conundrum, as I hope to show you.
Let us take the example of the white wall again. All the neurons in the retina, cones and rods, in and out of the fovea, are receiving the same stimulus and, supposedly, producing the same sensation all over the field of vision. Eye movements will probably only reinforce this sensation of WHITE. Every textbook mentions the distributed nature of vision, the putative fact that different processing takes place in different locations (a lot of talk about a dorsal, temporal or ventral path, or the What, How and Where of visual processing), but I could not find anything on this particular question. I will surely keep looking, but for now, allow me to speculate.
The lateral geniculate nucleus (LGN) is the next step after the retina as far as vision in the brain is concerned. The LGN is considered to be, without going into the details, in a more or less one-to-one correspondence with the retina (retinotopy). For our example, it means that all the retinal neurons that code for color will be coding for WHITE.
But how are the different parts of the brain where those neurons reside to know how to use the code WHITE?
It could be automatic, and in that case there would be no difference from a central location: the local computations would each have their own copy (very "object oriented", so to speak). And because the same value would be stored in multiple places, the brain would more easily recover from lesions or other damage.
But that would mean that a big part, maybe even the biggest part, of the brain is used for redundant backup. That I find hard to believe, without being able to rule it out.
The only other alternative is a limited number of copies, the minimum being one per sensation.
In the latter case, a link has to be established between the neurons producing the sensation, the (location of the) sensation itself, and the circumstances in which it has been produced.
If we consider each event as a set of bodily sensations, thoughts, emotions and actions, we can use the same line of reasoning and end up with memory locations in which all those elements are present. It would then be reasonable to assume that the corresponding memories are kept close to the organ(s) producing them, and that would account beautifully for the distribution of functions in the brain.
The only things that would seem to need any form of computation are our thought processes themselves. Something I hope to deal with very soon.
 
The idea of holistic perception seems to be in direct opposition to that of perception through the putting together of different features to build an object in the brain/mind. But is that so?
Focus and attention can separate an object from its environment, and even a part of an object from the other parts. Holistic perception does not oppose this idea; in fact, it can be applied to the features themselves.
The question is whether the brain uses a bottom-up or a top-down approach.
I believe the first approach, bottom-up (start with the features and build an object out of them), is something the brain cannot do in normal, everyday perception. The exceptions are when, looking at a scene, you combine features from different objects to create/perceive an object that is not really there. One author gave the example of looking at two men on the street and seeing a bald man with a beard; when she looked more closely, it turned out that one man was bald, and the other one had a beard. I personally had such an experience while reading the web version of a newspaper. I read words that were not there, and when I looked again it turned out that I had combined letters from different words and formed words that were not on the screen. So, how is that possible if perception is a top-down process only?
Let me first state that what we are dealing with here is a kind of optical illusion, in other words, a "malfunction" of the perceptual system. I know that psychology textbooks are full of examples of illusions that are then used to draw conclusions about vision, and I think that this is fundamentally wrong. Reverse engineering the brain is difficult enough, and while reverse engineering "mistakes" can also be very fruitful when looking for exploits in an application, it does not tell us much about the program, unless we already know what it was supposed to do without the mistake.
Second, the imaginary object does not necessarily have to be built bottom-up. If, for one reason or another, the wrong neurons in different parts of the brain are linked, we can have the same top-down perception of the imaginary object as we would have of a real one. In fact, the existence of this illusion is the best argument for a top-down process, because if we did use features to build up objects in our perception, the number of cases where it could go wrong would be enormous. Need I remind you of Murphy's law?
This argument is reinforced by the following: I have already mentioned the difficulty of letting the brain use vectors in its operations. Each length represents, as such, a different vector, and it would take a supervising program to know when to stop and when to go on computing.
The same analogy can be used for the example of the key mentioned earlier. A hardware key has many common features, but also a distinctive configuration of features. When perceived holistically, the key forms a unique object; but when approached bottom-up, the brain would have to know beforehand which features are unique to this key before starting to build it up. Otherwise, the chance of building the wrong key in perception would be much greater than the chance of building the right one.
 
I found a very interesting project, one in which IBM is also involved: Frontiers | The Emergent Connectome. What I also found particularly edifying is the remark that the form of a neuron is mostly determined by the place it takes in a group. In other words, neurons can shift shape to fit more snugly among their brothers. It makes Goldstein's assertion that "the neural circuits involved in creating a 'face-detecting' neuron must be extremely complex" sound a little less credible. Do "face-detecting neurons" really exist?
You can consider faces as features of bigger "objects", or as objects in their own right. The fact that social relationships are so essential to humans makes faces a central feature, and it would not be surprising if some parts of the brain did indeed mirror this importance. I find it very encouraging that Goldstein, who holds a view of the brain very different from mine, tends toward a conclusion I can certainly relate to. Talking about the so-called specialization of neurons, he concludes, after reviewing a couple of theories, that: "There is, however, a great deal of evidence that learning can shape the response properties of neurons that respond best to complex visual features." Which fits nicely with my idea that the so-called processing areas are no more than memory locations of past experiences. We also both agree that the idea that a single neuron can represent an object (the so-called grandmother cell) should not be taken too seriously.
 