
the Turing test

Status
Not open for further replies.
Navigation meshes and object recognition.
[I have been reading up on mesh navigation and pathfinding, but I have certainly not become an expert overnight. So, if I say something stupid, please correct me.]
If you have ever developed your own black-and-white pictures, you know that one way of intensifying the contrast of grey shots is to copy the negative a couple of times. This way you can turn even the drabbest snapshot into an astonishing piece of art. You can also print the shot in negative: blacks become whites, and vice versa. Also a nice experiment. Mesh files are something like that: everything disappears but the walkable space. That is necessary to save memory and guarantee speed.
But there is a downside to mesh files, one every botter knows about: getting stuck because something has not been mapped properly or, just as likely, was added after the file was created. The right sequence of actions to get unstuck depends on the obstacle, and that is exactly the information that is missing. After all, the obstacle is supposed to be walkable space, and as such is just part of the many possible paths the toon can follow. I thought that, somehow, the information could be read as a negative from the mesh file: if everything the mesh file contains is walkable, then it should be possible to get the contours of the obstacle. But I made a logical mistake: the contours of the obstacle would only be "present" if the obstacle had been correctly mapped. And in that case, the toon would not get stuck.
I think now that the only information we have is the one that appears on the screen. That would mean that, to get the toon unstuck, HB would need to analyze, in real time, the center of the screen, where the toon always appears. Of course, if it is a long and very high cliff, the whole screen, and more, would need to be analyzed. But in that case we would be talking about a major flaw in the mesh file, not a stuck situation. So, I will leave these extreme cases out of the picture, so to speak.
Object recognition is already a few decades old. One of the pioneers, a genius who, unluckily for the world, died very young, was David Marr. His book "Vision" (1982) is considered a classic and a must-read for anyone interested in computer vision. There are very few theories that do not, one way or another, make use of his insights. But I am afraid it is much too complicated and advanced for what we are trying to do: get a toon unstuck.
What I am thinking about is much simpler: the obstacles are usually a different color than the walkable space. So if it were possible, somehow, to superimpose both groups of colors, we could update the walkable space and compute a new path fast enough.
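For what it's worth, the color-superimposition idea can be sketched in a few lines. This is a toy illustration, not HB code: the reference colors, the tolerance, and the tiny "screenshot" are all invented for the example.

```python
import math

# Hypothetical walkable-terrain reference colors (RGB) -- made-up values,
# e.g. sampled from known-good ground textures.
WALKABLE_COLORS = [(96, 128, 64),   # grass
                   (140, 120, 90)]  # dirt path

def is_obstacle(pixel, tolerance=30.0):
    """A pixel is a probable obstacle if its color is far from
    every known walkable color (Euclidean distance in RGB space)."""
    best = min(math.dist(pixel, ref) for ref in WALKABLE_COLORS)
    return best > tolerance

def obstacle_mask(screenshot):
    """screenshot: rows of (r, g, b) tuples -> rows of booleans."""
    return [[is_obstacle(p) for p in row] for row in screenshot]

# A toy 2x2 "screenshot": grass, grey rock, dirt, dark rock.
shot = [[(96, 128, 64), (120, 120, 120)],
        [(140, 120, 90), (30, 30, 30)]]
mask = obstacle_mask(shot)   # [[False, True], [False, True]]
```

Both rock pixels get flagged; the mask could then be used to carve a hole in the local walkable space before recomputing the path.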
But like I said, this is new territory for me, even though I have been researching vision for quite a long time, within my study of (artificial) intelligence and consciousness.
 
I don't know what to tell you. AI researchers very often use private servers to test their programs without fear of being banned. Game publishers want to make a profit, and gamers, not botters, are their source of income. With the new trend of in-game purchasing and use of real money in-game, publishers will be even more inclined to defend against botters. I suppose that many gold-sellers are WoW employees; the temptation is just too great. But that would be another reason to go after botters.

Well, sure, but I'm not even really talking about full AFK botting (should have fleshed out my initial comments). The thing about MMOs that bores me to tears is the combat. Period. I don't know how many times over the last 10+ years I've hit a near similar set of keys in sequence over, and over, and over, and over, and over again in combat.

I love playing MMOs, but I hate the repetitive nature of the combat. So that's why I primarily bot. I bot so when combat happens I stand back and let it take over. At that point it is more about the routine I use, my gear, etc.

If they would add that (à la some of the console JRPG games that let you script your secondary characters in combat), I think that would go a long way for a lot of people.

Granted, that goes into a MUCH larger conversation on how combat in MMOs is inherently broken (hit tab, hit key sequence... rinse/repeat), but that's for a different thread.
 
All baby books (according to my wife then, and my daughter-in-law now) affirm that babies cannot focus on objects because their eye muscles, or whatever, are not developed enough. I think that is completely bogus. The reason why babies cannot focus their sight is, as far as I am concerned, that they simply have no idea where, or on what, to focus. When they look up at their mother while sucking at the breast, they do not see their mother. They have no idea what she looks like, or where her face ends and the wall or ceiling begins. They have no conception of objects whatsoever, and therefore cannot distinguish between all the visual sensations their little brains receive. It takes them a couple of months to slowly start recognizing faces, and not only voices or smells.
Vision is determined by three main attributes:
- colors (and all their properties),
- movement,
- gravity (inner ear proprioception).
Without the latter, babies would have even more difficulty learning to distinguish walls from floors and ceilings.
To get an idea how babies experience vision, take a screenshot from wow or anything else you want. Imagine that you want to write a program that would distinguish between the different objects on the screen.
Imagine also that, like me, your (high school) math just would not cut it.
All you have left are colors and how to use them (we forget for the time being about movement and gravity) to recognize objects. David Marr speaks of edges, and makes use of mathematical formulas to try and identify them in a scene. Researchers have been trying for almost 40 years to solve the problem of object recognition. And of course a lot of progress has been made, but their products in no way approach the efficiency of animal/human vision.
Let us see how far we can get with this: search your picture for every edge that marks the demarcation between two objects. That is not difficult, is it? But the problem is: you are seeing objects already.
Now, forget about these objects, and concentrate only on edges, even within the boundaries of a single object.
Now you know how difficult it is to write an object recognition program! All the objects just seem to disintegrate into a countless myriad of minuscule spots, with nothing to hold them together.
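To see the disintegration for yourself, here is about the simplest possible edge detector, in plain Python on a toy grayscale image. The threshold and the image are made up, and real detectors in Marr's tradition use smoothed gradient operators, not raw neighbour differences.

```python
def edge_map(img, threshold=40):
    """Mark pixels where brightness changes sharply against the
    right or lower neighbour -- a crude stand-in for the gradient
    operators that edge-based theories of vision build on."""
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][x] - img[y][x + 1]) if x + 1 < w else 0
            gy = abs(img[y][x] - img[y + 1][x]) if y + 1 < h else 0
            edges[y][x] = max(gx, gy) > threshold
    return edges

# A dark square on a light background: only its border lights up,
# the interior and the background stay False.
img = [[200, 200, 200, 200],
       [200,  50,  50, 200],
       [200,  50,  50, 200],
       [200, 200, 200, 200]]
edges = edge_map(img)
```

Notice that the result is exactly the "myriad of spots" problem: the edge map says where brightness changes, but nothing in it says which edges belong to the same object.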
I do not have a ready-made theory on the subject, even though I have been working on it for a very long time. But do not forget that movement, including eye and body movements, and gravity are essential to vision. As always, I will keep in touch.
 
Well, sure, but I'm not even really talking about full AFK botting (should have fleshed out my initial comments). The thing about MMOs that bores me to tears is the combat. Period. I don't know how many times over the last 10+ years I've hit a near similar set of keys in sequence over, and over, and over, and over, and over again in combat.

I love playing MMOs, but I hate the repetitive nature of the combat. So that's why I primarily bot. I bot so when combat happens I stand back and let it take over. At that point it is more about the routine I use, my gear, etc.

If they would add that (à la some of the console JRPG games that let you script your secondary characters in combat), I think that would go a long way for a lot of people.

Granted, that goes into a MUCH larger conversation on how combat in MMOs is inherently broken (hit tab, hit key sequence... rinse/repeat), but that's for a different thread.
I understand you now, and I must say I completely agree with you. My first RPG was Dungeon Siege, the first one. I loved it. Combat was something the game engine took care of. You could keep your attention entirely on the surroundings and the strategy. Or, after having played a few times, just relax and enjoy the game like it was a movie.
Not everybody likes that. Many would find it boring. I honestly would not. Maybe I am too old, but I find no satisfaction whatsoever in killing one mob after the other. I think that is what you mean too, right?
 
39) Sometimes the toon just stops somewhere and will not budge. Not because the bot is stuck, but because it is waiting for something to happen. Like Torek, a questgiver that wanders around in Ashenvale. It would be nice to know; I kept deleting the cache and restarting the bot for nothing.
 
I could make it a more general rule and say:
34 bis) Do not go looking for trouble!
Unchecking "kill between hotspots" does not seem to dampen the bot's aggressiveness: it attacks mobs that are just on their way to their final place (which happens a lot in MoP) and could easily be left alone. This just makes completing quests longer and more hazardous. So, I would say, mobs and bosses that would not attack you unless you get really close should be left alone if there is no reason to attack them.
What is the use of unchecking "kill between hotspots" and using "stealth always" if the toon kills everything even remotely on its path? Once again, questing is faster than grinding, so please stop grinding.
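The rule could be written down as a simple filter. A sketch with invented field names, not HB's actual API: only engage a mob that is hostile and already inside its own aggro radius, i.e. a fight that cannot be avoided anyway.

```python
import math

# Hypothetical mob records -- the field names are made up for
# illustration, this is not HB's object model.
def should_engage(mob, toon_pos):
    """Engage only hostile mobs that will aggro on us regardless."""
    dist = math.hypot(mob["x"] - toon_pos[0], mob["y"] - toon_pos[1])
    return mob["hostile"] and dist <= mob["aggro_radius"]

# A mob wandering far away toward its spawn point: leave it alone.
wanderer = {"x": 50.0, "y": 0.0, "hostile": True, "aggro_radius": 8.0}
# A mob already inside its aggro radius: fighting it is unavoidable.
ambusher = {"x": 5.0, "y": 3.0, "hostile": True, "aggro_radius": 8.0}
```

With a filter like this, "kill between hotspots" unchecked would really mean what it says: no fight unless the fight would happen anyway.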
 
Brain slices are very interesting. They turn scientists, even brain surgeons, into babies trying to make sense of all the visual sensations they receive and turn them into identifiable objects. Take this link, for instance:
Rachel's brain
The first one I found searching Google for brain slices. I have seen much better ones, and in color, but it does not really matter. The slices shown are just examples, and you can always look for others.
The point is, just like babies, or an object recognition program, when scientists look at brain slices they must make sense of an indistinguishable mass and turn it into different objects they can experiment with. The result is names like thalamus, pineal gland, hippocampus, frontal lobe, etc., which give the illusion that they know what they are talking about. I do not think they really do... yet.
In fact, HB developers have a better chance of writing a program that identifies all WoW objects, even the non-clickable ones, before the brain experts come up with a definitive map of the brain.
 
A very old rule was:
9) don't walk around in stealth mode in town.
That is a typical rule that could just as easily be handled by profiles instead of the core program. I am running a rogue right now, again, and it just looks strange to see it go into stealth right after talking to a questgiver in a completely safe environment.
 
28) Don't use AoE when there are other (non-hostile) mobs around.
[not if you don't want to die, resurrect, and die again]

edit: this rule, and many others, of course does not apply to super-toons (with super gear and enchants). But then, even the worst bot of all would do in that case. If you are so much stronger, you don't need to be smart.
A very simple way of implementing this rule is entering a high number in "AoE Spell Priority Count" in the Class Config menu. Also something that could easily be handled by profile writers.
 
A bit for you, Odarn. Research into creating an artificial retina discovered that in the human eye a hefty amount of pre-processing goes on before the optic nerve.
The obvious and well-known details of the human visual system give us distance (from the stereoscopic effect of two eyes). Colour information is only available from the focal point (cone cells are concentrated there). The rest of your vision is actually black and white (rod cells). Your brain actually fills in details from what you remember rather than what you see. This is confirmed by fMRI scans.
The interesting part, though, is in the layers of the retina. Before information is transmitted along the optic nerve there is some pretty nifty neural processing going on, and what the brain's visual cortex actually receives is a whole load of detail comprising edges and movement. The colour information is a very minor fraction of what we actually see, but colour is most of what we focus on. Probably because we're used to computer images with their pixel-by-pixel format.
This suggests to me that what we should be looking at for image recognition is contrast and parallax to identify individual objects. Unfortunately, short of having a dedicated field-programmable gate array in every botting computer, we are limited to CPU/GPU processing, and that is woefully inefficient for such a task. Particularly if we want to use the computer for anything that needs the GPU (like games).
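Contrast-and-parallax matching boils down to comparing small blocks of pixels between the two images. A minimal sketch of SAD-based block matching on two one-dimensional scanlines (toy data; real stereo matching works on 2D blocks and adds many refinements):

```python
def sad(a, b):
    """Sum of absolute differences -- the basic block-comparison score."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left, right, block, x, max_shift):
    """Find how far a block of the left scanline has shifted in the
    right scanline; the shift (disparity) is larger for nearer objects."""
    patch = left[x:x + block]
    scores = [(sad(patch, right[x - d:x - d + block]), d)
              for d in range(max_shift + 1) if x - d >= 0]
    return min(scores)[1]          # disparity with the lowest SAD score

# The bright blob at index 4 in `left` appears at index 2 in `right`,
# so its disparity is 2.
left  = [0, 0, 0, 0, 9, 9, 0, 0]
right = [0, 0, 9, 9, 0, 0, 0, 0]
d = best_disparity(left, right, block=2, x=4, max_shift=3)   # -> 2
```

Doing this for every block of every frame is exactly the workload that makes the CPU/GPU objection above so painful.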
 
@Shortround.
What you are saying conforms to what you can read in every textbook on vision. I am afraid I do not share some common ideas about:
- the preprocessing that happens at eye level, the so-called amacrine, horizontal and bipolar cells.
There is certainly something going on at that level, but the explanations given do not make much sense: they presuppose that the eye cells somehow know what to do even before the brain has determined what it is dealing with. That is what philosophers call the homunculus fallacy.
- the term color, as I use it, is, I must confess, quite ambiguous. Light sensitivity would probably be more appropriate. That includes black, white and all shades of gray. In fact, it could even include movement.
- the story of parallax is even more confounding: we are supposed to get an impression of depth through the superimposition of the images from both retinas. That would mean that the third dimension is made of two 2D images, and that it is therefore only an illusion. I find this philosophically fascinating, but I do not think that is what modern scientists would like us to believe.
Contrast is not an object, even though it can be treated mathematically. It is the impression left by visual sensations of different natures. The circular aspect of my explanation is just an indication of the difficulty of defining it: one part being lighter or darker than the other. Contrast is neither the lighter nor the darker part. It is not a third something that somehow exists in the world or the brain, but the effect of those simultaneous sensations. It is what I would call a black box.
As far as computing power is concerned, I agree with you that current theories make it impossible for a botting program to do anything serious about object recognition and still be usable for gaming.
I realize that my views are quite unorthodox, but I assure you that I have a reasonable knowledge of the current state of vision science. I am just not convinced that it is always right.
That does not mean that I have clear-cut theories about vision and object recognition. I am still searching.
 
The receptive field of visual cells, whether in the retina or further up the visual pathway, is something of a mystery in the psychology textbooks. They talk of center-surround cells, neurons that react to horizontal bars, others that react to bars with a certain inclination, and more such attributes that should be illuminating but only add to the confusion. After more than a century (half a century if we want to be generous and start with the 60's), there is not a single theory that can explain how the brain creates objects starting from those receptive fields. Textbooks and teachers just happily repeat what everybody has been saying all these years, and then jump to exotic tests that have nothing to do with everyday perception. I am not saying that these tests are not meaningful, only that basing conclusions on them regarding the nature of vision/perception is debatable at best. I wish somebody would prove me wrong, because that would mean that there is, somewhere, a theory that does more than contribute to the reputation of the writer and his academic career.

edit:
A lot hinges on so-called single-neuron measurement. As the name implies, it is the astounding technique of inserting a probe into a living brain and measuring the reaction of a single neuron to, in the case of vision, visual stimuli. The significance of this technique is, as far as I am concerned, really overrated. For instance, there is a discussion going on about a so-called grandmother cell: a single neuron that would fire when the patient is shown a picture of his grandmother, and only then. Some scientists of impeccable reputation seriously believe that such cells exist in the brain. We would then have cells that react only to our mother, Princess Diana, or Buffy.
Many scientists do not believe that, and I strongly agree with them. Still, those same scientists are not afraid of drawing conclusions from single-neuron measurements. I personally believe (a mere prejudice, I'm afraid) that no single neuron can react or be excited alone without affecting other neurons, and being affected in return.
That is why I do not put much credence in the so-called receptive fields. And until somebody explains to me how they constitute objects, I will remain skeptical.
 
Sometimes the first jump is enough to get over the obstacle, but then the toon steps back to the same position it was in before, strafing right or left and then moving forward again. The fact that the sequence is the same in every situation makes it a little awkward.
Regarding rule (15): except for the jump, which I still think is a very useful trick, the devs could maybe experiment with the duration of the strafe movements. I have the impression that very often it is not necessary to back up or strafe for the whole 600 ms. Why not try it with 100, for instance?
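The experiment is easy to describe as code. A sketch with a made-up action list, not HB's actual unstuck logic, just to show what parameterizing the duration would mean:

```python
# A configurable unstuck sequence. The key names and the sequence
# itself are illustrative, not HB's real code: the point is only that
# the strafe/backup duration becomes a knob instead of a constant.
def unstuck_sequence(strafe_ms=600, jump_first=True):
    steps = []
    if jump_first:
        steps.append(("jump", 0))          # the useful trick from rule 15
    steps += [("backup", strafe_ms),
              ("strafe_left", strafe_ms),
              ("forward", strafe_ms)]
    return steps

# Trying 100 ms instead of 600 ms, as suggested above:
short = unstuck_sequence(strafe_ms=100)
total_ms = sum(ms for _, ms in short)      # 3 x 100 = 300 ms of movement
```

The three timed moves drop from 1800 ms to 300 ms per attempt, so a failed attempt costs far less and several durations could be tried in the time one takes now.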
 
Navmesh and computer vision:
Pathfinding with mesh algorithms means that the toon is walking blind. It does not "see" where it is going, only which paths are listed as walkable. Computer vision is highly CPU-intensive and, even with GPGPU methods, probably prohibitive, unless we find a way of drastically reducing the amount of data the CPU/GPU has to process.
Computer images in a game world are built in stages, with each stage adding more detail and realism to the scene. That is something we do not need for navigation. After all, when we are running (or driving) at full speed, we really do not have time to notice any non-essential details. As long as we are able to avoid obstacles on our path, we are more than satisfied with the way we perceive things.
This makes me wonder if there is a way of reverting an image back to its most primitive form: vertices, lines and primitive shapes. Such images would be easier to handle, and they would help create a map much more useful than a mesh file. But I honestly do not know (as yet) if that is possible at all.
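As a crude stand-in for "reverting" an image, one can at least throw detail away: average blocks of pixels and snap the result to a handful of brightness levels. A toy sketch (block size, level count and image are arbitrary):

```python
def simplify(img, block=2, levels=4):
    """Reduce a grayscale image to a coarse, low-color sketch:
    average each block of pixels, then snap the average to one of a
    few brightness levels. This shrinks the data to process by
    block*block and quantization combined."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            cells = [img[yy][xx]
                     for yy in range(y, min(y + block, h))
                     for xx in range(x, min(x + block, w))]
            avg = sum(cells) / len(cells)
            row.append(round(avg / 255 * (levels - 1)))
        out.append(row)
    return out

# A 4x4 image with a bright patch, a dark patch and a grey floor
# collapses to a 2x2 sketch with four brightness levels.
img = [[250, 250, 10, 10],
       [250, 250, 10, 10],
       [120, 120, 120, 120],
       [120, 120, 120, 120]]
sketch = simplify(img)   # [[3, 0], [1, 1]]
```

This is not the vertices-and-lines reconstruction the paragraph asks about, only the data-reduction half of it, but it shows how quickly a frame can be shrunk to something a pathing loop could afford to inspect.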
 
Computer vision is highly CPU-intensive and, even with GPGPU methods, probably prohibitive, unless we find a way of drastically reducing the amount of data the CPU/GPU has to process.

There might be a way: AMD Radeon GCN cards (the 7000 and R9 families) have added SAD and QSAD instructions. They should improve image comparison and searching speed 4-16 times, and they can be used from OpenCL or AMD Mantle. With such a huge speedup, bots might be able to use real computer vision and not meshes.


for more details check paper:
http://www.amd.com/us/Documents/GCN_Architecture_whitepaper.pdf
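For readers who have not met it: a SAD instruction computes, in one step, the packed sum of absolute differences over a few bytes. A pure-Python model of the operation (the four-byte width matches the packed form; the values are made up):

```python
# What a single SAD instruction computes: the sum of absolute
# differences over packed bytes in one step, versus four subtractions,
# four absolute values and three adds done one at a time in software.
def sad4(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Comparing two 4-byte pixel rows:
row_a = (10, 200, 30, 40)
row_b = (12, 190, 35, 40)
result = sad4(row_a, row_b)   # |10-12|+|200-190|+|30-35|+|40-40| = 17
```

The quoted 4-16x speedup comes from the hardware collapsing this whole loop into one instruction, repeated millions of times per frame in block-matching workloads.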
 
Thank you for the tip. Still, I wonder if that would be enough for a botting program like HB. I have found an interesting link:
https://sites.google.com/site/sbobovyc/writing/reverse-engineering/reversing-a-directx-game-part-1
It would seem that it is indeed possible to reverse engineer computer images. But I must admit that I have just started reading up on DirectX/OpenCL/OpenGL and I still have a long way to go, I'm afraid. I hope somebody more advanced than me will pick up the idea and share his results on this forum.
 
You know my posts are not always very practical, at least in the short or even middle term. For those of you who like to think things through:
Imagine we are 20, 50, or however many years on it would take to make bots completely human-like.
And I mean by that bots with whom you could converse about anything you like. Just like those computers they love to show in movies. A nice terminator, as it were. What would be the difference between such a computer and a human being? That is a question many philosophers have asked, and as you can imagine, there are almost as many answers as there are philosophers. After all, just agreeing with your colleagues is not really helpful for your career. You must at least give the impression of originality!
Some philosophers think that it will never be possible, that a computer will always be different from a living, breathing being. These philosophers would therefore never give "human" rights to an "artificial person", like the one in the last "Alien" movie.
Another philosopher, who specializes in polemics and rudeness (but nobody minds; that is just his meal ticket, and you need one in the academic jungle), thinks that we will be able to talk philosophy with computers, just like we do with our neighbor. I can't remember his name right away, but it will come back to me.
I think there is no reason why that should not be possible. After all, we are talking about the expression of ideas, and those are just actions. When we talk, we move all kinds of muscles to produce sounds, and the trick is to link those sounds to a lexicon. Something databases are very good at. Add some combinatorics to the whole and you have a philosophizing robot. Such a robot would then have emulated the way we talk and the way we link ideas in our brains. It could even come up with original ones. After all, originality never appears in a vacuum; it is just a matter of linking things that had not been linked before.
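The "lexicon plus combinatorics" idea can be parodied in a dozen lines. A toy sketch with an invented vocabulary; it only shows that well-formed combinations are cheap to generate, not that they mean anything:

```python
import random

# A tiny "lexicon" -- word categories with a few entries each.
# The vocabulary is invented for illustration.
LEXICON = {
    "subject": ["consciousness", "a robot", "the mind"],
    "verb":    ["emulates", "resembles", "precedes"],
    "object":  ["behavior", "a brain", "an idea"],
}

def philosophize(rng):
    """Combinatorics over the lexicon: pick one word per slot and
    assemble a grammatical (if shallow) claim."""
    return "{} {} {}.".format(rng.choice(LEXICON["subject"]),
                              rng.choice(LEXICON["verb"]),
                              rng.choice(LEXICON["object"]))

sentence = philosophize(random.Random(0))
# Always a well-formed sentence; "originality" here is just a
# combination that has not come up before.
```

With 3 x 3 x 3 entries there are already 27 distinct claims; grow the lexicon and the combinations explode, which is the whole point of the argument above.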
Will this computer then be the same as a human being? I will let you answer this question for yourself.
My tip? A non-living organism can emulate any behavior, and even be better at it than the original. So anything that can be considered a behavior is up for grabs.
 
Yep, remembered the name finally: Daniel Dennett, author of "Consciousness Explained". He forgot to put "away" at the end. He tries really hard to make us believe that consciousness does not exist, but it is all right if we act like it really does. That's why he has absolutely no problem believing that a robot could discuss philosophy. I believe that too, but I am a little more traditional: I think there is something like consciousness, even though I still have to find the one theory that explains it right. There is one thing all thinkers agree about: Descartes was wrong, there is no seat of consciousness in the brain. That is, there is no single gland or bunch of neurons that we could point to. For people who like science fiction, maybe Penrose's theory of quantum consciousness would be worth reading. The guy is a renowned mathematician (he co-authored books with Stephen Hawking), but in his free time he likes to sit by the fire and meditate on the question of life, the universe and everything.
 
Is HB situation-aware? You would not believe it, but it is! Here is the proof:
Create a human toon in the start zone, then go stand somewhere near Paxton, the healer. Start HB and open:
settings,
development tools,
objects,
units.
You will see the list of all the NPCs and mobs HB (via WoW, of course) can see. Even the rabbits. But not the other players, I'm afraid.
That means we could, theoretically, target any mob in any order if we wanted to. I wonder why that is still not possible.
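If we had that unit list in code, nearest-hostile-first targeting would be trivial. A sketch over a hypothetical snapshot of the list (the field names are invented; this is not HB's object manager API):

```python
import math

# Hypothetical snapshot of the list the Objects > Units panel shows.
units = [
    {"name": "Rabbit",        "hostile": False, "x": 2.0, "y": 1.0},
    {"name": "Kobold Vermin", "hostile": True,  "x": 9.0, "y": 12.0},
    {"name": "Kobold Worker", "hostile": True,  "x": 4.0, "y": 3.0},
]

def next_target(units, toon_x, toon_y):
    """Pick the nearest hostile unit, or None if everything around
    is friendly (the rabbit never gets targeted)."""
    hostiles = [u for u in units if u["hostile"]]
    if not hostiles:
        return None
    return min(hostiles,
               key=lambda u: math.hypot(u["x"] - toon_x, u["y"] - toon_y))

target = next_target(units, 0.0, 0.0)   # -> the Kobold Worker, at distance 5
```

Swap the `key` function and the same list gives lowest-health-first, quest-target-first, or any other order, which is exactly the flexibility the post is asking about.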
 