
the Turing test

29) Don't dismount under water.
[It does not make the toon any faster as far as I know, and it adds complications to InteractWith. Plus it looks kind of bot-obvious.]
edit: and since we are talking about flying:
30) Don't fly too low, or you might as well run as far as aggro is concerned!

An exception to rule 30 would be Coldarra, where the toon is very often attacked by the flying elite dragons. But right in Coldarra, the toon flies too high!
Which just shows that general rules not guided by concrete knowledge are no guarantee of success!

One more general remark: we humans can learn only if we survive our mistakes! If we could rezz like toons, we would be much smarter than we are now. :)

edit2: my toon already died 5 times! I think I will stop Kick's profile and finish it manually.
 
33) Why does the toon have to walk 20 or 30 m/yards before mounting?
I just had a toon running for almost 100 m, and when it was ready to mount, it was already surrounded by mobs! I suppose that the toon is moved in advance while other processes are going on in the background, but the result is very often not really desirable.
There is a general pattern that is rather disturbing because it is a dead giveaway in the presence of other players:
For instance, when the toon has to turn in quests and pick up others, it very often runs away after a turn-in, and then immediately comes back for another turn-in or pick-up. It would not matter if it happened only once, but it really looks silly when done 2 or more times. I think that the movement generated as filler between 2 events should really be eliminated.
 
Quote: "We're talking about the Turing test in my philosophy class right now."
I hope you are having fun! A very interesting problem he also talked about is the so-called halting problem: how can the computer know that it has entered an infinite loop? In short, it cannot! This has been proven independently by 2 thinkers, 1 mathematician and 1 logician. You will probably hear about it in your class. Very technical; I am afraid I am not sure I understand the finest details :(
 
35) Do not mount just to dismount a couple of seconds later.
Quote: "How about a check that's performed every time the bot is about to mount. If it will dismount in less than 5-10 yards for any reason, like a mob, it will just walk the extra distance.
It always looks so bad when the bot mounts, takes a step, then dismounts and proceeds on foot. imo at least"
From Lowkey, in Feedback for HB improvements.
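A minimal sketch of what Lowkey's pre-mount check could look like. None of this is actual HB API: ShouldMount, the distance argument and the 10-yard threshold are all assumptions for illustration.

// Sketch only: decide whether mounting is worth it, given how far the toon
// will travel before the next forced dismount (mob, quest object, indoors...).
// The distance would have to come from the navigator; hypothetical, not HB API.
public static class MountLogic
{
    private const double MinimumRideDistance = 10.0; // yards; assumed threshold

    public static bool ShouldMount(double distanceToNextDismountPoint)
    {
        // If we would dismount again within a few yards, just keep walking
        // instead of mount-hopping, which looks bot-obvious.
        return distanceToNextDismountPoint >= MinimumRideDistance;
    }
}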
 
And please: KEEP OUT OF THE LAVA!!!
This is a much more general problem that could be handled very easily: just check whether the health of the toon is diminishing while not in combat! Then you can take care of specific cases:
lava, fire...
It will of course mean an extra check, and therefore processing time, which means that using it on a regular basis would be too expensive. But the possible situations (quest givers near a campfire) and zones are known, so the checking could be reserved for these cases.
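A minimal sketch of that check, under the stated assumptions: the Pulse method, its inputs and the once-per-tick cadence are stand-ins for whatever the bot actually exposes.

// Sketch only: detect environmental damage (lava, campfires...) by watching
// for health loss while out of combat. Call Pulse from a periodic tick, and
// only in the known risky spots/zones, to keep the processing cost down.
public class EnvironmentalDamageWatcher
{
    private double _lastHealthPercent = 100.0;

    public bool Pulse(double currentHealthPercent, bool inCombat)
    {
        bool losingHealthOutOfCombat = !inCombat && currentHealthPercent < _lastHealthPercent;
        _lastHealthPercent = currentHealthPercent;
        return losingHealthOutOfCombat; // caller reacts: step away, blacklist the spot...
    }
}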
 
36) The bot can learn from its actions and mistakes by a judicious use of the log.

[how, and how much it can learn, is open to discussion.]
 
A friend asked me what I meant by "even pigeons are capable of abstract thought".
The experiment goes something like this:
There are 2 levers, and the pigeon does not know which one will give food. Above each lever a circle is drawn.
To get to the food, the pigeon has to pick the lever with the bigger circle.
Once the pigeons have become used to this configuration, it is changed, with the previously bigger circle now being the smaller of the two. The pigeons automatically chose the bigger circle, and not the circle they were used to.
That means they are choosing the relatively bigger circle, and not a circle based on a concrete shape or size. That is exactly what is meant in general by abstract thinking.
You must realize that Bigger Than (>) is a so-called primitive: you cannot explain it, just use it. When saying/seeing that a>b, you cannot give any reason why this is so without using (>), the very thing you set out to explain.
Or you end up giving reasons which only make sense for beings endowed with the same brain you have.
Stone a is bigger than stone b because stone a has more surface. So bigger than will mean having more surface. But what is more surface? Try to bring this argument to an end; you will end up with something like:
more surface:
- my eye movements from one end to the other are different when looking at surface a than when looking at surface b.
- it takes two hands to hold stone a and only one hand to hold stone b.
- stone b fits in hole h but not stone a.
......
We only use these indirect "proofs" when the difference is really small, because otherwise we can just see/feel that the 2 objects are different, one bigger and the other smaller.

In computers bigger means: 1>0. It is a convention which works well with our world and experiences. It is not a definition of bigger than, since there are no such definitions.

So when pigeons choose the bigger circle, they are not computing surfaces or anything:
When they are looking at the first 2 circles, they experience something which we would be hard-pressed to put into words (the very thing we were trying to express when we tried to explain what bigger than means).
When they look at the second set of circles, they experience the same thing as they do when looking at the first set: the circle present in both tests triggers one set of experiences in the first case, and another in the second. Why?
Let's take a simple example: I give you two stones and tell you to put the lighter of the two down, which I then replace with another one, and ask you again to put the lighter down, and so on.
Whatever definition of lighter/heavier you choose, whatever neural explanation you come up with (with heavier objects more muscles are involved, more neurons, and...), it will still be a circular definition.
Let us accept the fact that when more neurons are involved in manipulating object a than object b, we then say that a is bigger, heavier, rounder.....
It is just as good a definition as any. But then, how do we know that more neurons are involved?
That's just it, we know it only after the fact. We can say that, according to our definition, another convention, more neurons had to be involved with object a than with object b (forget about brain scans for now).
More in one case might mean 500 vs 2000, or 2000 vs 3000. Which means we cannot define what "more" means except through other conventions (like higher up in the number series).
Does the brain have a way of counting or otherwise distinguishing the number of neurons involved in each situation? If you say yes, you believe in the little man in our brains: the homunculus (google it, very interesting). This little man also needs a little man in his brain, who also.....
So, we now have this:
More neurons (or whatever other explanation you think is better) ==> we say a is bigger. In other words, we give a name to the sensation that more neurons are involved.
So there must be, somewhere in the brain, a "bigger than" processor that gets activated when more neurons are involved in case a than in case b. This processor would then point at object a, and that would allow us to say that object a is bigger than object b. "Processor" might be a strong word in this case. It is more of a scale, really: it tips to the side holding the heavier object.
Imagine you are living in a post-apocalyptic era (Skynet has almost obliterated humanity before committing suicide and disintegrating all machines), and you want to know which of 2 stones, which feel the same to you, is the heavier. You throw a rope over a branch, attach each stone to an end, and... You must make sure that both stones are AT THE SAME HEIGHT before deciding which is heavier.
This is very interesting, because it means that, before you let Mother Nature take over and decide which is heavier, you have to tell her where to begin!
Did we land in relativity land without knowing it? That is another theory I would really love to understand in its finest details, but since I do not, I will just leave it at that.
But I do not think that it makes the test any less "objective". An ancient Greek philosopher said that "Man is the measure of all things", but I think he went a little bit too far. After all, the initial situation (both stones at the same height according to you) is just one of infinite possibilities that Mother Nature can handle. It is not something that falls outside of natural laws. And the first thing scientists do is play with the initial situation and observe the different behaviors.
So back to our pigeons. Do they also have a "heavier-than scale"? Apparently they do, that is, if such a thing exists at all.
But then, we must also have a "same or equal scale", a "longer or shorter scale", and so on.
When you look at computer instructions, be they digital or quantum, they always involve movement of data from the memory to the registers and vice versa, and comparisons before the actual operations (add, sub, mult....). Data movements can easily be compared to human bodily movements, and the different comparisons look very much like our inner scales. The other operations are also easily identified with our own actions.
Computers have "equal", "bigger" or "smaller" for every kind of difference we humans have special words for (longer, redder, nicer, more interesting...).
Maybe that is the problem which computers face when trying to emulate humans: they just do not have enough inner scales. Their architecture is much too coarse. All the computer can say is: something is bigger than or equal to something else. So all the human nuances must be reduced to this single dimension.
We can now rephrase our goal of an intelligent computer: either we find a way of expressing human sensations and feelings with the coarse digital/quantum architecture, or we will have to give computers a personality. By that I mean not only a way of distinguishing all the differences man can distinguish, but also the emotions/feelings that go with them. We need more Marvins, and preferably not all as neurotic as he is. We cannot of course endow computers with feelings and emotions, but maybe we can fake them?
 
25) Avoid resting and consuming food/drinks in the path of mobs you are supposed to avoid.

[The whole issue of avoiding mobs is rather complicated. As far as I know, there is only one tool available: Blackspots. But that is a rather indiscriminate tool. Very often, the toon has to be in the area where the to-be-avoided mob is. Maybe there should be more tools, like:
- preferred paths,
- paths to preferably avoid.
That would at least prevent the toon from picnicking on a dangerous spot, or staying too long on dangerous paths, and would make it preferably target mobs that are on safer ground.]
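A minimal sketch of what such a tool could look like, as opposed to all-or-nothing Blackspots. Everything here is hypothetical (the enum, the weights); the idea is just that the navigator would price path segments instead of forbidding them outright.

// Sketch only: weighted path segments. The navigator would favor cheap
// segments, cross expensive ones only when unavoidable, and never stop
// to rest/eat on a segment marked Avoid.
public enum PathPreference { Preferred, Normal, Avoid }

public class PathSegment
{
    public PathPreference Preference = PathPreference.Normal;

    public double CostMultiplier => Preference switch
    {
        PathPreference.Preferred => 0.5, // assumed weights, to be tuned
        PathPreference.Avoid     => 5.0,
        _                        => 1.0,
    };

    public bool SafeToRestHere => Preference != PathPreference.Avoid;
}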

edit: A very good example of a place to avoid is a path where mobs respawn very fast. For instance, Needlerock Slag in Deepholm, or near the Pearlfin village in MoP.
I had to rescue my toon from Needlerock Slag because it kept dying on the path. And even manually, I died one more time, because as soon as I rezzed, I got attacked by 3 or 4 mobs who just appeared from nowhere!
 
22) Leave inaccessible mobs alone.
[Blizzard intentionally creates mobs that you cannot touch (but who often can hurt you), which keep the bot standing as if hypnotized in the same spot. There is one in the murloc camp near the logging camp in Elwynn Forest, another in the Fargodeep Mine, one cultist on a platform in the Blasted Lands, and many more.
Also, when an NPC questgiver gets killed by PvP'ers, the toon keeps trying to communicate with it. This looks rather strange.]
This also goes for cases where the target is "not within sight" for one reason or another, for instance when it is partially hidden by a door, a wall or some other object. The toon stays fixated on the half-hidden target, ignoring the other mobs attacking it.

edit: I just had a toon pursue an evading mob, which it had at first ignored, all the way back to the mob's original starting place, all the while itself being pursued by another mob it had aggroed right before starting its useless chase!

edit2: I found this on OwnedCore. To a question asked by a user (Ozius: Untargetable mobs), Game2mesh answered:

It's in the UnitFlags field; find "UnitFlags" on this page: OwnedCore - World of Warcraft Exploits, Hacks, Bots and Guides ([WoW] Constant Data (Enums, Structs, Etc))

public uint UnitFlags
{
    get { return ReadDescriptor<uint>(eUnitFields.Flags); }
}

If ((UnitFlags & (uint)eUnitFlags.NotSelectable) > 0), it means this unit cannot be selected.
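Put to use for rule 22, that flag could filter candidate targets up front, so the toon never fixates on a mob it cannot touch. A self-contained sketch; the unit type is a stand-in, and the flag value is the one commonly listed in such enum dumps:

using System;
using System.Collections.Generic;
using System.Linq;

[Flags]
public enum eUnitFlags : uint { NotSelectable = 0x2000000 } // value as listed in the enum dumps

public class WowUnit { public uint UnitFlags; } // stand-in for the framework's unit type

public static class TargetFilter
{
    // Drop untargetable units before target selection, instead of
    // standing hypnotized in front of one of them.
    public static IEnumerable<WowUnit> Selectable(IEnumerable<WowUnit> units)
        => units.Where(u => (u.UnitFlags & (uint)eUnitFlags.NotSelectable) == 0);
}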
 
37) After resurrecting, wait until health is at 100% before engaging mobs again. That would certainly help with not dying again immediately.
[Not that it would really help in the Northern Barrens. It is still a killing field for warriors and other classes. The official profiles are still a disaster.]
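A minimal sketch of such a gate, assuming the bot exposes a resurrect event and a health percentage; the names and the 100% threshold are illustrative, not actual HB API.

// Sketch only: after a resurrect, refuse to engage until health is back up.
public class PostResurrectGate
{
    public double RequiredHealthPercent = 100.0; // could be a user setting
    private bool _justResurrected;

    public void OnResurrect() => _justResurrected = true;

    public bool MayEngage(double currentHealthPercent)
    {
        if (_justResurrected && currentHealthPercent < RequiredHealthPercent)
            return false;          // eat/rest first
        _justResurrected = false;  // fully recovered: back to normal behavior
        return true;
    }
}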
 
I could make it a more general rule and say:
34 bis) Do not go looking for trouble!
Unchecking "kill between hotspots" does not seem to dampen the bot's aggressiveness: it attacks mobs that are just going on their way to their final place (which happens a lot in MoP), and which could easily be left alone. This just makes completing quests longer and more hazardous. So, I would say, mobs and bosses that would not attack you unless you get really close should be left alone if there is no reason to attack them.

Apparently, looking for trouble is what the bot does. Ignoring quest mobs in the vicinity and going after other mobs that could easily be avoided seems to be the rule rather than the exception. It looks like grinding behavior that has sneaked into questing.
 
If you google "believable bots" you will find some very interesting things that are relevant to HB. One of them is the latest trend in AI: imitation of human behavior as a learning principle for bots. It comes very close to what I said in my first post in this thread: the need to incorporate common-sense principles into the botting routines. We must of course realize that a product like HB is not a scientific experiment, and that it is governed very strongly by short-term considerations. The simplest one being the fact that the application is used by players whose intentions are more... earthly. Like earning gold or money faster than by playing the game themselves. HB users want a bot that is good enough to help realize this ambition without too much hassle. That is why not everything that AI theoreticians come up with is of interest to them.
Learning by imitation is, I am afraid, one of those things. Scientists love mathematical formulas and tend to believe that everything can be translated into one or more of them. They may be right, but it also makes their work less approachable for the average Joe Programmer. On the other hand, profiles can easily be made to reflect human behavior. All the writers need to do is stop trying to apply the same formula to every situation:
1) pick up quest
2) go to location
3) interact with target(s)
4) turn in quest.
Don't get me wrong: there is nothing wrong with this chain of actions, except that it is much too general to account for the diversity of the situations a bot has to deal with.
I am not saying anything I have not said before:
- Use the human player's experience in the profiles, or routines, or in whatever part of HB the devs will deem fit.
- Do not rely on abstract patterns for every situation.
- But most of all, build a bot that human players can customize quickly and efficiently.
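To make that last point concrete: the four-step chain could stay as the default, with per-step hooks that a profile writer fills in from human experience. A minimal sketch; the pipeline and override mechanism are invented for illustration, not an HB feature.

using System;
using System.Collections.Generic;

// Sketch only: the generic pickup -> travel -> interact -> turn-in chain,
// with per-step overrides (e.g. "skirt the village", "kite the colossus").
public class QuestStepPipeline
{
    public List<Action> DefaultSteps = new List<Action>();
    public Dictionary<int, Action> Overrides = new Dictionary<int, Action>();

    public void Run()
    {
        for (int i = 0; i < DefaultSteps.Count; i++)
            (Overrides.TryGetValue(i, out var custom) ? custom : DefaultSteps[i])();
    }
}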
 
I would like to talk about botting, chess and situational awareness, without getting overly technical or philosophical.
Chess is, I think, the most perfect example of the necessity of taking into account the consequences of one's own actions in a computer environment. That is, chess programs are, of necessity, situation-aware.
I have said before that situation awareness can be considered a series of If-Then conditions. That is only partly true. HB makes heavy use of If-Then conditions, but it can hardly be said to be situation-aware. So what is the difference between HB and a chess program?
The most obvious thing is what I have just said: you must take into account what the effects of each action will be. By moving a piece you give the opponent new opportunities, and create new ones for yourself.
But unlike most WoW situations (pet battles excepted), chess is a turn-based game. That makes it easier for chess applications to compute changes in the game situation.
That would mean that HB, to become situation-aware, would have to reconsider its steps after each action taken. Or preferably, like a chess program, first simulate the action and the opponent's reaction before making any move.
I have no idea how much of a change that would entail in the core, and therefore whether it would be practical to implement. But we are talking about the future of botting, and it is certainly not the immediate future.
Right now, as far as I can understand it, some conditions are checked, then the toon is directed to a location, while the combat bot stands ready in case a mob initiates an attack or gets too close for comfort. But the main logic has been set once and for all. So situation-awareness plays no role whatsoever, and the bot looks just like what it really is: a bot.
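What "simulate first, then move" could mean in code, reduced to its simplest form: score each candidate action by the situation it is predicted to produce, chess-style, one ply deep. The world model (simulate) and the scoring function are of course the hard part, and here they are only placeholders.

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch only: one-ply lookahead. Before committing to an action, simulate
// its outcome and pick the action whose predicted situation scores best.
public static class OnePlyPlanner
{
    public static TAction Choose<TState, TAction>(
        TState current,
        IEnumerable<TAction> candidates,
        Func<TState, TAction, TState> simulate, // predicted world after the action
        Func<TState, double> score)             // higher = safer / closer to the goal
    {
        return candidates.OrderByDescending(a => score(simulate(current, a))).First();
    }
}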
 
The lack of integration between the different parts of HB is a major concern that should be dealt with sometime in a not-too-distant future. To give a simple example:
A toon going to location X aggroes every mob on the way. If gather herbs/minerals is checked, it stops to gather and gets a whole tribe on its back.
The temporary solution is to keep questing and gathering separate. That makes a lot of sense with the bot being what it is. It is of course too bad that gathering not only provides a lot of XP, but is also a source of materials badly needed for profession leveling. But, once again, no need to complain about how things could be in the future. Gathering and questing at the same time is hazardous at best. So if you find the risks unacceptable, don't do it just yet. HB is just not ready for that.
What is more of a concern to me is that the lack of integration has an impact on questing as such, with no other activity involved.
If the toon is trying to gather quest items, it just seems oblivious to everything else. We then see it displaying the same behavior as described above for gathering herbs or minerals. It does not seem to care how many mobs it is aggroing on its way to the quest item.
The analogy between both behaviors makes me suspect that the same logic is used in both cases. This logic is flawed.
 
edit2: a good example is Corin's Crossing in EPL. The profiles I have seen have a lot of dying built into them because they attack the problem frontally. (Again, I am talking about vanilla toons; I cannot judge super toons.) They end up very quickly in the center and get overwhelmed by the converging mobs. Whereas, if you skirt the village, you can easily complete both quests with a vanilla toon without dying. Again, you have to control where your toon goes and pull your current target away from the main street. Classes with a pet have to take into account that the pet will take the fight to where the mob is very quickly, too quickly, defeating the purpose of this small-time strategy.
I realize that this is nothing new, but it is not put into practice in either the profiles or the combat bot.
Another good example of rule (31) is the quest Pei-Back in Jade Forest. After my toon died a few times in vain, I looked it up on Wowhead and found a very interesting tip:
- kite the colossus to the stairs, away from the boss. This way you are out of range and do not get your soul ripped. Then you can kill the colossus easily. Do not forget to set your pet on passive first, until the colossus is near the stairs.
 
Can neural networks (NNs) be of any use to botting? I honestly would not know for sure, but I don't see any reason they shouldn't be either. After all, NNs are just a different form of database programming:
You feed them data and some statistical laws, and they spew out their findings. The problem would be to translate WoW data into statistical data sets. Something I know nothing about, I am afraid.
The little I know is told very quickly: I told you before that I did not believe neurons possess any kind of intelligence, and they certainly have no knowledge of statistical laws. Which does not mean they cannot follow them, of course.
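To make "translating WoW data into statistical data sets" a little more tangible, here is the smallest possible example: a single artificial neuron deciding engage-or-avoid from hand-picked features. The features and weights are invented; a real bot would have to learn the weights from logged outcomes.

using System;

// Sketch only: a one-neuron "network" for an engage/avoid decision.
public class EngagePerceptron
{
    // Assumed, hand-tuned weights: mobs nearby count against engaging,
    // health counts for it, an elite counts heavily against it.
    private readonly double[] _weights = { -0.8, +0.05, -2.0 };
    private const double Bias = 1.0;

    public bool ShouldEngage(int nearbyMobs, double healthPercent, bool eliteNearby)
    {
        double[] features = { nearbyMobs, healthPercent, eliteNearby ? 1 : 0 };
        double sum = Bias;
        for (int i = 0; i < features.Length; i++)
            sum += _weights[i] * features[i];
        return sum > 0; // engage only if the weighted evidence says it is safe
    }
}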
Philosophers, when they talk about (artificial) consciousness, love to use the term "qualia" (plural of quale). By that, they mean sensations, emotions, feelings, anything that machines, and computers, do not and cannot have. That is why some of them simply do not believe that these qualia really exist at all; or if they do, they do not think qualia have any function that could not be fulfilled by something that machines and computers can and do have.
I am rather a traditionalist in this, so I am convinced that what constitutes life (and every non physical aspect associated with it), is irreducible to matter.
But let us just assume that I am wrong, and that computers can have something that would be indistinguishable from consciousness or qualia. How could we build such a computer or program?
Let me first state that almost all NNs are simulated on digital computers. Real NNs, something resembling a bunch of neurons in a brain, are a rarity and play no role whatsoever in the scientific debates.
But let's imagine such an NN, and one that does not look like other NNs at all: a neural network that would make no use at all of mathematical or statistical laws. What would such an apparatus (functionally) look like?
Let us consider a specific group of qualia: colors. These can be considered purely as physical phenomena with properties like frequency, hue, etc. Things that you can set on your monitor. But we can also consider them as that mysterious quality that is the sensation of color. A camera reacts to physical properties, and a brain probably does the same, but somehow the brain creates these weird qualia. We do not need a spectrometer to say that two colors are different or the same, even if they are very close to each other in terms of their physical properties. Do we have a spectrometer in the brain, the same way we have a scale that tells us when two things are of the same weight?
Like I said before, we consider a and b as having the same weight because we associate a certain sensation/feeling (quale) with the concept of sameness. Maybe it works the same way with colors: each color creates a distinctive sensation, and we then associate a name with this sensation.
So what are those sensations? Honestly? I have no idea, I just know that I have them, and I suspect that you do too.
It would be like asking what matter is, and then getting a lecture about atoms, electrons, quarks and pizzas. At the end, you still do not know what matter is. Matter is what matter does, would be the credo of any scientist.
How about we apply the same rule to qualia: qualia are what qualia do.
Such an approach is simply called the psychological approach. Psychologists cannot explain what qualia are, but they go a long way in explaining what their effects on human behavior can be.
How would a psychological approach to NNs look? What first comes to mind is that pure mathematical or statistical laws would make way for psychological laws or knowledge. But that does not tell us how we would go about building such a bot led by psychology.
If condition C then apply psychological law L? Is that it?
How about this?
1) Let us make an exhaustive list of all the physical sensations (sight, hearing, smell...) and emotions (love, hate, disgust, jealousy, ambition, like, dislike...) that a living being or human can have, and let us assign to each of these elements a black box.
2) Let us link to each black box certain kinds of behaviors, events, objects or whatever would make sense in a human being.

We would then have created a program with a personality. This program would react in a certain way when confronted with certain situations, and its reactions would be different from another program for which we would have created a different database.
Now, let us imagine that we are capable of creating a program that takes all of the above into consideration. The black boxes are just that: empty placeholders that refer to a certain type of objects, events and behaviors. But that does not stop them from determining any future behavior based on the history of the artificial person/program.
That certainly needs some explanation:
Let us take pain, for instance. If we touch a hot stove by accident, we immediately withdraw our hand.
Why is that? What made us do that: the heat or the pain? We are able to resist the heat and the pain if we have to. For instance, we would not let go of a hot pan if it could fall on our child. We would suck it up and first move the pan away from the kid. So it is, within certain limits, neither the heat nor the pain that ultimately determines our behavior.
We obviously need another cause for our behavior. In this case it may be very simple: we do not like pain, and if something causes it, we learn how to avoid it.
Do we really need that extra something? Does pain not include the fact that we do not like it? Well, if that were the case, there would be no masochists.
event --> pain --> (un)desirable --> reaction
Or more generally:
event --> sensation/feeling --> evaluation --> reaction
I would like to propose a very simplistic approach to the problem of evaluation: something is either desirable or undesirable. If two things are (un)desirable at the same time, a choice is made based on the past of the artificial personality, and its goals, be they short or long term.
What we must realize is that, just like with real human beings, there are no (obvious) mathematical or statistical rules to determine which choices will be made. Everything depends on the original personality and the experiences the person has had. That means we can turn a healthy person into a masochist or a sadist. And maybe even a Dexter into a law-abiding citizen! The details do not really matter; it all depends on which psychological theories we prefer, but the principles remain the same: genetics + experience.
To get back to black boxes and how they can determine behavior even if they are empty placeholders: they are the sum of genetics- and experience-based rules. A specific individual sensation of red is assigned a specific link, and that link becomes associated with different other sensations, objects and events that have occurred in the history of the program.
Likewise, pain can be associated with certain physical events or objects (sharp, hot, traumatizing...) and certain emotions (also black boxes). A black box is nothing more than all the links it has with everything else.
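That last sentence translates almost directly into a data structure. A minimal sketch, with everything (names, weights, the reinforcement rule) invented for illustration:

using System.Collections.Generic;

// Sketch only: a black box has no content of its own; it is nothing more
// than the weighted links it accumulates to events, objects and other
// boxes over the program's history.
public class BlackBox
{
    public string Name; // "pain", "red", "fear"...
    public Dictionary<string, double> Links = new Dictionary<string, double>();

    public void Associate(string thing, double desirability)
    {
        // Repeated experiences reinforce (or erode) the association.
        Links.TryGetValue(thing, out var current);
        Links[thing] = current + desirability;
    }
}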
But since our main interest is botting, we are more interested in learning principles than in psychological idiosyncrasies.
The model takes that into account very easily: the bot/player does not want to die and wants to achieve its goals. What it needs to do is learn the kinds of situations that kill it, the better to avoid them, my dear, and look for the situations that help it achieve its goals.
Undesirable situations:
- confronting elite mobs, or more than x mobs at the same time
- straying too far from the path to the goal
- ...
Desirable situations are then easy to imagine.
What this means is that, in theory, we could create bots that play like cowards, always looking for the safest way to get what they want. Or pure daredevils who jump head-first into any situation, like an adrenaline addict. By the way, that is how HB usually plays.
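The whole event --> evaluation --> reaction model, with the situations listed above wired in, fits in a few lines. The thresholds are invented; tightening or loosening them is exactly the coward/daredevil dial:

using System;

// Sketch only: evaluate the current situation as desirable or not, and react.
public class SituationEvaluator
{
    public int MaxSimultaneousMobs = 3; // the coward lowers these...
    public double MaxPathStray = 30.0;  // ...the daredevil raises them (yards)

    public bool IsUndesirable(int mobCount, bool eliteMob, double strayDistance)
        => eliteMob
        || mobCount > MaxSimultaneousMobs
        || strayDistance > MaxPathStray;

    public string React(int mobCount, bool eliteMob, double strayDistance)
        => IsUndesirable(mobCount, eliteMob, strayDistance) ? "retreat" : "proceed";
}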
 
The ultimate goal of botting - as far as AI is concerned - is a toon that never dies. We are far from the finish line, I am afraid. And before you retort that dying is certainly not a problem, and that:
- not getting banned,
- leveling up fast,
- gathering mats and
- winning epic gear and gold...
are what makes botting worthwhile, know this: I agree with you.

But building a bot that never dies because it is too damn smart is what developers must constantly keep in mind if they want to improve the way other goals are achieved. Accepting that the toon sometimes must die is certainly not wrong; one must not try to achieve the end goal overnight. But taking death lightly is as bad in botting as it is in real life.
And I know that my toons, after enough experience in WoW gaming, died much less often when I was playing the game than they do now with HB.
 
I'm waiting for a game company to actually embrace botting and build entire scripting and coding support natively into their game client. That way people could program everything themselves.
 
Quote: "I'm waiting for a game company to actually embrace botting and build entire scripting and coding support natively into their game client. That way people could program everything themselves."
I don't know what to tell you. AI researchers very often use private servers to test their programs without fear of being banned. Game publishers want to make a profit, and gamers, not botters, are their source of income. With the new trend of in-game purchasing and the use of real money in-game, publishers will be even more inclined to defend against botters. I suppose that many gold-sellers are WoW employees; the temptation is just too great. But that would be another reason to go after botters.
 
Quote: "37) After resurrecting, wait until health is at 100% before engaging mobs again."

And if you die in Deepholm, do not drop from the sky when rezzing. You will just die again, or be killed immediately by the first mob.
 