torn between two decisions for a while. But unlike computer programs, they get tired of the uncertainty very quickly. They are then confronted with the following possibilities:
- choose a path at random. Or seemingly at random, with unconscious criteria making the final decision.
- run away from the conflicting situation. That is, not choose at all.
Easy. The way I see it, instead of always choosing the optimal solution, give each solution an "optimality weight": the bigger the weight, the higher the chance that solution gets selected.
In your example with two nearby mobs: if they are the same distance away (10 yards each), each gets weight 1/10 = 0.1. Note we take 1/10 rather than 10 as the weight, since a bigger weight should mean a closer mob (the sum does not have to be 1; here it is 0.2). Then compute R = 0.2 * RND() and select the first mob if R falls in 0-0.099999... and the second if it falls in 0.1-0.199999... (RND() returns a number between 0 and 0.99999..., never exactly 1).
Now if one mob is 11 yards away and the other is 10 yards away, the first has weight 1/11 = 0.0909 and the second 1/10 = 0.1, so the sum of the weights is 0.1909. The first one has a slightly lower chance of being selected, but even the mob that is further away (the less optimal solution) will sometimes be picked. That is good, because a character that always, perfectly selects the closer mob can look suspicious: humans don't carry a ruler, and even if they did, they make mistakes on "judgement calls".
Even if one mob is 1 yard away and another is 100 yards away, the second one will still be selected roughly 1% of the time. It might seem strange, but it would happen rarely, and humans are not perfect: maybe the player did not see the mob, or did not want to kill that specific mob "just because"...
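A minimal sketch of this roulette-wheel selection in Python (pick_target and its signature are my own invention, just to make the arithmetic above concrete):

```python
import random

def pick_target(distances):
    """Roulette-wheel selection: weight each candidate by 1/distance,
    so closer targets are more likely, but farther ones still get picked."""
    weights = [1.0 / d for d in distances]  # bigger weight = closer mob
    total = sum(weights)                    # the sum does not have to be 1
    r = total * random.random()             # random.random() is in [0, 1), never 1
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1                 # guard against float rounding

# Two mobs at 10 and 11 yards: the closer one wins ~52% of the time,
# the farther one still ~48% -- no suspicious "perfect" targeting.
print(pick_target([10.0, 11.0]))
```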
Also, weights don't have to be linear in distance; they can be exponential, or even based on multiple values / more complex formulas.
For example, the weight for each mob/node could depend on both distance and a "dynamic blacklist weight". I propose it as a way to replace blacklisting of the form "try 3 times, on failure blacklist for 10 min" with something like W = distanceW - failureRate for that specific node or mob (W = weight of the node; for a node, a failure means failing to gather it, for a mob, e.g. failing to reach it because of pathing issues).
failureRate = f(t1) + f(t2) + ... + f(tn), where t1, t2, ..., tn are how long ago, in seconds, each recorded failure happened (not just the last 5, but every failure we saved over the last 2 weeks, or whatever "memory period" we configured).
f(x) is a function that gives a penalty (a negative contribution to the node's weight) based on how long ago the failure to collect the node / kill the mob happened. So if a failure happened twice in the last 5 seconds, the penalty pushes W below 0 (the node will not even be tried), but if there were even 5 failures that all happened last week, the total penalty is tiny, W stays at, say, 0.9 of its plain distance weight, and we will try again.
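A sketch of how that could look, again in Python. The exponential shape of f(x), the half-life, and the constants are all my assumptions, since the proposal deliberately leaves f open:

```python
import time

# Hypothetical decay constants -- tune to taste.
PENALTY = 1.0                    # penalty of a failure that happened just now
HALF_LIFE = 3600.0               # seconds until a failure's penalty halves
MEMORY_PERIOD = 14 * 24 * 3600   # forget failures older than 2 weeks

def f(age_seconds):
    """Penalty contributed by one failure, decaying with its age."""
    if age_seconds > MEMORY_PERIOD:
        return 0.0
    return PENALTY * 0.5 ** (age_seconds / HALF_LIFE)

def node_weight(distance, failure_times, now=None):
    """W = distanceW - failureRate; a negative W means 'skip this node'."""
    now = time.time() if now is None else now
    distance_w = 1.0 / distance
    failure_rate = sum(f(now - t) for t in failure_times)
    return distance_w - failure_rate

# Two failures in the last 5 seconds push W well below zero (node skipped);
# five failures from a week ago barely dent the distance weight.
now = time.time()
print(node_weight(10.0, [now - 2, now - 5], now))        # ~ -1.9, skipped
print(node_weight(10.0, [now - 7*24*3600] * 5, now))     # ~ 0.1, tried again
```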
Unlike the human brain, computer programs running on a CPU are very limited in the number of possibilities they can take into consideration in real game time.
I was looking at AMD Mantle and the latency improvements it brings. With acceptable latency (still higher than staying on the CPU, but around 100 times better than DirectX/OpenGL/OpenCL) and the huge throughput the latest generation of GPUs brings to the table, we might be able to afford many more calculations for both pathfinding and decision making; a single 290X has 5.6 TFLOPS (MADD, exactly what we need for pathfinding and decision making).
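As a rough sanity check (my arithmetic, not from the post): 5.6 TFLOPS is about 5.6 × 10^12 operations per second, so at 60 frames per second that is roughly 9 × 10^10 operations per frame; even reserving just 1% of the GPU for the bot leaves around 10^9 operations per frame, orders of magnitude more than weighting a few hundred candidate nodes would ever need.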
As I have very often said, I really do not think that there is a general algorithm somewhere that could help us solve this particular dilemma.
Some algorithm based on human/bee behaviour could be good enough: not perfect, but still better than a fixed, pre-programmed one.
The problem for the devs is finding the time to translate this experience into an efficient program. And, as we all know, time is money.
True, but if botting brings in enough money to support you, you might as well invest the time to improve your "job".