
The Turing test

But not the other players, I'm afraid.
When you're writing a plugin/botbase/behavior you can see players too (as long as they are not stealthed rogues, shadowmelded, or phased).

That means that we could, theoretically, target any mob in any order, if we wanted to. I wonder why that is still not possible.

It is possible. My multiquest plugin can target anything in any order; try tanking in a dungeon, and when someone pulls threat from you the bot will also start switching targets like crazy... OFC Singular generally does not do it by default when it is not needed.
 
Which plugin is that?
edit: to be clear, we are talking about targeting quest mobs in any order, not first the mobs of quest 1, then those of quest 2..., or vice versa. And how do you get that done in profiles? I did not think that was possible, and that is the whole point.
 
Seagulls and fuzzy logic. What do they have in common? Nothing really, as far as I know. When I go shopping, I cross a small park, and there you have city ducks and city seagulls. About the latter: when you approach them closer than, let's say, 3 meters, they fly away. Of course, it is never precisely 3 meters, so I suppose one could say that they use fuzzy logic for their decision making. After all, "more or less" is exactly what fuzzy logic is all about. The point, though, is that it is not the same "more or less" for all seagulls. Some are cowards and fly away first, while a single hero looks at you challengingly and waits till the last moment to show you its butt in the air. And then again, the same hero, if it ever has a dubious encounter (with bored teenagers, say), will probably become a coward afterwards. So fuzzy logic alone would not cut it. We would also need some genetic algorithms, not to mention some learning rules. Sigh, I had no idea seagulls were so complicated. No wonder botting programs are such a pain to write.
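A toy version of that, with every number invented: each gull gets its own noisy flee distance (the fuzzy part), and a bad encounter shifts it upward (a crude learning rule).

```csharp
using System;

// Toy model of the seagull example: a per-bird "flee distance" with noise
// (the fuzzy part) and a simple learning rule after a bad encounter.
class Seagull
{
    private static readonly Random Rng = new Random();
    private double baseFleeDistance;   // meters; differs per bird

    public Seagull(double baseFleeDistance) =>
        this.baseFleeDistance = baseFleeDistance;

    // Fuzzy decision: the effective threshold varies a little every time.
    public bool ShouldFlee(double humanDistance) =>
        humanDistance < baseFleeDistance + (Rng.NextDouble() - 0.5);

    // Learning rule: a dubious encounter makes the bird more cowardly.
    public void OnBadEncounter() => baseFleeDistance += 0.5;
}

class Demo
{
    static void Main()
    {
        var hero = new Seagull(1.0);    // waits till the last moment
        var coward = new Seagull(5.0);  // flies away first
        Console.WriteLine(hero.ShouldFlee(3.0));    // usually False
        Console.WriteLine(coward.ShouldFlee(3.0));  // usually True
        hero.OnBadEncounter();                      // bored teenagers...
    }
}
```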
 
Which plugin is that?
edit: to be clear, we are talking about targeting quest mobs in any order, not first the mobs of quest 1, then those of quest 2..., or vice versa. And how do you get that done in profiles? I did not think that was possible, and that is the whole point.

Sorry, not a plugin, a botbase, and it does not support XML profiles at all; quests have to be written in C#. And yes, it does exactly that: it does quests in parallel, pulling multiple mobs from multiple quests and AoEing them. I made it for dailies, since in the case of dailies it can mean a 2-3x increase in speed.

Sorry, misunderstanding: I did not mean that quest (XML) profiles can do that (I know they can't, because that is the way they are made, unfortunately); I meant that HB is able to do that, but the quest bot does not use that ability.
 
Is your botbase shareable, or only for private use? Which would of course be completely understandable.
 
Sorry, it is private. But if I could do it in a week, I am sure whoever is maintaining the quest bot is able to do it too, in even less time.
 
To do or not to do... Humans, like toons, are sometimes confronted with impossible dilemmas. Every choice seems the wrong one, or, more positively, no choice seems any better than any other. Toons tend, in these sorts of situations, to get stuck in an infinite loop. Humans will also go back and forth, torn between 2 decisions, for a while. But unlike computer programs, they get tired of the uncertainty really quickly. They are then confronted with the following possibilities:
- choose a path at random. Or seemingly at random, unconscious criteria taking care of the final decision.
- run away from the conflicting situation. That is, not choose at all.
The last possibility can be very efficient in computer terms. Instead of computing the different possibilities, the toon could just temporarily discard the conflicting choices, in the hope that the problem will solve itself. That would certainly be the case if it is stuck at equal distance between 2 mobs. Choosing a third path would be a way of indirectly getting closer to one of the mobs.
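A minimal sketch of that escape route (the names and the epsilon threshold are invented for illustration): when the two best candidates score essentially the same, drop both for the moment and head for a third point, letting the changing distances break the tie on the next evaluation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of "not choosing at all": if the two best options are effectively
// tied, temporarily discard them and take a third path; moving along it
// soon makes one of the original options clearly better.
static class TieBreaker
{
    public static T ChooseOrDefer<T>(List<(T option, double score)> scored,
                                     Func<T> thirdPath, double epsilon = 0.05)
    {
        var ordered = scored.OrderByDescending(s => s.score).ToList();
        if (ordered.Count >= 2 &&
            Math.Abs(ordered[0].score - ordered[1].score) < epsilon)
        {
            return thirdPath();   // stuck between two mobs: walk elsewhere
        }
        return ordered[0].option; // a clear winner exists
    }
}
```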
Unlike the human brain, computer programs running on the CPU are very limited in the number of possibilities they can take into consideration in real game time. That is what programs do: they reduce the complexity of the world the toon has to "live" in. The trick is to find the right balance: too much reduction screams BOT!, and not enough can do the same. Like I have said very often, I really do not think that there is a general algorithm somewhere that could help us solve this particular dilemma. Only experience can provide us with rules that we can apply to more than one situation. The problem for the devs is finding the time to translate this experience into an efficient program. And, as we all know, time is money.
 
torn between 2 decisions, for a while. But unlike computer programs, they get tired of the uncertainty really quickly. They are then confronted with the following possibilities:
- choose a path at random. Or seemingly at random, unconscious criteria taking care of the final decision.
- run away from the conflicting situation. That is, not choose at all.
Easy. The way I see it: instead of always choosing the optimal solution, give each solution an "optimality weight", and the bigger the weight, the higher the chance that solution will be selected.

In your example with 2 close-by mobs: if they are the same distance away (10 yards each), each has weight 1/10 = 0.1. Note that we take 1/10 instead of 10 for the weight, since bigger weight = closer mob (the sum does not have to be 1; in this case it is 0.2). Then compute R = 0.2 * RND(), and if R falls in 0-0.099999 select the first, and if it falls in 0.1-0.199999 select the second (RND() returns a number between 0 and 0.99999, and never 1).

Now, if one node is 11 yards away and the other 10 yards away, the first has weight 1/11 ≈ 0.0909 and the second 1/10 = 0.1, so the sum of the weights is 0.1909. The first one will have a slightly lower chance of being selected, but even the one that is farther away will sometimes be selected (the less optimal solution), which is good: a character that always and perfectly selects the closer mob can be kind of suspicious, since humans don't have a ruler, and even if they did, they make mistakes when making "judgement calls".

Even if one mob is 1 yard away and another 100 yards away, the second one will still be selected approx. 1% of the time. It might seem strange, but it would happen rarely, and humans are not perfect: maybe he did not see the mob, maybe he did not want to kill that specific mob "just because"...
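In code, this scheme is just roulette-wheel selection over 1/distance weights; a minimal sketch (the mob list and distance function are left abstract):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Roulette-wheel selection: weight = 1 / distance, so closer mobs are
// picked more often, but a farther mob always keeps a nonzero chance.
static class WeightedPicker
{
    private static readonly Random Rng = new Random();

    public static T Pick<T>(IList<T> candidates, Func<T, double> distance)
    {
        var weights = candidates.Select(c => 1.0 / distance(c)).ToArray();
        double r = weights.Sum() * Rng.NextDouble();   // R = sum * RND()
        for (int i = 0; i < candidates.Count; i++)
        {
            r -= weights[i];
            if (r < 0) return candidates[i];           // landed in this slice
        }
        return candidates[candidates.Count - 1];       // guard against rounding
    }
}
```

With distances of 1 and 100 yards this picks the far mob roughly 1% of the time, matching the numbers above.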

Also, the weights don't have to be linear in distance; they can be exponential, or even based on multiple values/complex formulas.

For example, the weight for each mob/node could depend on both distance and a "dynamic blacklist weight". I propose it as a way to replace blacklisting of the "try 3 times, on failure blacklist for 10 min" kind with something like W = distanceW − C1·failureRate for this specific node or mob (W = weight of the node, C1 = some constant; for a node, a failure means failing to gather it, for a mob, failing to kill it, e.g. because it "resets" due to pathing issues).

failureRate = f(t1) + f(t2) + ... + f(tn), where t1, t2, ..., tn are how long ago, in seconds, each failure happened (not just the last 5, but all the failures we saved over the last 2 weeks, or whatever "memory period" we configured).

f(x) is a function that gives a penalty based on how long ago the failure to collect the node / kill the mob happened. So if a failure happened twice in the last 5 seconds, the resulting weight will drop below 0 (the node will not even be tried), but if there were even 5 failures that all happened last week, the penalty will be tiny, the weight will still be, say, 0.9 of its distance value, and we will try again.
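A sketch of that dynamic blacklist (the exponential decay curve, its time constant, and C1 = 1 are arbitrary choices made for the sketch; only the shape matters):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Dynamic blacklisting: instead of "fail 3 times -> blacklist for 10 min",
// every recorded failure contributes a penalty that fades with age.
class FailureMemory
{
    private readonly List<DateTime> failures = new List<DateTime>();
    private static readonly TimeSpan MemoryPeriod = TimeSpan.FromDays(14);

    public void RecordFailure() => failures.Add(DateTime.UtcNow);

    // f(t): penalty of one failure t seconds ago; exponential decay with
    // a 300-second time constant, picked arbitrarily for the sketch.
    private static double Penalty(double secondsAgo) =>
        Math.Exp(-secondsAgo / 300.0);

    public double FailureRate()
    {
        DateTime now = DateTime.UtcNow;
        failures.RemoveAll(t => now - t > MemoryPeriod);  // forget old ones
        return failures.Sum(t => Penalty((now - t).TotalSeconds));
    }

    // W = distanceW - failureRate (C1 taken as 1 here). Two failures in the
    // last few seconds push W below 0, so the node is skipped; five failures
    // from last week contribute almost nothing, so the node is tried again.
    public double Weight(double distanceWeight) =>
        distanceWeight - FailureRate();
}
```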

Unlike the human brain, computer programs running on the CPU are very limited in the number of possibilities they can take into consideration in real game time.

I was looking at AMD Mantle and the latency improvements it brings. With acceptable latency (still higher than staying on the CPU, but around 100 times better than DirectX/OpenGL/OpenCL) and the huge throughput last-generation GPUs bring to the table, we might be able to afford many more calculations for both pathfinding and decision making; a single 290X has 5.6 TFLOPS (MADD, exactly what we need for pathfinding and decision making).


Like I have said very often, I really do not think that there is a general algorithm somewhere that could help us solve this particular dilemma.
Some algorithm based on human/bee behaviour could be good enough; not perfect, but still better than a fixed/programmed one.


The problem for the devs is finding the time to translate this experience into an efficient program. And, as we all know, time is money.
True, but if botting brings you enough money to support yourself, you might as well invest the time to improve your "job".
 
1) Weights are what is used in traditional decision trees/patterns, as well as in neural networks. They promise to fulfill the same function as emotions and (moral) values do for humans. One big drawback is their lack of meaning for the acting agent, the toon in our case. They represent the standpoint of the (programming) observer. I think they should be replaced by agent-centric values. This is the continuation of a previous discussion about the so-called consciousness/awareness of programs. Which I believe is impossible as such. By faking such an awareness we would still be using weights. Only, instead of statistical/mathematical criteria, they would be based on values which we deem important for the toon:
- survival,
- non-detection,
- ease of access,
- ...
Those "values", instincts, or whatever you want to call them, would form the pillars on which every (new) behavior is based. Learning can happen, just like with living beings, through direct experience, or through the experience of others.
The difference with the traditional approach is that instead of giving weights directly to behaviors, we give behaviors weights relative to values. The idea is somehow similar to the intuition behind the creation of the so-called "hidden layer" in neural networks.
A complete exposition of this theme would take hundreds of pages, and we would most probably not exhaust the subject. I will then just reiterate the necessity of a "meaningful" system instead of a statistical model based on implicit values. For instance, you are juggling numbers to define an optimal solution, whereas I would advocate analyzing the situation in which the toon finds itself and, based on general principles and specific experience (empirical data), making a decision.
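To make the contrast concrete, here is a toy sketch of such a value layer (the numbers and the two behaviors are pure invention; an illustration, not a worked-out theory): behaviors are not weighted directly, they are scored through the values they serve.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Agent-centric values as a hidden layer: the situation is scored per
// value, and each behavior is weighted by how much it serves each value.
class ValueLayer
{
    // How strongly the current situation engages each value (0..1).
    private static Dictionary<string, double> ScoreSituation(
        double healthPct, int playersNearby)
    {
        return new Dictionary<string, double>
        {
            ["survival"]     = 1.0 - healthPct,             // low health -> high
            ["nonDetection"] = Math.Min(1.0, playersNearby / 5.0),
            ["easeOfAccess"] = 0.5,                         // placeholder
        };
    }

    // How much each behavior serves each value (hand-tuned for the sketch).
    private static readonly Dictionary<string, Dictionary<string, double>>
        BehaviorValueWeights = new Dictionary<string, Dictionary<string, double>>
        {
            ["fight"] = new Dictionary<string, double>
                { ["survival"] = -0.5, ["nonDetection"] = -0.2, ["easeOfAccess"] = 0.8 },
            ["flee"] = new Dictionary<string, double>
                { ["survival"] = 0.9, ["nonDetection"] = 0.4, ["easeOfAccess"] = -0.3 },
        };

    // Pick the behavior whose value-weighted score is highest.
    public static string ChooseBehavior(double healthPct, int playersNearby)
    {
        var values = ScoreSituation(healthPct, playersNearby);
        return BehaviorValueWeights
            .OrderByDescending(b => b.Value.Sum(v => v.Value * values[v.Key]))
            .First().Key;
    }
}
```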

2) Mantle, CUDA, or any GPU-based computing is definitely the way of the future. That alone would give meaning to my previous comment. In a CPU-centered program those considerations are in practice meaningless.

3) I look forward to a GPU-based version of Honorbuddy.
 
2) Mantle, CUDA, or any GPU-based computing is definitely the way of the future. That alone would give meaning to my previous comment. In a CPU-centered program those considerations are in practice meaningless.
True. I would just prioritize Mantle, because it reduces GPU call latency a lot compared to OpenGL/OpenCL/DirectX, so it is possible to call it as some kind of "dll".


3) I look forward to a GPU-based version of Honorbuddy.
There is really no need for that. All HB does is read some variables / listen to some events in one memory space (WoW) and make them available to us in another (the C# bot code). It also does pathfinding, but we are already talking about replacing HB's pathfinding with our own custom one.

In other words, HB would more or less be some kind of API that we call, and the parts we do use would mostly be copying memory from one process to the other, so not really CPU-intensive, nor good candidates for acceleration by moving to the GPU.

What would need to be on the GPU is the neural network, pathfinding, and in general most of the decision making (I assume that is the part that gets my CPU to 25% when I do raids with HB), while the CPU would be there for WoW itself and any HB processing that is needed.

Also, all this should be done by the community, not the HB team, so that it can stay open source, because huge parts of it are usable for other games, both from the HB company and from other companies/people. Bots can be just APIs/shells for reading game state and sending inputs,
and the AI library (running mostly on the GPU) would be used by some kind of user script (hopefully C# or C++) written by botters.

So botters would use 3 parts:
- a bot for a specific game, bought from some company or person, usually closed source and very game-specific;
- an open-source AI library, running mostly on the GPU, not game-specific;
- a script in C# or XML or whatever format that sets the goals, objectives, priorities and everything else for the AI library, written by the user himself/herself and/or some community person like Kick/Botanist (very game-specific code).

(The AI library would probably be included with the bots so that the user does not have to download it separately, OFC, but it would be open source.)
Also, multiple "script" packages with user code should be composable/mergeable, so that a repair addon can be written by one person while the quest behavior is written by a second, the quest profile by a third, and the interrupt handler/CC by a fourth, and the user can run all of them at the same time.
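The seams might look something like this (every name here is invented; a sketch of the split, not a design):

```csharp
using System.Collections.Generic;

// 1) Closed-source, game-specific shell: reads game state, sends inputs.
public interface IGameShell
{
    GameState ReadState();
    void SendInput(BotAction action);
}

// 2) Open-source, game-agnostic AI library (GPU-heavy internals).
public interface IAiLibrary
{
    BotAction Decide(GameState state, IReadOnlyList<Goal> goals);
}

// 3) User/community script packages: each contributes goals (repair,
// quests, interrupts/CC), and their outputs are merged into one goal list.
public interface IScriptPackage
{
    IEnumerable<Goal> ContributeGoals(GameState state);
}

// Placeholder types so the sketch is self-contained.
public sealed class GameState { }
public sealed class BotAction { }
public sealed class Goal { public int Priority; }
```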
 
I look forward to your contributions to an open source bot. People, and certainly me, can learn very much from you.
 
Unfortunately, while I do believe in open source and in using open source, I personally don't like working for free,
so when I make something like that I will probably keep it for myself (I know, I am "stingy", it's my character flaw :) ).

On the other hand, I am willing to give ideas to other people, so you will still see me in this thread.
 
That is also a valuable form of contributing. I do not consider myself a programmer, so I suppose I am doing the same as you in a way, or at least trying to.
 
A few posts ago I talked about the receptive fields of neurons, and how it is still unclear how they could help us understand the way the brain recognizes objects. Some neurons react to bars whose inclination falls within a certain range, for instance more or less vertical, while others react only to horizontal bars. There are more specific neurons, each with its own receptive field, but I would like to keep it simple and stick with the fact that neurons do not seem to react to precise patterns, but rather to a family of patterns.
I might have an explanation I have not encountered before (which does not mean it does not already exist), so you are reading this at your own peril!
I have been thinking for quite a while about the seemingly obvious problem of how we are able to recognize, for instance, two different circles as circles. You remember the story of the pigeons capable of abstract thought? Well, how come pigeons and humans are capable of recognizing the greater circle in different situations? First of all, how do we recognize the fact that we are dealing with the same kind of objects? It might seem a dumb question, but when you think how difficult it is for a computer program to recognize two different, handwritten, instances of the same letter, you will understand that there is nothing obvious about it.
Two differently written B's are easily recognizable for a human, but an OCR (Optical Character Recognition) program works best if both patterns are already known. That is, if the problem has already been solved by humans.
Even in your own handwriting, the same letters are written slightly differently each time, and it takes a smart program to ignore the differences and concentrate on the relevant similarities.
What does that have to do with the neurons' receptive fields? Well, it is a long story, so I will give you the short version.
The fact that neurons react to slightly different inclinations or curves or whatever is what makes it possible for living organisms, like pigeons and humans, to discard small, or even not so small, differences between objects. There is no OCR program in the brain, just neurons that react the same way to similar objects.
So, in a way, receptive fields are not really a means to identify objects, but rather to recognize the kind they belong to. This is the difference between being able to read an unknown handwriting, and being able to distinguish it from other handwritings as a unique pattern. Anyone who can read is capable, most of the time, of doing the first, but it takes an expert to do the second.
To understand what I mean, try drawing circles of the same and of different sizes on a sheet of paper, and imagine a computer trying to make sense out of them. Squiggles would also be a good example. One thing is, I think, obvious: nobody could (easily) identify any individual circle or squiggle. But they would have no trouble recognizing any of them.

edit: That receptive fields, as they are presented in textbooks, do not explain how we identify objects is, I think, pretty obvious. After all, once we have learned the difference between two B's, we have no trouble distinguishing between them.
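For what it is worth, the "family of patterns" idea can be written down as a tuning curve: a unit that responds maximally to one inclination but still fires for nearby ones (a toy sketch; the Gaussian shape and width are arbitrary):

```csharp
using System;

// Toy receptive field: responds to a *family* of inclinations around a
// preferred angle, not to one precise pattern (a Gaussian tuning curve).
class OrientationUnit
{
    private readonly double preferredDeg;
    private readonly double sigmaDeg;

    public OrientationUnit(double preferredDeg, double sigmaDeg = 15.0)
    {
        this.preferredDeg = preferredDeg;
        this.sigmaDeg = sigmaDeg;
    }

    // 1.0 at the preferred angle, falling off smoothly around it: two
    // slightly different bars land in the same family.
    public double Response(double barAngleDeg)
    {
        double d = barAngleDeg - preferredDeg;
        return Math.Exp(-(d * d) / (2 * sigmaDeg * sigmaDeg));
    }
}
```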
 
The OCR analogy can maybe help us understand what kind of operations take place in the retina. That is, how information is coded before being sent to the rest of the brain (the retina is considered a part of the brain). Two versions of the same letter must not only be recognized as being similar, they must also be distinguished from each other. This might mean that at least 2 kinds of neurons are involved: one with a variable receptive field, and one with a precise one.
Another operation that is probably performed at this level is what is called color constancy. Personally, I think that this operation has at least 2 dimensions:
1) neutralization of luminance, or light intensity: bright red must be recognizable as belonging to the same family as a red that is less intensely lit.
2) color nuances: darker and lighter red, under the same lighting conditions, must also be recognized as family members.

We find here the same distinction between "recognizing" an element as belonging to a specific group, and "identifying" it as a unique element.
The advantage of these two functions is that they do not involve any kind of intelligence at the level of the retina. All you need are neurons that react to specific stimuli, something that evolution can easily tinker together, given time.
It is of course possible that these two functions are fulfilled by the same kind of neurons, or by even more kinds, in different configurations. That is a question that only empirical research can answer.
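For the first dimension, a crude way to neutralize light intensity is to divide each channel by the total, so a brightly lit and a dimly lit patch of the same red map to nearly the same chromaticity (a sketch that ignores gamma and everything real color science knows):

```csharp
// Crude luminance neutralization: (R, G, B) -> chromaticity (r, g).
static class Chromaticity
{
    public static (double r, double g) Normalize(byte R, byte G, byte B)
    {
        double sum = R + G + B;
        if (sum == 0) return (1.0 / 3, 1.0 / 3);  // pure black: no hue info
        return (R / sum, G / sum);                // b = 1 - r - g is implied
    }
}

// Bright red (200, 40, 40) and dim red (100, 20, 20) both map to roughly
// (0.71, 0.14): same family. A different *nuance* of red, say (150, 80, 80),
// maps to about (0.48, 0.26) and stays distinguishable.
```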

 
While Open Source is good in theory, it has proven to be pretty much a failure in this community. The only projects that have managed to really stay active are closed source or paid on some level.

Just to clarify: the main reason it should be open source is that pathfinding, neural networks and similar stuff are very game-independent, and there is really no point in repeating the same work for every game or every bot company out there. It's like using OpenAL for sound, or PhysX for physics, or Unity as the engine for your game. I guess it could also be closed source, but then how are you going to share this very common code between different bots and games?
 
Open source is an expression that is used almost exclusively for software. What we must not forget is that science in general can only progress in a free environment where researchers can publish their results and be evaluated by their peers. This openness neither contradicts nor prevents the existence of proprietary inventions and programs. In fact, the latter rely on the former. We need the same openness in Botland to achieve true progress. Nobody asks you to reveal your trade secrets, but marking everything as secret is the best way not only to keep reinventing the wheel, but also to keep repeating the same mistakes over and over again. This is, as far as I am concerned, pretty obvious in the case of Honorbuddy. Phelon said:
"You also have problems with API changing and things getting shuffled or not working for an extended period of time in HB. It is frustrating to say the least to make an on going project for this bot."
I say: this is what you get when you declare everything a secret. The problems the HB team is facing are specific neither to HB nor to WoW. An open discussion of the algorithms used by both parties would encourage more people to look for better algorithms and advance botting in general.
This is what is happening with object recognition and learning algorithms, just to name a few. The field of AI, of which botters are a part, is teeming with lively discussions, bringing new insights every day. Only the botting community is still caught in its web of secrecy and shortsightedness.

edit: This is what I find most frustrating in a site like ownedcore. They remind me of the priestly castes of antiquity, with their secrecy and elitism. Anyone looking to learn about botting is rebuked with laughter and derision. First you must prove yourself worthy of membership by discovering the sacred scrolls on your own. And once you have done that, do not hope for more help from your new brethren. Be glad if they lower themselves to throwing a few hints in your direction. It is this mentality that determines the pathetic quality of botting programs. In fact, their high priest, Apoc (who started beautifully by offering the results of his research to the community), states it explicitly: noobs, do not bother us with your petty questions! Go fight the dragons first, and then we will talk... Maybe.
 
A very interesting and thorough textbook is Sensation and Perception by Bruce Goldstein. It also shows perfectly the pattern I described: a complete explanation of receptive fields (much less simplistic than mine), combined with the complete absence of any explanation of how we perceive objects. It teaches you a lot about neurons and the different parts of the brain associated with vision. It also gives many examples of illusions that are explained by receptive fields, convergence, and lateral inhibition (which I did not mention at all). In all, it is, like I said, very thorough.
One problem that researchers face when studying vision is that they do not (yet) know where to look for the brain-copy of the scene that the subject is watching at a given moment. Suppose you were wearing a futuristic brain-imaging headset that registered everything you look at and saved it to disk. It would then not be too hard to look for correspondences between the brain-copy and the real scene. There is of course the phenomenon called retinotopy, which is the projection of the neurons of the retina onto the brain. But that does not tell us much about the end product.
One could say that, as long as brain imaging has not attained a resolution high enough to register each neuron individually, everything you read about vision, or any other brain function, is highly suspect.
Nowadays, almost every researcher considers the brain a kind of (parallel) computer. So they are all looking for programs and algorithms that could explain how the brain works. I think they should give up this model and turn back to the one used in the seventeenth century: God as a clockmaker. Looking at the brain as a very ingenious and complex mechanical clock would at least procure a fresh perspective.
 
You may think that my remarks are all theoretical or philosophical, and you would be right, partially. I do not always have botting in mind when I am thinking about brain processes, but the subject is always present in the background.
First, let me state that for object recognition to be possible at all, we need an FPS setting: the camera has to be zoomed in just far enough to hide the toon and show what it is facing. Second, the camera has to be adjusted each time to show what the toon is facing. WoW zooms out automatically and does not follow the FPS perspective at all times.
Now, let us see how to use all the remarks I made about vision.
1) objects, shapes and perspectives can all be considered as color (or gray) configurations;
2) objects are grouped in families to which neurons respond in a similar way;
3) vision cannot be considered in isolation from other brain processes.
That last point is really a pain. How are we to begin if everything is connected to everything?
Well, let us be practical. There are WoW databases that we can use, and we can, hopefully, complement them with our own data.
Let us say you want to identify a specific npc in the toon's field of vision.
You can take the shot from the database and compare it to what is shown in the center of the screen (do not forget to adjust the camera!). That is, unluckily, easier said than done. There are hundreds of articles on the subject, and they all show that even such a direct comparison is far from simple.
If the npc is not centered in the right way, other parts of the environment might get confused with the npc itself and throw any analysis astray. This should remind us of the problem each baby faces when it is looking up at its mother while sucking at the breast. Where does her face start, and where does it end? There is no way for the baby to resolve this conundrum by considering only that scene. It has to see its mother from different angles, under different lighting conditions, and surrounded by different objects. Only then will it eventually be able to discriminate between the mother and other objects.
For our program, that would mean different camera angles and zooms. Which of course creates innumerable new problems, one of them being the fact that not every object looks alike seen from different angles. Apparently we would need many shots of the same npc, taken from different angles and at different zooms, to be able to recognize it in different situations.
But let's not forget rule nr. 3, which states that vision cannot be isolated from other brain processes. What if this were not purely a vision problem? Who says that we actually recognize an object whatever the angle? That may be the case with familiar objects and shapes, but what about objects we have seen only once or twice? Are we using features common to all these angles to recognize/identify an object? Maybe, but there is no consensus on what those features could be, nor on how we put them together to form objects. So, let us go back to the idea of vision as a partial process. Suppose we are talking to each other, and then we start walking alongside each other. My view of you is constantly changing, and it would be hard to believe that I am recognizing/identifying each angle. But I know who I am talking to, so I do not need to rely on my vision to tell me it is still you.
So, for the sake of simplicity, let us assume that extra-visual activities are necessary for recognizing/identifying an object. In this case, we want to be facing the npc before we can say anything about it. The same way we would walk up to a person in a bar whom we think we have recognized from the back, and look at their face to make sure.
Now that we are facing the npc, and the shot is well centered, we have to determine whether it is the right one.
For that, we will first have to determine the family the object belongs to: is it a humanoid, an inanimate object, a beast...?
That will help us distinguish it from the background and from other possible kinds. Which means that we will need a general description of the kind of objects the npc belongs to. For both actions, the principle of similarity can be applied.
We must not forget that we are not trying to emulate the brain, but only a model of the brain. So even if we are biologically wrong, if it helps us solve the problem, that is perfectly acceptable.
A shot of the npc must be analyzed, and similarity principles must be distilled from it and from other, similar shots/npcs. That is, I think, the most intensive part of the project. Once we have our similarity principles, it should not be that difficult to find identifying characteristics of our npc.
This is of course all theoretical. I wonder if anyone is up to the challenge of trying to write a program for this?
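As a very first step, the "compare the database shot to the center of the screen" part could start as plain normalized cross-correlation on grayscale patches; a sketch (everything around it, capturing and centering the shot, is left out):

```csharp
using System;

// Normalized cross-correlation between a stored template (database shot)
// and an equally sized patch cut from the center of the screen.
// Returns a value in [-1, 1]; close to 1 means "probably the same npc".
static class TemplateMatcher
{
    public static double Ncc(double[,] template, double[,] patch)
    {
        int h = template.GetLength(0), w = template.GetLength(1);
        double meanT = Mean(template), meanP = Mean(patch);
        double num = 0, varT = 0, varP = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                double dt = template[y, x] - meanT;
                double dp = patch[y, x] - meanP;
                num += dt * dp;
                varT += dt * dt;
                varP += dp * dp;
            }
        double denom = Math.Sqrt(varT * varP);
        return denom == 0 ? 0 : num / denom;
    }

    private static double Mean(double[,] img)
    {
        double sum = 0;
        foreach (double v in img) sum += v;
        return sum / img.Length;
    }
}
```

Running this over several stored angles and zooms of the same npc and keeping the best score would be the brute-force version of the "many shots" idea above; the hard part, exactly as described, is everything around it.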
 