Navigation meshes and object recognition.
[I have been reading up on mesh navigation and pathfinding, but I have certainly not become an expert overnight. So, if I say something stupid, please correct me.]
If you have ever developed your own black-and-white pictures, you know that one way of intensifying the contrast of grey shots is to copy the negative a couple of times. This way you can turn even the most drab snapshot into an astonishing piece of art. You can also print the shot in negative: blacks become whites, and vice versa. Also a nice experiment. Mesh files are something like that: everything disappears except the walkable space. That is necessary to save memory and guarantee speed.
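The "everything disappears except the walkable space" idea can be sketched with a toy example. This is not any real mesh file format; all the names and shapes below are mine. The "mesh" is just a list of walkable triangles, and a walkability query is a point-in-triangle test:

```python
# Toy illustration of a navigation mesh: a list of walkable triangles.
# Anything not covered by a triangle simply is not stored at all.

def sign(p, a, b):
    # Cross-product sign: which side of segment a->b the point p lies on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def point_in_triangle(p, tri):
    a, b, c = tri
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    # p is inside (or on the edge) iff it is on the same side of all edges.
    return not (has_neg and has_pos)

def is_walkable(p, mesh):
    # The mesh stores only walkable space, so "not in any triangle"
    # is all we can ever learn about an obstacle.
    return any(point_in_triangle(p, tri) for tri in mesh)

# Two triangles covering the square from (0,0) to (10,10).
mesh = [((0, 0), (10, 0), (0, 10)), ((10, 0), (10, 10), (0, 10))]
print(is_walkable((5, 5), mesh))   # inside the mapped square -> True
print(is_walkable((15, 5), mesh))  # outside every triangle -> False
```

Note that the second query returns False whether (15, 5) is a wall, a cliff, or just an unmapped patch of ground: the mesh cannot tell these apart, which is exactly the problem discussed below.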
But there is a downside to mesh files, one every botter knows about: getting stuck because something has not been mapped properly or, just as likely, was added after the file was created. The choice of the right series of actions to get unstuck depends on the obstacle, and that is exactly the information that is missing. After all, the obstacle is supposed to be walkable space and, as such, is just part of the many possible paths the toon can follow. I thought that, somehow, the information could be read as a negative from the mesh file: if everything the mesh file contains is walkable, then it should be possible to get the contours of the obstacle. But I made a logical mistake. The contours of the obstacle would only be "present" if the obstacle were correctly mapped. But then the toon would not get stuck in the first place.
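Since an unmapped obstacle leaves no trace in the mesh, the only evidence of it is behavioural: the toon keeps receiving move commands but its position barely changes. A minimal, hypothetical stuck detector along those lines (the window size and distance threshold are invented) could look like this:

```python
# Hypothetical stuck detector: an unmapped obstacle leaves no trace in the
# mesh, so we watch behaviour instead -- if the toon has barely moved over
# the last few position samples, assume it is stuck.
from collections import deque

class StuckDetector:
    def __init__(self, window=5, min_dist=1.0):
        self.positions = deque(maxlen=window)  # rolling window of (x, y)
        self.min_dist = min_dist               # expected movement per window

    def update(self, pos):
        """Record the latest (x, y) sample; return True if we look stuck."""
        self.positions.append(pos)
        if len(self.positions) < self.positions.maxlen:
            return False  # not enough history yet
        (x0, y0), (x1, y1) = self.positions[0], self.positions[-1]
        moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        return moved < self.min_dist

detector = StuckDetector(window=5, min_dist=1.0)
for t in range(5):
    print(detector.update((0.1 * t, 0.0)))  # creeping along a wall
```

The last call returns True: over five samples the toon covered only 0.4 units, well under the threshold, so a wall-hugging or obstacle situation is assumed.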
I think now that the only information we have is what appears on the screen. That would mean that, to get the toon unstuck, HB would need to analyze, in real time, the center of the screen, where the toon always appears. Of course, if the obstacle is a long and very high cliff, the whole screen, and more, would need to be analyzed. But in that case we would be talking about a major flaw in the mesh file, not a stuck situation. So I will leave these extreme cases out of the picture, so to speak.
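As a rough sketch of "analyze the center of the screen": since the toon sits at a fixed screen position, one only needs to crop a region around the frame's center. The frame below is a plain nested list standing in for a real screenshot; actual capture would need a library such as PIL's ImageGrab, which I am deliberately leaving out.

```python
# Crop a square region around the center of a captured frame.
# The "frame" is a nested list of pixels; real capture is out of scope here.

def center_region(frame, half=2):
    """Return a (2*half+1)-square crop around the frame's center pixel."""
    h, w = len(frame), len(frame[0])
    cy, cx = h // 2, w // 2
    return [row[cx - half:cx + half + 1] for row in frame[cy - half:cy + half + 1]]

# An 11x11 dummy frame where each "pixel" records its own coordinates.
frame = [[(x, y) for x in range(11)] for y in range(11)]
crop = center_region(frame, half=1)
print(len(crop), len(crop[0]))  # 3 3
print(crop[1][1])               # the exact center pixel: (5, 5)
```

Only this small crop would be fed to whatever obstacle analysis follows, which keeps the per-frame cost low enough for real time.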
Object recognition is already a few decades old. One of the pioneers, a genius who, unluckily for the world, died very young, was David Marr. His book, "Vision" (1982), is considered a classic and a must-read for anyone interested in computer vision. There are very few theories that do not, one way or another, make use of his insights. But I am afraid that it is much too complicated and advanced for what we are trying to do: to unstuck a toon.
What I am thinking about is much simpler: the obstacles are usually a different color than the walkable space. So, if it were possible, somehow, to superimpose both groups of colors, we could update the walkable space and compute a new path fast enough.
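The color idea could look roughly like this, with everything invented for illustration (the "ground" color, the threshold, the grid size): classify pixels as walkable by color distance, mark the matching grid cells as blocked, and re-path around them with a plain breadth-first search.

```python
# Sketch of the colour idea: pixels far from the assumed ground colour are
# treated as obstacle, the matching grid cells are blocked, and a plain BFS
# finds a new path around them. All colours and thresholds are invented.
from collections import deque

WALKABLE_RGB = (90, 140, 60)  # assumed "ground" colour
THRESHOLD = 60                # max colour distance to still count as walkable

def looks_walkable(pixel):
    return sum((a - b) ** 2 for a, b in zip(pixel, WALKABLE_RGB)) ** 0.5 < THRESHOLD

def bfs_path(blocked, start, goal, size):
    """Shortest 4-connected path on a size x size grid avoiding blocked cells."""
    frontier, came = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in came):
                came[nxt] = cur
                frontier.append(nxt)
    return None  # no path exists

print(looks_walkable((95, 135, 65)))    # close to ground colour -> True
print(looks_walkable((120, 120, 120)))  # grey rock -> False

# Suppose colour analysis flagged a wall of rock-coloured cells at x = 2.
blocked = {(2, 0), (2, 1), (2, 2), (2, 3)}
print(bfs_path(blocked, (0, 0), (4, 0), 5))  # detours around the wall
```

A single reference color is obviously too crude for real game scenery (lighting, textures), but it shows the shape of the pipeline: segment by color, overlay on the grid, re-path.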
But like I said, this is new territory for me, even though I have been researching vision for quite a long time, within my study of (artificial) intelligence and consciousness.