Re: Physics navigation and other questions
Posted: Sun May 01, 2011 8:30 am
hehe. I'm currently working on a nasty bug in the zoom in/out functionality. Then it's on to TLPath integration, and then lights and sensors, which brings me to my next question: is there a somewhat efficient AND realistic way to implement visual and aural sensors?
Is there any kind of standard way of doing it? Googling around didn't turn up anything.
Right now I'm in the design phase: I've come up with a somewhat complicated equation for vision that takes into account the field of view, light conditions, NPC light sensitivity (the longer it stays in a particular light condition, the more it gets used to it), visual sensitivity zones (motion sensitivity is greater in our peripheral vision, whereas we have strong binocular vision for stationary objects only in a small arc right in front of us) and distance (phew!), and yields a chance of detection. All of that lives in a visual detection module. Then it's up to the visual recognition module to figure out what got detected: friend, foe, or object. That too is going to involve an awful lot of complexity, I'm sure. I'm planning on maintaining a model of the world for each NPC that gets constantly updated through its sensors.
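To make that a bit more concrete, here's a rough C++ sketch of the kind of detection-chance function I have in mind. Everything in it is a placeholder: the Observer/Target structs, the foveal arc, the weights and the falloff shapes are all stand-ins I'd still have to tune, not a worked-out model:

[code]
#include <algorithm>
#include <cmath>

// Hypothetical inputs; every field and constant below is a placeholder.
struct Observer {
    float facingAngle;     // world-space heading, radians
    float fovHalfAngle;    // half the field of view, radians
    float lightAdaptation; // 0..1, grows the longer the NPC stays in the same light
};

struct Target {
    float bearing;      // world-space angle from observer to target, radians
    float distance;     // world units
    float speed;        // world units/sec, drives peripheral motion sensitivity
    float illumination; // 0..1, how well lit the target currently is
};

// Chance (0..1) that the observer notices the target on this sensor tick.
float detectionChance(const Observer& obs, const Target& tgt, float maxViewDist)
{
    // Angle between the observer's facing and the target, wrapped to [0, pi].
    float offAxis = std::fabs(std::remainder(tgt.bearing - obs.facingAngle,
                                             6.2831853f));
    // Hard rejects: outside the FOV cone or beyond visual range.
    if (offAxis > obs.fovHalfAngle || tgt.distance > maxViewDist)
        return 0.0f;

    // Linear distance falloff (could just as well be quadratic).
    float distFactor = 1.0f - tgt.distance / maxViewDist;

    // Sensitivity zones: full static acuity in a narrow foveal arc,
    // mostly motion-driven sensitivity out in the periphery.
    const float fovealArc = 0.2f;                     // radians, placeholder
    float motion = std::min(tgt.speed / 5.0f, 1.0f);  // placeholder scaling
    float acuity = (offAxis < fovealArc) ? 1.0f : 0.25f + 0.75f * motion;

    // Light: target illumination, moderated by how adapted the eye is.
    float light = tgt.illumination * (0.5f + 0.5f * obs.lightAdaptation);

    return std::clamp(distFactor * acuity * light, 0.0f, 1.0f);
}
[/code]

The straight product of factors is just one way to combine them; they could be weighted or pushed through a sigmoid instead, with the result compared against a random roll each sensor tick.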
About the visual detection module: I'm thinking of running the whole updateSensors() function only once every 5 or more frames because of all that complexity. The idea is to cast direct lines from each NPC to all other NPCs and the player, and if the occlusion check comes back clear (no obstacles in the way), then calculate the big equation. There are also a lot of conditions that could simplify the math considerably. What are your opinions/experience on the matter? Am I approaching this the right way, or will I have to gut it in the end because of performance?
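Something like this is what I mean by the throttled update, again just a sketch: raycastClear() and the other hooks are hypothetical stubs standing in for whatever the engine actually exposes, and I've staggered the NPCs so the cost spreads across frames instead of spiking every 5th frame:

[code]
#include <cstddef>
#include <vector>

struct Npc {
    float x, y, z; // position; whatever else the game tracks
};

// Placeholder hooks; in the real thing these call into the engine and into
// the detection equation sketched above.
bool raycastClear(const Npc&, const Npc&) { return true; }     // stub
float detectionChance(const Npc&, const Npc&) { return 0.5f; } // stub
void noticeEntity(Npc&, const Npc&, float) { /* update the NPC's world model */ }

constexpr long SENSE_INTERVAL = 5; // frames between sensor refreshes

void updateSensors(std::vector<Npc>& npcs, long frame)
{
    for (std::size_t i = 0; i < npcs.size(); ++i) {
        // Stagger the work: NPC i refreshes only when (frame + i) hits the
        // interval, so each frame handles ~1/SENSE_INTERVAL of the NPCs.
        if ((frame + static_cast<long>(i)) % SENSE_INTERVAL != 0)
            continue;

        for (std::size_t j = 0; j < npcs.size(); ++j) {
            if (i == j)
                continue;
            // Cheap gate first: skip the big equation entirely when the
            // line of sight is blocked.
            if (!raycastClear(npcs[i], npcs[j]))
                continue;
            float chance = detectionChance(npcs[i], npcs[j]);
            if (chance > 0.0f)
                noticeEntity(npcs[i], npcs[j], chance);
        }
    }
}
[/code]

A cheap distance/FOV cull before the raycast would save even more, since raycasts aren't free either, and the aural sensors could probably share the same staggered loop with a different gate (attenuated loudness instead of line of sight).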