What we have covered since the midterm exam: Chapters 7, 8, 9, 10, and additional material.

Chapter 7
    the Raven game
    4 types of weapons; what happens when a projectile hits a robot
    triggers; examples; giver triggers
    steering behaviors: seek, arrive, wall avoidance, separation
    perception: sensory omniscience vs. sensory nescience; the need for short-term memory
    selecting a target
    how regulators allow different components to update at a slower rate than the game update rate
    the update method

Chapter 8
    navgraphs: coarse-grained graphs, fine-grained graphs
    flood-fill algorithms; line of sight; the cell-space method
    path finding, using both A* and Dijkstra; edge annotations
    general approach to get from A to B when A and B are not on the navgraph
    general approach to find the nearest target of a given type
    path smoothing
    reducing CPU overhead; shortest-path lookup table
    time-sliced path planning; CycleOnce
    hierarchical planning
    sticky situations and detecting them

Chapter 9
    goal-driven behavior: atomic goals, composite goals, the goal stack
    example atomic goals: Wander, TraverseEdge, SeekToPosition, DodgeSideToSide
    example composite goals: FollowPath, MoveToPosition, AttackTarget, Goal_NegotiateDoor
    goal arbitration: Goal_Think, six strategy goals, computing desirability
    use of random numbers to give robots "personality"
    using a queue rather than a stack to process commands as goals

Chapter 10
    fuzzy logic: set membership, crisp sets, degree of membership
    common shapes of membership functions; fuzzification
    fuzzy definitions of AND, OR, NOT
    fuzzy linguistic variables
    fuzzy rules and how they are applied
    defuzzification: mean of maximum (MOM), centroid, average of maxima
    Combs' method and what problem it avoids

Additional material

Behavior trees
    Conditions and Actions
    composite tasks: Selection, Sequence, RandomSequence, RandomSelector
    decorators: Limit, UntilFail
    Parallel tasks; libraries of behavior trees

Goal-oriented behavior
    characters have goals that psychologists call drives
    the strength of a goal is called its insistence
    choose the action that reduces the strongest drive the most
    multiple drives can be reduced at the same time; how are actions selected to reduce multiple drives the most?

Rule-based systems
    global database; condition-action rules
    cycle: match new facts, resolve the conflict set, execute one rule
    rules are refractory, modular, natural
    the RETE network
    rule priority
    a simple version of a rule-based system (an if-elseif-... chain that is repeatedly executed)
    monkey-gets-banana examples

Goal-Oriented Action Planning (GOAP)
    find the sequence of up to maxdepth actions that produces the best future world (lowest discontentment), then do the first action in that sequence
    how that procedure can be time-sliced

IDA* (iterative-deepening A*)
    a blend between the GOAP procedure and A*
    like GOAP in that it does a depth-first search of a tree of possible future worlds to a maximum depth of maxdepth
    like A* in that it estimates the cost of the final path through a world and stops extending a path when the goal world is reached
    explores each path until its estimated cost exceeds a cutoff limit; repeats with larger and larger cutoff values until the best path is found

Tactical and strategic AI
    tactical locations such as cover, shadow, sniping, exposed
    Monte Carlo method of estimating the fuzzy membership of a waypoint in a type of tactical location
    automatically generating waypoints; the condensation algorithm for reducing the number of waypoints
    influence maps; map flooding; frag maps; convolution
    coordinated action: top-down approach, bottom-up approach, military tactics
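The seek and arrive steering behaviors listed under Chapter 7 can be sketched as follows. This is a minimal Python illustration (the course text's code is C++); the parameter name slow_radius is an assumption for this sketch:

```python
import math

def seek(pos, target, max_speed):
    """Seek: steer at full speed straight toward the target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    return (dx / dist * max_speed, dy / dist * max_speed)

def arrive(pos, target, max_speed, slow_radius):
    """Arrive: like seek, but scale speed down inside slow_radius
    so the agent decelerates smoothly onto the target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    speed = max_speed * min(1.0, dist / slow_radius)
    return (dx / dist * speed, dy / dist * speed)
```

Wall avoidance and separation follow the same pattern: each behavior returns a desired velocity, and the agent blends or prioritizes them.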
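The regulator idea from Chapter 7 (letting a component update at a slower rate than the game loop) can be sketched like this; the class and method names are illustrative, not the book's exact API:

```python
class Regulator:
    """Gates a component so it updates at most `updates_per_second` times,
    even though the game loop may call is_ready far more often."""

    def __init__(self, updates_per_second, start_time=0.0):
        self.period = 1.0 / updates_per_second
        self.next_update = start_time

    def is_ready(self, now):
        """Return True (and schedule the next slot) if enough time has passed."""
        if now >= self.next_update:
            self.next_update = now + self.period
            return True
        return False
```

Giving each AI component its own regulator (e.g. path planning at 2 Hz, targeting at 5 Hz) spreads CPU cost across frames.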
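Chapter 8's path finding uses both A* and Dijkstra; one compact sketch covers both, since Dijkstra is A* with a zero heuristic. The adjacency-dict graph representation here is an assumption for illustration:

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* over an adjacency dict {node: [(neighbor, edge_cost), ...]}.
    Pass heuristic=lambda n: 0.0 to get Dijkstra's algorithm."""
    # Priority queue entries: (estimated total cost, cost so far, node, path).
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        est, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nbr, step in graph[node]:
            new_cost = cost + step
            if new_cost < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic(nbr), new_cost, nbr, [*path, nbr]),
                )
    return None, float("inf")
```

When A or B is not on the navgraph, the usual approach is to first find the closest visible graph nodes to A and B, then run the search between those.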
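The fuzzy-logic items from Chapter 10 (membership functions, fuzzy AND/OR/NOT, defuzzification) can be sketched as follows. The sampling-based centroid here is one common approximation; MOM and average of maxima are the other defuzzification methods listed:

```python
def triangle(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 at x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Standard fuzzy operators on degrees of membership in [0, 1].
def fuzzy_and(m1, m2): return min(m1, m2)
def fuzzy_or(m1, m2): return max(m1, m2)
def fuzzy_not(m): return 1.0 - m

def centroid(mf, samples):
    """Defuzzify: centre of mass of the membership function over sample points."""
    num = sum(x * mf(x) for x in samples)
    den = sum(mf(x) for x in samples)
    return num / den if den else 0.0
```

A fuzzy rule like "IF target is close AND ammo is low THEN desirability is high" clips each consequent set by the rule's firing strength before defuzzifying.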
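The Sequence and Selection composite tasks from the behavior-tree material can be sketched as functions over child tasks; real behavior-tree libraries add a "running" state, decorators like Limit and UntilFail, and Parallel tasks, which this minimal sketch omits:

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    """Sequence: succeed only if every child succeeds, in order."""
    def run(ctx):
        for child in children:
            if child(ctx) == FAILURE:
                return FAILURE
        return SUCCESS
    return run

def selector(*children):
    """Selection: try children in order, succeeding as soon as one succeeds."""
    def run(ctx):
        for child in children:
            if child(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE
    return run
```

Leaves are Conditions (tests on the world) and Actions (which change it); both simply return SUCCESS or FAILURE here.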
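For goal-oriented behavior, "choose the action that reduces the strongest drive the most" and the multiple-drives question can both be handled by minimizing an overall discontentment score (sum of squared insistence values): squaring makes a strong drive dominate, yet an action that reduces several drives at once can still win. The dict-based action format is an assumption of this sketch:

```python
def discontentment(drives):
    """Overall unhappiness: sum of squared insistence values,
    so reducing a strong drive matters more than reducing a weak one."""
    return sum(v * v for v in drives.values())

def choose_action(drives, actions):
    """Pick the action whose effects leave the lowest discontentment.
    `actions` maps an action name to a dict of per-drive insistence changes."""
    def after(effects):
        return {d: max(0.0, v + effects.get(d, 0.0)) for d, v in drives.items()}
    return min(actions, key=lambda a: discontentment(after(actions[a])))
```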
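A simple version of a rule-based system (the repeatedly executed if-elseif chain mentioned above) can be sketched as a match-resolve-act loop; the monkey-and-banana facts in the usage below are illustrative:

```python
def run_rules(facts, rules, max_cycles=100):
    """Match-resolve-act cycle over a global database of facts.

    `rules` is a list of (conditions, conclusion) pairs, where conditions
    is a set of facts. Each cycle fires the first applicable rule whose
    conclusion is new -- so rules are refractory (never re-fire for the
    same fact) -- and stops when no rule can fire."""
    facts = set(facts)
    for _ in range(max_cycles):
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # act: add the rule's conclusion
                break                  # one rule per cycle
        else:
            break  # quiescence: no rule fired
    return facts
```

Here conflict resolution is just "first matching rule wins"; a real system would use rule priority, and RETE would make the matching step incremental.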
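The GOAP procedure described above (search action sequences up to maxdepth, then do the first action of the sequence ending in the lowest-discontentment world) can be sketched with a plain depth-first search; a time-sliced version would save and resume this search across frames. The world and action formats here are assumptions:

```python
def best_first_action(world, actions, maxdepth):
    """`world` is a dict of drive values; `actions` maps a name to a dict of
    per-drive changes. Depth-first search over all action sequences of length
    maxdepth; returns the first action of the best sequence found."""
    def disc(w):
        return sum(v * v for v in w.values())  # discontentment of a world

    def apply(w, effects):
        return {d: max(0.0, v + effects.get(d, 0.0)) for d, v in w.items()}

    def dfs(w, depth):
        # Best discontentment reachable from w in `depth` more actions.
        if depth == 0:
            return disc(w)
        return min(dfs(apply(w, eff), depth - 1) for eff in actions.values())

    return min(actions,
               key=lambda a: dfs(apply(world, actions[a]), maxdepth - 1))
```

IDA* improves on this blind enumeration by pruning any sequence whose estimated cost exceeds the current cutoff, then retrying with a larger cutoff.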