Sample Questions for the second test
------------------------------------

1. Fuzzy Control (35%)
(From the paper by Kreinovich and Nguyen, and slides.)
What is traditional control? How does fuzzy control differ from traditional control? What is a control strategy? Why can't we simply extract the control strategy from an expert? Translate the following (see the paper) fuzzy rules to control strategies. What is the difference between standard logic and fuzzy logic? What is a membership function? What is the intuition behind defuzzification?

2. Planning vs. Graph Search; STRIPS planning (15% appx)
(From any book on Artificial Intelligence, and class notes if you made any.)
Why can't we do planning by explicitly representing a search graph and searching for a path to the goal node in that graph? What does STRIPS notation buy us? Write the preconditions, add list, and delete list of the action of flying from location X to location Y.

4. Planning with observations (10% appx)
(From the paper titled "Formalizing narratives ..."; also class notes if you made any.)
When is STRIPS inadequate? What is the difference between the initial situation and the current situation? When is it OK to plan from the initial situation? When should we plan from the current situation? Describe an architecture for an agent that has to achieve a plan in a dynamic environment.

5. Conformant planning in the presence of incompleteness; Planning with sensing (20% appx)
(From the paper titled "Formalizing sensing actions ..."; and class notes if you made any.)
What assumption does STRIPS make about the initial state? What if that assumption is not true? What do we mean by a correct plan when the initial state is incomplete? Explain conformant planning. What is a sensing action? How does it differ from a regular action? How do we distinguish between the real world and the agent's knowledge about the real world? Is the notion of `state' in STRIPS adequate for that? If not, what kind of representation will be adequate?
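One way to make the "is this a plan?" question concrete is a small recursive well-formedness check. The tuple encoding below (`"seq"`, `"if"`, bare action names) is an illustrative assumption, not the paper's notation:

```python
# Plans, in this illustrative encoding:
#   "a"                       -- a primitive action name is a plan
#   ("seq", p1, p2, ...)      -- sequencing of sub-plans
#   ("if", fluent, p1, p2)    -- "if fluent then p1 else p2", branching on a sensed fluent

def is_plan(p):
    """Return True iff p is a well-formed (conditional) plan in this encoding."""
    if isinstance(p, str):                 # bare action name
        return True
    if not isinstance(p, tuple) or not p:
        return False
    if p[0] == "seq":
        return all(is_plan(q) for q in p[1:])
    if p[0] == "if":
        return (len(p) == 4 and isinstance(p[1], str)
                and is_plan(p[2]) and is_plan(p[3]))
    return False

# "a; if p then a1 else a2" becomes:
good = ("seq", "a", ("if", "p", "a1", "a2"))
# A conditional missing its branches is NOT a plan:
bad = ("seq", "a", ("if", "p"))

print(is_plan(good), is_plan(bad))   # → True False
```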
You may be given a plan structure (a; if p then a1 else a2, etc.) and asked whether what is given is a plan or not. If not, why not?

7. Reactive control: k-correctness (10% appx)
(From the paper "Maintainability ..."; and class notes if you made any.)
When do we need reactive control? What do we mean by saying that a control is k-correct with respect to a goal and a set of environmental actions? What are the two criteria for this correctness?

8. Probabilistic effects of actions; Markov Decision Processes (20% appx)
(From the slides.)
What is a policy? Give the formula for the expected cost of a policy. Explain the intuition behind the formula. Give Bellman's equation for the minimum-cost policy. How is the traditional Operations Research approach to solving Bellman's equation different from the RTDP approach? Explain the RTDP approach. What is a big assumption in MDPs? What happens if we do not have enough sensors to figure out the state we are in?
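As a worked illustration of Bellman's equation for the minimum expected cost, V(s) = min_a [ c(s,a) + sum_s' P(s'|s,a) V(s') ], here is value iteration on a made-up two-state MDP (the states, costs, and probabilities are assumptions for illustration only):

```python
# Toy MDP: state 0 is the start, state 1 is an absorbing zero-cost goal.
# From state 0: "safe" costs 1.0 and reaches the goal with probability 1;
# "risky" costs 0.4 but reaches the goal only with probability 0.5.
P = {0: {"safe":  [(1.0, 1)],
         "risky": [(0.5, 1), (0.5, 0)]}}   # P[s][a] = [(prob, next_state), ...]
C = {0: {"safe": 1.0, "risky": 0.4}}       # C[s][a] = immediate cost
GOAL = 1

V = {0: 0.0, GOAL: 0.0}
for _ in range(100):                       # iterate the Bellman update to a fixed point
    V[0] = min(C[0][a] + sum(p * V[s2] for p, s2 in P[0][a])
               for a in P[0])

print(round(V[0], 3))                      # → 0.8  (repeating "risky": 0.4 / 0.5)
```

The loop above sweeps every state on every iteration, which is the flavor of the traditional Operations Research approach mentioned in question 8; RTDP instead updates only the states visited along simulated trajectories from the start state.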