CSE 591 Autonomous Agents: Theory and Practice

Fall 2001

T Th 1:40 -- 2:55 PM; ECG G 335
 

The meanings of the word `agent' according to dictionary.com are:


 1. one that acts or has the power or authority to act.
 2. one empowered to act for or represent another.
 3. a means by which something is done or caused.
 4. a force or substance that causes a change.
 5. a representative or official of a government or administrative  department of a government.
 6. a spy.
 7. (in linguistics) the noun or noun phrase that specifies the person  through whom or the means by which an action is effected. 

The meanings of the word `autonomous' according to dictionary.com are:


 1. not controlled by others or by outside forces.
 2. independent in mind or judgement or government; self-directed.
 3.
    a. independent of the laws of another state or government; self-governing.
    b. of or relating to a self-governing entity.
    c. self-governing with respect to local or internal affairs.


In our context, an artificial computer-driven agent is an entity
that acts and that causes change. Since we would like to have
some control over the agent, lest it make us its slaves, the
qualifier `autonomous' in the context of an artificial
computer-driven agent means that its actions are not micro-managed
by other entities (people), but that we still have some control
over it. In other words, an autonomous artificial computer-driven
agent, which we will henceforth simply refer to as an autonomous
agent, can take high-level directives and goals, figure out what
actions it needs to take, and execute those actions. To be able to
do this, an autonomous agent must, among other things: understand
high-level directives; know what actions it can perform and what
changes these actions would cause; have knowledge about the
environment it is in and how that environment might change or
react to its actions; have the reasoning ability to figure out
what it needs to do to achieve its goals; be able to revise its
plans if the environment does not co-operate; be able to
assimilate its observations and draw conclusions about what it
missed observing or cannot directly observe; and be able to learn
from its interaction with the environment.


In this course we will cover the theory behind endowing an agent
with the above capabilities. In particular,


1. We will describe several languages that can be used to express
directives or goals to an agent. These languages will be based on
temporal logic, and we will encounter notions such as temporal
goals and maintenance goals; a small illustration appears after
this list.


2. We will describe action description languages with which an
agent can express the impact of its actions on the environment
(the first program sketch after this list gives a flavour of this).


3. We will describe languages that express an agent's plan of action.


4. We will define when an agent's plan of action, based on its
knowledge of the kind in (2), satisfies the directives or goals of
the kind in (1), assuming that the environment is co-operative.
We will discuss how to create simple plans of action; the first
program sketch after this list illustrates such a check.


5. We will then remove the assumption about the co-operativeness
of the environment and describe a language that can be used to
record observations. We will then define `observation assimilation'
and discuss how to revise the original plan of action to create new
`plans from the current situation'; the second program sketch after
this list touches on observation assimilation.


6. We will generalize the above (1-5) to include `knowledge
producing actions', which in their purest form do not change the
world, but change the agent's knowledge about the world. We will
describe how agents can make plans using such knowledge producing
actions to achieve various kinds of goals: achievement goals,
knowledge goals, and diagnostic goals (see also the second program
sketch after this list).


7. We will discuss several agent architectures (deliberative,
reactive, and hybrid) and how and where they use 1-6.


8. We will consider agents in stochastic worlds, and/or with
actions having stochastic effects, and the issue of learning in
such a scenario; the third program sketch after this list gives a
toy example.


9. We will discuss the complexity (how hard it is) of the various
tasks (such as planning) in 1-8 that an agent must perform.


10. Finally, there will be a project in which students will be
required to develop a software agent based on 1-8.
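

To give a flavour of item 1, here is a minimal sketch, in linear
temporal logic (LTL) notation, of the two kinds of goals mentioned
there; the fluents `delivered' and `charged' are hypothetical,
chosen only for illustration:

    \Diamond delivered   % achievement goal: `delivered' eventually holds
    \Box charged         % maintenance goal: `charged' holds at all times

A plan that makes the agent deliver the package but lets the
battery run down would satisfy the first goal while violating the
second.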
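

As a concrete, if simplistic, illustration of items 2 and 4, the
following Python sketch records the impact of actions in STRIPS
style and checks whether a given plan achieves a goal in a
co-operative (deterministic, fully observed) environment. The toy
domain, the action names and the function names are all invented
for illustration; the course will use considerably more expressive
action description languages.

    # A minimal STRIPS-style sketch (hypothetical domain, for illustration).
    # A state is the set of fluents that are true; an action lists its
    # preconditions, the fluents it adds and the fluents it deletes.

    actions = {
        "pickup":  {"pre": {"on_table", "hand_empty"},
                    "add": {"holding"},
                    "del": {"on_table", "hand_empty"}},
        "putdown": {"pre": {"holding"},
                    "add": {"on_table", "hand_empty"},
                    "del": {"holding"}},
    }

    def progress(state, name):
        """Apply one action to a state; None if a precondition fails."""
        a = actions[name]
        if not a["pre"] <= state:
            return None
        return (state - a["del"]) | a["add"]

    def plan_achieves(state, plan, goal):
        """Check that executing plan from state reaches a goal state."""
        for name in plan:
            state = progress(state, name)
            if state is None:       # some action was not executable
                return False
        return goal <= state

    # Example: starting with the block on the table, pick it up.
    print(plan_achieves({"on_table", "hand_empty"}, ["pickup"], {"holding"}))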
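

For items 5 and 6, here is a minimal Python sketch of observation
assimilation with a knowledge-producing (sensing) action. The
agent's knowledge is modelled, naively, as the set of world states
it considers possible; sensing does not change the world, it only
prunes this set. The door/lock scenario is invented for
illustration.

    # The agent's knowledge: the set of states it considers possible.
    # Each state is a frozenset of the fluents that hold in it.

    def sense(possible_states, fluent, observed_value):
        """Assimilate the observation that fluent has observed_value."""
        return {s for s in possible_states
                if (fluent in s) == observed_value}

    # The agent is at the door but does not know whether it is locked:
    knowledge = {frozenset({"at_door"}),
                 frozenset({"at_door", "locked"})}

    # Sensing the lock (and observing that it is locked) prunes the
    # possibilities without changing the world itself:
    knowledge = sense(knowledge, "locked", True)
    print(knowledge)   # only the state containing `locked' remains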
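

For item 8, here is a minimal sketch of value iteration on a toy
Markov decision process; the two-state model, the action names and
all the numbers are invented for illustration, and the course will
treat stochastic domains, and learning in them, in far greater
depth.

    # Value iteration on a toy two-state MDP (hypothetical numbers).
    # transitions[s][a] is a list of (probability, next_state, reward).

    transitions = {
        "s0": {"stay": [(1.0, "s0", 0.0)],
               "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
        "s1": {"stay": [(1.0, "s1", 0.0)],
               "go":   [(1.0, "s0", 0.0)]},
    }
    gamma = 0.9   # discount factor

    V = {s: 0.0 for s in transitions}
    for _ in range(100):    # repeat the Bellman optimality update
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                    for outs in transitions[s].values())
             for s in transitions}

    print(V)   # approximate optimal value of each state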


Grading:

 Homework + small assignments   20-30%
 2 exams                        50%
 Project                        20-30%


There will be one exam in the middle of the semester and one on
the last day of class. There will not be an exam during the final
examination period.


Study material: The material will be based on several papers,
chapters from the following two books, and slides.


1. Heterogeneous Agent Systems. Subrahmanian, Bonatti, Dix, Eiter,
Kraus, Ozcan and Ross. MIT Press. 2000.


2. Knowledge in Action: Logical Foundations for Specifying and
Implementing Dynamical Systems. Raymond Reiter. MIT Press. 2001.


Other books that may be useful for the project:


a. Multi-agent Systems: A Modern Approach to Distributed AI. Ed:
Gerhard Weiss. MIT Press. 1999.


b. Software Agents. Ed: Jeffrey M. Bradshaw. AAAI Press/MIT Press.
1997.


c. Intelligent Agents, several volumes. Springer.