CSE 471/598 - Introduction to Artificial Intelligence

Project II - Requirement Specifications

Decision Tree

The project specification may have a few minor modifications. If there are any, we will modify this page and make an announcement at myASU.

Please note: The highest standards of academic integrity are expected of all students. The failure of any student to meet these standards may result in suspension or expulsion from the university and other sanctions as specified in the academic integrity policies of the individual colleges. Violations of academic integrity include, but are not limited to, cheating, fabrication, tampering, plagiarism, or facilitating such activities. If you are not sure of something or are in doubt, please contact the TA or the professor.

Deadline: 4/25/05 Monday

In this project you are required to implement a decision tree.

Induction of Decision Trees

Decision trees are one type of classifier widely used in the machine learning community. The label 'y' of an example 'X' is obtained by passing the example down from the root to a leaf, testing the appropriate attribute at each node and following the branch corresponding to the attribute's value in the example. A decision tree is learned by starting at the root and recursively replacing leaves according to a feature selection measure, until a certain stopping criterion is met.
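As one possible (not required) way to organize this in Lisp, a tree can be stored as nested lists, with a leaf holding a label and an internal node holding the attribute it tests plus one subtree per attribute value. Below is a minimal classification sketch under that assumption, with an example represented as an association list of (attribute . value) pairs; the representation and function name are illustrative only.

;; A minimal sketch of classification with a learned tree; the nested-list
;; representation below is an assumption, not a required format:
;;   leaf:          (:leaf <label>)
;;   internal node: (:node <attribute> ((<value> . <subtree>) ...))
;; An example is an association list of (attribute . value) pairs.
(defun classify (tree example)
  "Walk TREE from the root to a leaf, testing one attribute per node and
following the branch that matches EXAMPLE's value for that attribute."
  (if (eq (first tree) :leaf)
      (second tree)                          ; reached a leaf: return its label
      (destructuring-bind (tag attribute branches) tree
        (declare (ignore tag))
        (let* ((value (cdr (assoc attribute example)))
               (subtree (cdr (assoc value branches :test #'equal))))
          (classify subtree example)))))     ; recurse into the matching branch

For instance, (classify '(:node color (("red" . (:leaf "yes")) ("blue" . (:leaf "no")))) '((color . "red"))) evaluates to "yes".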

Problem

Your program should have the following functions:

Entropy Measure

As discussed in class and in the textbook.
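For reference, the entropy of a set of examples S with respect to the class label is H(S) = - sum over labels c of p_c * log2(p_c), where p_c is the fraction of examples in S carrying label c. Below is a minimal Lisp sketch under the assumption that the labels are passed in as a list; the function name and interface are illustrative, not required.

;; A minimal sketch (not a required interface): entropy of a list of
;; class labels, H = - sum_c p_c * log2(p_c).
(defun entropy (labels)
  "Return the entropy, in bits, of the label distribution in LABELS."
  (let ((n (length labels))
        (counts (make-hash-table :test #'equal)))
    ;; count how often each distinct label occurs
    (dolist (label labels)
      (incf (gethash label counts 0)))
    ;; sum -p * log2(p) over the distinct labels
    (let ((h 0.0))
      (maphash (lambda (label count)
                 (declare (ignore label))
                 (let ((p (/ count n)))
                   (decf h (* p (log p 2)))))
               counts)
      h)))

For example, (entropy '("yes" "yes" "no" "no")) evaluates to 1.0, and a set in which every example has the same label has entropy 0.0.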

Classification Error

It is the percentage of examples that are not classified correctly by the classifier.

Classification Error = (1 - accuracy) * 100 (reported as x%), where accuracy = (a + e + i) / N for the 3-class confusion matrix below and N is the total number of test examples.
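As a hypothetical worked example: if the diagonal entries of the 3-class matrix below were a = 30, e = 25, i = 20 for N = 100 test examples, then accuracy = (30 + 25 + 20)/100 = 0.75 and Classification Error = (1 - 0.75) * 100 = 25%.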

Confusion Matrix

It is a matrix showing the predicted and actual classifications. A confusion matrix is of size L x L, where L is the number of different label values. The following confusion matrix is for L=2:

actual \ predicted    negative    positive
Negative              a           b
Positive              c           d

The following confusion matrix, for L = 3, uses the same convention (rows are actual classes, columns are predicted classes); its diagonal entries a, e, and i count the correctly classified examples and are the ones used in the accuracy formula above:

actual \ predicted    Class1    Class2    Class3
Class1                a         b         c
Class2                d         e         f
Class3                g         h         i
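As an illustration (not a required interface; the function names and representation are assumptions), a confusion matrix can be accumulated in a hash table keyed by (actual . predicted) label pairs, and the classification error can then be read off its diagonal. A minimal Lisp sketch, assuming parallel lists of actual and predicted labels:

;; A minimal sketch (not a required interface): build a confusion matrix
;; from parallel lists of actual and predicted labels, then compute the
;; classification error as a percentage from its diagonal.
(defun confusion-matrix (actual predicted)
  "Return a hash table mapping (actual . predicted) label pairs to counts."
  (let ((matrix (make-hash-table :test #'equal)))
    (loop for a in actual
          for p in predicted
          do (incf (gethash (cons a p) matrix 0)))
    matrix))

(defun classification-error (matrix n)
  "Percentage of the N examples whose predicted label differs from the
actual label, i.e. (1 - accuracy) * 100."
  (let ((correct 0))
    ;; the diagonal of the matrix holds the correctly classified examples
    (maphash (lambda (pair count)
               (when (equal (car pair) (cdr pair))
                 (incf correct count)))
             matrix)
    (* 100.0 (- 1 (/ correct n)))))

For example, (classification-error (confusion-matrix '("+" "+" "-") '("+" "-" "-")) 3) evaluates to roughly 33.3.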

Datasets

The datasets will be in the same format as this file (Parity5.arff).

All the attributes will be nominal (not necessarily Boolean) and the class label will also be discrete (not necessarily Boolean). There will not be any missing values in the data. The attribute named "class" is the class attribute.
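For illustration only (the relation, attribute names, and values below are made up; Parity5.arff and the ARFF documentation linked below are the authoritative references), a dataset with nominal attributes looks roughly like this:

@relation toy-example
@attribute color {red, green, blue}
@attribute size  {small, large}
@attribute class {yes, no}
@data
red, small, yes
blue, large, no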

More information on the file format can be obtained at http://www.cs.waikato.ac.nz/~ml/weka/arff.html. Both the training and test datasets will have the same format. Below is a Lisp function for reading from a stream till a certain delimiter is found. You are encouraged to use your own function, or read the file in a different way.
;; Read characters from STREAM until end of file or DELIMITER is seen,
;; returning the characters read (the delimiter itself is consumed
;; but not included in the result).
(defun read-till-delimiter (stream delimiter)
  (with-output-to-string (str)
    (loop for ch = (read-char stream nil stream)   ; STREAM itself marks end of file
          until (or (eq ch stream) (eql ch delimiter))
          do (princ ch str))))
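As a quick, hypothetical illustration of how it behaves (the input string here is made up), the function consumes the delimiter but does not include it in the returned string:

;; Hypothetical usage: pull comma-separated values out of a string stream,
;; as one might while parsing a line of the @data section.
(with-input-from-string (in "red, small, yes")
  (loop repeat 3
        collect (string-trim " " (read-till-delimiter in #\,))))
;; => ("red" "small" "yes")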

Project Submission Guidelines

You need to submit a soft copy of the project via the myASU Assignments section.

Also, you should turn in a hardcopy report either in class or at the CSE department office. The report should be brief and analytical. It should contain a discussion of some topic related to decision trees, for example, other impurity measures or methods of pruning the decision tree. It should also cover the difficulties you encountered, interesting observations, etc.

Grading

The distribution of grades for the project is:

Code - 80%

Project Report - 20%