Symbolic and Neural Learning Algorithms: An Experimental Comparison
| dc.contributor.author | Shavlik, Jude W. | en_US |
| dc.contributor.author | Mooney, Raymond J. | en_US |
| dc.contributor.author | Towell, Geoffrey G. | en_US |
| dc.date.accessioned | 2012-03-15T16:50:50Z | |
| dc.date.available | 2012-03-15T16:50:50Z | |
| dc.date.created | 1989 | en_US |
| dc.date.issued | 1989 | en_US |
| dc.description.abstract | Although many symbolic and neural-network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known about their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and back-propagation neural learning algorithms have been performed using several large real-world data sets. Back-propagation performs about the same as the other two algorithms in terms of classification correctness on new examples, but takes much longer to train. The effects of the amount of training data, imperfect training examples, and the encoding of the desired outputs are also empirically analyzed. Suggestions for handling imperfect data sets are described and empirically justified. Symbolic and neural approaches work equally well in the presence of noise, while back-propagation does better when examples are incompletely specified. Back-propagation is better able to utilize a distributed output encoding, although ID3 is also able to take advantage of this representation style. | en_US |
| dc.format.mimetype | application/pdf | en_US |
| dc.identifier.citation | TR857 | |
| dc.identifier.uri | http://digital.library.wisc.edu/1793/59144 | |
| dc.publisher | University of Wisconsin-Madison Department of Computer Sciences | en_US |
| dc.title | Symbolic and Neural Learning Algorithms: An Experimental Comparison | en_US |
| dc.type | Technical Report | en_US |