Emergent behavior in neuroevolved agents
Date
Authors
Maresso, Brian
License
DOI
Type
Thesis
Journal Title
Journal ISSN
Volume Title
Publisher
University of Wisconsin--Whitewater
Grantor
Abstract
Neural networks have been widely used for their ability to learn generalized rulesets from a given set of training data. In applications where no such training data exists, such as new video games, they are often overlooked in favor of hard-coded artificial intelligence behaviors. By applying a genetic algorithm instead of the traditional back-propagation technique, neural networks can develop video game AI without requiring training data or preexisting knowledge of their environment. This approach leads to more natural, 'human-like' behavior, both in the learning process and in qualitative analysis. In this thesis, we evaluate the ability of neuroevolved video game AI to show human-like learning patterns and adaptability to new environments. For applications in video games or computer simulations, we explore how a set of hyperparameters can be modified to achieve a desired level of intelligence or difficulty, a key factor for video game AI. The prospect of changing hyperparameters and then passively waiting for training to complete offers an attractive alternative to hard-coded AI, where new behaviors would have to be actively written and tested. Our evaluations focus on evolving AI for a simple car racing video game. We found that our approach was indeed capable of creating a suitable AI for our test environments, and we were able to evolve new behaviors both by adapting old ones (adaptation) and by starting from scratch (learning). Our process could be repeated for other game genres or applications, presumably with similar success.
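The core technique the abstract describes, training network weights with a genetic algorithm rather than back-propagation, can be sketched minimally. Everything below (the single linear unit, the toy fitness target, the population size, and the mutation rate) is a hypothetical illustration, not the network or hyperparameters used in the thesis:

```python
import random

random.seed(0)

# Hypothetical hyperparameters for illustration only.
N_WEIGHTS = 4        # one linear unit: 3 input weights + 1 bias
POP_SIZE = 30
GENERATIONS = 50
MUTATION_STD = 0.1

def forward(weights, inputs):
    """Tiny 'network': dot(inputs, weights[:3]) + bias."""
    return sum(w * x for w, x in zip(weights, inputs)) + weights[-1]

# Toy environment: fitness rewards matching the target x0 + 2*x1 - x2.
SAMPLES = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

def fitness(weights):
    err = 0.0
    for x in SAMPLES:
        target = x[0] + 2 * x[1] - x[2]
        err += (forward(weights, x) - target) ** 2
    return -err  # higher is better

def mutate(weights):
    """Gaussian perturbation of every weight."""
    return [w + random.gauss(0, MUTATION_STD) for w in weights]

# Random initial population: no training data, no gradients.
population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]
initial_best = max(fitness(w) for w in population)

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 5]  # keep the top 20% unchanged
    # Refill the population with mutated copies of the elite.
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=fitness)
```

Because the elite are carried over unmutated, the best fitness can never decrease between generations; raising or lowering `GENERATIONS`, `POP_SIZE`, or `MUTATION_STD` is the kind of hyperparameter tuning the abstract suggests for dialing in difficulty.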
Description