An Image-To-Speech iPad App
| dc.contributor.author | Zhu, Xiaojin | |
| dc.contributor.author | Rosin, Jake | |
| dc.contributor.author | Jun, Kwang-Sung | |
| dc.contributor.author | Dyer, Charles R. | |
| dc.contributor.author | Maynord, Michael | |
| dc.contributor.author | Tiachunpun, Jitrapon | |
| dc.date.accessioned | 2012-07-27T21:05:10Z | |
| dc.date.available | 2012-07-27T21:05:10Z | |
| dc.date.issued | 2012-07-26 | |
| dc.description.abstract | We describe an iPad app that assists in language acquisition and development. Such an application can be used by clinicians who work with human developmental disabilities. A user drags images around on the screen, and the app generates and speaks random (but sensible) phrases that match the image interaction. For example, if a user drags an image of a squirrel onto an image of a tree, the app may say "the squirrel ran up the tree." A key challenge is the automated creation of "sensible" English phrases, which we solve using a large corpus and machine learning. | en |
| dc.identifier.citation | TR1774 | en |
| dc.identifier.uri | http://digital.library.wisc.edu/1793/61884 | |
| dc.publisher | University of Wisconsin-Madison Department of Computer Sciences | en |
| dc.title | An Image-To-Speech iPad App | en |
| dc.type | Technical Report | en |
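The abstract's central idea, choosing a "sensible" phrase for an image interaction by consulting a large corpus, can be illustrated with a toy bigram-count heuristic. This is a minimal sketch under stated assumptions: the tiny corpus, the function names, and the scoring rule are all illustrative inventions, not the report's actual model or data.

```python
from collections import Counter

# Illustrative toy corpus; the report uses a large corpus and machine
# learning, not this handful of sentences.
corpus = [
    "the squirrel ran up the tree",
    "the dog ran to the house",
    "the cat sat on the mat",
    "the squirrel sat in the tree",
]

# Count word bigrams over the corpus.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def phrase_score(phrase):
    """Count how many of the phrase's bigrams are attested in the corpus."""
    words = phrase.split()
    return sum(bigrams[(a, b)] > 0 for a, b in zip(words, words[1:]))

def pick_phrase(candidates):
    """Return the candidate phrase whose bigrams are best attested."""
    return max(candidates, key=phrase_score)

# Two candidate descriptions of a squirrel-onto-tree interaction; the
# corpus evidence favors the sensible word order.
candidates = [
    "the squirrel ran up the tree",
    "the tree ran up the squirrel",
]
print(pick_phrase(candidates))  # → the squirrel ran up the tree
```

A real system would replace the raw bigram counts with a learned language model, but the selection principle is the same: rank candidate phrases by how well the corpus supports them.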