Avril Kenney (Massachusetts Institute of Technology)
When people acquire language, it is not always clear from context which words refer to which objects, and the examples they see of any particular word are not all the same. Previous approaches to learning or building a lexicon have characterized the problem as creating a mapping between a fixed set of words and a fixed set of meanings. I will describe simulations of lexicon development using an agent-based model in which word meanings are distributions over a continuous feature space; agents communicate about objects and infer the meanings of words from the example objects they see. Agents in the model are able to develop a shared lexicon, and the meanings they converge on are efficient given which objects tend to be present. Even when the objects are entirely random, the lexicons that develop tend to yield a near-optimal probability of communicative success. When new agents are introduced into the population, the lexicon continues to change, with periods of stability punctuated by periods of fluctuation.
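To make the setup concrete, the following is a minimal, hypothetical sketch of this kind of model, not the actual simulation from the talk. It assumes a one-dimensional feature space, summarizes each word's meaning distribution by the mean of the examples an agent has seen for that word, and has a speaker name an object with the word whose meaning is nearest to the object's feature value; all class and function names (`Agent`, `simulate`, `success`) are invented for illustration.

```python
import random
import statistics

class Agent:
    """Toy agent: the lexicon maps each word to the example feature
    values seen for it; a word's meaning is summarized by their mean."""

    def __init__(self, words, rng):
        # Seed each word with one random example so meanings start out
        # uncoordinated across agents.
        self.examples = {w: [rng.random()] for w in words}

    def meaning(self, word):
        return statistics.fmean(self.examples[word])

    def name_object(self, feature):
        # Speaker: pick the word whose current meaning is closest to
        # the object's feature value.
        return min(self.examples, key=lambda w: abs(self.meaning(w) - feature))

    def observe(self, word, feature):
        # Treat the named object as a new example of that word.
        self.examples[word].append(feature)

def simulate(n_agents=5, n_rounds=2000, words=("a", "b", "c"), seed=0):
    """Repeated pairwise communication about random objects in [0, 1)."""
    rng = random.Random(seed)
    agents = [Agent(words, rng) for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, listener = rng.sample(agents, 2)
        obj = rng.random()                 # object = random feature value
        word = speaker.name_object(obj)
        listener.observe(word, obj)        # listener infers from example
        speaker.observe(word, obj)         # speaker also remembers it
    return agents

def success(agents, n_tests=500, seed=1):
    """Fraction of test objects for which two random agents agree on a word."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        a, b = rng.sample(agents, 2)
        obj = rng.random()
        hits += a.name_object(obj) == b.name_object(obj)
    return hits / n_tests

agents = simulate()
```

Under these assumptions, agreement between agents tends to rise well above the random baseline as shared examples pull their meaning estimates together, a crude analogue of the convergence to a shared lexicon described above.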
Mentor: Eric Smith
SFI Host: Ginger Richardson