Learnability and semantic universals


Shane Steinert-Threlkeld
Jakub Szymanik

Abstract

One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to its source. In this paper, we explore the hypothesis that many semantic universals arise because expressions satisfying the universal are easier to learn than those that do not. While the idea that learnability explains universals is not new, explicit accounts of learning that can make good on this hypothesis are few and far between. We propose such a model of learning: back-propagation through a recurrent neural network. In particular, we discuss the universals of monotonicity, quantity, and conservativity and perform computational experiments in which such a network is trained to verify quantifiers. Our results explain monotonicity and quantity quite well. We suggest that conservativity may have a different source than the other universals.
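
The sketch below illustrates the kind of setup the abstract describes: a recurrent network trained by back-propagation to verify a quantifier on finite models. It is not the authors' implementation; the encoding (each object labeled by which of the four zones A∩B, A\B, B\A, or neither it falls into), the choice of an LSTM, the example quantifier ("at least three"), and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of learning to verify a quantifier.
# Each object in a finite model is a one-hot symbol for one of four zones:
# 0 = A∩B, 1 = A\B, 2 = B\A, 3 = neither. The LSTM reads the sequence and
# outputs a logit for "the quantifier is true of this model".
import torch
import torch.nn as nn

ZONES = 4       # one-hot input size
HIDDEN = 12     # hidden-state size (illustrative)
MAX_LEN = 10    # model size (illustrative)

def random_model(length):
    """Sample a sequence of zone labels representing a finite model."""
    return torch.randint(0, ZONES, (length,))

def truth_at_least_3(seq):
    """Ground-truth verifier for 'at least three As are Bs' (zone 0 = A∩B)."""
    return int((seq == 0).sum().item() >= 3)

class QuantifierVerifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(ZONES, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)

    def forward(self, seqs):
        onehot = nn.functional.one_hot(seqs, ZONES).float()
        _, (h, _) = self.rnn(onehot)        # final hidden state summarizes the model
        return self.out(h[-1]).squeeze(-1)  # logit for "quantifier is true"

net = QuantifierVerifier()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    seqs = torch.stack([random_model(MAX_LEN) for _ in range(32)])
    labels = torch.tensor([truth_at_least_3(s) for s in seqs], dtype=torch.float)
    loss = loss_fn(net(seqs), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

In an experiment of the kind the abstract describes, one would compare how quickly and accurately such a network learns quantifiers that satisfy a universal (e.g. a monotone quantifier) against minimally different quantifiers that violate it, taking faster or more accurate learning as evidence for the learnability explanation.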

