Boards, Advisory Committees, Professional Organizations
Governing board member, Cognitive Science Society (2015 - Present)
How do we learn to communicate using language? I study children's language learning and how it interacts with their developing understanding of the social world. I use behavioral experiments, computational tools, and novel measurement methods like large-scale web-based studies, eye-tracking, and head-mounted cameras.
Language comprehension is more than a process of decoding the literal meaning of a speaker's utterance. Instead, by making the assumption that speakers choose their words to be informative in context, listeners routinely make pragmatic inferences that go beyond the linguistic data. If language learners make these same assumptions, they should be able to infer word meanings in otherwise ambiguous situations. We use probabilistic tools to formalize these kinds of informativeness inferences, extending a model of pragmatic language comprehension to the acquisition setting, and present four experiments whose data suggest that preschool children can use informativeness to infer word meanings and that adult judgments track quantitatively with informativeness.
DOI: 10.1016/j.cogpsych.2014.08.002 | PubMedID: 25238461
Newborn babies look preferentially at faces and face-like displays, yet over the course of their first year much changes about both the way infants process visual stimuli and how they allocate their attention to the social world. Despite this initial preference for faces in restricted contexts, the amount that infants look at faces increases considerably during the first year. Is this development related to changes in attentional orienting abilities? We explored this possibility by showing 3-, 6-, and 9-month-olds engaging animated and live-action videos of social stimuli and also measuring their visual search performance with both moving and static search displays. Replicating previous findings, looking at faces increased with age; in addition, the amount of looking at faces was strongly related to the youngest infants' performance in visual search. These results suggest that infants' attentional abilities may be an important factor in facilitating their social attention early in development.
DOI: 10.1016/j.jecp.2013.08.012 | Web of Science ID: 000329955000002 | PubMedID: 24211654
A recent probabilistic model unified findings on sequential generalization ("rule learning") via independently motivated principles of generalization (Frank & Tenenbaum, 2011). Endress critiques this work, arguing that learners do not prefer more specific hypotheses (a central assumption of the model), that "common-sense psychology" provides an adequate explanation of rule learning, and that Bayesian models imply incorrect optimality claims but can be fit to any pattern of data. Endress's response raises useful points about the importance of mechanistic explanation, but the specific critiques of our work are not supported. More broadly, I argue that Endress undervalues the importance of formal models. Although probabilistic models must meet a high standard to be used as evidence for optimality claims, they nevertheless provide a powerful framework for describing cognition.
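The contested preference for more specific hypotheses follows from the "size principle": under random sampling from a hypothesis's extension, consistent data are more probable under a hypothesis with a smaller extension. A minimal sketch of this inference, with a syllable inventory and hypothesis space invented purely for illustration (not taken from the paper):

```python
from fractions import Fraction

# Hypothetical hypothesis space over 3-syllable strings drawn from a
# small syllable inventory; each hypothesis is its extension.
syllables = ["ga", "ti", "na", "li"]
all_strings = {a + b + c for a in syllables for b in syllables for c in syllables}
aba_strings = {a + b + a for a in syllables for b in syllables if a != b}

hypotheses = {"ABA rule": aba_strings, "any string": all_strings}

def posterior(data, hypotheses):
    """Bayes with a uniform prior; likelihood = (1/|extension|)^n (size principle)."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(d in extension for d in data):
            scores[name] = Fraction(1, len(extension)) ** len(data)
        else:
            scores[name] = Fraction(0)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Three ABA-consistent observations strongly favor the specific rule,
# even though "any string" is also consistent with them.
post = posterior(["gatiga", "linali", "tinati"], hypotheses)
```

Because every observation is consistent with both hypotheses, the specific rule wins only through the likelihood term, which is the formal content of the specificity preference.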
DOI: 10.1016/j.cognition.2013.04.010 | Web of Science ID: 000322803200013 | PubMedID: 23774636
Word frequencies in natural language follow a highly skewed Zipfian distribution, but the consequences of this distribution for language acquisition are only beginning to be understood. Typically, learning experiments that are meant to simulate language acquisition use uniform word frequency distributions. We examine the effects of Zipfian distributions using two artificial language paradigms: a standard forced-choice task and a new orthographic segmentation task in which participants click on the boundaries between words in context. Our data show that learners can identify word forms robustly across widely varying frequency distributions. In addition, although performance in recognizing individual words is predicted best by their frequency, a Zipfian distribution facilitates word segmentation in context: the presence of high-frequency words creates more chances for learners to apply their knowledge in processing new sentences. We find that computational models that implement "chunking" are more effective than "transition finding" models at reproducing this pattern of performance.
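A Zipfian corpus is easy to simulate: a word's frequency is proportional to 1/rank. The sketch below (vocabulary size, exponent, and sample size are arbitrary choices for illustration) shows the signature property that a handful of high-frequency types account for most tokens:

```python
import random
from collections import Counter

def zipfian_weights(n_words, s=1.0):
    """Sampling weights proportional to 1 / rank**s (rank 1 = most frequent)."""
    return [1.0 / rank ** s for rank in range(1, n_words + 1)]

random.seed(0)  # deterministic for illustration
vocab = [f"word{i}" for i in range(1, 9)]
sample = random.choices(vocab, weights=zipfian_weights(len(vocab)), k=1000)
counts = Counter(sample)
```

In a corpus drawn this way, the top-ranked word appears many times more often than the bottom-ranked one, which is the property that gives learners frequent "anchor" words for segmenting new sentences.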
DOI: 10.1016/j.cognition.2013.02.002 | Web of Science ID: 000319087000013 | PubMedID: 23558340
Recovering discrete words from continuous speech is one of the first challenges facing language learners. Infants and adults can make use of the statistical structure of utterances to learn the forms of words from unsegmented input, suggesting that this ability may be useful for bootstrapping language-specific cues to segmentation. It is unknown, however, whether performance shown in small-scale laboratory demonstrations of "statistical learning" can scale up to allow learning of the lexicons of natural languages, which are orders of magnitude larger. Artificial language experiments with adults can be used to test whether the mechanisms of statistical learning are in principle scalable to larger lexicons. We report data from a large-scale learning experiment that demonstrates that adults can learn words from unsegmented input in much larger languages than previously documented and that they retain the words they learn for years. These results suggest that statistical word segmentation could be scalable to the challenges of lexical acquisition in natural language learning.
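One simple "statistical learning" strategy can be sketched directly: estimate transitional probabilities between adjacent syllables and posit word boundaries where the probability dips. The toy lexicon, stream, and threshold below are invented for illustration and are orders of magnitude smaller than the languages tested in these experiments:

```python
from collections import Counter

def transitional_probs(syllables):
    """Estimate P(next | current) from adjacent-pair counts."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.9):
    """Posit a word boundary wherever transitional probability dips below threshold."""
    tp = transitional_probs(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Unsegmented stream generated from a toy lexicon: "tupi", "gola", "buda".
stream = "tu pi go la tu pi bu da go la bu da tu pi go la bu da tu pi".split()
```

Within-word transitions (e.g. "tu" to "pi") are perfectly predictable in this stream, while transitions across word boundaries vary, so thresholding the dips recovers the lexicon. The scaling question raised above is whether this kind of computation remains tractable as the lexicon grows.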
DOI: 10.1371/journal.pone.0052500 | Web of Science ID: 000313320900027 | PubMedID: 23300975
One of the most astonishing features of human language is its capacity to convey information efficiently in context. Many theories provide informal accounts of communicative inference, yet there have been few successes in making precise, quantitative predictions about pragmatic reasoning. We examined judgments about simple referential communication games, modeling behavior in these games by assuming that speakers attempt to be informative and that listeners use Bayesian inference to recover speakers' intended referents. Our model provides a close, parameter-free fit to human judgments, suggesting that the use of information-theoretic tools to predict pragmatic reasoning may lead to more effective formal models of communication.
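The speaker-listener recursion described here can be sketched in a few lines. The toy reference game below (three objects, three words) is a hypothetical lexicon chosen to illustrate this class of model, not the paper's stimuli:

```python
# Objects in the game and the words literally true of each (toy lexicon).
objects = ["blue square", "blue circle", "green square"]
lexicon = {
    "blue":   {"blue square", "blue circle"},
    "square": {"blue square", "green square"},
    "circle": {"blue circle"},
}
prior = {o: 1 / len(objects) for o in objects}  # uniform prior over referents

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Literal listener: P(referent | word) proportional to literal truth * prior.
literal = {w: normalize({o: prior[o] for o in ext}) for w, ext in lexicon.items()}

# Informative speaker: P(word | referent) proportional to how well the
# literal listener would recover the referent from that word.
speaker = {o: normalize({w: literal[w].get(o, 0) for w in lexicon})
           for o in objects}

# Pragmatic listener: Bayesian inference over the speaker model.
pragmatic = {w: normalize({o: speaker[o][w] * prior[o] for o in objects})
             for w in lexicon}
```

Hearing the ambiguous word "blue", the pragmatic listener favors the blue square: a speaker who meant the blue circle had the more informative word "circle" available, so choosing "blue" is evidence against that referent.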
DOI: 10.1126/science.1218633 | Web of Science ID: 000304406800035 | PubMedID: 22628647
Language for number is an important case study of the relationship between language and cognition because the mechanisms of non-verbal numerical cognition are well understood. When the Pirahã (an Amazonian hunter-gatherer tribe who have no exact number words) are tested in non-verbal numerical tasks, they are able to perform one-to-one matching tasks but make errors in more difficult tasks. Their pattern of errors suggests that they are using analog magnitude estimation, an evolutionarily and developmentally conserved mechanism for estimating quantities. Here we show that English-speaking participants rely on the same mechanisms when verbal number representations are unavailable due to verbal interference. Follow-up experiments demonstrate that the effects of verbal interference are primarily manifest during encoding of quantity information and, using a new procedure for matching the difficulty of interference tasks for individual participants, that the effects are restricted to verbal interference. These results are consistent with the hypothesis that number words are used online to encode, store, and manipulate numerical information. This linguistic strategy complements, rather than alters or replaces, non-verbal representations.
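The signature of analog magnitude estimation is scalar variability: the spread of estimates grows in proportion to the quantity being estimated. A minimal simulation of that error pattern (the Weber fraction here is an arbitrary illustrative value, not an estimate from the paper):

```python
import random
import statistics

WEBER_FRACTION = 0.15  # illustrative value only

def analog_estimate(n, w=WEBER_FRACTION):
    """Noisy magnitude estimate whose standard deviation grows linearly with n."""
    return max(0, round(random.gauss(n, w * n)))

random.seed(1)  # deterministic for illustration
small = [analog_estimate(4) for _ in range(2000)]
large = [analog_estimate(20) for _ in range(2000)]
```

Estimates are roughly accurate on average for both quantities, but errors on the larger quantity are proportionally larger, which is the error pattern that exact verbal counting avoids.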
DOI: 10.1016/j.cogpsych.2011.10.004 | Web of Science ID: 000300813300003 | PubMedID: 22112644
Mental abacus (MA) is a system for performing rapid and precise arithmetic by manipulating a mental representation of an abacus, a physical calculation device. Previous work has speculated that MA is based on visual imagery, suggesting that it might be a method of representing exact number nonlinguistically, but given the limitations on visual working memory, it is unknown how MA structures could be stored. We investigated the structure of the representations underlying MA in a group of children in India. Our results suggest that MA is represented in visual working memory by splitting the abacus into a series of columns, each of which is independently stored as a unit with its own detailed substructure. In addition, we show that the computations of practiced MA users (but not those of control participants) are relatively insensitive to verbal interference, consistent with the hypothesis that MA is a nonlinguistic format for exact numerical computation.
DOI: 10.1037/a0024427 | Web of Science ID: 000299584100015 | PubMedID: 21767040
Children learning the inflections of their native language show the ability to generalize beyond the perceptual particulars of the examples they are exposed to. The phenomenon of "rule learning" (quick learning of abstract regularities from exposure to a limited set of stimuli) has become an important model system for understanding generalization in infancy. Experiments with adults and children have revealed differences in performance across domains and types of rules. To understand the representational and inferential assumptions necessary to capture this broad set of results, we introduce three ideal observer models for rule learning. Each model builds on the previous one, allowing us to test the consequences of individual assumptions. Model 1 learns a single rule, Model 2 learns a single rule from noisy input, and Model 3 learns multiple rules from noisy input. These models capture a wide range of experimental results, including several that have been used to argue for domain-specificity or limits on the kinds of generalizations learners can make, suggesting that these ideal observers may be a useful baseline for future work on rule learning.
DOI: 10.1016/j.cognition.2010.10.005 | Web of Science ID: 000293312400007 | PubMedID: 21130985
The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and the number of words in the languages being learned. Although the results are intuitive from the perspective of a language learner (longer sentences, less training, and a larger language all make learning more difficult), standard computational proposals fail to capture several of these results. We describe how probabilistic models of segmentation can be modified to take into account some notion of memory or resource limitations in order to provide a closer match to human performance.
DOI: 10.1016/j.cognition.2010.07.005 | Web of Science ID: 000283979000001 | PubMedID: 20832060