The basic goal of our research is to understand the computational and statistical foundations of human inductive inference, and to use this understanding to develop both better accounts of human behavior and better automated systems for solving the challenging computational problems that people solve effortlessly in everyday life. We pursue this goal by analyzing human cognition in terms of optimal or "rational" solutions to computational problems. For inductive problems, this usually means developing models based on the principles of probability theory, and exploring how ideas from artificial intelligence, machine learning, and statistics (particularly Bayesian statistics) connect to human cognition. We test these models through experiments with human subjects, looking at how people solve a wide range of inductive problems, including learning causal relationships, acquiring aspects of linguistic structure, and forming categories of objects.

Probabilistic models provide a way to explore many of the questions that are at the heart of cognitive science. As rational solutions to a problem, they can indicate how much information an "ideal observer" might extract from the available data, and provide information about the constraints that are needed to guarantee good inductive inferences. By making it possible to associate discrete hypotheses with probabilistic predictions, they allow us to explore how statistical learning can be combined with structured representations. By enabling us to define models of potentially unbounded complexity, they can also be used to answer questions about how well the complexity of these structured representations is warranted by the data. Finally, the extensive literature on schemes for constructing computationally efficient approximations to probabilistic inference provides clues about the psychological and neural mechanisms that could support inductive inference, as well as new experimental methods for collecting information about people's beliefs and inductive biases.
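As a toy illustration of how discrete hypotheses combine with probabilistic predictions, consider an "ideal observer" weighing structured hypotheses about which set some observed numbers came from. The hypotheses, priors, and data below are made up for illustration; the likelihood uses the "size principle" (data sampled uniformly from the true set), a common assumption in this style of model.

```python
# A minimal sketch of Bayesian inference over discrete, structured
# hypotheses. Hypotheses are sets of numbers; smaller sets that still
# contain the data are favored (the "size principle").

def posterior(hypotheses, priors, data):
    """Posterior over hypotheses given data sampled uniformly from the true set."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            scores[name] = priors[name] * (1 / len(h)) ** len(data)
        else:
            scores[name] = 0.0  # hypothesis inconsistent with the data
    z = sum(scores.values())  # normalizing constant (the evidence)
    return {name: s / z for name, s in scores.items()}

hypotheses = {
    "even numbers": set(range(2, 101, 2)),       # 50 members
    "multiples of 10": set(range(10, 101, 10)),  # 10 members
}
priors = {"even numbers": 0.5, "multiples of 10": 0.5}
post = posterior(hypotheses, priors, [10, 20, 30])
# Both hypotheses contain the data, but the smaller one becomes far more probable.
```

Note how the structured part (the hypothesis sets) and the statistical part (priors and likelihoods) play separate, composable roles.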

The working hypothesis that probability theory gives a formal account of human inductive inference establishes connections between cognitive science and current research in machine learning, artificial intelligence, and statistics. This means that probabilistic models of cognition can establish a route for ideas in these disciplines to be explored as explanations for how people learn, and for our investigation of human cognition to inform the development of new methods for making automated systems that learn.

Introductory papers on probabilistic models

More details on specific areas of research appear below.

Causal induction

Learning causal relationships is a classic inductive problem, requiring people to make an inference that goes beyond the data. We use tools from computer science and statistics, such as causal graphical models, to formalize this problem and explore the knowledge that guides human causal induction. This work is supported by the Air Force Office of Scientific Research.
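One way to see how causal graphical models formalize this problem is to compare two graphs on contingency data: a graph where only a background cause produces the effect, and a graph that adds a link from the candidate cause. The sketch below is a simplification (full Bayesian treatments integrate over parameters, often with a noisy-OR parameterization; here we use maximum-likelihood effect rates), and the counts are invented for illustration.

```python
from math import log

def log_lik(k, n, p):
    """Log-likelihood of k effects in n trials with effect probability p."""
    p = min(max(p, 1e-9), 1 - 1e-9)  # guard against log(0)
    return k * log(p) + (n - k) * log(1 - p)

def causal_evidence(k_cause, n_cause, k_base, n_base):
    """Log-likelihood ratio favoring a causal link (simplified sketch)."""
    # Graph 0: background cause only -- one effect rate for all trials.
    p0 = (k_cause + k_base) / (n_cause + n_base)
    ll0 = log_lik(k_cause, n_cause, p0) + log_lik(k_base, n_base, p0)
    # Graph 1: separate effect rates with and without the candidate cause.
    ll1 = (log_lik(k_cause, n_cause, k_cause / n_cause)
           + log_lik(k_base, n_base, k_base / n_base))
    return ll1 - ll0  # > 0 favors the graph with a causal link

# Effect on 7 of 8 trials with the cause, 1 of 8 without: strong evidence.
support = causal_evidence(7, 8, 1, 8)
```

When the effect rate is identical with and without the cause, the extra causal link buys nothing and the evidence is zero, capturing the intuition that covariation is what licenses causal inference.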

Papers on causal induction

Probabilistic reasoning

The errors that people make when reasoning about chance have historically been one of the main arguments against accounts of cognition based on Bayesian statistics. Our work shows that probabilistic models can work remarkably well as accounts of how people make predictions, detect randomness, and identify coincidences. This work is supported by the Air Force Office of Scientific Research.
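Randomness detection, for example, can be framed as Bayesian model comparison: did a fair coin produce this sequence, or a process with some regularity? The sketch below is illustrative rather than a specific published model; the "regular" alternative simply tends to repeat the previous outcome with a probability we assume here to be 0.8.

```python
from math import log

def log_odds_random(seq, p_repeat=0.8):
    """Log posterior odds of 'fair coin' vs. 'repeating process',
    assuming equal prior probability for the two hypotheses."""
    ll_fair = len(seq) * log(0.5)
    ll_repeat = log(0.5)  # the first outcome is unconstrained
    for prev, cur in zip(seq, seq[1:]):
        ll_repeat += log(p_repeat if cur == prev else 1 - p_repeat)
    return ll_fair - ll_repeat  # > 0 favors the fair coin

streaky = log_odds_random("HHHHHHHH")      # long run: evidence of regularity
alternating = log_odds_random("HTHTHTHT")  # alternation: fair coin wins here
```

Under this comparison a long streak counts as evidence against randomness while an alternating sequence does not, which is the qualitative pattern such models use to explain when people cry "coincidence."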

Papers on probabilistic reasoning

Similarity and categorization

Determining the operations and representations that lead us to consider objects similar and to classify them into categories is a central problem in cognitive psychology. By formulating this problem in statistical terms, we use tools from Bayesian statistics to explore the kinds of structure that support people's similarity judgments, to examine how the data that people see influence how they form categories, and to answer questions about how people might meet the computational challenges posed by probabilistic inference over structured representations. This work is supported by the Air Force Office of Scientific Research.
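In statistical terms, categorization becomes an inference: assign a new object to whichever category makes its features most probable, weighing category base rates against feature likelihoods. The categories, features, and probabilities below are invented for illustration; real models in this area use much richer representations.

```python
def classify(item, categories):
    """Posterior over categories for an item with boolean features."""
    scores = {}
    for name, cat in categories.items():
        score = cat["prior"]
        for feature, value in item.items():
            # P(feature value | category), small floor for unmodeled values
            score *= cat["likelihoods"].get(feature, {}).get(value, 0.01)
        scores[name] = score
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

categories = {
    "bird": {"prior": 0.5,
             "likelihoods": {"flies": {True: 0.9, False: 0.1},
                             "fur": {True: 0.05, False: 0.95}}},
    "mammal": {"prior": 0.5,
               "likelihoods": {"flies": {True: 0.05, False: 0.95},
                               "fur": {True: 0.9, False: 0.1}}},
}
post = classify({"flies": True, "fur": False}, categories)
```

Questions about similarity then become questions about which feature representations and likelihood functions best capture human judgments.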

Papers on similarity and categorization

Statistical models of language

Language acquisition has been the focus of many debates in cognitive science, particularly about inductive biases -- the constraints that are needed to explain how people learn language from the available evidence. We use Bayesian models to explore how assumptions about the nature of language affect learning of basic components of linguistic structure, such as words and their properties. Linguistic corpora are also one of the best sources of information about the statistical structure of the environment in which we live, and models that capture this structure can be used to make predictions about language processing and memory, and to solve problems such as identifying which documents are similar to one another or when people are talking about a specific topic in a meeting. This work is supported by the National Science Foundation and DARPA.
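As a toy example of extracting statistical structure from text, the sketch below measures document similarity from word counts, one of the applications mentioned above. The models used in this line of work (such as probabilistic topic models) are far richer; this uses plain bag-of-words vectors and cosine similarity, with invented sentences, purely for illustration.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two documents' word-count vectors."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

related = cosine_similarity("the cat sat on the mat", "a cat on a mat")
unrelated = cosine_similarity("the cat sat on the mat", "stock prices rose sharply")
```

Even this crude statistic separates topically related documents from unrelated ones; probabilistic models sharpen it by inferring latent topics behind the word counts.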

Papers on statistical models of language

Nonparametric Bayesian statistics

People are able to form representations of increasing complexity as more data become available to them, forming more clusters or identifying more features of objects as the number of objects increases. Nonparametric Bayesian statistics provides a rational account of this process, making it possible to define probabilistic models that grow in complexity as more data become available. Our research explores applications of these ideas in cognitive science, as well as new nonparametric models that can be used in machine learning and statistics. This work is supported by the National Science Foundation.
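A standard starting point for such models is the Chinese restaurant process (CRP), a prior over partitions in which each new observation either joins an existing cluster in proportion to its size or starts a new cluster with probability controlled by a concentration parameter. The sketch below samples a partition from the CRP; the parameter values are arbitrary.

```python
import random

def crp_partition(n, alpha, rng):
    """Sample cluster sizes for n observations from a CRP with concentration alpha."""
    clusters = []  # sizes of the clusters formed so far
    for i in range(n):  # i observations have already been seated
        if rng.random() < alpha / (i + alpha):
            clusters.append(1)  # start a new cluster
        else:
            r = rng.random() * i  # join an existing cluster, by size
            for k, size in enumerate(clusters):
                r -= size
                if r < 0:
                    clusters[k] += 1
                    break
    return clusters

rng = random.Random(0)
partition = crp_partition(100, 1.0, rng)
# The expected number of clusters grows roughly like alpha * log(n),
# so representational complexity increases as more data arrive.
```

This is what lets the resulting models "grow": the number of clusters is not fixed in advance but inferred from, and bounded only by, the data.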

Papers on nonparametric Bayesian statistics

Cultural evolution and iterated learning

Most of cognitive science focuses on individual learners. Our research on cultural evolution asks how the properties of individual learners affect the outcome of transmission of information among large numbers of people. In particular, we explore how the inductive biases of Bayesian learners influence the outcome of processes of "iterated learning," where the data seen by one learner are generated by another learner. We combine mathematical analyses of these processes with experiments in the laboratory, using iterated learning as a first step towards a deeper understanding of processes of cultural evolution, and as a novel method for exploring the inductive biases of human learners. This work is supported by the National Science Foundation.
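A minimal simulation makes the logic of iterated learning concrete: each Bayesian learner sees data produced by the previous learner, samples a hypothesis from its posterior, and generates data for the next learner. A known analytical result for this setup is that the chain converges to the learners' prior; the two-hypothesis world and the specific probabilities below are invented for illustration.

```python
import random

PRIOR = {"A": 0.8, "B": 0.2}
# P(datum | hypothesis): each hypothesis mostly produces its own signal.
LIKELIHOOD = {"A": {"a": 0.9, "b": 0.1}, "B": {"a": 0.1, "b": 0.9}}

def learn(datum, rng):
    """Sample a hypothesis from the posterior given a single datum."""
    weights = {h: PRIOR[h] * LIKELIHOOD[h][datum] for h in PRIOR}
    z = sum(weights.values())
    return "A" if rng.random() < weights["A"] / z else "B"

def produce(hypothesis, rng):
    """Generate a datum from the learner's hypothesis."""
    return "a" if rng.random() < LIKELIHOOD[hypothesis]["a"] else "b"

def iterate(generations, rng, start="b"):
    """Run one chain of learners; return the last learner's hypothesis."""
    datum, hyp = start, None
    for _ in range(generations):
        hyp = learn(datum, rng)
        datum = produce(hyp, rng)
    return hyp

rng = random.Random(1)
final = [iterate(20, rng) for _ in range(2000)]
frac_a = final.count("A") / len(final)
# frac_a approaches the prior probability of "A" (0.8), even though
# every chain starts from data that favor hypothesis "B".
```

This convergence-to-the-prior result is what makes iterated learning useful in reverse: observing where transmission chains end up reveals the inductive biases of the learners.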

Papers on cultural evolution and iterated learning

© 2024 Computational Cognitive Science Lab  |  Department of Psychology  |  Princeton University