Faculty
Tom Griffiths, Lab Director
(webpage)
Postdocs
Jianqiao Zhu
I am interested in understanding the rational and computational principles of human cognition, particularly in relation to judgment and decision making. My work is motivated by my love for probability and its application to the psychology of chance, including the idea that coherent judgments should follow the same mathematical structure as probabilities. In practice, however, people's judgments about chance often violate the rules of probability theory (for example, rating a conjunction of events as more probable than one of its constituents). To address this discrepancy between the normative and descriptive aspects of probability judgment and decision theory, I use a combination of computational statistics and Bayesian machine learning, with the Bayesian sampler model being my most recent approach.
Tom McCoy
(webpage) What type of computational system is the mind? I approach this question from the perspective of language, spanning the divide between linguistics and artificial intelligence. I focus on two core topics: reconciling neural and symbolic computation (how can a neural network, such as the brain, represent language - a domain traditionally viewed as symbolic?) and characterizing people's learning biases (how do people learn nuanced linguistic phenomena from so little data?). Through these research directions, I aim to bring AI and cognitive science into closer contact, so that progress in AI can improve our understanding of human cognition, and so that insights from cognitive science can inform the construction of more robust AI systems.
Evan Russek
Everyday decisions often require solving large computational problems. An impressive feature of human cognition is the ability to arrive at good approximate solutions to these problems. These approximations work quite well in a large set of situations, yet can sometimes lead to mistakes. In my research I’m interested in understanding the types of strategies and representations that allow us to reach these good solutions. I’m also interested in understanding whether trade-offs between making good decisions and preserving resources can help us understand choice biases, habits, and what situations our thoughts are drawn to when we decide. In the past, I’ve studied these questions using a mix of computational modeling of behavior and neuroimaging. Recently, I’ve been utilizing large online datasets of individuals playing board games like chess.
Jake Snell
(webpage) I am interested in building machine learning algorithms that are adaptive, robust, and reliable. Much of my work centers on meta-learning and continual learning, where a learning algorithm must quickly adapt to new circumstances. I enjoy using tools such as deep learning, Bayesian nonparametrics, state-space models, and distribution-free uncertainty quantification to solve these challenging tasks. I received my Ph.D. in 2021 from the machine learning group at the University of Toronto under the supervision of Richard Zemel.
Ilia Sucholutsky
(webpage) I’m fascinated by deep learning and its ability to reach superhuman performance on so many different tasks. I want to better understand how neural networks achieve such impressive results… and why sometimes they don’t. Recently, I've been focused on improving deep learning in small data settings. The current paradigm in AI research is to train large models on large datasets using massive computational resources. While this trend does lead to improvements in predictive power, it leaves behind the multitude of researchers, companies, and practitioners who do not have access to sufficient funding, compute power, or volume of data. I'm interested in developing data-efficient methods that can help rectify this growing divide.
Cameron Turner
I am interested in the evolution of cognition. To make successful decisions, animals must use information from the environment, including information used for learning. For instance, a dove that wants to avoid hawks should both learn what a hawk looks like and detect cues indicating whether a hawk is present. I believe much about cognitive evolution can be understood by thinking about the quality and outcomes of using information. I also have a particular interest in social learning, which results from using information provided by others. I employ mathematical models to study how selection affects cognition, and I conduct empirical research to study how learning operates. I am part of the Diverse Intelligences project, which aims to discover why animals differ in intelligence.
Bonan Zhao
(webpage) I am fascinated by the diversity of concepts and ideas that people can come up with. I draw insights from computational models of concept learning and hypothesis generation, and test my theories using online behavioral experiments. In particular, I study how people reuse their existing knowledge to grow new ideas, and how such processes, under resource constraints, lead to diverse learning outcomes for different individuals in the same environment.
Graduate Students
Gianluca Bencomo
(webpage) Many of the computational problems humans solve are some flavor of Bayesian inference. Yet it is difficult to reconcile these Bayesian computations with the algorithm that carries them out: a neural network. I am interested in various forms of approximate Bayesian inference and meta-learning, with the hope of bridging this gap.
Carlos Correa
(webpage) Human behavior has rich structure, resulting from the resourceful combination of many decision-making strategies like online planning, action hierarchies, and heuristics. How do people learn and use these varied decision-making strategies? I’ve primarily focused on studying these questions in the context of hierarchical planning, developing a normative theory of hierarchy choice and experimental paradigms to measure explicit representations of hierarchy choice. I am also developing computational models to understand how heuristic strategies for decision-making are learned.
Matt Hardy
(website) Every day, people encounter a never-ending set of complicated decisions, difficult tradeoffs, and unforeseeable developments. How do people navigate this complexity and uncertainty? I study this question using computational modeling, behavioral experiments, and observational data analysis. I am especially interested in investigating psychological phenomena in individuals situated in social networks and groups. Cognition is often studied as an isolated process, with results from simple, individual-level experiments used to predict behavior in real-world domains. However, people rarely make decisions in isolation, and many of life's dilemmas would be impossible or intractable to solve alone. A better understanding of the relationship between individual and group cognition is key to understanding how people thrive in the complexity and uncertainty the real world presents.
Ham Huang
(website) My general research interest is the computational cognitive science of human aggregate minds. How do the cognitive properties of individual human minds and brains create emergent properties of human interactions and group behaviors, and how does information from group and interactive settings shape individual cognitive mechanisms? How does working collaboratively as a group make some computational problems easier, and what new computational problems does it uniquely impose?
Sreejan Kumar
(website) One hallmark of human cognition is the ability to form abstract representations that let us solve complex problems from relatively small amounts of data and generalize strongly to other problems. Some cognitive scientists have proposed that the acquisition of these abstractions can be modeled as Bayesian inference over discrete, symbolic, and structured representations such as graphs, grammars, predicate logic, and programs. Others have argued that abstract knowledge can instead be modeled as an emergent phenomenon of statistical learning in distributed, sub-symbolic systems with relatively unstructured representations. Modern deep learning research has developed a variety of architectures for distributed, sub-symbolic systems that can solve difficult tasks, but the conditions under which human-like abstractions emerge in these systems remain unclear. I am interested in combining symbolic probabilistic models with novel cognitive tasks that require abstract problem solving to formally characterize how humans acquire and use abstract knowledge. I am also interested in using the same paradigm to identify the architectures and training regimes in which modern deep learning systems can solve these problems with human-like abstractions.
Alexander Ku
(website) I'm interested in understanding the computational and statistical basis for language acquisition: in particular, mental representations of language and how those representations are grounded in similarity, categorization, and perception. My research broadly falls under the umbrella of computational cognitive science, natural language processing, and machine learning.
Ryan Liu
(website) The central focus of my research is exploring how large language models can transform the ways our society communicates and learns information. Within this, my two overarching agendas are (1) to create more efficient and effective means for us to communicate and learn, and (2) to ensure that our methods of communication and learning remain genuine and authentic to our own experiences.
Raja Marjieh
How do humans derive semantic representations from perceptual objects? What are the computational principles underlying their structure? How can we characterize them? In my research, I engage with these problems by leveraging large-scale online experiments and designing paradigms that implement a human instantiation of various algorithms from physics and machine learning. I am also interested in understanding how these representations are modulated by social interactions, especially in the context of creative and aesthetic processes, such as what constitutes a pleasant chord or melody.
Elizabeth Mieczkowski
(website) I study how humans collaborate to solve complex tasks, taking inspiration from multiprocessing systems and computer architecture to formulate precise computational theories that can be tested on human behavior. Currently, I am interested in division of labor and how it can be related to parallel versus serial processing. When and how do groups of people parallelize tasks to effectively minimize time and energy usage? How do we maintain global task coherence when dividing subtasks amongst groups?
Kerem Oktar
(website) My research aims to clarify the psychological and computational basis of disagreement – across scales, domains, and agents – from definition to intervention. I also study decision-making; in particular, I am interested in understanding people's preferences for relying on intuition vs. deliberation. I take a two-pronged approach to studying these questions. To generate theories, I combine insights from analytical philosophy, probability theory, and empirical psychology. To test these theories, I use behavioral experiments, computational models, and statistics.
Sunayana Rane
To create AI systems that behave in ways we expect, or even share our "values," we need alignment at a more fundamental level. I work primarily on conceptual alignment between AI models and human cognition. How can we train AI systems to understand concepts in a human-like way? At what level (e.g. representational, behavioral) is alignment necessary to produce the behavior we expect? Can we use cognition-inspired methods to better understand AI models by contextualizing their behavior with respect to child and human behavior?
Ted Sumers
(website) Language is the bedrock of human society, yet despite intensive study its role in cognition remains mysterious. My research combines reinforcement learning and language games to explore human communication in complex decision-making settings. In particular, I'm using formalisms from reinforcement learning to develop rational models of joint action. I'm applying these insights to both advance our understanding of uniquely human cognition (e.g., how language supports cultural evolution) and develop artificial intelligence which can interact successfully with people (e.g., improving natural language interfaces for value alignment).
Andrea Wynn
(website) I am primarily interested in exploring two topics: how to build machine learning models that are aligned with human moral values, and how to build more robust and effective machine learning models by drawing on models of human cognition. How can we ensure that machine learning models and humans are aligned in their definitions of success? Why are some tasks extremely difficult for machine learning models to complete, yet seem to be second nature for humans? These are some of the questions that I seek to answer in my research.
Xuechunzi Bai
(website) Broadly, I am interested in applying computational methods and formal models to classic social psychological ideas. Currently, I am interested in two topics, both of which investigate the collateral damage that can follow from an otherwise functional process. In one line of research, I examine how inaccurate stereotypes can result from rational exploration. In another line of research, I explore how social inequality can emerge from rational resource transmission.
Liyi Zhang
(website) Deep learning is powerful, yet its reasoning process is elusive. Meanwhile, Bayesian probabilistic models provide an elegant way of explicitly summarizing human understanding and can be combined with deep learning in a variety of ways. This observation motivates me to work on topics including, but not limited to: using probabilistic models as a proxy to distill knowledge from and instill knowledge into deep learning models, evaluating and improving deep learning's uncertainty estimation, and developing scalable and effective approximate inference methods for probabilistic models.
Shelley Xia
I am broadly interested in cognitively inspired AI. Specifically, I aim to improve the efficiency, reliability, and interpretability of current AI models using insights from human cognition. Currently, I am working on adding metareasoning to the inference process of deep neural networks.
Lab Manager
Logan Nelson
Abstract representations provide a basis for generalization, reasoning, inferences from sparse data, etc. Bayesian probabilistic models capture abstraction well, but it is less clear how neural networks could embody and manipulate such abstractions. I hope to illuminate this problem through novel ways of visualizing and probing neural networks. Furthermore, I am interested in improving the interpretability of deep learning systems through these approaches.