Faculty

Tom Griffiths, Lab Director
(webpage)

Postdocs

Dilip Arumugam
(webpage) My research is primarily centered on understanding principled, practical approaches for achieving data efficiency in reinforcement learning. This often takes the form of directed research into the specific generalization, exploration, and credit-assignment challenges faced by reinforcement-learning agents. In recent years, I've grown fond of information theory as a suite of tools that facilitates rigorous analysis while also remaining amenable to the design of scalable agents. I'm also interested in how insights for engineering sample efficiency into computational decision-making agents fruitfully inform our understanding and reverse engineering of sample efficiency in biological decision-making agents.
Jianqiao Zhu
(website) I am interested in understanding the rational and computational principles of human cognition, particularly in relation to judgment and decision-making processes. My work has been motivated by my love for probability and how it can be applied in the psychology of chance. This includes the idea that coherent judgments should adhere to the same mathematical structure as probabilities. However, it is not uncommon for human psychology to violate the rules of probability theory when making judgments about chance. To address this discrepancy between normative and descriptive aspects of judgmental probability and decision theory, I use a combination of computational statistics and Bayesian machine learning, with the Bayesian sampler model being my most recent approach.
Ionatan Kuperwajs
(website) I'm interested in understanding how people make decisions and plan sequences of actions in complex environments. Despite the ubiquity of sequential decision-making in naturalistic behavior, the study of the cognitive mechanisms underlying such decisions has been primarily limited to relatively simple tasks. Meanwhile, artificial intelligence has developed powerful algorithms that solve a wide array of problems in large state spaces. In my research, I aim to bridge this gap by applying computational methods to, and building process-level models of, human behavior in tasks where evaluating every possible course of action is intractable. I've approached this problem by leveraging massive online data sets of participants playing combinatorial games.
Ella Qiawen Liu
(website) My research explores how human and artificial minds flexibly draw similarities between seemingly unrelated concepts and make analogical inferences. I’m particularly interested in how language influences our perception and formation of similarities, the evolution of word meanings, and the role of cross-domain mappings in shaping how we think, judge, communicate, and innovate. I combine empirical methods with computational modeling to investigate these questions.
Evan Russek
(website) Everyday decisions often require solving large computational problems. An impressive feature of human cognition is the ability to arrive at good approximate solutions to these problems. These approximations work quite well in a large set of situations, yet can sometimes lead to mistakes. In my research I’m interested in understanding the types of strategies and representations that allow us to reach these good solutions. I’m also interested in understanding whether trade-offs between making good decisions and preserving resources can help us understand choice biases, habits, and what situations our thoughts are drawn to when we decide. In the past, I’ve studied these questions using a mix of computational modeling of behavior and neuroimaging. Recently, I’ve been utilizing large online datasets of individuals playing board games like chess.
Jake Snell
(webpage) I am interested in building machine learning algorithms that are adaptive, robust, and reliable. Much of my work centers on meta-learning and continual learning, where a learning algorithm must quickly adapt to new circumstances. I enjoy using tools such as deep learning, Bayesian nonparametrics, state-space models, and distribution-free uncertainty quantification to solve these challenging tasks. I received my Ph.D. in 2021 from the machine learning group at the University of Toronto under the supervision of Richard Zemel.
Cameron Turner
I am interested in the evolution of cognition. For animals to make successful decisions, they must use information from the environment, including using information to learn. For instance, if a dove wants to avoid hawks adaptively, it should both learn what a hawk looks like and detect cues indicating whether a hawk is present. I believe much about cognitive evolution can be understood by thinking about the quality and outcomes of using information. I also have a particular interest in social learning, which results from using information from others. I employ mathematical models to study how selection affects cognition, and I conduct empirical research to study how learning operates. I am part of the Diverse Intelligences project, which aims to discover why animals differ in intelligence.
Akshay Jagadish
(webpage) What are the fundamental building blocks of human and machine intelligence? To answer this question, my research takes two complementary directions: building scalable, sub-symbolic models of cognition in humans (and machines) following frameworks such as meta-learning, reinforcement learning, ecological adaptation, and resource-rationality; and developing AI-driven methods to uncover symbolic programs that explain human (and machine) behavior along with the internal representations that guide them. Before moving to Princeton, I spent six wonderful years in Tübingen, Germany, where I completed a Ph.D. in Computer Science and an M.Sc. in Computational Neuroscience, working closely with Eric Schulz and Marcel Binz.

Graduate Students

Max Gupta
(webpage) Human learners can often outperform machines in data-limited regimes by making creative use of cognitive mechanisms like mental simulation, analogy, and relational reasoning. I am interested in how human learners spontaneously form the representations necessary to support such processes and how we can build machines that learn to learn more effectively by simulating such processes. To this end, I am interested in the underlying computations and representations that drive abstract reasoning in humans and machines, with the goal of nudging them into closer alignment.
Ham Huang
(website) My research interest is in the computational cognitive science of human aggregate minds. How do the cognitive properties of each individual human mind and brain create emergent properties of human interactions and group behaviors, and how does information from group and interactive settings shape individual cognitive mechanisms? How does working collaboratively as a group make some computational problems easier, and what new computational problems does it uniquely impose?
Alexander Ku
(website) My research explores how human intelligence reflects the structure and statistics of natural and artificial curricula, and how this insight can inform the development of artificial intelligence. While computational neuroscience and machine learning often focus on identifying architectural inductive biases that influence learning, my goal is to identify algorithmic inductive biases that are shaped by data. I'm particularly interested in understanding the curricular factors that enable symbolic reasoning and abstraction in neural networks.
Ryan Liu
(website) The central focus of my research is analyzing how large language models can transform the way our society communicates and learns. Within this, my two overarching agendas are (1) to create more efficient and effective means for us to communicate and learn, and (2) to ensure that our communication and learning methods remain genuine and authentic to our own experiences.
Raja Marjieh
(website) How do humans derive semantic representations from perceptual objects? What are the computational principles underlying their structure? How can we characterize them? In my research, I engage with these problems by leveraging large-scale online experiments and designing paradigms that implement a human instantiation of various algorithms from physics and machine learning. I am also interested in understanding how these representations are modulated by social interactions, especially in the context of creative and aesthetic processes, such as what constitutes a pleasant chord or melody.
Elizabeth Mieczkowski
(website) I study how humans collaborate to solve complex tasks, taking inspiration from multiprocessing systems and computer architecture to formulate precise computational theories that can be tested on human behavior. Currently, I am interested in division of labor and how it can be related to parallel versus serial processing. When and how do groups of people parallelize tasks to effectively minimize time and energy usage? How do we maintain global task coherence when dividing subtasks amongst groups?
Sunayana Rane
To create AI systems that behave in ways we expect, or even share our "values," we need alignment at a more fundamental level. I work primarily on conceptual alignment between AI models and human cognition. How can we train AI systems to understand concepts in a human-like way? At what level (e.g. representational, behavioral) is alignment necessary to produce the behavior we expect? Can we use cognition-inspired methods to better understand AI models by contextualizing their behavior with respect to child and human behavior?
Phoebe Zeng
(website) If math were erased from human memory, it seems likely we'd reinvent a similar mathematics. My research aims to interrogate the nature of this tight connection between human cognition and human mathematics, in hopes that it can teach us how to design artificial mathematicians.
Liyi Zhang
(website) Deep learning is powerful, yet its reasoning process is elusive. Meanwhile, Bayesian probabilistic models provide an elegant way of explicitly summarizing human understanding and can join with deep learning in different ways. This observation motivates me to work on topics including but not limited to: using probabilistic models as a proxy to distill knowledge from and instill knowledge to deep learning models, evaluating and improving deep learning’s uncertainty estimation, and developing scalable and effective approximate inference methods for probabilistic models.

Lab Manager

Kathryn McGregor
I am interested in using insights from human behavior to better understand, use, and build AI models. I approach these questions through the lenses of planning, decision-making, and collaboration. By distilling human processes into computational models, I hope to build more efficient and interpretable programs.