Faculty

Tom Griffiths, Lab Director
(webpage)

Postdocs

Dilip Arumugam
(webpage) My research is primarily centered around understanding principled, practical approaches for achieving data efficiency in reinforcement learning. This often takes the form of directed research into the specific generalization, exploration, and credit assignment challenges faced by reinforcement-learning agents. In recent years, I've grown fond of information theory as a suite of tools that facilitate rigorous analysis while also remaining amenable to the design of scalable agents. I'm also interested in how insights for engineering sample efficiency into computational decision-making agents fruitfully inform our understanding and reverse engineering of sample efficiency in biological decision-making agents.
Jianqiao Zhu
(website) I am interested in understanding the rational and computational principles of human cognition, particularly in relation to judgment and decision-making processes. My work has been motivated by my love for probability and how it can be applied in the psychology of chance. This includes the idea that coherent judgments should adhere to the same mathematical structure as probabilities. However, it is not uncommon for human psychology to violate the rules of probability theory when making judgments about chance. To address this discrepancy between normative and descriptive aspects of judgmental probability and decision theory, I use a combination of computational statistics and Bayesian machine learning, with the Bayesian sampler model being my most recent approach.
Ionatan Kuperwajs
(website) I’m interested in understanding how people make decisions and plan sequences of actions in complex environments. Despite the ubiquity of sequential decision-making in naturalistic behavior, the study of the cognitive mechanisms underlying such decisions has been primarily limited to relatively simple tasks. Meanwhile, artificial intelligence has developed powerful algorithms that solve a wide array of problems in large state spaces. In my research, I aim to bridge this gap by applying computational methods to human behavior and building process-level models of it in tasks where evaluating every possible course of action is intractable. I’ve approached this problem by leveraging massive online data sets of participants playing combinatorial games.
Evan Russek
(website) Everyday decisions often require solving large computational problems. An impressive feature of human cognition is the ability to arrive at good approximate solutions to these problems. These approximations work quite well in a large set of situations, yet can sometimes lead to mistakes. In my research I’m interested in understanding the types of strategies and representations that allow us to reach these good solutions. I’m also interested in understanding whether trade-offs between making good decisions and preserving resources can help us understand choice biases, habits, and what situations our thoughts are drawn to when we decide. In the past, I’ve studied these questions using a mix of computational modeling of behavior and neuroimaging. Recently, I’ve been utilizing large online datasets of individuals playing board games like chess.
Jake Snell
(webpage) I am interested in building machine learning algorithms that are adaptive, robust, and reliable. Much of my work centers on meta-learning and continual learning, where a learning algorithm must quickly adapt to new circumstances. I enjoy using tools such as deep learning, Bayesian nonparametrics, state-space models, and distribution-free uncertainty quantification to solve these challenging tasks. I received my Ph.D. in 2021 from the machine learning group at the University of Toronto under the supervision of Richard Zemel.
Cameron Turner
I am interested in the evolution of cognition. For animals to make successful decisions, they must use information from the environment, including using information to learn. For instance, if a dove wants to avoid hawks adaptively, it should both learn what a hawk looks like and detect cues indicating whether a hawk is present. I believe much about cognitive evolution can be understood by thinking about the quality and outcomes of using information. I also have a particular interest in social learning, which results from using information from others. I employ mathematical models to study how selection affects cognition, and I also conduct empirical research to study how learning operates. I am part of the Diverse Intelligences project, which aims to discover why animals differ in intelligence.
Bonan Zhao
(webpage) I am fascinated by the diversity of concepts and ideas that people can come up with. I draw insights from computational models of concept learning and hypothesis generation, and test my theories using online behavioral experiments. In particular, I study how people reuse their existing knowledge to grow new ideas, and how such processes, under resource constraints, lead to diverse learning outcomes for different individuals in the same environment.

Graduate Students

Ruaridh Mon-Williams
I’m interested in how humans collaborate and how AI agents can support them in collaborative scenarios. My research currently involves conducting large-scale behavioural experiments across a variety of tasks, AI agents and individuals to elucidate the mechanisms underpinning human interactions and to help advance human-AI teamwork.
Gianluca Bencomo
(webpage) Many of the computational problems humans solve are some flavor of Bayesian inference. Yet it is difficult to reconcile these Bayesian computations with the algorithm that implements them: a neural network. I am interested in various forms of approximate Bayesian inference and meta-learning, with the hope of bridging this gap.
Max Gupta
(webpage) Human learners can often outperform machines in data-limited regimes by making creative use of cognitive mechanisms like mental simulation, analogy, and relational reasoning. I am interested in how human learners spontaneously form the representations necessary to support such processes and how we can build machines that learn to learn more effectively by simulating such processes. To this end, I am interested in the underlying computations and representations that drive abstract reasoning in humans and machines, with the goal of nudging them into closer alignment.
Ham Huang
(website) My general research interest is in the computational cognitive science of human aggregate minds. How do the cognitive properties of each individual human mind and brain create emergent properties of human interactions and group behaviors, and how does information from group and interactive settings shape individual cognitive mechanisms? How does working collaboratively as a group make some computational problems easier, and what new computational problems does it uniquely impose?
Sreejan Kumar
(website) One hallmark of human cognition is the ability to form abstract representations to solve complex problems with relatively small amounts of data and strongly generalize these abstractions to other problems. Cognitive scientists have proposed that the acquisition of these abstractions can be modeled by Bayesian inference over discrete, symbolic, and structured representations such as graphs, grammars, predicate logic, and programs. However, some cognitive scientists have argued that abstract knowledge can be modeled as an emergent phenomenon of statistical learning in distributed, sub-symbolic systems with relatively unstructured representations. Modern deep learning research has developed a variety of architectures for distributed, sub-symbolic systems that can solve difficult tasks, but the conditions under which these systems develop human-like abstractions are unclear. I am interested in combining symbolic probabilistic models with novel cognitive tasks that require abstract problem solving to formally characterize how humans acquire and utilize abstract knowledge. I am also interested in using the same paradigm to identify the architectures and training regimes in which modern deep learning systems can solve these problems with human-like abstractions.
Alexander Ku
(website) My research explores how human intelligence reflects the structure and statistics of natural and artificial curricula, and how this insight can inform the development of artificial intelligence. While computational neuroscience and machine learning often focus on identifying architectural inductive biases that influence learning, my goal is to identify algorithmic inductive biases that are shaped by data. I'm particularly interested in understanding the curricular factors that enable symbolic reasoning and abstraction in neural networks.
Ryan Liu
(website) The central focus of my research is analyzing and exploring how large language models can transform how our society communicates and learns information. Within this, my two overarching agendas are (1) to create more efficient and effective means for us to communicate and learn, and (2) to ensure that our communication and learning methods remain genuine and authentic to our own experiences.
Raja Marjieh
How do humans derive semantic representations from perceptual objects? What are the computational principles underlying their structure? How can we characterize them? In my research, I engage with these problems by leveraging large-scale online experiments and designing paradigms that implement a human instantiation of various algorithms from physics and machine learning. I am also interested in understanding how these representations are modulated by social interactions, especially in the context of creative and aesthetic processes, such as what constitutes a pleasant chord or melody.
Elizabeth Mieczkowski
(website) I study how humans collaborate to solve complex tasks, taking inspiration from multiprocessing systems and computer architecture to formulate precise computational theories that can be tested on human behavior. Currently, I am interested in division of labor and how it can be related to parallel versus serial processing. When and how do groups of people parallelize tasks to effectively minimize time and energy usage? How do we maintain global task coherence when dividing subtasks amongst groups?
Kerem Oktar
(website) My research aims to clarify the psychological and computational basis of disagreement – across scales, domains, and agents – from definition to intervention. I also study decision-making; in particular, I am interested in understanding people's preferences for relying on intuition vs. deliberation. I take a two-pronged approach to studying these questions. To generate theories, I combine insights from analytical philosophy, probability theory, and empirical psychology. To test these theories, I use behavioral experiments, computational models, and statistics.
Sunayana Rane
To create AI systems that behave in ways we expect, or even share our "values," we need alignment at a more fundamental level. I work primarily on conceptual alignment between AI models and human cognition. How can we train AI systems to understand concepts in a human-like way? At what level (e.g. representational, behavioral) is alignment necessary to produce the behavior we expect? Can we use cognition-inspired methods to better understand AI models by contextualizing their behavior with respect to child and human behavior?
Liyi Zhang
(website) Deep learning is powerful, yet its reasoning process is elusive. Meanwhile, Bayesian probabilistic models provide an elegant way of explicitly summarizing human understanding and can join with deep learning in different ways. This observation motivates me to work on topics including but not limited to: using probabilistic models as a proxy to distill knowledge from and instill knowledge to deep learning models, evaluating and improving deep learning’s uncertainty estimation, and developing scalable and effective approximate inference methods for probabilistic models.

Lab Manager

Logan Nelson
I am fascinated by the inductive biases that support human cognition. The beauty of cognition is how much we can accomplish with such limited resources. I am working toward distilling human-like representations into deep learning models, enabling them to be more data efficient, interpretable, and aligned with humans. Humans also benefit from (probably innate) inductive biases for collaboration. I aim to experiment with distilling these into models before training, in contrast to the RLHF paradigm in alignment.