Li, M. Y., Grant, E., & Griffiths, T. L. (2023). Gaussian process surrogate models for neural networks. Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence. (pdf)
Dasgupta, I., Grant, E., & Griffiths, T. L. (2022). Distinguishing rule- and exemplar-based generalization in learning systems. Proceedings of the International Conference on Machine Learning. (pdf)
Tuli, S., Dasgupta, I., Grant, E., & Griffiths, T. L. (2021). Are Convolutional Neural Networks or Transformers more like human vision? Proceedings of the 43rd Annual Meeting of the Cognitive Science Society. (link)
Grant, E., Peterson, J. C., & Griffiths, T. L. (2019). Learning deep taxonomic priors for concept learning from few positive examples. Proceedings of the 41st Annual Conference of the Cognitive Science Society. (pdf)
Burns, K., Nematzadeh, A., Grant, E., Gopnik, A., & Griffiths, T. L. (2018). Exploiting attention to reveal shortcomings in memory models. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 378-380. (pdf)
Grant, E., Nematzadeh, A., & Griffiths, T. L. (2017). How can memory-augmented neural networks pass a false-belief task? Proceedings of the 39th Annual Conference of the Cognitive Science Society. (pdf)