I hold a PhD in Philosophy, awarded by the University of East Anglia in 2014. My doctoral research focused on debates concerning the ontology and epistemology of modeling in psychology and neuroscience. My postdoctoral project at the Center for Philosophy of Science in Pittsburgh (2014-2015) centered on understanding the relationship between the mechanistic and emergentist approaches promoted in cognitive psychology and systems neuroscience. Arguing for a broadly pragmatist framework for thinking about the heuristic and explanatory strategies used in scientific practice, I have focused primarily on how mathematics is used to support the relative independence of systemic approaches to neurobiological processes and to motivate the emergentist framework. In my current work, I investigate the epistemology of large-scale neural simulations. I am interested in clarifying the epistemic roles that computer simulations of vast collections of neural models play in generating predictions and explanations and in guiding in silico experiments that could lend further support to specific functional hypotheses in neuroscience. I also wish to explore how the questions raised by simulation methodologies in neuroscience relate to those discussed in climate science. More broadly, I am interested in philosophical accounts of explanation and analogical reasoning in both mathematics and empirical science.
Dates of Visit: October 2015 - September 2016
Project Title: The epistemology of large-scale neural simulations
Project Description: Neural simulations have been used in neuroscience primarily for heuristic and predictive purposes. Large (brain-scale), multi-model simulations have been claimed to fill in important details in modeling the human brain and to provide new insights into how its functioning and organization produce complex psychological behaviors. This project seeks to clarify the methodological and epistemological implications of using large-scale simulations in the study of brain and behavior. What can we learn from large-scale simulations? What types of errors are they prone to? How do they differ from other types of mathematical modeling used in neuroscience? Do large-scale simulations represent a new type of modeling, or are they better thought of as an engineering tool for organizing neuroscientific data and theory? Neural simulations have also been taken to extend, and sometimes even to substitute for, more traditional experimentation in neuroscience. What are the common features of, and the differences between, in vivo and in silico experimentation? Can simulations produce new knowledge about their target systems? Should we think of simulations as intermediaries between theoretical and experimental methods?