Methodological and inferential issues with using large language models in social science
PBS Departmental Seminar Series
The influence of large language models (LLMs) on social science research is already staggering, and only growing. In many ways, this is not surprising: LLMs are uniquely flexible, easy to access, and produce linguistic output that is easy to understand. However, with this flexibility and scope comes substantial exposure to misuse, misinterpretation, and misunderstanding. In this talk, Dr Jamie Cummings will describe examples of such misuse and misinterpretation in research, and show how these issues lead inferences about what LLMs can and cannot do astray. The talk will conclude with reflections on how we can conduct better, more robust research evaluating the capacities of LLMs for social science research.
Dr Jamie Cummings is a metascientist and senior postdoctoral researcher at the University of Bern, currently a visiting scholar at the University of Oxford. Before this, Dr Cummings spent several years as a postdoc and PhD student in psychology at Ghent University, Belgium. He is interested in research trustworthiness assessment and develops tools to assist researchers in this work (including RegCheck, https://regcheck.app). Dr Cummings is also part of the team leading the ERROR post-publication peer review bug bounty program (https://error.reviews). More generally, he is interested in applying forensic metascience methods to investigate the use (and misuse) of large language models in behavioural research.
LSE holds a wide range of events, covering many of the most controversial issues of the day, and speakers at our events may express views that cause offence. The views expressed by speakers at LSE events do not reflect the position or views of the London School of Economics and Political Science.