As a student in the Department of Anthropology – or taking an Anthropology course – it is important that you understand how you can use generative AI in your assessments, so that you can do your best in your studies.
LSE is a pedagogically diverse School spanning many disciplines, so there is no single approach to the use of generative AI across the School. Instead, the School has developed central guidance that allows different approaches to be taken by different academic departments and course convenors. You should familiarise yourself with the School's guidance, which is available here. Within the School's guidance, there are three positions to know about:
- Position 1: No authorised use of generative AI in assessment.
- Position 2: Limited authorised use of generative AI in assessment.
- Position 3: Full authorised use of generative AI in assessment.
In Anthropology we follow:
- Position 2: Limited authorised use of generative AI in assessment.
This means that generative AI tools can be used in specific ways for assessments in the Department of Anthropology.
The full Department of Anthropology Policy on Student Use of AI can be found below. You should read it carefully to understand what uses of AI are and are not permitted in the Department. Failing to follow this policy may result in an allegation of academic misconduct.
This policy applies only to courses taken in the Department of Anthropology. If you are taking a course from outside the Department, you are responsible for checking and abiding by the relevant policy for that course.
Department of Anthropology Policy on Student Use of AI
Artificial Intelligence tools, when guided by critical reflection and ethical engagement, can play a useful role in aspects of educational and scholarly practice. Yet, such tools can also be detrimental to learning outcomes and may degrade critical skills. The Department’s default position is that the use of AI by students is permitted for a limited range of specific purposes only, as detailed below. Exceptions to the default position may be made at the discretion of individual course convenors, in which case this will be clearly communicated via the course Moodle page.
The writing process. Text produced by any Generative AI system (e.g. ChatGPT, Claude, Copilot, Gemini) cannot be used in the writing of any summative or formative assessment. This includes using the output of AI prompts or searches directly or in edited or modified form, and citing or otherwise relying on such output as a source to support statements or arguments made in summative or formative work. The only acceptable use of AI in the writing process is for general advice about the essay as a form of expository writing and how to craft one well. For such acceptable uses, AI prompts should pertain to how to develop any essay in general, not an essay on a specific topic. Students should be aware that responses to such prompts are likely to be quite generic.
Students must not upload samples of their own writing to be improved or rewritten by generative AI tools such as ChatGPT, Claude or Copilot. This includes using software to translate original essays from another language into English for submission. This restriction does not apply to tools whose functionality is restricted to making purely stylistic suggestions, such as Grammarly and the AI technology currently built into Microsoft Word. For more specific guidance on which forms of editorial assistance are acceptable, please consult the LSE’s Statement on Editorial Help for Students’ Written Work.
Submitting text for assessment purposes that has been produced in part or in whole by AI will be considered academic misconduct. Where such misuse is suspected, students may be called for a viva to explain their writing and research process. We may also use automated systems and other techniques to identify possible inappropriate use of AI tools. In cases of suspected misconduct or plagiarism, School regulations and rules will be applied. More information can be found in the LSE's guidance on Assessment Discipline and Academic Misconduct.
Images. The use of AI tools to generate images (e.g. photos or diagrams) for inclusion as illustrations in certain forms of coursework (e.g. position pieces) is permitted where appropriate, but any such images must be correctly cited.
Learning support. The Department permits the use of AI tools for certain tasks that support learning, much as internet and database searches may be used. For example, AI tools may be used to explain terms, concepts or approaches; to help find research articles; to summarise trends in the literature; or to identify questions and avenues for further research. Students should also be aware that generative AI tools are still highly unreliable and frequently ‘hallucinate’ false or misleading information. This can include inventing sources that do not exist or misattributing ideas to authors. Students must therefore exercise caution when using any material generated in this way, and should be prepared to verify the accuracy of any outputs and apply good scholarly judgment.
Students must not upload course materials (including readings, syllabi, lecture slides, lecture recordings, or video transcripts) to Generative AI tools. Doing so can violate copyright rules and intellectual property rights; such practices are also likely to undermine the development of students’ critical skills in reading and analysis.
Classes and seminars. AI should not be used during classes or seminars, including for fact-checking or the auto-translation of spoken or written content, unless explicitly permitted by the instructor. Students are permitted to use AI tools to explore materials and test their understanding of ideas ahead of class discussions, but all contributions to discussions and presentations must be students’ own work, not AI-generated text that is read aloud.
Research and Fieldwork. The use of AI in original anthropological research and primary data analysis is generally permitted, with the proviso that doing so can raise ethical and legal issues that should be considered carefully on a case-by-case basis. Examples of permitted use may include using generative AI to stimulate questions or possible lines of enquiry for ethnographic fieldwork; transcribing and analysing recorded interviews, including via speech-to-text software; synthesising and summarising field notes; using AI-powered tools for literature mapping and discovery (e.g. Undermind or Elicit); or using the AI tools currently built into popular software packages such as NVivo or EndNote. Researchers must take all necessary steps to protect the privacy and anonymity of research participants and must inform themselves of how any data they upload will be stored and/or used, e.g. for the purposes of training models.
At present, we recommend the use of either Anthropic's Claude for Education or Microsoft Copilot for all permitted purposes, as these tools do not retain submitted data to train models under the terms of LSE's current licences.
A range of additional information and resources can be found on the LSE website. In particular, please refer to the Data Science Institute’s Guidance on the use of generative AI to support research; the Digital Skills Lab’s guidance on the use of Microsoft Copilot; the Eden Centre’s Artificial Intelligence, Education and Assessment page; and the Moodle self-study course Generative AI: Developing your AI Literacy.