Dr Ghita Berrada

Assistant Professor (Education)

Data Science Institute

Connect with me

Languages
Arabic, English, French, Spanish
Key Expertise
Graph mining; Ethical ML/AI; Decision Support Systems; Time series analysis

About me

Ghita Berrada is an Assistant Professorial Lecturer at the Data Science Institute and a visiting lecturer at the School of Life Course & Population Sciences at King’s College London.

In 2015, she completed a PhD in Computer Science at the University of Twente (Netherlands). Her doctoral research aimed to design a medical data-sharing platform, using EEGs (electroencephalograms) as an example of medical data, so as to maximise the accessibility of the data/evidence and reduce the risk of misdiagnosis.

Prior to joining LSE, she was a Research Associate at the School of Informatics at the University of Edinburgh, where she investigated the use of several categorical anomaly detection techniques to detect advanced persistent threats (a type of cyberattack) from operating system provenance graphs. She was also a Research Associate at the School of Life Course & Population Sciences at King's College London, where she mainly worked on clinical decision support systems (e.g. for cancer diagnosis or antibiotic prescribing).

Broadly speaking, she is interested in 'decision support systems', in particular in domains such as healthcare or computer security. More specifically, her research to date has revolved around questions such as:

  • How do you design systems to assist users' decision-making when the available data is highly imbalanced (i.e. the classes of interest are varied and barely represented in the data), highly uncertain, and barely annotated? How do you design a good system when you are interested in the anomalies rather than the (majority) "normal" class?
  • How do you properly evaluate such systems?
  • Can we design "better" decision support systems by making use of provenance data/data provenance?
  • How do you present the results of such decision support systems in a way that is easily understandable by a human user (e.g. visualisation, explanations)?
  • How do we make such systems "ethical" by design and avoid their possible negative externalities (e.g. impact on human dignity)?

Expertise Details

Graph mining; Anomaly detection; Ethical ML/AI; Explainability; Rule mining/FCA; Machine Learning; Provenance; Databases; Decision Support Systems; Time series analysis