Large language models (LLMs) are a recently emerged, groundbreaking AI technology, powering services such as ChatGPT. However, major obstacles hinder their use in healthcare: problems of reliability, trust, and ‘hallucinations’ of fabricated information. Particular care is needed, as incorrect LLM outputs could lead to patient harm.
This studentship investigates how we can know when LLMs are accurate and reliable enough to use. The PhD student will analyse diverse LLM outputs and work with global experts in evidence synthesis and computer science to develop new methods for evaluating the quality of automated clinical guidelines. The student will then apply these methods to AI-generated evidence reports to understand if and when they are ready for use in practice.
This project will be linked with the Wellcome Trust-funded SOLACE-AI grant, which aims to pioneer AI-based evidence synthesis to tackle climate change health emergencies. The SOLACE-AI project aims to drastically reduce the time, effort, and cost of producing evidence syntheses. The syntheses created by SOLACE-AI will be on-demand and always up to date, and could address problems, countries, and issues of health equity which have too often been ignored. Getting high-quality evidence into the hands of those facing and dealing with climate health problems could lead to better decision-making and ultimately improve the lives of affected communities. The student will be embedded in this global project based at King’s, with the opportunity to work with collaborators (including in Africa and the US).

