Background
The advent of Large Language Models (LLMs) has revolutionized knowledge-intensive tasks by offering new tools to automate and streamline workflows. This development extends to academic research, where LLMs hold the potential to support various processes, including literature reviews.
Structured Literature Reviews (SLRs) serve a critical role in advancing scientific fields by systematically collecting and analyzing a large body of existing literature to identify trends, gaps, and new avenues for future research. However, conducting SLRs is time-consuming and labor-intensive, requiring researchers to search through extensive databases, read large quantities of articles, and make informed decisions on which works are most relevant. Given these demands, there is an opportunity to explore whether LLMs can assist in automating parts of the SLR process, particularly in identifying relevant literature efficiently.
While previous research, such as Antu et al. (2023), has explored the use of LLMs in structured literature reviews, there is still insufficient empirical evidence on whether LLMs can reliably produce accurate literature reviews. The current body of knowledge lacks comprehensive studies evaluating how well LLMs select relevant works from large collections of research, leaving open the question of whether LLMs can be trusted to assist in the SLR process.
Research Goal
This thesis aims to investigate how reliably LLMs can select relevant works from an existing body of literature. Answering this question is critical for understanding the potential of LLMs to streamline the literature review process, as reliable automation could significantly reduce the time and effort required of researchers across disciplines. If LLMs can consistently identify key literature with a high degree of accuracy, they could revolutionize research methodologies by making large-scale reviews far more feasible. Such advances would have wide-reaching implications in any field where literature aggregation is essential, from medicine to the social sciences to engineering.
Methodology
To address the research question, we propose a comparative analysis in which existing SLRs are recreated using LLMs. By replicating previously conducted SLRs whose relevant literature has already been established, the accuracy of LLM-based selection can be measured quantitatively against the "ground truth" provided by human researchers. This approach allows for a rigorous assessment of how closely the LLMs' selections align with those made by experts, providing clear insights into the reliability of LLMs in the context of structured literature reviews.
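To make the proposed comparison concrete, the sketch below illustrates one way such an evaluation could look: an LLM is prompted to judge each candidate paper, and its decisions are scored against the inclusion labels of a published SLR. The model, prompt, and toy data here are illustrative placeholders, not a prescribed setup for the thesis.

```python
# Minimal sketch of the proposed evaluation: score LLM relevance decisions
# against the inclusion decisions of an existing SLR. Model and prompt are
# placeholders; any instruction-tuned model on HuggingFace could be swapped in.
from transformers import pipeline

# Hypothetical candidate papers with ground-truth labels taken from a
# published SLR (included=True means the human reviewers selected the paper).
papers = [
    {"title": "LLMs for Literature Screening", "included": True},
    {"title": "A Survey of Graph Databases", "included": False},
]

generator = pipeline("text-generation", model="gpt2")

def llm_says_relevant(title: str) -> bool:
    """Ask the model whether a paper is relevant to the review topic."""
    prompt = (
        f"Is the paper '{title}' relevant to a review of "
        "LLM-assisted literature reviews? Answer yes or no:"
    )
    out = generator(prompt, max_new_tokens=3)[0]["generated_text"]
    # Inspect only the model's continuation, not the echoed prompt.
    return "yes" in out[len(prompt):].lower()

# Precision and recall of the LLM's selections vs. the human ground truth.
predictions = [llm_says_relevant(p["title"]) for p in papers]
tp = sum(pred and p["included"] for pred, p in zip(predictions, papers))
fp = sum(pred and not p["included"] for pred, p in zip(predictions, papers))
fn = sum(not pred and p["included"] for pred, p in zip(predictions, papers))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
```

In practice, agreement could additionally be reported via F1 score or Cohen's kappa, and the prompt would incorporate the original SLR's actual inclusion and exclusion criteria.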
We are looking for candidates who:
- Are currently enrolled in a bachelor's or master's program at KIT.
- Have a strong foundation in machine learning and artificial intelligence.
- Are proficient in programming languages such as Python and have experience with AI frameworks like HuggingFace (to deploy Large Language Models).
- Possess excellent analytical, problem-solving, and communication skills.
Details
Start: Immediately
Duration: 6 months
Language: English/German for communication, English for the final thesis
Location: Up to you
How to Apply
Interested students should submit the following:
- A resume or CV highlighting relevant coursework, projects, and skills.
- A brief statement of interest explaining why you are interested in this project and how your background and skills make you a suitable candidate.
- Any relevant academic transcripts or references.
Contact Information
For more information or to submit your application, please contact us at:
Join us in pushing the boundaries of artificial intelligence and contributing to the use of language models in research. We look forward to working with talented, driven students who are ready to take on this exciting challenge.
References
Antu, Shouvik Ahmed, et al. "Using LLM (Large Language Model) to Improve Efficiency in Literature Review for Undergraduate Research." LLM@AIED (2023).