Abstract
Access to realistic system requirement specifications (SyRS) is often limited due to confidentiality, proprietary constraints, or the high cost of expert involvement. This presents a significant challenge for software engineering tasks that depend on structured, domain-specific artefacts for evaluation, such as the evaluation of constraint-based recommender systems. Recent advances in large language models (LLMs) offer a potential solution for generating synthetic substitutes for real-world requirements, but systematic methods for doing so remain under-explored.
This thesis investigates how LLMs can be used to generate high-quality Synthetic System Requirement Specifications (SSRS) in domains where real-world SyRS or expert input is unavailable. To address this, a structured, repeatable process called SSRS-Gen is introduced. The process integrates scientifically grounded prompt engineering techniques, custom evaluation metrics, and LLM-based self-assessment strategies. It was applied in the context of the SecuRe security recommender system and tested across ten distinct industry domains, iteratively generating and evaluating a total of 300 SSRS instances.
Results show that prompt patterns such as Template and Persona significantly improved the structural consistency and contextual plausibility of the generated SSRSs. Self-assessment techniques were effective in capturing structural completeness and identifying internal inconsistencies within an SSRS, but showed limitations when evaluating aspects of realism that require nuanced, in-depth domain-specific knowledge. Expert evaluation revealed a high degree of alignment with LLM-based assessments in many cases, while also identifying recurring weaknesses such as oversimplification, generic phrasing, and overly optimistic requirements.
This work contributes an initial, structured approach to SSRS generation using LLMs and highlights both the promise and current limitations of automated SSRS generation. It provides a foundation for future research into the role of LLMs in requirements engineering and evaluation workflows.
Project information
Status: Finished
Type: Master's thesis
Author: Florian Braun
2025-006