
AI-based Question Answering Assistance for Analyzing Natural-language Requirements

Research output: Conference contribution/paper in book/report/proceedings (with ISBN/ISSN), peer-reviewed

Published
  • Saad Ezzini
  • Sallam Abualhaija
  • Chetan Arora
  • Mehrdad Sabetzadeh
Publication date: 14/05/2023
Host publication: 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1277-1289
ISBN (print): 9781665457026
Original language: English

Abstract

By virtue of being prevalently written in natural language (NL), requirements are prone to various defects, e.g., inconsistency and incompleteness. As such, requirements are frequently subject to quality assurance processes. These processes, when carried out entirely manually, are tedious and may further overlook important quality issues due to time and budget pressures. In this paper, we propose QAssist - a question-answering (QA) approach that provides automated assistance to stakeholders, including requirements engineers, during the analysis of NL requirements. Posing a question and getting an instant answer is beneficial in various quality-assurance scenarios, e.g., incompleteness detection. Answering requirements-related questions automatically is challenging since the scope of the search for answers can go beyond the given requirements specification. To that end, QAssist provides support for mining external domain-knowledge resources. Our work is one of the first initiatives to bring together QA and external domain knowledge for addressing requirements engineering challenges. We evaluate QAssist on a dataset covering three application domains and containing a total of 387 question-answer pairs. We experiment with state-of-the-art QA methods, based primarily on recent large-scale language models. In our empirical study, QAssist localizes the answer to a question to three passages within the requirements specification and within the external domain-knowledge resource with an average recall of 90.1% and 96.5%, respectively. QAssist extracts the actual answer to the posed question with an average accuracy of 84.2%.
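The retrieve-then-extract structure the abstract describes — rank candidate passages against a question, then extract an answer from the best ones — can be illustrated with a minimal sketch. The paper itself uses large-scale language models; the TF-IDF scorer below is a deliberately simplified, hypothetical stand-in for the retrieval step only, with invented function names and example requirements, not the authors' implementation.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_retrieve(question, passages, k=3):
    """Rank passages against the question by TF-IDF cosine similarity
    and return the top-k, mimicking the passage-retrieval step of a
    retrieve-then-extract QA pipeline (hypothetical sketch, not QAssist)."""
    docs = [tokenize(p) for p in passages]
    n = len(docs)
    # Document frequency of each term across the passage collection.
    df = Counter()
    for d in docs:
        df.update(set(d))

    def vec(tokens):
        tf = Counter(tokens)
        return {t: (1 + math.log(c)) * math.log(n / (1 + df[t]))
                for t, c in tf.items()}

    def cosine(a, b):
        num = sum(a[t] * b.get(t, 0.0) for t in a)
        den = (math.sqrt(sum(v * v for v in a.values())) *
               math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    q = vec(tokenize(question))
    ranked = sorted(range(n), key=lambda i: cosine(q, vec(docs[i])),
                    reverse=True)
    return [passages[i] for i in ranked[:k]]

# Toy requirements specification (invented for illustration).
passages = [
    "The system shall encrypt all stored patient records using AES-256.",
    "The user interface shall support English and French.",
    "Backups shall be performed nightly and retained for 90 days.",
    "The system shall log every access to patient records.",
]
top3 = tfidf_retrieve("How are patient records protected?", passages)
```

In the full approach, the same localization step would also run over an external domain-knowledge resource, and a separate machine-reading component would then extract the answer span from the retrieved passages.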