Enhancing User Trust and Interpretability in AI-Driven Feature Request Detection for Mobile App Reviews: An Explainable Approach

Detailed Bibliography
Published in: IEEE Access, Vol. 12, pp. 114023-114045
Main Authors: Gambo, Ishaya; Massenon, Rhodes; Lin, Chia-Chen; Ogundokun, Roseline Oluwaseun; Agarwal, Saurabh; Pak, Wooguil
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
ISSN: 2169-3536
Online Access: Full text available
Description
Summary: Mobile app developers struggle to prioritize updates by identifying feature requests within user reviews. While machine learning models can assist, their complexity often hinders transparency and trust. This paper presents an explainable Artificial Intelligence (AI) approach that combines advanced explanation techniques with engaging visualizations to address this issue. Our system integrates a bidirectional Long Short-Term Memory (BiLSTM) model with attention mechanisms, enhanced by Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). We evaluate this approach on a diverse dataset of 150,000 app reviews, achieving an F1 score of 0.82 and 89% accuracy, significantly outperforming baseline Support Vector Machine (SVM; F1: 0.66) and Convolutional Neural Network (CNN; F1: 0.72) models. Our empirical user studies with developers demonstrate that the explainable approach improves trust by 27% when explanations are provided and supports correct interpretation of predictions in 73% of cases. The system's interactive visualizations allowed developers to validate predictions, with over 80% overlap between model-highlighted phrases and human annotations of feature requests. These findings highlight the importance of integrating explainable AI into real-world software engineering workflows, and the reported results and future directions chart a promising path toward more transparent, trustworthy, and effective AI systems for feature request detection in app reviews.
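
To make the summarized pipeline concrete, here is a minimal sketch of a BiLSTM classifier with an additive attention layer, explained post hoc with LIME. This is an illustrative reconstruction based only on the abstract, not the authors' implementation: the hyperparameters, the attention formulation, the label scheme, and the vectorize helper (tokenization plus padding) are all assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from lime.lime_text import LimeTextExplainer

VOCAB_SIZE, MAX_LEN, EMB_DIM = 20_000, 200, 128  # assumed hyperparameters

def build_bilstm_attention():
    # Binary classifier: feature request (1) vs. other review content (0).
    inp = layers.Input(shape=(MAX_LEN,))
    x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inp)
    h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # Additive attention: score each timestep, normalize over time,
    # then take the attention-weighted sum of the BiLSTM states.
    scores = layers.Dense(1, activation="tanh")(h)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])
    out = layers.Dense(1, activation="sigmoid")(context)
    return Model(inp, out)

model = build_bilstm_attention()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(...) would train on labeled review texts before explaining.

def predict_proba(texts):
    # LIME requires a function from raw strings to class probabilities.
    # vectorize (tokenize + pad to MAX_LEN) is a hypothetical helper.
    p = model.predict(vectorize(texts))
    return np.hstack([1.0 - p, p])

explainer = LimeTextExplainer(class_names=["other", "feature request"])
exp = explainer.explain_instance(
    "Please add a dark mode option to the settings screen",
    predict_proba, num_features=6)
print(exp.as_list())  # token-level weights behind the prediction
# SHAP attributions could be obtained analogously, e.g. via shap.Explainer.

Highlighted tokens like these, overlaid on the review text, are the kind of output that presumably drives the interactive visualizations developers used to validate the model's predictions.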
DOI: 10.1109/ACCESS.2024.3443527