An explainable approach for depression prediction using deep learning algorithms

dc.contributor.author Abura, Jerome
dc.date.accessioned 2026-01-04T12:46:16Z
dc.date.available 2026-01-04T12:46:16Z
dc.date.issued 2025
dc.description A dissertation submitted to the Directorate of Graduate Training in partial fulfillment of the requirements for the award of the Master of Science in Computer Science (MCSC) degree of Makerere University
dc.description.abstract Depression is a pressing global health concern that poses significant challenges to mental health professionals. The condition’s severity is often exacerbated by stressful life events, including trauma, the loss of loved ones, and social isolation. As sound mental health is essential for a nation’s development and societal transformation, early prediction and diagnosis of depression are crucial for effective treatment. Traditional diagnostic methods, which rely on interviews and physical appearance, have limitations, as do the statistical tools psychiatrists use to determine whether a patient is depressed. The widespread availability of the Internet and computing devices has led to the development of computer-based methods for predicting and diagnosing depression. However, the black-box nature of Artificial Intelligence algorithms raises concerns among patients. This study explored the application of Explainable Artificial Intelligence (XAI) methods to predict depression and improve diagnostic accuracy. Using the Fer2013 dataset, we employed a convolutional neural network to predict depression based on facial emotional expressions. Our model correctly classified 78.29% of the sampled facial emotions and achieved an overall accuracy of 52.97%. We identified disgust, sadness, anger, and fear as the most significant predictors associated with depression. To provide insights into the model’s predictions, we utilized two XAI explainers: LIME and SHAP. LIME emphasized local feature explanations, highlighting the role of individual facial features, such as the curvature of the eyebrows, in predicting depression. In contrast, SHAP focused on global feature importance, revealing the overall contribution of each feature, such as the presence of tearfulness, to the model’s predictions. Our results show that the two explainers offer distinct approaches to explaining prediction outcomes, with LIME providing more comprehensive explanations.
This study contributes to the development of explainable AI methods for depression diagnosis and highlights the importance of transparency in AI-driven healthcare applications. Future research directions and recommendations are provided to further improve the accuracy and explainability of depression diagnosis models.
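To illustrate the kind of attribution SHAP produces, the sketch below computes exact Shapley values for a toy black-box scorer by enumerating all feature coalitions. This is purely illustrative and is not the study's actual pipeline: the `model` function and the three binary features (hypothetical stand-ins for sadness, fear, and anger, three of the predictors the abstract identifies) are assumptions, and real SHAP implementations approximate these values rather than enumerate coalitions.

```python
from itertools import combinations
from math import factorial

# Hypothetical black-box "depression score" over three binary
# facial-emotion features (stand-ins for sadness, fear, anger).
def model(x):
    sadness, fear, anger = x
    return 0.5 * sadness + 0.3 * fear + 0.2 * anger

def shapley_values(f, x, baseline):
    """Exact Shapley values: for each feature i, average its marginal
    contribution f(S ∪ {i}) - f(S) over all coalitions S of the
    remaining features, weighted by the standard Shapley coefficient."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Features in S take their observed value; the rest stay at baseline.
                with_i = list(baseline)
                for j in S:
                    with_i[j] = x[j]
                without_i = list(with_i)
                with_i[i] = x[i]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

phi = shapley_values(model, x=[1, 1, 0], baseline=[0, 0, 0])
# For a linear model each Shapley value equals that feature's
# contribution: phi == [0.5, 0.3, 0.0]
```

The attributions satisfy the efficiency property: they sum to `model(x) - model(baseline)`, which is what lets SHAP present a global picture of each feature's contribution. LIME, by contrast, would fit a weighted linear surrogate to perturbed samples around `x` rather than enumerate coalitions.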
dc.identifier.citation Abura, J. (2025). An explainable approach for depression prediction using deep learning algorithms. Unpublished Master’s dissertation, Makerere University, Kampala.
dc.identifier.uri https://makir.mak.ac.ug/handle/10570/16153
dc.language.iso en
dc.publisher Makerere University
dc.title An explainable approach for depression prediction using deep learning algorithms
dc.type Other
Files
Original bundle:
Name: ABURA-COCIS-Masters-2025.pdf
Size: 5.62 MB
Format: Adobe Portable Document Format
Description: Masters dissertation
License bundle:
Name: license.txt
Size: 462 B
Description: Item-specific license agreed to upon submission