
    Explainable deep learning for multiclass skin cancer detection

    View/Open
    Master's dissertation (5.603Mb)
    Date
    2024-12
    Author
    Bukenya, Andrew
    Abstract
    Skin cancer mortality is escalating globally due to late identification and delayed diagnosis of cases. A shortage of specialists and of fast, reliable diagnostic approaches has contributed to these delays and to the tedious procedures involved in diagnosing the disease, making accurate detection and classification essential. This study focused on implementing an explainable, mobile-compatible model for skin cancer detection and classification by combining deep learning with visual explanation methods. The benefits of this study primarily target low- and middle-income countries (LMICs), especially those within Sub-Saharan Africa (SSA). By utilizing explainability techniques, the study mitigates the black-box problem in existing AI-based approaches. The ISIC 2019 dataset, compiled by the International Skin Imaging Collaboration (ISIC) [1], was used to train four pre-trained models and a custom CNN. MobileNetV2, NasNetMobile, DenseNet121, and EfficientNetV2S were fine-tuned with additional layers and several hyperparameter settings, then validated and tested; the custom CNN was likewise validated and tested to examine its competence. The pre-trained models achieved validation accuracies of 96.40%, 76.06%, 91.37%, and 94.90% respectively, while the custom CNN achieved 81.52%. MobileNetV2 outperformed the other architectures, followed by EfficientNetV2S. Explainability techniques including Grad-CAM, LIME, and SHAP were compared to determine which is most suitable for unveiling the decision-making of the best model. The trained model could enhance the existing workflow by saving time, ensuring transparency through human-comprehensible visual features, and providing feasible mobile compatibility to aid domain professionals in making informed decisions.
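    The pipeline the abstract describes (a fine-tuned MobileNetV2 backbone whose predictions are explained with Grad-CAM) can be illustrated with a minimal, hypothetical Python/Keras sketch. The class count, input size, added head layers, hyperparameters, and file names below are assumptions for illustration, not the dissertation's actual configuration.

# Hypothetical sketch: fine-tune MobileNetV2 for multiclass skin-lesion
# classification and explain one prediction with Grad-CAM.
import tensorflow as tf

NUM_CLASSES = 8        # assumed: ISIC 2019 training data has 8 diagnostic categories
IMG_SIZE = (224, 224)  # assumed input resolution

# Pre-trained backbone without its ImageNet classification head
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze for an initial training phase

# Additional layers on top of the backbone (assumed head architecture)
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets not shown


def grad_cam(model, img_array, class_idx=None):
    """Return a Grad-CAM heatmap (values in [0, 1]) for one preprocessed image."""
    # Locate the backbone's last convolutional layer programmatically
    last_conv = next(l for l in reversed(model.layers)
                     if isinstance(l, tf.keras.layers.Conv2D))
    grad_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_array)
        if class_idx is None:
            class_idx = tf.argmax(preds[0])  # explain the predicted class
        class_score = preds[:, class_idx]
    # Channel-wise averaged gradients weight the feature maps;
    # ReLU keeps only the regions that support the chosen class.
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()


# Usage with a single lesion image (file name is a placeholder):
# img = tf.keras.utils.load_img("lesion.jpg", target_size=IMG_SIZE)
# arr = tf.keras.applications.mobilenet_v2.preprocess_input(
#     tf.keras.utils.img_to_array(img)[None, ...])
# heatmap = grad_cam(model, arr)  # overlay on the image to inspect the evidence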
    URI
    http://hdl.handle.net/10570/13969
    Collections
    • School of Computing and Informatics Technology (CIT) Collection


    DSpace 5.8 copyright © Makerere University | Contact Us | Send Feedback | Theme by Atmire NV