A promising AI based super resolution image reconstruction technique for early diagnosis of skin cancer.

Journal: Scientific Reports
Published Date:

Abstract

Skin cancer can affect people of any age group who are exposed to ultraviolet (UV) radiation. Among its types, melanoma is a notably severe form of skin cancer that can be fatal. Melanoma is a malignant skin cancer arising from melanocytes and requires early detection. Typically, skin lesions are classified as either benign or malignant. However, some lesions do not show clear signs of cancer, which makes them suspicious. If left unnoticed, these suspicious lesions can develop into severe melanoma that requires invasive treatment later on. These intermediate, or suspicious, skin lesions are completely curable if they are diagnosed at an early stage. To tackle this, researchers have sought to improve the quality of lesion images obtained from dermoscopy through image reconstruction techniques. Analyzing reconstructed super-resolution (SR) images enables early detection, fine feature extraction, and treatment planning. Although machine learning, deep learning, and complex neural networks have enhanced skin lesion image quality, a key challenge remains unresolved: how can intricate textures be preserved while performing significant upscaling in medical image reconstruction? Thus, an artificial intelligence (AI)-based reconstruction algorithm is proposed to obtain fine features from intermediate skin lesions in dermoscopic images for early diagnosis, serving as a non-invasive approach. In this research, a novel melanoma information improvised generative adversarial network (MELIIGAN) framework is proposed for the expedited diagnosis of intermediate skin lesions. A stacked residual block is also designed to handle larger scaling factors and the reconstruction of fine-grained details. Finally, a hybrid loss function combines a total variation (TV) regularization term with the Charbonnier loss function, a robust substitute for the mean squared error loss. On the benchmark dataset, the proposed method achieves a structural similarity index (SSIM) of 0.946 and a peak signal-to-noise ratio (PSNR) of 40.12 dB, the highest texture information among the compared state-of-the-art methods.
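
The abstract does not give the exact formulation or weighting of the hybrid loss, so the following is only a minimal PyTorch-style sketch of how a Charbonnier reconstruction term is commonly combined with a total variation (TV) regularizer; the function names and the eps and tv_weight values are illustrative assumptions, not details taken from the paper.

    import torch

    def charbonnier_loss(pred, target, eps=1e-3):
        # Charbonnier penalty: a smooth, robust alternative to the MSE loss
        return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

    def tv_regularization(img):
        # Total variation of a batch of images shaped (N, C, H, W):
        # mean absolute difference between vertically and horizontally adjacent pixels
        dh = torch.abs(img[:, :, 1:, :] - img[:, :, :-1, :]).mean()
        dw = torch.abs(img[:, :, :, 1:] - img[:, :, :, :-1]).mean()
        return dh + dw

    def hybrid_loss(sr, hr, tv_weight=1e-5):
        # Charbonnier reconstruction term plus a weighted TV smoothness term
        return charbonnier_loss(sr, hr) + tv_weight * tv_regularization(sr)

In such a setup, the Charbonnier term drives fidelity between the super-resolved output and the high-resolution reference, while the small TV weight discourages noisy artifacts without over-smoothing lesion texture.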

Authors

  • Nirmala Veeramani
    School of Computing, SASTRA University, Thirumalaisamudram, Thanjavur, 613401, Tamil Nadu, India.
  • Premaladha Jayaraman
    School of Computing, SASTRA University, Thirumalaisamudram, Thanjavur, 613401, Tamil Nadu, India. premaladha@ict.sastra.edu.