A New AI Framework Detects Melanoma with 99% Accuracy

About 212,000 new cases of melanoma — the most serious form of skin cancer — will be diagnosed in the U.S. in 2025, according to the American Academy of Dermatology. If caught late, melanoma can spread to the lymph nodes and internal organs and become life-threatening.

To improve early detection, Northeastern University researchers turned to artificial intelligence. Divya Chaudhary, an assistant teaching professor of computer science at Northeastern’s Seattle campus, and Peng Zhang, a graduate student in the Khoury College of Computer Sciences, developed a new and highly efficient hybrid system called the SegFusion Framework to help doctors spot melanoma more quickly and accurately.

“If we can detect it early, we can save a lot of lives and help medical practitioners, clinicians in early diagnosis,” Chaudhary says.

Chaudhary and Zhang combined the capabilities of two powerful deep learning models, networks that use many layers of interconnected nodes to recognize patterns in large amounts of data. One highlights suspicious spots in skin images, while the other analyzes those areas to decide whether they are cancerous.

When tested against other popular AI approaches, SegFusion consistently came out on top. On the International Skin Imaging Collaboration (ISIC) 2020 dataset, for example, it correctly identified melanoma with 99.01% accuracy, surpassing four deep learning approaches developed by peers: ResNet-101+SVM (97.15%), NasNet (97.7%), InSiNet+U-Net (90.54%) and MobileNetV2 (98.2%).

Recent advances in skin cancer detection have increasingly relied on deep learning techniques. The Northeastern researchers explored various popular architectures and chose to use U-Net and EfficientNet as the backbones of their hybrid model. U-Net segments suspicious regions in skin images by drawing boundaries around potential problem areas. EfficientNet, designed to optimize both accuracy and speed, classifies those regions as cancerous or not.
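In rough code, the two-stage design looks something like the sketch below. This is an illustrative outline rather than the authors' implementation: it assumes PyTorch with the torchvision and segmentation_models_pytorch packages, and the encoder, image size and threshold are placeholder choices.

```python
# Illustrative two-stage pipeline: U-Net-style segmentation followed by
# EfficientNet classification. Not the authors' code; package and
# hyperparameter choices are assumptions.
import torch
import segmentation_models_pytorch as smp
from torchvision import models

# Stage 1: a U-Net that predicts a one-channel lesion mask.
segmenter = smp.Unet(in_channels=3, classes=1)

# Stage 2: an EfficientNet that labels the masked image as
# melanoma (class 1) or not (class 0).
classifier = models.efficientnet_b0(num_classes=2)

image = torch.rand(1, 3, 256, 256)       # stand-in for a dermoscopic image
mask = torch.sigmoid(segmenter(image))   # per-pixel lesion probabilities
masked = image * (mask > 0.5)            # keep only the suspicious region
prob_melanoma = torch.softmax(classifier(masked), dim=1)[0, 1]
```

Keeping the two stages separate also lets each model be trained on the dataset best suited to its task, which is how the team used the two collections described next.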

To train the system, the researchers used two major dermatology image collections. HAM10000 provided more than 10,000 images of pigmented lesions, making it ideal for training the segmentation model, while the ISIC 2020 dataset included more than 33,000 images labeled as melanoma or not.

Because melanoma cases made up only 1.8% of the ISIC 2020 dataset, the team balanced it by oversampling positive cases and undersampling negative ones. This ensured the model learned equally from both cancerous and noncancerous examples.
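A minimal sketch of that rebalancing step, assuming the ISIC 2020 labels sit in a CSV with a binary target column (the file name, column name and 1:1 ratio here are all illustrative assumptions):

```python
# Illustrative class rebalancing: oversample the rare melanoma cases and
# undersample the abundant benign ones. Ratios and file names are assumptions.
import pandas as pd

labels = pd.read_csv("isic_2020_labels.csv")   # hypothetical path
melanoma = labels[labels["target"] == 1]       # ~1.8% of the data
benign = labels[labels["target"] == 0]

n = 5 * len(melanoma)                          # illustrative per-class count
balanced = pd.concat([
    melanoma.sample(n=n, replace=True, random_state=0),  # oversample
    benign.sample(n=n, random_state=0),                  # undersample
]).sample(frac=1, random_state=0)              # shuffle the combined rows
```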

To connect the two models, the researchers built a “data bridge.” First, the segmentation model produces a black-and-white mask of a suspicious area. The bridge then overlays the mask onto the original image so the second model can better analyze the highlighted region and classify it as cancerous or not.
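In practice, the bridge amounts to a simple pixelwise overlay. The snippet below is a sketch of that step, assuming the mask is saved as a grayscale PNG alongside the original image; the file names are hypothetical.

```python
# Illustrative "data bridge": apply the segmentation mask to the original
# image so the classifier sees only the highlighted region. File names
# are hypothetical.
import numpy as np
from PIL import Image

image = np.asarray(Image.open("lesion.jpg"), dtype=np.float32)   # H x W x 3
mask = np.asarray(Image.open("lesion_mask.png").convert("L"))    # H x W, 0-255

bridged = image * (mask[..., None] > 127)    # black out non-lesion pixels
Image.fromarray(bridged.astype(np.uint8)).save("lesion_bridged.png")
```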

The two models currently work one after the other, with some manual steps in between. The team’s goal is to merge them into a fully automated system that streamlines the process from image capture to diagnosis.

Looking ahead, Chaudhary and her students want to expand the system by adding patients’ health records, such as blood pressure and oxygen levels, to improve accuracy even further. They also hope to create an app for dermatologists, allowing the AI to run quietly in the background and assist with real-time decisions during checkups. 

And the potential doesn’t stop at skin cancer — Chaudhary says SegFusion could be adapted to detect other cancers, such as breast or lung cancer.

“The students are working with me on building a big cancerous framework,” she says. “You will be able to put any picture into several AI models we are building, and it will clearly tell you.”
