Spotlight News

Scientists show screening for skin disease can now be done on a standard laptop

A research group in the Biomedical Engineering Department at the University of Houston has demonstrated that a standard laptop can be trained to identify diseased skin with high accuracy.

The research team, led by Metin Akay, John S. Dunn Endowed Chair Professor of biomedical engineering, developed a novel deep neural network architecture and implemented it on a standard laptop computer (2.5 GHz Intel Core i7). Deep neural networks (DNNs) are artificial neural networks (ANNs) with multiple layers between the input and output layers; organizing the computation into these layers is what allows the network to make calculated decisions.
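As a minimal illustration of this layered idea (a generic sketch, not the UH team's code), each layer of a DNN computes a weighted sum of its inputs plus a bias, applies a nonlinearity, and passes the result to the next layer. The weights below are arbitrary toy values:

```python
def relu(vec):
    # Common nonlinearity: clamp negative values to zero.
    return [max(0.0, x) for x in vec]

def dense(inputs, weights, biases):
    # One fully connected layer: weighted sum plus bias per output unit.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # Pass the input through each hidden layer with a ReLU nonlinearity.
    for weights, biases in layers[:-1]:
        x = relu(dense(x, weights, biases))
    # Final (output) layer is left linear in this sketch.
    weights, biases = layers[-1]
    return dense(x, weights, biases)

# Toy network: 2 inputs -> 3 hidden units -> 1 output.
layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),
    ([[1.0, -1.0, 0.5]], [0.2]),
]
print(forward([1.0, 2.0], layers))
```

"Training" a network means adjusting those weights and biases from labeled examples; deeper stacks of such layers let the model learn progressively more abstract features.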

Deep neural network
In artificial intelligence, deep learning organizes algorithms into layers (the artificial neural network) that can make their own intelligent decisions – the UH version works on a standard laptop.

The most commonly used DNN in engineering is the convolutional neural network (CNN). It is a highly effective architecture for training AI to analyze visual imagery, but the design has limitations.
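The core operation that makes CNNs effective on images is the convolution: a small filter slides across the image and produces a feature map that highlights local patterns such as edges. A hypothetical plain-Python sketch of a single 2-D convolution (no padding, stride 1):

```python
def conv2d(image, kernel):
    # Valid-mode 2-D convolution (really cross-correlation, as in most
    # deep-learning libraries): slide the kernel over the image and take
    # the elementwise product-sum at each position.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge-detecting kernel applied to a tiny 4x4 "image"
# whose left half is dark (0) and right half is bright (1).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # responds strongly only at the edge column
```

In a real CNN, many such filters are learned from data rather than hand-designed, and their outputs are stacked through successive layers.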

“Due to the limited size of available training sets, the large size of the required layers, and time consuming computational graphical processing units (GPU), success of using Computational DNNs in biomedical applications has been limited,” the study states.

Akay’s group trained the network to be able to specifically identify systemic sclerosis (SSc), a rare autoimmune disease marked by hardened or fibrous skin and internal organs, with the ultimate goal of using the “approach as a rapid and reliable” method to assess the severity of diseased skin using images to help dermatopathologists.

Metin Akay and partner Yasemin Akay combined the UNet, a modified CNN architecture, with added layers to create their novel deep neural network. According to the data, the new deep learning architecture substantially outperforms conventional CNNs at classifying SSc images.

“After fine tuning, our results showed the proposed network reached 100% accuracy on the training image set, 96.8% accuracy on the validation image set, and 95.2% on the testing image set,” said Yasemin Akay, UH instructional associate professor of biomedical engineering.
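Accuracy here simply means the fraction of images classified correctly on each split of the data; a minimal sketch with hypothetical counts (20 of 21 correct, roughly matching the reported test figure):

```python
def accuracy(predictions, labels):
    # Fraction of predictions that match the ground-truth labels.
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical example: 20 of 21 held-out test images labeled correctly.
preds  = [1] * 20 + [0]
labels = [1] * 21
print(round(accuracy(preds, labels) * 100, 1))  # 95.2
```

Reporting separate numbers for training, validation, and testing sets is standard practice: the test-set figure is the one that estimates performance on images the network has never seen.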

The time to train one of these deep-learning networks? Only 5 hours.

The published paper states that the group “further investigated the efficacy of the MobileNetV2 architecture to assess the severity of SSc skin into early/mid or late stages of SSc. Our preliminary study suggests that the network architecture was capable of discriminating both early and late SSc images.”
