The “deep learning” approach to artificial intelligence allows computers to perform diagnostic tasks with accuracy comparable to that of trained medical professionals. According to a 2017 study, a convolutional neural network trained to diagnose skin lesions had superior sensitivity and specificity for identifying melanoma compared with panels of dermatologists.1
A viewpoint piece published in JAMA by Geoffrey Hinton, PhD, of the Google Brain team and the Department of Computer Science, University of Toronto, Ontario, Canada, described the technical basis for deep learning and its potential as a diagnostic tool.2
Deep learning approximates the capacity of the human brain to alter synaptic connections between neurons as new information is introduced. In modeling disease likelihood, statisticians may use logistic regression to estimate an appropriate weight for each input variable, where the input variables correspond to disease characteristics. Neural networks become useful when the interactions among those characteristics are too complex and numerous for a single layer of weights to capture.
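As an illustration, a logistic regression of this kind can be fit by gradient descent in a few lines. Everything below is a synthetic sketch: the data, the "true" weights, and the learning rate are hypothetical choices, not values from the study.

```python
import numpy as np

# Toy example: each row of X is a patient, each column a disease
# characteristic. All numbers here are synthetic, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # 3 input variables per patient
true_w = np.array([1.5, -2.0, 0.5])     # hypothetical "true" weights
y = (X @ true_w > 0).astype(float)      # 1 = disease present, 0 = absent

# Logistic regression: estimate one weight per input variable
# by gradient descent on the log-loss.
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))      # predicted probability of disease
    w -= lr * X.T @ (p - y) / len(y)    # gradient of the average log-loss
```

Each learned weight indicates how strongly one characteristic pushes the predicted probability up or down; when interactions between characteristics matter, a single layer of weights like this no longer suffices.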
To accommodate these interactions between disease features, neural networks employ one or more layers of “hidden features,” with each layer built from the features of the preceding one. The network must determine which features are “active” in a given case in order to decide whether disease is present. To learn an appropriate set of hidden features, the network repeatedly adjusts the weights of the connections from input variables to hidden factors and from hidden factors to output variables.
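A minimal sketch of such a network, assuming hypothetical dimensions (4 input characteristics, 3 hidden features, 1 output), shows both sets of weighted connections in a single forward pass:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 4 input disease characteristics feed 3 hidden
# features, which feed a single disease-probability output.
W1 = rng.normal(size=(4, 3))   # connections: inputs -> hidden features
W2 = rng.normal(size=(3, 1))   # connections: hidden features -> output

def forward(x):
    hidden = np.maximum(0, x @ W1)           # hidden features "activate"
    return 1 / (1 + np.exp(-(hidden @ W2)))  # probability of disease

x = rng.normal(size=(1, 4))    # one patient's characteristics
prob = forward(x)              # a value strictly between 0 and 1
```

Because each hidden feature is a learned combination of all the inputs, the output can depend on interactions between characteristics rather than on each characteristic in isolation.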
Introduced in 1986, the “back-propagation” procedure allows neural networks to adjust these weights by “propagating” discrepancies between the network's output and the target output backward through each connection and its respective weight.4 Networks trained this way learn substantially faster than those that adjust their weights by trial and error.3 Convolutional neural networks extend the concept of hidden factors by making a separate copy of each feature detector for every position in the image. After the weights of each copy are updated through back-propagation, the weights are averaged and the average is assigned to every copy, so that information acquired from one part of an image is applied at every location of subsequent images.
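The back-propagation step can be sketched on a toy problem. The XOR data, layer sizes, and learning rate below are illustrative choices, not taken from the referenced papers:

```python
import numpy as np

rng = np.random.default_rng(2)

# XOR: a case where the output depends on the interaction of the
# inputs, so hidden features are required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # inputs -> hidden features
W2 = rng.normal(size=(8, 1))   # hidden features -> output
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

loss_before = loss()
for _ in range(5000):
    h = sigmoid(X @ W1)                  # forward pass
    out = sigmoid(h @ W2)

    # Back-propagation: the discrepancy between network output and
    # target output is propagated backward through each connection,
    # scaled by that connection's weight, yielding a gradient for
    # every weight in the network.
    d_out = (out - y) * out * (1 - out)  # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated to hidden layer
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
loss_after = loss()                      # smaller than loss_before
```

The convolutional scheme described above uses the same machinery: each positional copy of a feature detector receives its own gradient, the resulting weights are averaged, and the average is written back to every copy.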
Dr Hinton acknowledged that the hidden factors of neural networks may prove inscrutable to medical professionals, particularly for models trained on large datasets: because each layer is built on a preceding set of complex interactions, it is difficult to tease out what any individual feature represents. As computers become more powerful, however, their capacity to interpret massive datasets will only increase. The medical community should be aware of the potential applications of deep learning in health care; image classification represents just one aspect of its utility.
- Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542 (7639):115-118. doi:10.1038/nature21056
- Hinton G. Deep Learning-A Technology With the Potential to Transform Health Care. JAMA. 2018;320(11):1101-1102.
- LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539
- Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323(6088):533-536. doi:10.1038/323533a0
This article originally appeared on Medical Bag