
Automatic Feature Extraction from Hyperspectral Imagery using Deep Recurrent Neural Networks

Presenter
Joshua Agar - Lehigh University
September 25, 2019
Abstract
Characterization of materials relies on measuring their stimuli-driven response after perturbation by an external energy source. These measurements generally involve continuously changing either the magnitude of the perturbation or the bandwidth/energy of the measured response, resulting in data with sequential or temporal dependence. Recent advances in high-speed sensors have allowed spectroscopic measurements to be conducted using a multitude of techniques (e.g., electron microscopy, atomic force microscopy) that also offer high spatial resolution. Coupling spectroscopic characterization with imaging allows researchers to directly probe structure-property relations at relevant length and time scales. Despite the boom in these multidimensional spectroscopic imaging techniques, the size and complexity of the data collected, coupled with the dearth of downstream analysis approaches, have limited the ultimate scientific contributions of these powerful experimental techniques. Here, we show how deep recurrent neural networks can be used to automatically extract features of physically important phenomena concealed within “big” multichannel hyperspectral data, bringing them into focus for interpretation. Specifically, we will discuss the broad applicability of this approach to experimental techniques ranging from piezoresponse measurements of ferroelectrics, to the discovery of new conduction mechanisms at charged domain walls, to atomically resolved electron energy loss spectroscopy of functional interfaces. The methodology developed paves the way for spectroscopic techniques wherein the conventional scientific method of designing targeted experiments aimed at a specific hypothesis is supplanted by approaches that collect all seemingly relevant data, which can then be automatically interpreted to identify a hypothesis for empirical testing. Joshua C. Agar1, T. Ræder2, T.S. Holstad2, D. Meier2, M. Taheri3
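To make the approach concrete: the abstract does not specify an architecture, but a common pattern in this line of work is a recurrent (e.g., LSTM) autoencoder that compresses each per-pixel spectrum of a hyperspectral cube into a low-dimensional latent vector, whose channels can then be mapped back onto the image plane to reveal spatial features. The sketch below is an illustrative assumption, not the speaker's code; the class name, layer sizes, and spectrum length are all hypothetical.

# Minimal sketch (assumed architecture, not the presenter's implementation):
# an LSTM autoencoder for per-pixel spectra from a hyperspectral image.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, seq_len, latent_dim=16, hidden_dim=64):
        super().__init__()
        self.seq_len = seq_len
        # Encoder reads the spectrum one step at a time.
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        # Decoder reconstructs the spectrum from the latent vector.
        self.from_latent = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(input_size=hidden_dim, hidden_size=hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, seq_len, 1) -- one response value per spectral/voltage step.
        _, (h, _) = self.encoder(x)
        z = self.to_latent(h[-1])                  # (batch, latent_dim) feature vector
        dec_in = self.from_latent(z).unsqueeze(1)  # seed the decoder with the latent state
        dec_in = dec_in.repeat(1, self.seq_len, 1)
        y, _ = self.decoder(dec_in)
        return self.out(y), z

# Usage: flatten an (H, W, S) cube into H*W spectra of length S, train the model
# to reconstruct each spectrum, then reshape the latent vectors to (H, W, latent_dim)
# and inspect each latent channel as an image.
model = LSTMAutoencoder(seq_len=96)
spectra = torch.randn(32, 96, 1)  # placeholder batch of spectra
recon, latent = model(spectra)
loss = nn.functional.mse_loss(recon, spectra)

Because the reconstruction objective is unsupervised, no hypothesis about which spectral features matter is imposed up front, which is the sense in which such pipelines let the data nominate hypotheses for later empirical testing.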
Supplementary Materials