EarNet: Biometric Embeddings for End to End Person Authentication System Using Transient Evoked Otoacoustic Emission Signals
Authors: Akshath Varugeese, A. Shahina, Khadar Nawas, A. Nayeemulla Khan
Abstract
Transient Evoked Otoacoustic Emissions (TEOAE) are a class of otoacoustic emissions generated by the cochlea in response to an external stimulus. TEOAE signals exhibit characteristics unique to an individual and are therefore considered a potential biometric modality. Unlike conventional modalities, TEOAE is immune to replay and falsification attacks due to its implicit liveness detection. In this paper, we propose an efficient deep neural network architecture, EarNet, to learn appropriate filters for non-stationary TEOAE signals, which can reveal individual uniqueness and long-term reproducibility. EarNet is inspired by Google’s FaceNet. Furthermore, the embeddings generated by EarNet in the Euclidean space reduce intra-subject variability while capturing inter-subject variability, as visualized using t-SNE. The embeddings from EarNet are used for identification and verification tasks. The K-Nearest Neighbour classifier gives identification accuracies of 99.21% and 99.42% for the left and right ear, respectively, the highest among the machine learning algorithms explored in this work. Verification using Pearson correlation on the embeddings achieves an EER of 0.581% and 0.057% for the left and right ear, respectively, outperforming all other techniques considered. A fusion strategy yields an improved identification accuracy of 99.92%. The embeddings generalize well to subjects not seen during training, so EarNet scales to new, larger datasets.
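The pipeline the abstract describes, i.e. subject embeddings consumed by a K-Nearest Neighbour classifier for identification and by a Pearson-correlation threshold for verification, can be sketched as below. This is an illustrative toy, not the authors' code: the embeddings are synthetic clusters standing in for EarNet outputs, and the helper names (`make_embeddings`, `knn_identify`, `pearson_verify`) and the 0.9 threshold are assumptions for the sketch.

```python
# Illustrative sketch (not the authors' implementation): how fixed-length
# embeddings could drive identification (KNN) and verification (Pearson
# correlation). Embeddings here are synthetic stand-ins for EarNet outputs.
import numpy as np

rng = np.random.default_rng(0)

def make_embeddings(n_subjects=5, per_subject=4, dim=16):
    """Synthetic embeddings: one tight cluster per subject, mimicking the
    low intra-subject / high inter-subject variability the paper reports."""
    centers = rng.normal(size=(n_subjects, dim))
    X = np.vstack([c + 0.1 * rng.normal(size=(per_subject, dim))
                   for c in centers])
    y = np.repeat(np.arange(n_subjects), per_subject)
    return X, y

def knn_identify(train_X, train_y, query, k=3):
    """Identification: majority vote among the k nearest embeddings
    in Euclidean space."""
    dists = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(dists)[:k]]
    return np.bincount(votes).argmax()

def pearson_verify(e1, e2, threshold=0.9):
    """Verification: accept a claimed identity if the Pearson correlation
    between the two embeddings exceeds a threshold (threshold is assumed)."""
    r = np.corrcoef(e1, e2)[0, 1]
    return r, r > threshold

X, y = make_embeddings()
pred = knn_identify(X[1:], y[1:], X[0])   # leave-one-out query for subject 0
r_same, accepted = pearson_verify(X[0], X[1])   # same subject
r_diff, _ = pearson_verify(X[0], X[-1])         # different subjects
```

In practice the threshold would be tuned on a development set to the operating point (e.g. the EER) rather than fixed at 0.9.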
Keywords: Deep neural network, Biometric embedding, Triplet loss, Person authentication system, Transient evoked otoacoustic emission
Paper link: https://doi.org/10.1007/s11063-021-10546-2