Computer science
Artificial intelligence
Facial recognition system
Interpretability
Face (sociological concept)
Benchmark (surveying)
Context (archaeology)
Deep learning
Pattern recognition (psychology)
Artificial neural network
Three-dimensional face recognition
Machine learning
Face detection
Paleontology
Social science
Geodesy
Sociology
Biology
Geography
Authors
Ankit Rajpal, Khushwant Sehra, Rashika Bagri, Pooja Sikka
Identifier
DOI: 10.1007/s11277-022-10127-z
Abstract
Face recognition aims at identifying or confirming an individual's identity in a still image or video. Toward this end, machine learning and deep learning techniques have been successfully employed for face recognition. However, the response of a face recognition system often remains opaque to the end user. This paper aims to fill this gap by letting an end user know which features of the face the model relied upon in recognizing a subject's face. In this context, we evaluate the interpretability of several face recognizers employing deep neural networks, namely LeNet-5, AlexNet, Inception-V3, and VGG16. For this purpose, a recently proposed explainable-AI tool, Local Interpretable Model-Agnostic Explanations (LIME), is used. Benchmark datasets such as Yale, AT&T, and Labeled Faces in the Wild (LFW) are utilized. We demonstrate that LIME indeed marks the features that are visually significant for face recognition.
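The abstract itself contains no code; as a rough illustration of the kind of pipeline it describes, the sketch below applies the lime package's image explainer to a trained face recognizer. The model checkpoint "vgg16_face.h5", the test image "subject_01.png", and the preprocessing are placeholder assumptions, not details from the paper; the LIME calls themselves follow the package's public API.

```python
# A minimal sketch of explaining a face recognizer's prediction with LIME,
# assuming a Keras classifier whose predict() maps a batch of RGB face
# images to per-subject probabilities. Paths and preprocessing are
# hypothetical stand-ins for the paper's actual setup.
import numpy as np
from tensorflow import keras
from skimage import io
from skimage.segmentation import mark_boundaries
from lime import lime_image
import matplotlib.pyplot as plt

model = keras.models.load_model("vgg16_face.h5")   # hypothetical checkpoint
face_image = io.imread("subject_01.png")            # hypothetical test face, HxWx3

def predict_fn(images):
    # LIME passes a batch of perturbed copies of the image; return the
    # model's class probabilities for each copy.
    return model.predict(images.astype(np.float32) / 255.0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face_image,
    predict_fn,
    top_labels=1,       # explain only the predicted subject
    hide_color=0,       # hidden superpixels are blacked out
    num_samples=1000,   # number of perturbed samples LIME generates
)

# Overlay the superpixels that most support the predicted identity;
# these are the "visually significant features" the paper inspects.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,     # show the five most influential regions
    hide_rest=False,
)
plt.imshow(mark_boundaries(img / 255.0, mask))
plt.axis("off")
plt.show()
```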