TY - GEN
T1 - PointFaceFormer
T2 - 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition, FG 2024
AU - Gao, Ziqi
AU - Li, Qiufu
AU - Wang, Gui
AU - Shen, Linlin
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Existing 3D point cloud-based face recognition methods struggle to fully leverage both the global and local information inherent in point cloud data. In this paper, we introduce PointFaceFormer, the first Transformer model designed for 3D point cloud face recognition. It incorporates an attention mechanism based on dot product and cosine functions to construct a similarity Transformer architecture that effectively extracts both local and global features from point cloud data. Experimental results demonstrate that PointFaceFormer achieves a recognition accuracy of 89.08% and a verification accuracy of 76.93% on the large-scale facial point cloud dataset Lock3DFace, setting a new state of the art in 3D face recognition. Furthermore, PointFaceFormer exhibits strong generalization on cross-quality datasets. Additionally, ablation experiments validate the effectiveness of the proposed attention modules.
AB - Existing 3D point cloud-based face recognition methods struggle to fully leverage both the global and local information inherent in point cloud data. In this paper, we introduce PointFaceFormer, the first Transformer model designed for 3D point cloud face recognition. It incorporates an attention mechanism based on dot product and cosine functions to construct a similarity Transformer architecture that effectively extracts both local and global features from point cloud data. Experimental results demonstrate that PointFaceFormer achieves a recognition accuracy of 89.08% and a verification accuracy of 76.93% on the large-scale facial point cloud dataset Lock3DFace, setting a new state of the art in 3D face recognition. Furthermore, PointFaceFormer exhibits strong generalization on cross-quality datasets. Additionally, ablation experiments validate the effectiveness of the proposed attention modules.
UR - http://www.scopus.com/inward/record.url?scp=85199479549&partnerID=8YFLogxK
U2 - 10.1109/FG59268.2024.10581966
DO - 10.1109/FG59268.2024.10581966
M3 - Conference contribution
AN - SCOPUS:85199479549
T3 - 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition, FG 2024
BT - 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition, FG 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 27 May 2024 through 31 May 2024
ER -