Fig. 1 | EURASIP Journal on Information Security

From: Cancelable templates for secure face verification based on deep learning and random projections

(Top) BiometricNet+ during training: after face detection and alignment, image pairs are given as input to FeatureNet+, which extracts discriminative features \(\textbf{f}_i \in \mathbb{R}^d\). The feature vectors are subtracted, \(\textbf{f} = \textbf{f}_1 - \textbf{f}_2\) with \(\textbf{f} \in \mathbb{R}^{d}\), and passed to MetricNet+, which maps \(\textbf{f}\) onto the target distributions \(\textbf{z} \in \mathbb{R}^p\) in the latent space. (Bottom) BiometricNet+ during the test (i.e., authentication) phase: given a pair of aligned face images, we obtain four image pairs, \(P_1, P_2, P_3\), and \(P_4\), by accounting for all possible horizontal flip combinations; the features of each pair are projected into separate random spaces and reconstructed using ISTA-Net [9], simulating the transmission of sensitive data over an unsecured channel; the corresponding output vectors in the latent space are computed and then aggregated into \(\bar{\textbf{z}}\); finally, the aggregated feature is compared against a threshold \(\tau\)
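
The following is a minimal sketch of the test-time pipeline in the bottom panel. The names feature_net, metric_net, proj_matrices, ista_reconstruct, and tau are assumptions standing in for FeatureNet+, MetricNet+, the per-pair random projection matrices, the ISTA-Net recovery step [9], and the decision threshold; the final decision statistic is likewise an assumption, not the paper's exact rule.

```python
import numpy as np

def verify_pair(img1, img2, feature_net, metric_net,
                proj_matrices, ista_reconstruct, tau):
    """Hedged sketch of the authentication pipeline of Fig. 1 (bottom)."""
    # 1. All horizontal-flip combinations of the aligned pair -> P_1..P_4.
    flips = lambda im: (im, im[:, ::-1, ...])      # original and mirrored (H x W x C)
    pairs = [(a, b) for a in flips(img1) for b in flips(img2)]

    z_list = []
    for (a, b), A in zip(pairs, proj_matrices):
        # 2. FeatureNet+ features f_i in R^d and their difference f = f_1 - f_2.
        f = feature_net(a) - feature_net(b)

        # 3. Project f into its own random space (cancelable template) and
        #    recover it with the ISTA-Net stand-in, simulating transmission
        #    of sensitive data over an unsecured channel.
        f_hat = ista_reconstruct(A @ f)

        # 4. MetricNet+ maps the recovered feature onto the latent space z in R^p.
        z_list.append(metric_net(f_hat))

    # 5. Aggregate the four latent vectors and compare against the threshold tau
    #    (the specific scalar statistic used here is an assumption).
    z_bar = np.mean(np.stack(z_list), axis=0)
    return float(z_bar.mean()) <= tau
```

In this sketch the projection matrices are kept outside the function so that, as with any cancelable-template scheme, they can be revoked and reissued without retraining the networks.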
