Title:
A METHOD AND A SYSTEM FOR FACE VERIFICATION
Document Type and Number:
WIPO Patent Application WO/2015/154206
Kind Code:
A1
Abstract:
Disclosed are a method and an apparatus for face verification. The apparatus comprises a feature extracting unit configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs. The apparatus further comprises a verification unit configured to concatenate the extracted HIFs of each of the faces to form a feature vector, and then compare two of the formed feature vectors to determine if they are from the same identity or not.

Inventors:
TANG XIAOOU (CN)
SUN YI (CN)
WANG XIAOGANG (CN)
Application Number:
PCT/CN2014/000390
Publication Date:
October 15, 2015
Filing Date:
April 11, 2014
Assignee:
TANG XIAOOU (CN)
International Classes:
G06K9/00
Foreign References:
US20130124438A12013-05-16
US20110150301A12011-06-23
CN103605972A2014-02-26
Attorney, Agent or Firm:
INSIGHT INTELLECTUAL PROPERTY LIMITED (InDo Building, No. 48A Zhichun Road, Haidian District, Beijing 8, CN)
Claims:
What is claimed is:

1. An apparatus for face verification, comprising: a feature extracting unit configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs; and

a verification unit configured to concatenate the extracted HIFs of each of the faces to form a feature vector, and then compare two of the formed feature vectors to determine if they are from the same identity or not.

2. An apparatus of claim 1, further comprising:

a training unit configured to train the ConvNets for identity classification by inputting aligned regions of faces.

3. An apparatus of claim 1, wherein the verification unit comprises:

an input layer configured to group the HIFs into a plurality of groups, each group containing HIFs extracted by the same ConvNets;

a locally-connected layer configured to extract local features from each group of HIFs;

a fully-connected layer configured to extract global features from the previously extracted local features; and

an output neuron configured to calculate a single face similarity score from the extracted global features so as to determine if the two feature vectors are from the same identity or not based on the calculated score.

4. An apparatus of claim 1, wherein, for each of the ConvNets, the feature extracting unit is configured to input a particular region and its flipped counterpart to each of the ConvNets so as to extract the HIFs.

5. An apparatus of claim 4, wherein the verification unit is configured to concatenate all the extracted HIFs of each face to form the feature vector for face verification.

6. An apparatus of claim 2, wherein each of the ConvNets comprises a plurality of cascaded feature extracting layers and a last hidden layer connected to at least one of the feature extracting layers;

wherein the number of features in the current layer of the ConvNets, where the features are extracted from the previous layer features of the ConvNets, continues to decrease along the cascaded feature extracting layers until said HIFs are obtained in the last hidden layer of the ConvNets.

7. An apparatus of claim 6, wherein each of the ConvNets comprises four cascaded feature extracting layers and a last hidden layer connected to the third and fourth feature extracting layers.

8. An apparatus of claim 2, wherein for each of the ConvNets, the training unit is further configured to

1) select a face image from a predetermined face training set;

2) determine an input and a target output for the ConvNet, respectively, wherein the input is a face patch cropped from the selected face and the target output is a vector of all zeros except the n-th position being 1, where n is an identity index of the selected face;

3) input the face patch to the ConvNet to calculate an output by a process of forward propagation in the ConvNet;

4) compare the calculated output with the target output to generate an error signal;

5) back-propagate the generated error signal through the ConvNet so as to adjust parameters of the ConvNet; and

6) repeat steps 1)-5) until the training process is converged such that the parameters of the ConvNet are determined.

9. A method for face verification, comprising: extracting HIFs from different regions of each face by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs;

concatenating the extracted HIFs of each face to form a feature vector; and comparing two of the formed feature vectors to determine if they are from the same identity or not.

10. A method of claim 9, further comprising:

training a plurality of ConvNets for identity classification by inputting aligned regions of faces.

11. A method of claim 10, wherein, for each of the ConvNets, the training further comprises:

1) selecting a face image from a predetermined face training set;

2) determining an input and a target output for the ConvNet, respectively, wherein the input is a face patch cropped from the selected face and the target output is a vector of all zeros except the n-th position being 1, where n is an identity index of the selected face;

3) inputting the face patch to the ConvNet to calculate its output by a process of forward propagation in the ConvNet;

4) comparing the calculated output with the target output to generate an error signal;

5) back-propagating the generated error signal through the ConvNet so as to adjust parameters of the ConvNet; and

6) repeating steps 1)-5) until the training process is converged such that the parameters of the ConvNet are determined.

12. A method of claim 9, wherein the comparing further comprises:

grouping the HIFs in the formed feature vectors into a plurality of groups, each of which contains HIFs extracted by the same ConvNets;

extracting local features from each group of HIFs;

extracting global features from the previously extracted local features; and calculating a single face similarity score from the extracted global features so as to determine if the two feature vectors are from the same identity or not based on the score.

13. A method of claim 9, wherein for each of the ConvNets, the extracting comprises:

inputting a particular region and its flipped counterpart to each of the ConvNets to extract the HIFs.

14. A method of claim 13, wherein the concatenating comprises:

concatenating all the extracted HIFs of each face to form a feature vector.

15. A method of claim 10, wherein each of the ConvNets comprises a plurality of cascaded feature extracting layers and a last hidden layer connected to at least one of the feature extracting layers;

wherein the number of features in the current layer of the ConvNets, where the features are extracted from the previous layer features of the ConvNets, continues to decrease along the cascaded feature extracting layers until said HIFs are obtained in the last hidden layer of the ConvNets.

Description:
A METHOD AND A SYSTEM FOR FACE VERIFICATION

Technical Field

[0001] The present application relates to a method for face verification and a system thereof.

Background

[0002] Many face verification methods represent faces by high-dimensional over-complete face descriptors like LBP or SIFT, followed by shallow face verification models.

[0003] Some previous studies have further learned identity-related features based on low-level features. In these processes, attribute and simile classifiers are trained to detect facial attributes and measure face similarities to a set of reference people, or to distinguish the faces of two different people. Features are the outputs of the learned classifiers. However, these studies used SVM (Support Vector Machine) classifiers, which are shallow structures, and their learned features are still relatively low-level.

[0004] A few deep models have been used for face verification. Chopra et al. used a Siamese architecture, which extracts features separately from two compared inputs with two identical sub-networks, taking the distance between the outputs of the two sub-networks as dissimilarity. Their feature extraction and recognition are jointly learned with the face verification target.

[0005] Although some prior-art solutions used multiple deep ConvNets to learn high-level face similarity features and trained classifiers for face verification, their features are jointly extracted from a pair of faces instead of from a single face. Though highly discriminative, the face similarity features are too short, and some useful information may have been lost before the final verification.

[0006] Some previous studies have also used the last hidden layer features of ConvNets for other tasks. Krizhevsky et al. illustrated that the last hidden layer of ConvNets, when learned with the target of image classification, approximates Euclidean distances in the semantic space, but with no quantitative results to show how well these features work for image retrieval. Farabet et al. concatenated the last hidden layer features extracted from scale-invariant ConvNets with multiple scales of inputs for scene labeling. Previous methods have not tackled the face verification problem. Also, it is unclear how to learn features that are sufficiently discriminative for the fine-grained classes of face identities.

Summary

[0007] In one aspect of the present application, disclosed is an apparatus for face verification, comprising:

a feature extracting unit configured to extract HIFs for different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs; and

a verification unit configured to concatenate the extracted HIFs of each of the faces to form a feature vector, and then compare two of the formed feature vectors to determine if they are from the same identity or not.

[0008] In another aspect of the present application, disclosed is a method for face verification, comprising:

extracting HIFs from different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs;

concatenating the extracted HIFs to form a feature vector; and

comparing two of the formed feature vectors to determine if they are from the same identity or not.

[0009] According to the present application, the apparatus may further comprise a training unit configured to train the ConvNets for identity classification by inputting aligned regions of faces.

[0010] In contrast to the existing methods, the present application classifies all the identities from the training set simultaneously. Moreover, the present application utilizes the last hidden layer activations as features instead of the classifier outputs. In our ConvNets, the neuron number of the last hidden layer is much smaller than that of the output, which forces the last hidden layer to learn shared hidden representations for faces of different people in order to classify all of them well, resulting in highly discriminative and compact features.

[0011] The present application may conduct feature extraction and recognition in two steps, in which the first feature extraction step is learned with the target of face classification, which is a much stronger supervision signal than verification.

[0012] The present application uses high-dimensional high-level features for face verification. The HIFs extracted from different face regions are complementary. In particular, the features are extracted from the last hidden layer of the deep ConvNets, and are global, highly non-linear, and revealing of the face identities. In addition, different ConvNets learn from different visual cues (face regions), so they have to use different ways to judge the face identities, and thus the HIFs are complementary.

[0013] Exemplary non-limiting embodiments of the present invention are described below with reference to the attached drawings. The drawings are illustrative and generally not to an exact scale. The same or similar elements of different figures are referenced with the same reference numbers.

[0014] Fig. 1 is a schematic diagram illustrating an apparatus for face verification consistent with some disclosed embodiments.

[0015] Fig. 2 is a schematic diagram illustrating an apparatus for face verification when it is implemented in software, consistent with some disclosed embodiments.

[0016] Fig. 3 is a schematic diagram illustrating examples of the cropped regions, consistent with a first disclosed embodiment.

[0017] Fig. 4 is a schematic diagram illustrating the detailed structure of the ConvNets, consistent with a second disclosed embodiment.

[0018] Fig. 5 is a schematic diagram illustrating a structure of the neural network used for face verification. The layer type and dimension are labeled beside each layer. The solid neurons form a sub-network.

[0019] Fig. 6 is a schematic flowchart illustrating face verification consistent with some disclosed embodiments.

[0020] Fig. 7 is a schematic flowchart illustrating the step S103 as shown in Fig. 6.

[0021] Fig. 8 is a schematic flowchart illustrating the training process of ConvNets consistent with some disclosed embodiments.

Detailed Description

[0022] Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When appropriate, the same reference numbers are used throughout the drawings to refer to the same or like parts. Fig. 1 is a schematic diagram illustrating an exemplary apparatus 1000 for face verification consistent with some disclosed embodiments.

[0023] It shall be appreciated that the apparatus 1000 may be implemented using certain hardware, software, or a combination thereof. In addition, the embodiments of the present invention may be adapted to a computer program product embodied on one or more computer readable storage media (comprising but not limited to disk storage, CD-ROM, optical memory and the like) containing computer program codes.

[0024] In the case that the apparatus 1000 is implemented with software, the apparatus 1000 may include a general purpose computer, a computer cluster, a mainstream computer, a computing device dedicated for providing online contents, or a computer network comprising a group of computers operating in a centralized or distributed fashion. As shown in Fig. 2, apparatus 1000 may include one or more processors (processors 102, 104, 106 etc.), a memory 112, a storage device 116, a communication interface 114, and a bus to facilitate information exchange among various components of apparatus 1000. Processors 102-106 may include a central processing unit ("CPU"), a graphic processing unit ("GPU"), or other suitable information processing devices. Depending on the type of hardware being used, processors 102-106 can include one or more printed circuit boards, and/or one or more microprocessor chips. Processors 102-106 can execute sequences of computer program instructions to perform various methods that will be explained in greater detail below.

[0025] Memory 112 can include, among other things, a random access memory ("RAM") and a read-only memory ("ROM"). Computer program instructions can be stored, accessed, and read from memory 112 for execution by one or more of processors 102-106. For example, memory 112 may store one or more software applications. Further, memory 112 may store an entire software application or only a part of a software application that is executable by one or more of processors 102-106. It is noted that although only one block is shown in Fig. 2, memory 112 may include multiple physical devices installed on a central computing device or on different computing devices.

[0026] Referring to Fig. 1 again, where the apparatus 1000 is implemented by the hardware, it may comprise a feature extracting unit 10 and a verification unit 20. The feature extracting unit 10 is configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs, and the verification unit 20 is configured to concatenate the extracted HIFs to form a feature vector, and then compare two of the formed vectors to determine if the two vectors are from the same identity or not.

[0027] For each of the ConvNets, the feature extracting unit 10 operates to input a particular region and its flipped counterpart to each of the ConvNets to extract the HIFs. Fig. 3 illustrates examples of the cropped regions, wherein the top 10 face regions are of the medium scales. The five regions in the top left are global regions taken from the weakly aligned faces; the other five in the top right are local regions centered around the five facial landmarks (two eye centers, nose tip, and two mouth corners). At the bottom of Fig. 3, three scales of two particular patches are shown.

[0028] According to one embodiment of the present application, each of the extracted HIFs may form a feature vector. The formed vector may have, for example, 160 dimensions as shown in Fig. 4. The verification unit 20 may concatenate all the extracted HIFs of each face to form a longer feature vector. For example, in the embodiment as shown in Fig. 4, the concatenated vector may be of 19,200 dimensions.

[0029] In embodiments of the present application, each of the ConvNets may comprise a plurality of cascaded feature extracting layers and a last hidden layer connected to at least one of the feature extracting layers, wherein the number of features in the current layer of the ConvNets, where the features are extracted from the previous layer features of the ConvNets, continues to decrease along the cascaded feature extracting layers until said HIFs are obtained in the last hidden layer of the ConvNets. Fig. 4 further shows the detailed structure of the ConvNets with 39×31×k input. As shown in Fig. 4, the ConvNets may contain four convolutional layers (with max-pooling) to extract features hierarchically, followed by the (fully-connected) HIF layer and the (fully-connected) softmax output layer indicating identity classes. The input to each of the ConvNets is 39×31×k for rectangle patches, and 31×31×k for square patches, where k = 3 for color patches and k = 1 for gray patches. When the input sizes change, the height and width of maps in the following layers will change accordingly. Feature numbers continue to reduce along the feature extraction hierarchy until the last hidden layer (the HIF layer), where highly compact and predictive features are formed, which predict a much larger number of identity classes with only a few features. In Fig. 4, the length, width, and height of each cuboid denote the map number and the dimension of each map for all input, convolutional, and max-pooling layers. The inside small cuboids and squares denote the 3D convolution kernel sizes and the 2D pooling region sizes of convolutional and max-pooling layers, respectively. Neuron numbers of the last two fully-connected layers are marked beside each layer.

[0030] In practice, any face verification model could be used based on the extracted HIFs. Joint Bayesian and the neural network model are two examples. The verification unit 20 may be formed as a neural network shown in Fig. 5, which contains one input layer 501 taking the HIFs, one locally-connected layer 502, one fully-connected layer 503, and a single output neuron 504 indicating face similarities. The input features are divided into 60 (for example) groups, each of which contains 640 (for example) features extracted from a particular patch pair with a particular ConvNet. Features in the same group are highly correlated. One group of neuron units (for example, two neurons as shown) in the locally-connected layer only connects to a single group of features to learn their local relations and reduce the feature dimension at the same time. The second hidden layer is fully-connected to the first hidden layer to learn global relations. The single output neuron is fully connected to the second hidden layer. The hidden neurons are ReLUs (for example) and the output neuron is sigmoid (for example). An illustration of the neural network structure is shown in Fig. 5. For example, it may have 38,400 input neurons with 19,200 HIFs from each face, and 4,800 neurons in the following two hidden layers, with every 80 neurons in the first hidden layer locally connected to one of the 60 groups of input neurons.
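
A sketch of this verification network, using the example sizes from this paragraph (60 groups of 640 features, 80 locally-connected neurons per group, and a 4,800-neuron fully-connected layer), might look as follows; the per-group `nn.Linear` modules stand in for the locally-connected layer.

```python
import torch
import torch.nn as nn

class VerificationNet(nn.Module):
    """Neural network of Fig. 5: input layer 501 (grouped HIFs),
    locally-connected layer 502, fully-connected layer 503, and a single
    sigmoid output neuron 504 giving the face similarity score."""

    def __init__(self, n_groups=60, group_dim=640, per_group=80, hidden=4800):
        super().__init__()
        # Locally-connected layer: each group of correlated HIFs (one patch
        # pair, one ConvNet) gets its own small set of weights.
        self.local = nn.ModuleList(
            nn.Linear(group_dim, per_group) for _ in range(n_groups))
        self.full = nn.Linear(n_groups * per_group, hidden)  # global relations
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (N, n_groups, group_dim), HIFs of a face pair
        local = torch.cat([torch.relu(layer(x[:, g]))
                           for g, layer in enumerate(self.local)], dim=1)
        return torch.sigmoid(self.out(torch.relu(self.full(local))))
```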

[0031] Dropout learning, as well known in the art, may be used for all the hidden neurons. The input neurons cannot be dropped because the learned features are compact and distributed representations (representing a large number of identities with very few neurons) and have to collaborate with each other to represent the identities well. On the other hand, learning high-dimensional features without dropout is difficult due to gradient diffusion. To solve this problem, the present application first trains a plurality of (for example, 60) sub-networks, each of which takes features of a single group as input. A particular sub-network is illustrated in Fig. 5. The present application then uses the first-layer weights of the sub-networks to initialize those of the original network, and tunes the second and third layers of the original network with the first-layer weights fixed.
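
A hypothetical sketch of this initialization scheme, continuing the `VerificationNet` sketch above (`pretrained_subnets` and its `first_layer` attribute are assumed names, not part of the present application):

```python
net = VerificationNet()
# Copy the first-layer weights of the 60 pre-trained sub-networks into the
# locally-connected layer of the full network, then keep them fixed so that
# only the second and third layers are tuned.
for g, sub in enumerate(pretrained_subnets):
    net.local[g].load_state_dict(sub.first_layer.state_dict())
    for p in net.local[g].parameters():
        p.requires_grad = False
optimizer = torch.optim.SGD(
    (p for p in net.parameters() if p.requires_grad), lr=0.01)
```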

[0032] The apparatus 1000 further comprises a training unit 30 configured to train a plurality of ConvNets for identity classification by inputting aligned regions of faces, as discussed in the above reference to Fig. 3. For each of the ConvNets, Fig. 8 illustrates a schematic flowchart for the training process consistent with some disclosed embodiments. As shown, in step S801, a face image is selected from a predetermined face training set. In one embodiment, the face image may be selected randomly. In step S802, an input to the ConvNet will be determined. In particular, the input may be a face patch cropped from the face selected in S801. A target output for the ConvNet corresponding to the input will also be previously determined, which is a vector of all zeros except the n-th element of the vector being 1, where n represents the identity index of the identity class to which the selected face belongs.

[0033] And then in step S803, the face patch determined above is inputted to the ConvNet to calculate its output by a process of forward propagation, which may include convolution operations and max-pooling operations as discussed below in reference to Formulas (1) and (2).

[0034] In step S804, the calculated output is compared with the target output to generate an error signal between the calculated output and the target output. The generated error signal is then back-propagated through the ConvNet so as to adjust parameters of the ConvNet in step S805. In step S806, it is determined if the training process has converged. If yes, the process is terminated; otherwise it will repeat steps S801-S805 until the training process is converged such that the parameters of the ConvNet are determined.
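
The following is a minimal sketch of the S801-S806 loop for one ConvNet, reusing the `HIFConvNet` sketch above. The `sample_training_example` helper (returning a cropped patch tensor and its identity index) is an assumed name, and the fixed step budget stands in for an explicit convergence test.

```python
import torch
import torch.nn.functional as F

def train_convnet(net, sample_training_example, n_identities=4349, steps=100000):
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
    for _ in range(steps):                           # S806: loop to convergence
        patch, n = sample_training_example()         # S801/S802: patch + identity
        target = torch.zeros(1, n_identities)        # S802: target output is all
        target[0, n] = 1.0                           #   zeros except position n
        _, logits = net(patch.unsqueeze(0))          # S803: forward propagation
        loss = F.cross_entropy(logits, target)       # S804: error signal, -log y_t
        optimizer.zero_grad()
        loss.backward()                              # S805: back-propagate and
        optimizer.step()                             #   adjust parameters
    return net
```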

[0035] Hereinafter, the convolution operations and the max-pooling operations mentioned above will be further discussed.

[0036] The convolution operation of each convolutional layer of the ConvNets as shown in Fig. 4 may be expressed as

$$y_j(r) = \max\left(0,\; b_j(r) + \sum_i k_{ij}(r) * x_i(r)\right), \qquad (1)$$

[0037] where $x_i$ and $y_j$ are the i-th input map and the j-th output map, respectively, $k_{ij}$ is the convolution kernel between the i-th input map and the j-th output map, $*$ denotes convolution, and $b_j$ is the bias of the j-th output map. Herein, the ReLU nonlinearity $y = \max(0, x)$ is used for hidden neurons, which is shown to have better fitting abilities than the sigmoid function. Weights in higher convolutional layers of the ConvNets are locally shared to learn different mid- or high-level features in different regions; $r$ indicates a local region where weights are shared. Max-pooling as shown in Fig. 4 may be formulated as

$$y^i_{j,k} = \max_{0 \le m,n < s} \left\{ x^i_{j \cdot s + m,\; k \cdot s + n} \right\}, \qquad (2)$$

where each neuron in the i-th output map $y^i$ pools over an $s \times s$ non-overlapping local region in the i-th input map $x^i$.
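
For illustration, these two operations can be written directly in NumPy. This is a naive sketch: the convolution is loop-based with fully shared weights, and it is written as cross-correlation, as is conventional in ConvNet implementations, rather than flipped-kernel convolution.

```python
import numpy as np

def conv_relu(x, k, b):
    """Formula (1): y_j = max(0, b_j + sum_i k_ij * x_i).
    x: (in_maps, H, W); k: (out_maps, in_maps, kh, kw); b: (out_maps,)."""
    out_maps, in_maps, kh, kw = k.shape
    H, W = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    y = np.zeros((out_maps, H, W))
    for j in range(out_maps):
        for u in range(H):
            for v in range(W):
                y[j, u, v] = b[j] + np.sum(k[j] * x[:, u:u + kh, v:v + kw])
    return np.maximum(y, 0.0)  # ReLU nonlinearity

def max_pool(x, s):
    """Formula (2): each output neuron pools over an s x s non-overlapping
    region of the corresponding input map. x: (maps, H, W)."""
    c, h, w = x.shape
    x = x[:, : h - h % s, : w - w % s]  # drop any ragged border
    return x.reshape(c, h // s, s, w // s, s).max(axis=(2, 4))
```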

[0038] The last hidden layer of HIFs may be fully-connected to at least one of the convolutional layers (after max-pooling). In one preferable embodiment, the last hidden layer of HIFs is fully-connected to both the third and fourth convolutional layers (after max-pooling) such that it sees multi-scale features (features in the fourth convolutional layer are more global than those in the third one). This is critical to feature learning because, after successive down-sampling along the cascade, the fourth convolutional layer contains too few neurons and becomes the bottleneck for information propagation. Adding the bypassing connections between the third convolutional layer (referred to as the skipping layer) and the last hidden layer reduces the possible information loss in the fourth convolutional layer. The last hidden layer may take the function

$$y_j = \max\left(0,\; \sum_i x^1_i \cdot w^1_{i,j} + \sum_i x^2_i \cdot w^2_{i,j} + b_j\right), \qquad (3)$$

where $x^1$, $w^1$, $x^2$, $w^2$ denote neurons and weights in the third and fourth convolutional layers, respectively. It linearly combines features in the previous two convolutional layers, followed by the ReLU non-linearity.

[0039] The ConvNet output $\hat{y}_i$ is a multiple-way (4349-way, for example) soft-max predicting the probability distribution over a plurality of (4349, for example) different identities. Taking a formed vector of 160 dimensions and 4349 different identities as an example, the output $\hat{y}_i$ may be formulated as

$$\hat{y}_i = \frac{\exp(y_i)}{\sum_{j=1}^{4349} \exp(y_j)}, \qquad (4)$$

where $y_j = \sum_i x_i \cdot w_{i,j} + b_j$ linearly combines the 160 HIFs $x_i$ as the input of neuron $j$, and $\hat{y}_j$ is its output. The ConvNet is learned by minimizing $-\log \hat{y}_t$, with $t$ being the target class. Stochastic gradient descent may be used with gradients calculated by back-propagation.
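
In NumPy, Formula (4) and the training criterion amount to the following sketch; the max-subtraction is a standard numerical-stability safeguard, not part of the formula itself.

```python
import numpy as np

def softmax_output(hifs, W, b):
    """Formula (4): y_j = sum_i x_i * w_ij + b_j feeds a 4349-way softmax.
    hifs: (160,); W: (160, 4349); b: (4349,)."""
    y = hifs @ W + b
    y = y - y.max()                     # numerical stability
    return np.exp(y) / np.exp(y).sum()  # probability over identities

# Training minimizes the negative log-probability of the target class t:
# loss = -np.log(softmax_output(hifs, W, b)[t])
```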

[0040] Fig. 6 shows a flowchart illustrating a method for face verification consistent with some disclosed embodiments. In Fig. 6, process 200 comprises a series of steps that may be performed by one or more of processors 102-106 or each module/unit of the apparatus 1000 to implement a data processing operation. For purpose of description, the following discussion is made in reference to the situation where each module/unit of the apparatus 1000 is made in hardware or the combination of hardware and software. Those skilled in the art shall appreciate that other suitable devices or systems shall be applicable to carry out the following process, and the apparatus 1000 is just used as an illustration to carry out the process.

[0041] At step S101, the apparatus 1000 operates to extract HIFs from different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs. In one embodiment, the unit 10 of the apparatus 1000 may, for example, detect five facial landmarks, including the two eye centers, the nose tip, and the two mouth corners, with the facial point detection method proposed by the prior art. Faces are globally aligned by similarity transformation according to the two eye centers and the mid-point of the two mouth corners. Features are extracted from 60 (for example) face patches with 10 (for example) regions, three scales, and RGB or gray channels. Fig. 3 shows the 10 face regions and the 3 scales of two particular face regions. The training unit 30 trained 60 ConvNets, each of which extracts two 160-dimensional HIF vectors from a particular patch and its horizontally flipped counterpart. A special case is patches around the two eye centers and the two mouth corners, which are not flipped themselves, but the patches symmetric with them (for example, the flipped counterpart of the patch centered on the left eye is derived by flipping the patch centered on the right eye).

[0042] And then in step S102, the apparatus 1000 operates to concatenate, for each of the faces, the extracted HIFs to form a feature vector. In the example in which the training unit 30 trained a plurality of (60, for example) ConvNets, the feature extracting unit 10 may extract HIFs for different regions of faces by using these differently trained ConvNets, and then concatenate, for each of the faces, the extracted HIFs to form a feature vector, the total length of which may be, for example, 19,200 (160 × 2 × 60) in case there are 60 ConvNets, each of which extracts 160 × 2 dimensions of HIFs. The concatenated HIFs are ready for the final face verification.
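
A sketch of this concatenation step, assuming `convnets` is a list of 60 feature extractors that each map a patch to a 160-dimensional HIF vector, and `patches`/`flipped` hold the corresponding patch pairs (all three names are assumptions for illustration):

```python
import numpy as np

hif_pairs = [np.concatenate([extract(p), extract(pf)])  # 160 x 2 per ConvNet
             for extract, p, pf in zip(convnets, patches, flipped)]
feature_vector = np.concatenate(hif_pairs)              # 160 x 2 x 60 = 19,200
assert feature_vector.shape == (19200,)
```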

[0043] And then in step S103, the apparatus 1000 operates to compare two of the formed vectors extracted from the two faces, respectively, to determine if the two vectors are from the same identity or not. In some of the embodiments of the present application, the Joint Bayesian technique for face verification based on the HIFs may be used. Joint Bayesian has been highly successful for face verification. It represents the extracted facial features $x$ (after subtracting the mean) by the sum of two independent Gaussian variables

$$x = \mu + \epsilon, \qquad (5)$$

where $\mu \sim N(0, S_\mu)$ represents the face identity and $\epsilon \sim N(0, S_\epsilon)$ represents the intra-personal variations. Joint Bayesian models the joint probability of two faces given the intra- or extra-personal variation hypothesis, $P(x_1, x_2 \mid H_I)$ and $P(x_1, x_2 \mid H_E)$. It is readily shown from Equation (5) that these two probabilities are also Gaussian with variations

$$\begin{bmatrix} S_\mu + S_\epsilon & S_\mu \\ S_\mu & S_\mu + S_\epsilon \end{bmatrix} \qquad (6)$$

and

$$\begin{bmatrix} S_\mu + S_\epsilon & 0 \\ 0 & S_\mu + S_\epsilon \end{bmatrix}, \qquad (7)$$

respectively. $S_\mu$ and $S_\epsilon$ can be learned from data with the EM algorithm. In test, it calculates the likelihood ratio $\log \frac{P(x_1, x_2 \mid H_I)}{P(x_1, x_2 \mid H_E)}$, which has closed-form solutions and is efficient.
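
A direct, unoptimized sketch of this test follows: given $S_\mu$ and $S_\epsilon$ (assumed already learned by EM), it builds the two covariances of Formulas (6) and (7) and returns the log likelihood ratio. Closed-form simplifications exist; this version simply mirrors the equations.

```python
import numpy as np

def joint_bayesian_score(x1, x2, S_mu, S_eps):
    """Log-likelihood ratio log P(x1,x2|H_I) - log P(x1,x2|H_E) for two
    mean-subtracted feature vectors; higher means more likely same identity."""
    d = len(x1)
    z = np.concatenate([x1, x2])
    S_I = np.block([[S_mu + S_eps, S_mu],                 # Formula (6)
                    [S_mu, S_mu + S_eps]])
    S_E = np.block([[S_mu + S_eps, np.zeros((d, d))],     # Formula (7)
                    [np.zeros((d, d)), S_mu + S_eps]])
    def log_gauss(S):  # log N(z; 0, S), up to a constant that cancels in the ratio
        _, logdet = np.linalg.slogdet(S)
        return -0.5 * (logdet + z @ np.linalg.solve(S, z))
    return log_gauss(S_I) - log_gauss(S_E)
```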

[0044] Fig. 7 illustrates a flowchart to show how the neural network model as shown in Fig. 5 works in the step S103. In step S1031, the input layer 501 operates to group the HIFs of the feature vectors formed in step S102 into n groups. Each group contains HIFs extracted by the same ConvNets. In S1032, the locally-connected layer 502 operates to extract local features from each group of HIFs. In S1033, the fully-connected layer 503 operates to extract global features from the previously extracted local features. In S1034, the output neuron 504 operates to calculate a single face similarity score based on the previously extracted global features.

[0045] Although the preferred examples of the present invention have been described, those skilled in the art can make variations or modifications to these examples upon knowing the basic inventive concept. The appended claims are intended to be considered as comprising the preferred examples and all the variations or modifications falling into the scope of the present invention.

[0046] Obviously, those skilled in the art can make variations or modifications to the present invention without departing from the spirit and scope of the present invention. As such, if these variations or modifications belong to the scope of the claims and equivalent technique, they may also fall into the scope of the present invention.