Title:
A NOVEL DEEP LEARNING-BASED METHOD FOR MATCHING CRIME SCENE DATA
Document Type and Number:
WIPO Patent Application WO/2023/249599
Kind Code:
A1
Abstract:
The invention relates to a novel method for performing matching by using feature vectors obtained from a deep learning model applied to patches extracted around the minutiae (key points) in traces obtained from the crime scene.

Inventors:
OZTURK HALIL IBRAHIM (TR)
SELBES BERKAY (TR)
ARTAN YUSUF OGUZHAN (TR)
Application Number:
PCT/TR2023/050606
Publication Date:
December 28, 2023
Filing Date:
June 22, 2023
Assignee:
HAVELSAN HAVA ELEKTRONIK SAN VE TIC A S (TR)
International Classes:
G06V40/12
Foreign References:
US20200193117A12020-06-18
Attorney, Agent or Firm:
CANKAYA PATENT MARKA VE DANISMANLIK LIMITED SIRKETI (TR)
Claims:
CLAIMS

The invention is a deep learning-based method for matching crime scene data, comprising the following steps:

- creating a data set by using 30,000-100,000 images of rolled and plain impression fingerprints of the same finger (1),

- performing minutiae detection by using the minutiae detection algorithm on each image in the data set (2),

- performing minutiae matches between minutiae key points of the rolled and plain impression fingerprints in the data set automatically or manually by means of the related images, with at least 8 minutiae matches for each impression pair (3),

- enhancing the images by performing image enhancement algorithms on all images in the data set (4),

- obtaining patches of size 64x64, 128x128 or 192x192 of rolled and plain impression images by using the minutiae matching points and the related images obtained (5), characterized in that it comprises the following process steps:

- creating the ground truth matrix (G) of size 64x64x6, 128x128x6 or 192x192x6 such that it represents the spatial position and angular distribution of the minutiae in the obtained patches, as specified in Equation 4 and Equation 5 (6), (Equation 5)

- inputting the obtained patches to the backbone component of the deep learning model such that the ridge flow and minutiae distribution of the obtained patches are preserved (7),

- generating features of size 8x8x64, 8x8x128 or 8x8x256 for patches of size 64x64, 128x128 or 192x192 in the backbone component (8),

- separating the feature of size 8x8x64, 8x8x128 or 8x8x256, which is the input to the minutiae segmentation component, into two different branches (9),

- creating a vector of size 64, 128 or 256 by applying an average pooling layer to the features coming from the backbone in step 8, for the descriptor generating component (10),

- computing the loss function over the vector of size 64, 128 or 256 generated in step 10 by using Equation 1 (11), (Equation 1)

- generating features of size 8x8x32, 8x8x64 or 8x8x128 by passing the matrix of size 8x8x64, 8x8x128 or 8x8x256 coming from step 8 through the convolution layer (12),

- transmitting the feature generated in step 9 to the average pooling layer (13),

- transmitting the data from the average pooling layer to the convolution layer (14),

- creating a feature of size 1x1x32, 1x1x64 or 1x1x128 after the convolution layer and inputting it to the upsampling layer (15),

- multiplying the features from the two branches and creating a single feature of size 8x8x32, 8x8x64 or 8x8x128 (16),

- inputting the feature of size 8x8x32, 8x8x64 or 8x8x128 to the convolution layer (17),

- creating a feature of size 8x8x6 in the convolution layer and inputting these features to the upsampling layer (18),

- generating a matrix of size 64x64x6, 128x128x6 or 192x192x6 that encodes the spatial and angular information of the minutiae in the upsampling layer (19),

- calculating the mean square error (MSE) loss function between the ground truth matrix (G) and the generated matrix (Gd) of size 64x64x6, 128x128x6 or 192x192x6 by using Equation 2 (20), (Equation 2)

G: the ground truth matrix obtained in step (6).

Gd: the matrix obtained in step (19).

numel(G): the number of elements in the matrix G.

- calculating the loss value by weighting and summing the loss values obtained from Equation 1 in step 11 and Equation 2 in step 20, as specified in Equation 6 (21),

L = λ1 x L_AAM + λ2 x L_MSE (Equation 6)

- updating the weights of the parameters in the deep learning model to reduce the difference resulting from the comparison (22),

- creating a test set consisting of pairs of images: an image obtained from the crime scene and rolled or plain impression images of the person's finger (23),

- performing the minutiae detection on the test set (24),

- enhancing the images by performing an image enhancement algorithm on the image in the test set (25),

- obtaining patches of size 64x64, 128x128 or 192x192 around each minutiae from the enhanced images for each image in the test data set (26),

- transforming the obtained patches into 1x64, 1x128 or 1x256 vectors by means of the trained deep learning model (27),

- scaling the resulting 1x64, 1x128 or 1x256 vectors such that the norm is 1 (28),

- representing each image in the test data set with a matrix of size "number of minutiae x 64", "number of minutiae x 128" or "number of minutiae x 256" (29),

- obtaining a similarity score matrix (Equation 3) by multiplying the matrix of size "number of minutiae x X" of the crime scene image (latent fingerprint image) and the matrix of size "number of minutiae x X" of the compared sensor image, in order to compare each crime scene (latent) image in the test data set with each rolled/plain impression image in the test set (30),

Score Matrix = (MxX) x (XxN) = MxN (Equation 3)

M: the number of minutiae found in the crime scene image.

N: the number of minutiae in the sensor image.

X: 64, 128 or 256

- vectorizing the similarity score matrix (NxM) calculated for each crime scene image in the test data set and each sensor image in the database by flattening it into a vector of length 1x(NxM) (31),

- sorting the obtained vector from largest to smallest (or vice versa) and calculating the match score by summing between 8 and 16 vector element values having the highest similarity values (32),

- ranking the crime scene images (latent fingerprint images) in the test data set by using the match score information obtained for each, and calculating performance values (33).

Description:
A NOVEL DEEP LEARNING-BASED METHOD FOR MATCHING CRIME SCENE DATA

Technical Field

The invention relates to a novel method for performing matching by using feature vectors obtained from a deep learning model applied to patches extracted around the minutiae (key points) in traces obtained from the crime scene.

Background

A fingerprint is a unique physical characteristic of a person that does not change throughout their lifetime, and it is the most widely used biometric characteristic for the identification of individuals. A fingerprint is a pattern of ridges and valleys (grooves) on the tip of a person's fingers. Fingerprints have key points, called minutiae, located at ridge endings and ridge bifurcations.

Fingerprint matching systems determine whether two given fingerprints belong to the same finger.

Early work on fingerprint recognition treated fingerprint matching as the problem of matching 2-D minutiae point clouds over the entire fingerprint. Matching methods that consider the entire fingerprint are computationally expensive. In addition, the images taken from the crime scene do not always contain the full fingerprint, and may be incomplete or distorted. Therefore, matching methods that consider the fingerprint as a whole cannot produce good, accurate matches when minutiae points are missing. To solve these problems, local descriptor-based fingerprint matching methods have been developed using only the spatial coordinates and angle information of each minutiae point [1, 2, 3, 4]. In these methods, descriptors extracted from a fixed set of minutiae points are used to measure the similarity between rolled and plain impression fingerprints. While these methods work efficiently and well for data sets containing sensor images, their performance for latent (crime scene) fingerprint images is quite poor. A few studies have proposed methods [5, 6] to improve the performance of local descriptor-based methods for crime scene fingerprint recognition by using features beyond minutiae, such as the number of papillary lines and orientation maps of papillary lines. However, good performance has not been achieved with these methods.

In recent years, deep learning-based methods have been widely used in crime scene trace recognition. These deep learning-based methods are frequently used for minutia extraction and local patch feature representation purposes.

Engelsma et al. [7] proposed a deep learning-based fingerprint recognition algorithm. In the study, the spatial position and texture of the fingerprint minutiae are used for matching. This method works only for fingerprints obtained from sensors. Fingerprints collected from the crime scene often cannot be recovered in full, and the aforementioned study's inability to handle missing fingerprint image fragments is a shortcoming for crime scene fingerprint recognition.

In the United States patent document numbered US9613251B2, a fingerprint recognition method that evaluates the neighborhood of minutiae is proposed. The one-to-one comparison of the minutiae neighborhoods of the reference fingerprint and the query fingerprint leads to long matching times for large databases. At the same time, the fact that it does not use image-derived features of fingerprints is considered a shortcoming.

The European patent document numbered EP0050842A2, which is in the state of the art, discloses a fingerprint matching method based on the overlap of minutiae collected from different fingerprints. A matching score is determined by calculating the angular and positional deviation of the matched minutiae. One-to-one comparison of the spatial distribution of the minutiae increases the matching time, which is a shortcoming when working with big data.

The International patent document numbered WO2021048174A1, which is in the state of the art, recommends a fingerprint matching method using local minutiae features. Local minutiae features are determined by a neighborhood relationship or the Delaunay triangulation method, and are matched according to their geometric similarity. The use of minutia-derived features as the only features extracted from fingerprints can be considered a shortcoming.

The International patent document numbered WO2020167655A1, which is in the state of the art, discloses a deep learning-based fingerprint recognition method.

The method extracts specific features for a given fingerprint from the image using a first neural network, which is trained to identify specific features in the fingerprints. A second neural network is used to extract the texture features of the fingerprint from the image.

The International patent document numbered WO2014068089A1 discloses a method based on fine details in a fingerprint image and a reference fingerprint. Candidate matches are filtered based on the difference between the positions in the first and second sets of details.

The Japanese patent document numbered JP2021532453A, which is in the known state of the art, discloses systems and methods for rapidly extracting noise-resistant skin trace features from digital signals using a feed-forward neural network. The proposed neural network-based system is superior to other neural network-based systems in terms of both speed and accuracy. The features extracted using the system can be used at least for tasks such as biometric authentication, identification, or fingerprint analysis.

However, sensor fingerprints (rolled/plain fingerprints) are used in the applications cited above. Since the fingerprints obtained from the crime scene are in small pieces (segments, patches), problems such as prolonged matching time or inaccurate matching arise when matching crime scene (latent) fingerprints with the sensor fingerprints stored in a database. Therefore, there is a need for a novel deep learning-based method for matching crime scene data.

Objects of the Invention

The objective of the present invention is to provide a novel deep learning-based method for matching crime scene data that allows the fingerprint images obtained from the crime scene to be matched with the fingerprint images in the database quickly and accurately.

Another objective of the present invention is to develop a novel deep learning-based method for matching latent fingerprints using local feature vectors extracted from patches obtained from fingerprint images. These features represent both the spatial and angular distribution of the minutiae points within the patch and the ridge flow of the patch.

Another objective of the present invention is to provide a novel deep learning-based method for matching crime scene data (latent fingerprints) that shortens the training data generation process by using information from weakly labeled minutiae pairs in rolled and plain impression finger images in the training phase.

Another objective of the present invention is to provide a novel deep learning-based method for matching crime scene data that makes the features obtained for patches extracted from images more distinctive by using a novel cost function in the training phase.

Detailed Description of the Invention

Figures of the novel deep learning-based method for matching crime scene data, realized to achieve the objects of the present invention, are shown below. These figures are:

Figure 1: Flow diagram view of the deep learning-based method for matching latent fingerprints.

Figure 2: a) Latent fingerprint with superimposed minutiae shown in white, b) plain fingerprint with superimposed minutiae shown in white, c-d) enhancement maps with a single minutiae, e-f) local minutiae patch with and without rotation, and g-h) minutiae segmentation maps.

The invention is a deep learning-based method for matching crime scene data comprising the following steps: creating a data set by using 30,000-100,000 images of rolled and plain impression fingerprints of the same finger (1); performing minutiae detection by using the minutiae detection algorithm on each image in the data set (2); performing minutiae matches between minutiae key points of the rolled and plain impression fingerprints (corresponding to the same finger) in the data set automatically or manually, with at least 8 minutiae matches for each impression pair (3); enhancing the images by performing an image enhancement algorithm on all images in the data set (4); obtaining patches of size 64x64, 128x128 or 192x192 of rolled and plain impression images by using the minutiae matching points and the related images obtained (5); creating the ground truth matrix (G) of size 64x64x6, 128x128x6 or 192x192x6 such that it represents the spatial position and angular distribution of the minutiae in the obtained patches, as specified in Equation 4 and Equation 5 (6), (Equation 5); inputting the obtained patches to the backbone component of the deep learning model such that the ridge flow and minutiae distribution of the obtained patches are preserved (7); generating features of size 8x8x64, 8x8x128 or 8x8x256 for patches of size 64x64, 128x128 or 192x192 in the backbone component (8); separating the feature of size 8x8x64, 8x8x128 or 8x8x256, which is the input to the minutiae segmentation component, into two different branches (9); creating a vector of size 64, 128 or 256 by applying an average pooling layer to the features coming from the backbone in step 8, for the descriptor generating component (10); computing the loss function over the vector of size 64, 128 or 256 generated in step 10 by using Equation 1 (11), (Equation 1); generating features of size 8x8x32, 8x8x64 or 8x8x128 by passing the matrix of size 8x8x64, 8x8x128 or 8x8x256 coming from step 8 through the convolution layer (12); transmitting the feature generated in step 9 to the average pooling layer (13); transmitting the data from the average pooling layer to the convolution layer (14); creating a feature of size 1x1x32, 1x1x64 or 1x1x128 after the convolution layer and inputting it to the upsampling layer (15); multiplying the features from the two branches and creating a single feature of size 8x8x32, 8x8x64 or 8x8x128 (16); inputting the feature of size 8x8x32, 8x8x64 or 8x8x128 to the convolution layer (17); creating a feature of size 8x8x6 in the convolution layer and inputting these features to the upsampling layer (18); generating a matrix of size 64x64x6, 128x128x6 or 192x192x6 that encodes the spatial and angular information of the minutiae in the upsampling layer (19); calculating the mean square error (MSE) loss function between the ground truth matrix (G) and the generated matrix (Gd) of size 64x64x6, 128x128x6 or 192x192x6 by using Equation 2 (20),

G: the ground truth matrix obtained in step (6).

Gd: the matrix obtained in step (19).

numel(G): the number of elements in the matrix G.

Calculating the loss value by weighting and summing the loss values obtained from Equation 1 in step 11 and Equation 2 in step 20, as specified in Equation 6 (21),

L = λ1 x L_AAM + λ2 x L_MSE (Equation 6)

Updating the weights of the parameters in the deep learning model to reduce the difference resulting from the comparison (22); creating a test set consisting of pairs of images: an image obtained from the crime scene and rolled or plain impression images of the person's finger (23); performing the minutiae detection on the test set (24); enhancing the images by performing an image enhancement algorithm on the images in the test set (25); obtaining patches of size 64x64, 128x128 or 192x192 around each minutiae from the enhanced images for each image in the test data set (26); transforming the obtained patches into 1x64, 1x128 or 1x256 vectors by means of the trained deep learning model (27); scaling the resulting 1x64, 1x128 or 1x256 vectors such that the norm is 1 (28); representing each image in the test data set with a matrix of size "number of minutiae x 64", "number of minutiae x 128" or "number of minutiae x 256" (29); obtaining a similarity score matrix (Equation 3) by multiplying the matrix of size "number of minutiae x X" of the crime scene image and the matrix of size "number of minutiae x X" of the compared sensor image, in order to compare each crime scene image in the test data set with each rolled/plain impression image in the test set (30),

Score Matrix = (MxX) x (XxN) = MxN (Equation 3)

M: the number of minutiae found in the crime scene image.

N: the number of minutiae in the sensor image.

X: 64, 128 or 256

- vectorizing the similarity score matrix (NxM) calculated for each crime scene image in the test data set and each sensor image in the database by flattening it into a vector of length 1x(NxM) (31),

- sorting the obtained vector from largest to smallest (or vice versa) and calculating the match score by summing between 8 and 16 vector element values having the highest similarity values (32),

- ranking the crime scene images in the test data set by using the match score information obtained for each, and calculating performance values (33). A minimal sketch of this matching computation is given below.
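The following sketch transcribes the matching computation of steps (27)-(32), assuming the patch descriptors have already been produced by the trained model; the function and variable names are illustrative and not part of the claimed method.

import numpy as np

def match_score(latent_desc, sensor_desc):
    # latent_desc: (M, X) descriptor matrix of the crime scene image.
    # sensor_desc: (N, X) descriptor matrix of the sensor image.
    # Step (28): scale each 1xX vector such that its norm is 1.
    L = latent_desc / np.linalg.norm(latent_desc, axis=1, keepdims=True)
    S = sensor_desc / np.linalg.norm(sensor_desc, axis=1, keepdims=True)
    # Step (30), Equation 3: (MxX) x (XxN) = MxN score matrix.
    score_matrix = L @ S.T
    # Steps (31)-(32): flatten, sort from largest to smallest, sum the top
    # entries (the claims allow 8 to 16; 12 is an illustrative choice).
    k = min(12, score_matrix.size)
    top = np.sort(score_matrix.ravel())[::-1][:k]
    return float(top.sum())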

The spatial position information of the minutiae is learned in the backbone component and the minutiae segmentation component of the model with the gradient information from the minutiae segmentation component. Equations 1, 4, 5 and 6 are used to pass the obtained patches as input to the backbone component of the deep learning model such that the ridge flow and minutiae distribution are preserved (7). A loss function that jointly preserves the spatial and angular distribution of the minutiae in the patches and the ridge flow in the patches results in a feature vector representing each patch.

The MinNet model of the developed method jointly optimizes for the spatial and angular distribution of neighboring minutiae and the ridge flows of the patches.

Rolled/plain fingerprints have high image quality. However, fingerprints are recovered from the crime scene by various means (photographing, dusting, chemical processing, etc.). Therefore, latent images usually have poor image quality and may contain ambiguous ridge structures, deformations and spurious artifacts that make the fingerprint recognition task more difficult.

(i) Descriptor generation phase:

Local patches

For minutiae matching in latent and sensor fingerprints, it is crucial to encode the information around the minutiae into descriptors. As shown in Figures 2 (e) and 2 (f), minutiae descriptors are generated from local patches cropped around the minutiae. However, before cropping the patch, the fingerprint image is enhanced. The enhancement is achieved by removing the contamination in the image and improving the local ridge flow. After the enhancement process, the unnecessary background around the fingerprint is removed. For this purpose, segmentation masks, enhancement maps and minutiae are extracted from the latent fingerprint using the FingerNet algorithm trained for the minutiae extraction task. This brings the latent fingerprint and the sensor fingerprints into the same domain, as shown in Figures 2 (c) and (d). The last step in creating a minutiae patch is to rotate the patch by the minutiae angle (counterclockwise). Thus, the minutiae angle is aligned with the horizontal +x axis, which makes the descriptors rotation-invariant.
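As a concrete illustration of this crop-and-rotate step, the sketch below extracts a rotation-normalized patch around one minutia; the function name, the white border fill and the assumption that the minutia lies far enough from the image border are illustrative choices, not details taken from the patent.

import cv2

def extract_patch(enhanced_img, x, y, theta_deg, size=128):
    # Rotate the enhanced fingerprint counterclockwise by the minutia
    # angle about the minutia centre, so the angle aligns with the +x axis.
    M = cv2.getRotationMatrix2D((float(x), float(y)), theta_deg, 1.0)
    h, w = enhanced_img.shape[:2]
    rotated = cv2.warpAffine(enhanced_img, M, (w, h), borderValue=255)
    # Crop a size x size patch centred on the minutia.
    half = size // 2
    return rotated[int(y) - half:int(y) + half, int(x) - half:int(x) + half]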

Label creation

The proposed method is trained using a special data set of matching rolled and plain fingerprint image pairs. In the training phase of the MinNet model, matching minutiae pairs from rolled and plain fingerprints of the same finger are used to extract patches around the minutiae. Minutiae pairs are generated using the minutiae cylinder coding (MCC) algorithm. To remove contamination from the ground truth pairs, only high-scoring image pairs are utilized. For matches with strong match scores (matches above a certain threshold), the MCC algorithm selects the top 8 minutiae pairs with the highest local similarity scores.

Descriptor generator

In the training phase, the developed method aims to generate similar descriptor vectors for identical minutiae patches from different fingerprints and to increase the dissimilarity of descriptor vectors corresponding to mismatched minutiae patches. To achieve this aim, the additive angular margin (AAM) loss is used in the training process of the MinNet model. (Equation 1)

The additive angular margin loss is based on the cross-entropy loss and softmax operations, as shown in Equation 1. This loss function requires a linear layer at the end of the backbone, as shown in Figure 1. This linear layer contains weight vectors (Wi) for each class. In the training phase, the angle between a feature vector (xi) and the weight (Wj) of a different class is increased by at least the margin (m), while the angle between the feature vector (xi) and its own class weight (Wi) is decreased. Since the linear layer is not used after training, it is removed from the MinNet model. The margin (m) parameter of the additive angular margin loss of Equation 1 is set to 28.6 degrees. The scale parameter (s) is set to 16 in the training phase.
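The sketch below shows one common formulation of an additive angular margin loss (ArcFace-style) consistent with this description, using the stated m = 28.6 degrees and s = 16; the class count and layer shapes are assumptions for illustration, not the exact Equation 1 of the patent.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularMarginLoss(nn.Module):
    def __init__(self, feat_dim, num_classes, s=16.0, m=math.radians(28.6)):
        super().__init__()
        self.s, self.m = s, m
        # Weight vectors Wi of the final linear layer (removed after training).
        self.W = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, x, labels):
        # Cosine of the angle between each descriptor xi and each class weight Wj.
        cos = F.linear(F.normalize(x), F.normalize(self.W))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the margin m only to the angle of the true class, then rescale by s.
        onehot = F.one_hot(labels, self.W.shape[0]).bool()
        logits = self.s * torch.cos(torch.where(onehot, theta + self.m, theta))
        return F.cross_entropy(logits, labels)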

Minutiae Mapping

The relative positions and angles of neighboring minutiae with respect to a minutiae provide distinctive information for matching local minutiae patches. Since this information leads to better matching performance, the positions and angles of minutiae within local patches are encoded. From the generated descriptor, the MinNet model learns to reconstruct neighboring minutiae. In this way, the MinNet model gains the ability to explicitly encode the positional and angular information of neighboring minutiae.

The reconstruction of neighboring minutiae for a patch is performed by the minutiae segmentation branch, as shown in Figure 1. The segmentation branch of the MinNet model generates the segmentation map using the feature map produced by the backbone. The descriptor generation branch performs global average pooling over the same feature map to obtain the minutiae descriptor, as shown in Figure 1. Since no weight parameters are used in the pooling process, the descriptor also contains the spatial and angular information of neighboring minutiae required by the minutiae segmentation branch.
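A minimal sketch of the two branches described above, for a 128x128 patch and an 8x8x128 backbone feature map; only the tensor shapes follow claim steps (9)-(19), while the kernel sizes and upsampling modes are assumptions.

import torch
import torch.nn as nn

class MinNetHeads(nn.Module):
    def __init__(self, c=128):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c // 2, 3, padding=1)   # step 12: 8x8xC -> 8x8xC/2
        self.pool = nn.AdaptiveAvgPool2d(1)               # step 13: pooled context
        self.conv2 = nn.Conv2d(c, c // 2, 1)              # step 14: 1x1xC -> 1x1xC/2
        self.up1 = nn.Upsample(size=(8, 8))               # step 15
        self.conv3 = nn.Conv2d(c // 2, 6, 1)              # step 18: 8x8x6
        self.up2 = nn.Upsample(size=(128, 128), mode="bilinear",
                               align_corners=False)       # step 19

    def forward(self, f):  # f: backbone feature map of shape (B, C, 8, 8)
        descriptor = f.mean(dim=(2, 3))                   # step 10: global average pooling
        a = self.conv1(f)                                 # first branch
        b = self.up1(self.conv2(self.pool(f)))            # second branch
        fused = a * b                                     # step 16: multiply the branches
        seg_map = self.up2(self.conv3(fused))             # steps 17-19: (B, 6, 128, 128)
        return descriptor, seg_map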

In the training phase, a multichannel minutiae segmentation map M of size w x h x 6 is used as the target of the minutiae segmentation branch. These minutiae segmentation maps encode the spatial and angular information of the minutiae. (Equation 5)

Each minutiae center (x, y) is represented as a Gaussian distribution with variance σ². Equation 5 gives the minutiae map value at position (i, j) of the k-th channel for the minutiae at (x, y) with angle θ. The value of σ is chosen to be 5.
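The following sketch builds such a 6-channel map for a 128x128 patch. The spatial term is the stated Gaussian with σ = 5; since the exact angular term of Equations 4-5 is not reproduced in the text, an angular Gaussian around each channel's centre angle is assumed here purely for illustration.

import numpy as np

def minutiae_map(minutiae, size=128, channels=6, sigma=5.0):
    # minutiae: iterable of (x, y, theta) with theta in radians.
    G = np.zeros((size, size, channels), dtype=np.float32)
    ii, jj = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    centres = 2 * np.pi * np.arange(channels) / channels
    for x, y, theta in minutiae:
        # Spatial Gaussian centred at the minutia with variance sigma**2.
        spatial = np.exp(-((ii - y) ** 2 + (jj - x) ** 2) / (2 * sigma ** 2))
        # Assumed angular weighting: wrapped distance of theta to each channel centre.
        d = np.angle(np.exp(1j * (theta - centres)))
        angular = np.exp(-d ** 2 / (2 * (np.pi / channels) ** 2))
        for k in range(channels):
            G[..., k] = np.maximum(G[..., k], spatial * angular[k])
    return G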

The output of the model from the minutiae segmentation branch (a 128x128x6 matrix) is called Gd. The corresponding ground truth matrix for the patch is called G. The mean square error (MSE) loss function is calculated as shown in Equation 2.

L_MSE = (1/(128 x 128 x 6)) Σ_{i,j,k} (G(i,j,k) - Gd(i,j,k))² (Equation 2)

Final loss

The mean square error (MSE) is used as the loss function for the minutiae segmentation map, while the AAM loss is used for the descriptor generation branch of the proposed MinNet model. Therefore, when combining the minutiae segmentation and descriptor generation branches, a weighted combination of the AAM loss and the MSE loss is used to train the model, as shown in Equation 6. The contribution of the losses is controlled by the values of λ1 and λ2. In the studies, λ1 and λ2 are set to 1 and 64, respectively.

L = λ1 x L_AAM + λ2 x L_MSE (Equation 6)
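A direct transcription of Equations 2 and 6 with the stated weights, as a sketch (the AAM term is assumed to come from a loss module such as the one above):

import torch.nn.functional as F

def total_loss(aam_loss, G_pred, G_true, lam1=1.0, lam2=64.0):
    # Equation 2: mean squared error over all numel(G) elements.
    l_mse = F.mse_loss(G_pred, G_true)
    # Equation 6: weighted sum with lambda1 = 1 and lambda2 = 64.
    return lam1 * aam_loss + lam2 * l_mse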

Data augmentation

To further improve the matching performance of the MinNet model descriptors, data augmentation techniques are used to increase the intraclass variation of the trained minutiae patch pairs. The augmentations are applied randomly with 25% probability, while the remaining enhanced minutiae patches are left unmodified. The augmentations applied are rotation and scaling.

Rotation: After rotating the enhanced minutiae patches by the minutiae angle, the patches are rotated by a randomly chosen degree in the range [-10, 10].

Scaling: The patch is scaled with randomly selected ratio values (0.8, 0.9, 1.1, 1.2). To avoid information loss at the edges of the patches, augmentation is applied before cropping the patch from the enhanced fingerprint. The same augmentations are applied to the segmentation maps of the augmented patches.
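This augmentation policy can be sketched as follows; combining the rotation and scale into one warp applied before cropping (so edge information is preserved) follows the text, while the function shape and names are illustrative assumptions.

import random
import cv2

def augmented_warp(enhanced_img, x, y, theta_deg):
    extra_rot, scale = 0.0, 1.0
    if random.random() < 0.25:  # augment 25% of the patches
        extra_rot = random.uniform(-10.0, 10.0)      # rotation in [-10, 10] degrees
        scale = random.choice([0.8, 0.9, 1.1, 1.2])  # stated scale ratios
    # Single warp combining minutia-angle alignment and the augmentation,
    # applied to the full image before the patch is cropped.
    M = cv2.getRotationMatrix2D((float(x), float(y)), theta_deg + extra_rot, scale)
    h, w = enhanced_img.shape[:2]
    return cv2.warpAffine(enhanced_img, M, (w, h))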

Implementation details

The proposed MinNet model uses MobileNetV3-Large [8] as the backbone. The rotated and cropped local patches are normalized per sample before being passed to the network. The Adam [9] optimizer with a batch size of 512 is used. The model was trained for 200 epochs using a learning rate of 1e-3.
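A sketch of this stated training setup (backbone choice, optimizer, batch size, learning rate, epoch count); the dataset object is a placeholder assumption.

import torch
import torchvision

backbone = torchvision.models.mobilenet_v3_large(weights=None).features  # [8]
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-3)             # [9]
# loader = torch.utils.data.DataLoader(patch_dataset, batch_size=512, shuffle=True)
# for epoch in range(200):
#     ...  # forward pass, total_loss, backward pass, optimizer.step()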

(ii) Matching phase:

Once the descriptor vectors for each minutiae of the latent and sensor fingerprints are created, the local similarity between the latent minutiae descriptor (vi) and the sensor minutiae descriptor (vj) is measured using the cosine similarity measure shown in Equation 7.

s(vi, vj) = (vi · vj) / (||vi|| ||vj||) (Equation 7)

Therefore, the given sensor and latent minutiae descriptor templates are A = {a1, a2, ..., a_nA} and B = {b1, b2, ..., b_nB}, respectively; s(a, b) denotes the local similarity between minutiae a ∈ A and b ∈ B, with s(·, ·): A x B → [-1, 1].

T ∈ [0, 1]^(nA x nB) denotes the similarity matrix corresponding to the templates A and B, comprising the local similarities between the minutiae embeddings, T[r, c] = s(ar, bc).

When comparing the minutiae descriptor templates of the latent image and the sensor image, it is necessary to obtain a global score value expressing the overall similarity from these local similarities. A local similarity assignment (LSA) algorithm is used to generate a global similarity score value. The Hungarian algorithm [10] is used to solve the linear assignment problem on the matrix T to find the set of n_p pairs P = {(ri, ci)} that maximize the global score without considering the same minutiae more than once. The global score, also known as the matching score, is calculated as in Equation 8. The parameter n_p is set equal to min(12, min(N, M)), where N and M correspond to the sizes of templates A and B, respectively. (Equation 8)
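A minimal sketch of this LSA scoring, using SciPy's Hungarian solver; since Equation 8 itself is not reproduced in the text, a plain sum over the selected pairs is assumed as the aggregation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def global_score(T):
    # T: local similarity matrix between the two descriptor templates.
    rows, cols = linear_sum_assignment(T, maximize=True)  # Hungarian algorithm [10]
    n_p = min(12, min(T.shape))                           # n_p = min(12, min(N, M))
    best = np.sort(T[rows, cols])[::-1][:n_p]             # keep the n_p best pairs
    return float(best.sum())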

The advantages obtained with the developed method are listed below.

It allows the construction of a new local descriptor that provides not only the local ridge flow information around each minutiae, but also the spatial and angular distribution of neighboring minutiae.

The developed method works very well for both latent fingerprint and sensor fingerprint recognition tasks. It produces successful results in both sensor-to-sensor and latent-to-sensor image matching.

REFERENCES

[1] Raffaele Cappelli, Matteo Ferrara, and Davide Maltoni. Minutia cylinder-code: A new representation and matching technique for fingerprint recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12):2128-2141, 2010.

[2] D. Maltoni, D. Maio, A.K. Jain, and S. Prabhakar. Handbook of Fingerprint Recognition. Springer-Verlag, New York, 2009.

[3] Miguel Angel Medina-Perez, Milton Garcia-Borroto, Andres Eduardo Gutierrez-Rodriguez, and Leopoldo Altamirano-Robles. Robust fingerprint verification using m-triplets. In 2011 International Conference on Hand-Based Biometrics, pages 1-5, 2011.

[4] R. Cappelli, M. Ferrara, D. Maio and D. Maltoni. "Metodo di codifica delle minuzie di una impronta digitale e corrispondente metodo di riconoscimento di impronte digitali" (method for encoding the minutiae of a fingerprint and corresponding fingerprint recognition method). Patent No. ITBO2009A000149, 2009.

[5] Alessandra A. Paulino, Jianjiang Feng, and Anil K. Jain. Latent fingerprint matching using descriptor-based Hough transform. IEEE Transactions on Information Forensics and Security, 8(1):31-45, 2013.

[6] Soweon Yoon, Jianjiang Feng, and Anil K. Jain. Latent fingerprint enhancement via robust orientation field estimation. In 2011 International Joint Conference on Biometrics (IJCB), pages 1-8, 2011.

[7] Engelsma, J.J., Cao, K. and Jain, A.K., 2019. Learning a fixed-length fingerprint representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6), pp. 1981-1997.

[8] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1314-1324, 2019.

[9] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[10] H.W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(4):83-97, 1955.