Computational Methods in Biometric Authentication: Statistical Methods for Performance Evaluation


Structure features are derived locally from the DCT frequency coefficients computed on a patch k. To measure the statistical traits of the DCT histograms of patch k, its kurtosis is computed to quantify the degree of its peakedness and tail weight, following the standard definition κ = μ₄/σ⁴ (the fourth central moment of the coefficients divided by the square of their variance). Anisotropy, a directionally dependent quality of images, was shown by Gabarda and Cristóbal [ 22 ] to decrease as more degradation is added to the image. Each patch consists of the DCT coefficients of oriented pixel intensities.
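As a rough illustration of this step, the following Python sketch computes the kurtosis of a patch's DCT coefficients with SciPy; the patch size, DCT normalization, and helper name are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import kurtosis

def patch_kurtosis(patch):
    """Kurtosis of the DCT coefficients of one image patch (illustrative)."""
    # 2-D DCT of the patch's pixel intensities
    coeffs = dctn(patch.astype(float), norm="ortho")
    # fisher=False returns the plain kurtosis (3.0 for a normal distribution)
    return kurtosis(coeffs.ravel(), fisher=False)

# toy usage on one 8x8 patch of random intensities
patch = np.random.randint(0, 256, (8, 8))
print(patch_kurtosis(patch))
```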

We discard the DC coefficient, since the focus is on directional information. Each DCT patch is then normalized. Because the perception of image details depends on the image resolution, the distance from the image plane to the observer, and the acuity of the observer's visual system, a multiscale approach is applied to compute the final global score.

The more the image is degraded, the lower the quality index. From left to right: the reference image, then its noisy versions. The pattern-based quality criteria rely on statistical measures of keypoint features. We use this approach because keypoint features describe, in a stable manner, the regions of the image where the information is important. This approach is widely used in object recognition [ 23 ] and biometric recognition [ 24 ].

For the descriptor vector computation, several methods exist in the literature, such as the scale-invariant feature transform (SIFT) [ 25 ], shape contexts [ 26 ], and speeded-up robust features (SURF) [ 27 ]. The SIFT algorithm has also been successfully used in biometric recognition for different modalities such as veins [ 24 ], face [ 29 ], fingerprint [ 30 ], and iris [ 31 ], as well as in 3D facial recognition [ 32 ]. The SIFT algorithm consists of four major stages: (1) scale-space extrema detection, (2) keypoint localization, (3) orientation assignment, and (4) keypoint descriptor computation.


In the first stage, potential interest points that are invariant to scale and orientation are identified using a difference-of-Gaussian function. In the second stage, candidate keypoints are localized to sub-pixel accuracy and eliminated if found to be unstable. The third stage identifies the dominant orientations for each keypoint based on its local image patch. In the final stage, the keypoint descriptor is created by sampling the magnitudes and orientations of the image gradients in a neighborhood of each keypoint and building smoothed orientation histograms that capture the important aspects of the neighborhood.
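As a hedged illustration of these four stages, the sketch below uses OpenCV's SIFT implementation, which performs all of them internally; the file name is a placeholder.

```python
import cv2

# Load a grayscale biometric sample (path is illustrative)
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

# Scale-space extrema detection, keypoint localization, orientation
# assignment, and descriptor computation all happen inside this call.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries a position, scale, and dominant orientation;
# each descriptor is a 128-dimensional vector of orientation histograms.
print(len(keypoints), descriptors.shape)
```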

An eight-bin orientation vector is associated with each coordinate of the descriptor array. The extracted features are invariant to image scaling and rotation, and partially invariant to changes in illumination and 3D camera viewpoint. From these features, four criteria are retained (see Section 4) to contribute to the quality assessment of the biometric raw data. In order to predict biometric sample quality using both types of information (image quality and pattern-based quality), we use the support vector machine (SVM).

From all existing classification schemes, an SVM-based technique has been selected due to the high classification rates obtained in previous works [ 33 ] and to its high generalization abilities. SVMs were proposed by Vapnik [ 34 ] and are based on the structural risk minimization principle from statistical learning theory. SVMs express predictions in terms of a linear combination of kernel functions centered on a subset of the training data, known as support vectors (SV). The first step is to map the data into a high-dimensional feature space via the kernel; the second step is to find an optimal decision hyperplane in this space. The criterion for optimality will be defined shortly.

The values w and b are the parameters defining the linear decision hyperplane. In this paper, we use the RBF kernel K(x, y) = exp(−γ‖x − y‖²). A script automatically scales the training and testing sets. Originally, SVMs were developed for two-class problems; however, several approaches can be used to extend SVMs to multi-class problems. The method we use in this communication is called one-against-one: instead of learning N decision functions, each class is discriminated from every other class, yielding N(N − 1)/2 binary classifiers. The point x is then assigned to the majority class after voting.
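A minimal sketch of this setup, assuming scikit-learn in place of the original tooling (the C and gamma values are placeholders, not the paper's tuned parameters): SVC applies the one-against-one strategy internally for multi-class problems, and the pipeline reproduces the feature-scaling step mentioned above.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: five quality criteria per image, y: quality class labels (toy data)
X = np.random.rand(100, 5)
y = np.random.randint(1, 11, 100)  # ten classes, as in the paper

# StandardScaler mirrors the automatic scaling of training/testing sets;
# SVC with an RBF kernel trains one binary classifier per pair of classes.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)
print(model.predict(X[:5]))
```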

The goal of the proposed quality metric is to detect, with reasonable accuracy, three synthetic alterations which may deeply affect the most widely used matching systems. The proposed metric may be considered independent of the matching system used. An example of its practical use is illustrated in Figure 5.

The method predicts the alteration of the input image. Then, depending on the robustness of the matching system against the predicted alteration, the matching system qualifies the image as being of good, fair, bad, or very bad quality. In Section 4, we present the experimental protocol followed by the validation process of the proposed metric. The results are then given in Section 4. Six benchmark databases and one biometric matching algorithm are used to validate the proposed metric. In this study, we introduce three types of synthetic alterations, with three levels for each type, using MATLAB:

Blurring alteration: blurred images are obtained using a two-dimensional Gaussian filter. Gaussian noise alteration: Gaussian white noise of mean m and variance v is added to the image I. Resize alteration: the image I is resized using nearest-neighbor interpolation. A sketch of the three alterations is given below. Using these alterations, the input vector to the SVM is composed of the five retained quality criteria (one for image quality and four for pattern-based quality), and the output can belong to ten different classes defined as follows (see Table 2): class 1 corresponds to the unaltered reference images, while classes 2 to 10 cover the three types of alterations at three levels each (see Section 4 for details about the introduced alterations).
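The following sketch illustrates the three alterations with SciPy/NumPy rather than MATLAB; the sigma, noise parameters, and resize factor are placeholder values, not the paper's three calibrated levels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def blur(img, sigma):
    # blurring alteration: two-dimensional Gaussian filter
    return gaussian_filter(img.astype(float), sigma=sigma)

def gaussian_noise(img, m, v):
    # Gaussian white noise of mean m and variance v added to image I
    noisy = img.astype(float) + np.random.normal(m, np.sqrt(v), img.shape)
    return np.clip(noisy, 0, 255)

def resize(img, factor):
    # resize alteration: nearest-neighbor interpolation (order=0)
    return zoom(img, factor, order=0)

img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
altered = [blur(img, 2), gaussian_noise(img, 0, 25), resize(img, 0.5)]
```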

In this study, we use six benchmark databases. For each database, we introduce three types of alterations (blurring, Gaussian noise, and resize) and three levels for each type of alteration. Such alterations commonly arise during the acquisition of biometric raw data and may deeply affect the overall performance of biometric systems. Finally, we get 60 databases: 6 reference and 54 altered databases (i.e., 6 databases × 3 alteration types × 3 levels).

These images have been captured under regulated illumination, and the variation of expression is moderate (Figure 6). Each sample corresponds to one pose, from left to right (Figure 7). FERET database [ 38, 39 ]: it is composed of individuals with from 5 to 91 samples per individual. Each sample corresponds to a pose angle, illumination, and expression (Figure 8). AR database [ 40 ]: it is composed of individuals with 26 samples per individual; these include images captured under different conditions of illumination, expression, and occlusion (Figure 9).

Hand veins database [ 24 ]: it is composed of 24 individuals with 30 samples per individual. From left to right: the reference image, then alteration levels 1, 2, and 3.

The biometric matching algorithm used is SIFT-based [ 25 ]; the matching similarity principle is described in previous works [ 24 ].

Each image im is described by a set of invariant features X_im, as described in Section 4. The verification between two images im_1 and im_2 corresponds to the similarity between the two sets of features X_im1 and X_im2. We thus use the following matching method, which is a modified version of a decision criterion first proposed by Lowe [ 25 ].

We then consider that keypoint x is matched to y iff x is associated to y and y is associated to x, as sketched below. Figure 13 presents an example of the matching resulting from a genuine and an impostor comparison. Example of matching results for a genuine comparison (left) and an impostor comparison (right).
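A minimal sketch of the mutual-association rule using OpenCV's brute-force matcher (the paper's full decision criterion is a modified version of Lowe's test; only the cross-check part is shown, and file names are placeholders):

```python
import cv2

img1 = cv2.imread("im1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("im2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# crossCheck=True keeps a pair (x, y) only if x's nearest descriptor in
# im2 is y AND y's nearest descriptor in im1 is x -- the mutual
# association rule described above.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)

# A simple similarity score: the number of mutual matches
print(len(matches))
```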

According to Grother and Tabassi [ 8 ], biometric quality metrics should predict matching performance; that is, a quality metric takes a biometric raw data sample and produces a class or a scalar related to the error rates associated with that sample. Therefore, we use the equal error rate (EER), which illustrates the overall performance of a biometric system [ 42 ]. In order to validate the proposed quality metric, we proceed as follows. Quality criteria behavior with alterations: the first step of the validation process consists of showing the robustness of the five quality criteria used in detecting the introduced alterations (blurring, Gaussian noise, and resize). For the fingerprint and hand veins databases, we learn two multi-class SVM models, respectively.

In order to train and test the multi-class SVM models, we split the images of each benchmark database into two sets, S_training and S_test, in a balanced way. The choice of the kernel and the selection of the required parameters are presented in Section 4. Quality sets definition: the proposed metric predicts a quality class for the target image. In order to show the utility of this metric, we need to define the quality sets for the authentication system used. Depending on the authentication system used, some alterations may have more impact on its global performance than others.

Thereafter, we use the EER to illustrate the global performance of the biometric system. EER value of each quality set: in order to quantify the effectiveness of our quality metric in predicting system performance, we assigned each image to a quality set using the label predicted by our metric. Then, we calculated the EER value for each quality set. The effectiveness of the method is quantified by how well our quality metric predicts system performance among the defined quality sets. More generally speaking, the more the images are degraded, the more the performance of the overall system decreases (illustrated by an increase of its EER value). A minimal EER computation is sketched below.
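A minimal EER computation on synthetic genuine and impostor scores (assuming higher scores mean better matches; this is a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Equal error rate from genuine and impostor score arrays."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptances
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejections
    idx = np.argmin(np.abs(far - frr))                            # crossing point
    return (far[idx] + frr[idx]) / 2

genuine = np.random.normal(0.8, 0.1, 500)
impostor = np.random.normal(0.4, 0.1, 500)
print(compute_eer(genuine, impostor))
```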

In this section, we show the robustness of the criteria used in detecting the alterations presented in the previous section. In order to compute the correlation of the criteria with the three types of alterations, we define, for each type of alteration and for each criterion p, the following variables: alteration levels are represented by the variable Y (1 for the reference databases; 2, 3, and 4 for the altered databases at levels 1, 2, and 3). Therefore, the vector V used to predict biometric sample quality is a five-dimensional vector containing one image quality criterion and four pattern-based criteria, as depicted in Table 3.
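As a plausible reading of this correlation analysis (the exact coefficient is not restated here), the sketch below computes a Pearson correlation between toy values of one criterion p and the alteration-level variable Y:

```python
import numpy as np
from scipy.stats import pearsonr

# Y encodes the alteration level: 1 = reference, 2-4 = levels 1-3
Y = np.repeat([1, 2, 3, 4], 25)

# toy values of one quality criterion p over the same 100 images
criterion_p = Y + np.random.normal(0, 0.5, Y.size)

r, _ = pearsonr(criterion_p, Y)
print(f"correlation with alteration level: {r:.2f}")
```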

Figure 14 shows that the four pattern-based criteria (keypoints, DC coefficient, and mean and standard deviation of scales) are pertinent in detecting the three types of alterations: blurring, Gaussian noise, and resize. Correlation between the proposed criteria and the three alterations among the four face databases. Distortion cards: the cards are related to a slightly altered image (first row) and a highly altered one (second row), respectively.

We learned seven multiclass SVM models: five for the face databases and two for the hand veins and fingerprint databases. Table 4 presents the accuracy of the learned multiclass SVM models on both training and test sets, and shows the efficiency of the proposed metric in predicting the three synthetic alterations (blurring, Gaussian noise, and resize), with a valuable ten-class classification accuracy. Results for the different databases are similar but not identical; the reason is related to the complexity of the databases, which incorporate more or fewer artifacts.

We used one of the four face databases for training and the remaining three for testing, and obtained a valuable four-class accuracy. The four classes are 1: original; 2, 3, and 4: blurring, Gaussian noise, and resize at level 3, respectively. For a ten-class classification, this accuracy decreased. Moreover, the resolution of the images used for training should be as similar as possible to that of the test images in order to maintain high accuracy. We obtained good classification results since the images used for training this model contain a set of images from each database.

The volunteers of CASIA-FingerprintV5 were asked to rotate their fingers and to apply various levels of pressure in order to generate significant intra-class variations. We then computed the intra-class matching scores of the CASIA-FingerprintV5 database, using the matching algorithm presented in Section 4, by taking the first image as reference and the rest for testing.

This shows that images classified as good quality by the proposed method provided higher matching scores than images predicted as bad quality, which clarifies the benefits of the presented method. In order to quantify the robustness of the proposed metric in predicting system performance, we first need to define the quality sets of the biometric authentication systems used. Therefore, we tested the robustness of the system against the three alterations presented in Section 4. The EER values are computed using the first image as reference (single-enrollment process) and the rest for testing.

Figure 16 shows that all the introduced alterations have an impact on the overall performance of the authentication matching algorithm presented in Section 4. Therefore, we define in Table 5 the quality sets for the matching algorithm used. Impact of alterations on the overall performance of the authentication system among the four face databases. In order to validate the proposed quality metric in predicting the performance of the matching algorithm used, following Grother and Tabassi [ 8 ], we calculate the EER value of each quality set predicted by the learned multiclass SVM models.

Figure 17 shows the EER values of each quality set among the biometric databases used. From this figure, several points can be deduced. EER values of each quality set among the biometric databases used (face, fingerprint, and hand veins).


The proposed metric has shown its efficiency in predicting the performance of the matching system used among the six biometric databases. More generally speaking, the more the images are altered, the more the EER values increase, as demonstrated by the increasing curves presented in Figure 17. This result can be explained by the small size of the hand veins database (24 persons) and the robustness of the matching system used against the introduced alterations. The four curves related to the four face databases, presented in Figure 17, are computed using the four multiclass SVMs (one multiclass SVM per database).

We obtained similar curves using the other multiclass SVM model, which contains examples from the four benchmark databases. This shows the scalability of the presented metric to different types of images (e.g., different image resolutions). In order to show the efficiency of the proposed metric, we present in this section a comparison study with the NFIQ [ 10 ]. For the comparison with the proposed method (four levels of quality), we consider that the fourth and fifth NFIQ levels belong to the very bad quality set.

In order to compare the proposed metric with NFIQ, we use the approach suggested by Grother and Tabassi [ 8 ] for comparing quality metrics. To do so, we use the Kolmogorov-Smirnov (KS) test [ 44 ], a nonparametric test that measures the overlap of two distributions: in our case, the distributions of genuine and impostor scores, respectively.
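A minimal sketch of this comparison step with SciPy's two-sample KS test on synthetic score distributions (the score values are made up for illustration):

```python
import numpy as np
from scipy.stats import ks_2samp

# synthetic genuine and impostor score distributions for one quality set
genuine = np.random.normal(0.8, 0.1, 500)
impostor = np.random.normal(0.4, 0.1, 500)

# The KS statistic lies in [0, 1]; a larger value means less overlap
# between the two distributions, i.e. better-separated scores.
statistic, _ = ks_2samp(genuine, impostor)
print(statistic)
```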

More generally speaking, the KS test returns a value between 0 and 1: for better quality samples, a larger KS test statistic (i.e., a greater separation between the genuine and impostor score distributions) is expected. For the three quality sets (bad, fair, and good), Figure 18 shows the KS statistics obtained by the proposed metric. The quality assessment of biometric raw data is a key factor to take into account during the enrollment step of biometric systems.

Such information may be used to enhance the overall performance of biometric systems, as well as in fusion approaches. However, few works address quality assessment in comparison to those on performance evaluation. To contribute to this research area, we have presented an image-based quality assessment metric for biometric raw data using two types of information (image quality and pattern-based quality).

The proposed metric is independent of the matching system used and can be applied to several kinds of modalities. Using six public biometric databases (face, fingerprint, and hand veins), we have shown its efficiency in detecting three kinds of synthetic alterations (blurring, Gaussian noise, and resolution). As a perspective of this work, we aim to add a quality criterion to detect luminance alteration, which is also an important alteration affecting biometric systems, mainly facial recognition systems.

In addition, we plan to test the efficiency of the presented method on images combining the presented alterations, which represent another kind of real-life alteration. Modality-specific alterations could also be used for a more precise analysis of the efficiency of the proposed methodology.

The innate (inherited) and learned behavioral skills of humans have both similarities and differences. A behavior is a way of acting. Inherited behaviors are also called instincts: humans are born with such behaviors and know what to do by instinct under specific situations or conditions.

In contrast, learned or acquired behaviors are not inherited but learned by experience. Biometrics consists of recognizing individuals based on one or several of the aforementioned human traits, and each biometric technology is defined in terms of its ability to recognize one or several of these traits. As such, biometrics have traditionally been categorized into two major classes according to the traits they depend on: physiological biometrics and behavioral biometrics. Human traits targeted by biometric systems are linked directly or indirectly to physical or functional structures of the human body.

In particular, all physiological biometrics and some behavioral biometrics are rooted directly in physical structures of the human body. The most complex entities are organ systems, followed in decreasing order of complexity by organs, tissues, cells, and chemicals. Organs are the most widely known entities; each organ consists of a collection of tissues working together to achieve specific functions of the human body. Examples of organs include the stomach, the brain, and the kidney. Organs contributing to a particular function (eg, eating, breathing, moving) are grouped into organ systems.

Examples of organ systems include the sensory, nervous, digestive, musculoskeletal, respiratory, urinary, and reproductive systems. For instance, one of the most popular groups of organs that has given rise to many biometrics is the sensory system. The sensory organs are the conduits through which humans interact and react with their environments.

Early sensory conduits, also known as the five senses, include sight, hearing, touch, taste, and smell. With advances in the biological sciences, more sensory systems have been uncovered, including the kinesthetic sense (sense of motion), the vestibular sense (balance receptors in the ear), sensory receptors in the blood sensitive to changes in blood pressure or blood chemistry, etc. Sensory systems are characterized by the stimuli to which they normally respond. There are different kinds of receptors, some of which are shared by different sensory systems.

Examples include photoreceptors (light), mechanoreceptors (distortion), thermoreceptors (heat), and chemoreceptors (eg, chemical odors). The variety of sensory systems available or yet to be discovered means more opportunity for the emergence of new biometric technologies.

For instance, face biometrics is related to sight as a medium, while odor biometrics is related to olfaction (smell) as an ability or function. Such a distinction is important, as it opens the door for the emergence of new biometric modalities. For instance, most of the existing vision biometrics (eg, retina, iris) are linked only to the medium, not the underlying function or ability, so there is a gap that could be filled in the future.

Only a few behavioral biometrics have such a straightforward connection to human physiological traits. For most behavioral biometrics, the connection to human physiology is more complex and typically indirect. For instance, keystroke dynamics is linked to how we use our hands to type on a keyboard, but also to the brain's learning and memorization.

Only a few of them exhibit both innate and learned traits. While the traditional categorization of biometrics has been into physiological and behavioral, in the last few years we have started witnessing a growing interest in the possibility of identifying humans based on their cognitive characteristics.

This has given rise to cognitive biometrics as a third category. Cognitive biometrics leverages human cognition for identification purposes.



Cognition is the mental process involved in acquiring, organizing, and using knowledge, understanding rationally rather than emotionally. It encompasses both the concept of knowledge and that of learning. The central process of human cognition is memory. Sensory memory relies on the sensory channels (visual, acoustic, touch, gustatory, olfactory, etc.) and involves a very short information persistence time. Short-term memory, also known as working memory, processes a limited amount of information, symbolically coded.

The human processor model abstracts human interactions through three interacting subsystems: the perceptual system, the cognitive system, and the motor system. Information is acquired by the perceptual processor and stored through the sensory memory; the cognitive processor processes the acquired information, and the motor processor provides the response. Cognitive biometric modalities attempt to measure human cognitive characteristics such as the ability to memorize certain information under specific environmental or stress conditions, or the human reaction to specific stimuli (expected or unexpected).

Only a handful of cognitive biometrics have been proposed so far, and most of them are still at an early stage of research. One of the key challenges faced by cognitive modalities is measurability; as a result, some of the existing cognitive modalities are combined with other, more established biometrics. A trend in research on cognitive biometrics consists of extracting metrics for human identification by analyzing the interactions between mind, brain, and body based on human behavior. Among the challenges that have delayed this field of research in the past is the difficulty of capturing data and metrics indicative of human cognitive states.

However, recent advances in sensing technologies are helping alleviate this hurdle. For instance, with webcams it is possible to capture facial expressions, while with an electroencephalogram (EEG) one can collect various cognitive information related to brain activity. While macroexpressions are conscious reactions, microexpressions can be either deliberate or unconscious; the unconsciousness of microexpressions could possibly be leveraged toward building a cognitive biometric signature for human recognition.

Cognitive load: the amount of mental effort exerted by the working memory in holding and processing information while performing a specific task (eg, reading, thinking, visualizing). Cognitive load can be measured using sensors such as EEG (by measuring brain rhythms and extracting various signals) or eye trackers (by measuring signals such as pupil dilation and eye blinks). Visual scan and detection: the amount of effort involved in locating and visualizing both familiar and new information on a screen, which can be measured through eye trackers.

Such information can be correlated with the underlying memory system interactions (eg, measured using EEG) and used to build a cognitive profile for individuals based on their attention span and their reaction to new information. Arousal: this captures the reaction or level of consciousness of an individual in response to some stimuli. The stimuli, for instance, can be the display of pictures that carry very special and unique memories for the individual; a different reaction would occur for an individual with no connection to such pictures.

Hence, measures of arousal could contribute toward building a cognitive profile for human identification. Most of the existing efforts in developing cognitive biometrics have consisted of capturing the corresponding modalities in combination with behavioral or physiological modalities, because the capture of cognitive modalities in isolation is not yet well developed. Some of the current proposals combine cognitive and behavioral biometrics such as mouse dynamics 9 and keystroke dynamics, 10 by analyzing the user's reaction to changes and by studying the interdependencies between the user's mouse and keyboard behaviors.

For instance, user reaction to change can be captured by analyzing how the user reacts to changes in the user interface (UI) or in the data being displayed during a computing session. Various biometric modalities have been developed over the years; prominent example technologies include fingerprint and face scans, eye biometrics such as retina and iris, and hand geometry biometrics.

The human fingerprint has traditionally been used as a method to identify people, particularly criminals. A fingerprint consists of a set of ridges and minutiae located on the surface of the fingertip that carry distinctive patterns enabling individuals to be uniquely distinguished from one another. Fingerprint is considered the most used biometric system for many reasons; furthermore, the implementation, installation, and maintenance of fingerprint systems are efficient. Most of the recent works on fingerprint biometrics have focused on improving the accuracy of recognition systems by enhancing the data processing techniques and the matching methods and algorithms.

To this end, Aguilar et al 14 enhanced the performance of fingerprint recognition using Gabor filters and fast Fourier transform methods. The proposed method enhances the purity of the ridge structures in the rectifiable regions and flags the unrectifiable regions as too noisy for additional processing. They conducted an experiment in which fingerprint images were captured from 50 subjects.

The extracted features are minutiae coordinates, distances between coordinates, and angles between coordinates, and a high recognition rate was reported. Ma et al 15 presented an effective fingerprint biometric system based on an efficient segmentation algorithm, in which a Gaussian filter was used to remove noise and enhance the quality of poor images. The experimental results showed that the segmentation performance was enhanced for poor-quality fingerprints.
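As a generic illustration of Gabor-filter-based ridge enhancement of the kind mentioned above (not Aguilar et al's exact pipeline; kernel parameters and the file name are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)

# Filter the image with a small bank of oriented Gabor kernels and keep
# the maximum response per pixel, a common way to reinforce ridge
# structures along their local orientation.
responses = []
for theta in np.arange(0, np.pi, np.pi / 8):
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))

enhanced = np.max(np.stack(responses), axis=0)
```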

Anand et al 16 improved fingerprint recognition in Automated Border Control (ABC) systems by calculating cohort scores from an external dataset and utilizing computational intelligence to enhance the matching score distribution. In their experiment, three public fingerprint datasets were used to evaluate the performance of the proposed approach, and an equal error rate (EER) on the order of 1% was obtained.

Darshan et al 17 utilized yttrium aluminate nanoparticles to enhance fingerprint quality for the first time. In this study, they captured fingerprint images from subjects over 4 sessions. The experimental results showed that fingerprint recognition is a viable solution for child identification in applications such as enhancing child nutrition, national identification programs, and the growing attention being directed to identity for a lifetime.

Humans use facial images as a common way to recognize and distinguish individuals from each other.

Therefore, the facial image is one of the most common biometric modalities for person recognition. Face biometric features include the location and shape of facial characteristics such as the eyebrows, nose, eyes, lips, jaw line, and chin. Face biometric modes are divided into static (eg, mug shots) and dynamic (eg, uncontrolled face identification in airports). After many years of research, conventional face recognition using visible light under controlled and homogeneous conditions is now a mature technology.

Facial biometrics does not require physical contact with the capture device (ie, the camera), which makes it easy to use. It is also widely used, although it does not present as many unique features as eye biometrics such as iris recognition. Face recognition is impeded by the fact that physiognomy changes with age and that facial features or expressions can be intentionally manipulated.

Some of the earliest works on face biometrics include proposals by Bledsoe, 22 who developed a face recognition method by modeling and classifying faces based on factors such as normalized distances and ratios between feature points, and by Sakai, Nagao, and Kanade, 23 who used simple heuristics and anthropometric techniques for face recognition, extracting the feature points of the human face related to the eyes and mouth. In this work, the proposed system was able to recognize faces automatically without any guidance.

Over the years, and more recently, researchers have focused on improving the quality of face detection and the accuracy of recognition techniques. Ahmad et al 28 evaluated different face detection and recognition methods proposed in the literature and used the outcome of the study as a foundation to develop a new approach for improved image-based face detection and recognition that lays the groundwork for efficient video surveillance. This method consolidates the distinctiveness of Gabor features, even though Gabor representations have been widely used; the authors claimed that the proposed method is the best at handling variations in illumination, pose, and expression.

They claimed that the proposed method is the best one to handle illumination, poses, and expression. The iris is a circular part in the eye. It has various colors like green, blue, hazel or pink. The different patterns in the iris color can be used to identify and verify the identity of humans, and thus it can be used as a biometric system. Iris biometric has many advantages over the other types of biometric systems. The iris biometric is stable because iris patterns remain steady from birth until death.

These patterns are also reliable because they are not susceptible to loss, compromise, or theft. Additionally, iris biometrics does not require any kind of invasive treatment. The iris biometric system uses a camera equipped with infrared illumination to obtain images of the complicated structures of the iris at a distance.

The iris is considered one of the most accurate biometrics available. In one experiment, the CASIA iris database, which consists of iris images from 60 different eyes, was used to evaluate the proposed method. Guo and Jones 34 enhanced the performance of iris localization using a new method that employs the depth gradient and texture difference, and evaluated it in an experiment using the CASIA iris database. Cross-sensor mismatch can also be mitigated by adjusting the iris samples from one sensor to the other. More recently, Nalla and Kumar 36 presented an effective iris recognition system using a domain adaptation framework based on naive Bayes nearest-neighbor classification and a new method based on a Markov random field model.

The proposed system was able to accurately match iris images obtained under different domains. Galdi and Dugelay 37 improved a fast noisy-iris recognition algorithm for iris recognition on mobile devices.

The retina is a light-sensitive layer at the back of the eye whose structure is complex because of the capillaries that supply it with blood. Such complexity contributes to the uniqueness of the blood vessel pattern, which is the underlying principle behind retina biometrics. Principal component analysis, gradient vector flow snakes, and scanning window analysis are some examples of common techniques used for retinal feature extraction.

Examples of extracted features of the retina's blood vessels include nodes, edges, the size of the retina, and the retina's orientation. The retina remains unchanged throughout a human lifetime. Unlike the iris, retinal biometrics is considered invasive, as it requires a very close distance to the camera for a correct scan. Retina scan devices are employed for physical access applications in environments with stringent security requirements, such as national security and military facilities. Recent progress in retina biometrics has focused on developing efficient feature representations and biometric matching techniques.

Several contributions have been made in developing graph representations of retina vessel patterns and effective biometric graph matching algorithms. Among those contributions, Arakala et al 39 investigated the representation of retina vessel patterns as spatial relational graphs, and compared the uniqueness of nodes with that of other substructures. The VARIA database, which consists of retina images collected from 56 subjects, was used in the study.
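A toy sketch of such a spatial relational graph using networkx; the node coordinates are invented, and the substructure counts mirror the feature list described next:

```python
import networkx as nx

# A spatial relational graph: nodes are vessel feature points
# (eg, bifurcations) with image coordinates, edges are vessel segments.
g = nx.Graph()
g.add_node("b1", pos=(120, 85))
g.add_node("b2", pos=(140, 110))
g.add_node("b3", pos=(95, 130))
g.add_edge("b1", "b2")
g.add_edge("b1", "b3")

# Simple substructures used as features: nodes, edges, degree-2 nodes,
# and paths of length 2 (a-m-b through a middle node m).
degree2 = [n for n, d in g.degree() if d == 2]
paths2 = [(a, m, b) for m in g for a in g[m] for b in g[m] if a < b]
print(g.number_of_nodes(), g.number_of_edges(), degree2, paths2)
```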

The features were extracted from a retina image as nodes, edges, nodes of degree 2, and paths of length 2. The experimental results showed that combining nodes and edges could enhance the separation distance. Jeffers et al 40 presented the score distributions of theoretical genuine and impostor retina templates by estimating normal kernel densities.

In their experiment, images obtained from the VARIA database were utilized for the estimation. The authors used a feature extraction and matching algorithm based on a graph representation of retina images. Seven normalized scoring functions were tested on feature-point-based retina templates, and low EER values were obtained. Lajevardi et al 41 proposed an automatic framework for retina verification based on the biometric graph matching algorithm, in which filters in the frequency domain and other morphological operators were used to obtain the retinal vasculature data.

In their experiment, the VARIA database was used to evaluate the biometric graph matching algorithm, and an SVM classifier was utilized to discriminate between impostor and genuine individuals.

Human hands vary in terms of the geometric shape of the hand and the size of the palm; the lengths and widths of the fingers also differ.

Thus, people can be identified and verified using hand geometry. This biometric system is simple, easy, and inexpensive; however, it is affected by age. The extracted features of hand geometry biometrics include the hand size, the finger sizes, the finger lengths, and the crest points of the fingers.


Unlike other traditional physiological biometrics (eg, fingerprint, iris), the accuracy of hand geometry has so far been relatively low. Hence, most research efforts have targeted improving the accuracy of this biometric by expanding the feature space and exploring alternative matching algorithms. One approach extracted 31 hand features from a color photograph, grouped as 21 widths, 3 heights, 4 deviations, and 3 angles.

Varchol and Levicky 43 proposed a hand geometry biometric model designed for medium-security access control applications. An experiment was conducted in which hand images were collected from 24 young subjects. In the proposed model, 21 features were extracted from each hand, such as finger lengths, heights, palm area, etc.



They obtained an EER of about 4%. In a related study, pictures were collected from young subjects, and features such as the hand size, the finger sizes, the finger lengths, and the crest points of the fingers were extracted from the hand images; a good accuracy ratio was obtained. Ferrer et al 45 studied how image resolution affects hand geometry biometrics. In this study, two databases were developed for different purposes: the first consisted of overhand images, while the second was composed of underhand images.

During the experiment, the authors collected 10 different images from 85 users for each database. They extracted 15 features, such as the finger ends, finger valleys, and the locations of the exterior base of the thumb, index, and little fingers. They concluded that the input image resolution could be reduced to as low as 72 dpi without loss of performance when using an SVM classifier. Wang et al 46 introduced a multimodal biometric system combining hand geometry and palm print based on morphology.

They conducted an experiment in which hand images were captured from different hands, and applied image morphology and the concept of Voronoi diagrams for feature extraction. The evaluation of the proposed system yielded a false acceptance rate (FAR) below 1%. Aghili et al 47 introduced an approach for personal verification and identification based on the geometry of four fingers. They collected a dataset consisting of pictures from 50 users and extracted 20 features from 4 fingers, namely the little, ring, middle, and index fingers.

The experimental evaluation yielded an EER below 1%. Guo et al 48 proposed a hand geometry based identification system using an infrared illumination device. They collected and used a dataset consisting of hand images captured from subjects (60 images per person), extracted 34 features from all the fingers for palm identification, and obtained a high correct identification rate (CIR). Another study collected a dataset consisting of images of the hand's dorsum captured from 97 subjects and extracted 54 features for each hand.

Specifically, they obtained the same accuracy rate. A related study collected a dataset involving images obtained from different persons; each person has 12 hand images (7 dorsal and 5 palm), and 54 features were extracted and used for classification, achieving a relative enhancement. Song et al 51 introduced a simple and reliable authentication method for mobile devices using hand geometry and behavioral information.

In the proposed system, a user is authenticated by combining both hand geometry information and behavioral characteristics. The experimental evaluation of the proposed system involved subjects who were asked to perform various touching-with-fingers-straight-and-together (TFST) gestures, where all subjects had to keep their fingers together and as straight as possible during the experiment.

The authors obtained an EER on the order of 1%.

While there are several traditional physiological biometrics, only a very small number of behavioral modalities fall into the traditional category: to our knowledge, only two behavioral modalities, namely voice and signature scans, can be considered traditional. Voice recognition is a technology that allows users to use their voice as an input device.

Voice recognition may be used to instruct and give commands to a computer, such as opening application programs. In old voice recognition systems, each word needed to be separated by a distinct pause in order to be recognized; newer voice recognition applications allow a user to give commands fluently and can recognize continuous speech at high word rates.

Some applications are designed to recognize text and format it in order to allow continuous speech. This customization allows the voice recognition to distinguish among humans' voice although each person speaks with different accent and inflection. Voice biometric features include physical characteristics such as vocal tracts, nasal cavities, mouth, and lips.

Kounoudes et al 54 proposed a voice biometric authentication approach to make Internet services more secure; in their experiment, 10 users' voices were recorded over the Internet. In another experiment, the NTT database, consisting of sentence data recorded from 35 Japanese speakers, was used to evaluate the proposed method.

A notable error reduction rate was achieved. Tiwari 56 presented and discussed various feature extraction algorithms for speaker recognition, and suggested modifications to the existing Mel-frequency cepstral coefficient (MFCC) technique for feature extraction to enhance the efficiency of speaker recognition.
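As a hedged illustration of MFCC feature extraction for speaker recognition (a generic sketch with librosa, not Tiwari's modified variant; the file name and parameter choices are placeholders):

```python
import librosa

# Load a speech sample (path illustrative) and extract 13 MFCCs,
# the standard cepstral features used in speaker recognition.
y, sr = librosa.load("speech.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# One common speaker-level representation: the mean of each coefficient
# over time, yielding a fixed-length feature vector.
features = mfcc.mean(axis=1)
print(features.shape)  # (13,)
```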

Obaidat et al 52 devised a new scheme to identify speakers using four wavelet algorithms with excellent accuracy, obtaining an EER on the order of 1%.

A signature is any handwriting sample provided by a person for the purpose of identification. Human writing, in fact, is not only a physical expression but also an acquired skill. Signature verification schemes can be categorized into two methods: static and dynamic techniques.

In the static technique, also named offline verification, the user is required to provide her signature on paper, which is then digitized using a scanner or camera. In the dynamic technique, also referred to as online verification, the user's behavioral and anatomical characteristics are captured while she writes her signature.

Signature biometric features are extracted by analyzing the curves, edges, and center of gravity of the signature samples. Research in signature biometrics has been driven over the years by the need to improve the accuracy of matching algorithms as well as by concerns over the ability to thwart forgery attempts. One early system utilized the fast Hadamard transform (FHT) to extract features from signature images.

An experiment was conducted in which 40 signatures were taken from 15 subjects; 10 of the 15 subjects were assigned as targets for forgers, while four of the five remaining subjects were chosen to be forgers. Qi and Hunt 61 developed new algorithms to extract global geometric and local grid features of signature images. They conducted an experiment in which signatures were collected from 25 subjects: 15 subjects were asked to provide signatures to build the authentic signature database, five subjects were asked to provide a set of simple forgeries, and the other five were asked to generate a set of skilled forgeries of the signatures of the 15 authentic subjects.

Dehghan et al 62 introduced an offline signature verification system using shape descriptors and multiple neural networks; in their experiment, signatures were collected from 50 persons with random and skilled forgeries. Kaewkongka et al 63 introduced a method of offline signature recognition using the Hough transform for straight-line detection; in their experiment, a database consisting of 70 signatures was used, and the features were extracted using Hough space parameters as unique characteristics. Justino et al 64 introduced an offline signature verification system using a hidden Markov model (HMM) for the purpose of detecting casual, random, and skilled forgeries.

In their experiment, two different subsets were employed to determine the optimal codebook size for detecting random forgeries. The first subset consists of genuine signatures collected from 40 subjects (40 signatures per subject), while the second subset includes genuine signatures collected from 60 subjects (40 signatures per subject). In addition, 10 forgers were asked to generate simple and skilled forgery signatures for each subject in the second subset.

The forgery signatures were then added to the second subset. The proposed system aims to detect three types of forgeries, namely random (a signature belonging to a different subject), simple (a signature similar to that of the genuine subject), and skilled (an exact imitation of the genuine signature) forgeries. The features were extracted from a grid, such as the boundary code and the total number of pixels inside the grid. The evaluation of the proposed model yielded a false rejection rate (FRR) on the order of 1%. Nguyen et al 65 proposed an offline signature verification system based on structural features; in their experiment, 12 genuine specimens and random forgeries were obtained from a public database.

The features extracted from energy information are the maxima and ratio, and the ones extracted from the modified direction feature (MDF) are the locations of transitions from background to foreground pixels and the direction at transitions in the vertical and horizontal directions of the boundary representation of an object. They reported the resulting average error rate (AER). Bansal et al 66 introduced an offline signature verification model using critical region matching.

In their experiment, signatures were collected from 76 persons, and the features were extracted as critical points from the sample signatures; good accuracy was obtained. Kumar et al 67 proposed an offline signature recognition and verification model using a neural network, with features extracted from preprocessed signature images; the experimental evaluation of the proposed model yielded good accuracy. Patil et al 68 proposed an offline signature recognition model using global features. In their experiment, signature samples were collected from 20 individuals.

Two hundred of these signatures were used for training, while the remaining ones were used for testing. Roy et al 69 proposed a handwritten signature recognition and verification model using an artificial neural network. In their experiment, signature images were collected from seven people; the extracted features are the aspect ratio, energy, area of normalization, white pixel density, Euler number, centroid, entropy, maximum horizontal and vertical histograms, number of objects, and solidity, and good accuracy was obtained.

Emerging biometrics are modalities that have appeared or been established more recently, typically in the last two decades.

A few of these technologies, such as DNA, have gained instant recognition, while others are still at a very early stage of research. Several physiological modalities have emerged in the last two decades; several others are only available as early-stage prototypes.

Living beings carry a distinguishable genetic material in the nuclei of their cells called deoxyribonucleic acid, widely known as DNA.

The hereditary traits of DNA can be used to identify humans, as one example of organisms, from one another. DNA makes use of chromosomes to share genetic material among organisms: human DNA and genes are carried on 23 pairs of chromosomes, with one chromosome of each pair inherited from each parent. This genetic coding is unique, and thus it can be used to distinguish humans from each other. DNA is one of the highest-profile biometrics in society, due to the pivotal role it has played in criminal justice and forensic investigations over the last two decades.


Early work in DNA recognition dates back to the 1980s, when Jeffreys et al 72 developed a method for DNA fingerprinting in humans after discovering variable and heritable patterns in repetitive DNA. This method was capable of discriminating between alleles of a gene (ie, variants of a gene).

They stated that the overall performance was up to expectation and thus that the system was ready for deployment.

The lip is a component of the human face that comes in many different shapes and colors; thus, it can be employed to discriminate people from each other. Lip biometrics has not been as widely used as other biometric systems based on human physiology, such as fingerprint, face, or voice.

Therefore, investigation and research in this field are still at a very early stage. While the idea of using lip prints for human identification was first proposed decades ago, 77 it was only later that researchers started exploring the capability of such technology. Wark and Sridharan 78 proposed a new type of lip biometric feature extraction for speaker identification. Different methods have been proposed for lip print examination.

These methods use statistical analyses, 79 the Hough transform, 80 lip shape analyses, 81 dynamic time warping, 82 cell structure segmentation, 83 and similarity coefficients. 84 Gomez et al 85 proposed a lip biometric identification system based on the shape of the lip. In their experiment, they collected face images of 50 subjects over 10 sessions, studying only the area around the lips and using an image transform to extract the lip's shape. Two types of features were extracted: the first set was obtained from the polar coordinates of the lip envelope.

The second set of features consisted of samples of the lip envelope height and width, and a good recognition ratio was obtained. A related study collected a dataset of lower-face images from 38 subjects and introduced nine novel lip shape parameters, namely: the lip width to perimeter ratio, upper to lower lip height ratio, upper lip height to width ratio, lower lip height to width ratio, inner to outer circle ratio, width to middle height ratio, left-side upper to lower lip convexity ratio, right-side upper to lower lip convexity ratio, and indent ratio.

Wang et al 86 investigated the different physiological and behavioral lip features based on their discriminative power in speaker identification and verification. Around the same time, Liu et al 87 investigated how the lip impacts biometric recognition systems and proposed a lip recognition system that can work with partial face images. More recently, Wrobel et al 88 proposed an automatic lip-based personal identification system that analyzes single features and investigates the similarity between them by comparing the lower and upper bifurcations.

They conducted an experiment in which lip prints were collected from 30 subjects. Lu et al 89 investigated the possibility of using the lip texture for person identification and achieved a good recognition rate. In another study, feature extraction was based on lip contours and new lip geometrical measurements, and good average classification accuracies were obtained.

Human smell, also referred to as body odor, is unique. Body odor recognition is a contactless physical biometric that attempts to identify people by analyzing and studying the olfactory properties of their body scent.

Body odor is one of the most recent types of biometrics, and as such, research is still at an early stage. Gottfried and Dolan 95 established that human olfactory perception can be aided by visual cues, noting that although the human nose contains a large number of smell receptor types, there are only three kinds of receptors in the human visual system.

They concluded that the odor system is not universally standardized. Gibbs 96 discussed the perception and acceptance of body odor recognition, stating that body odor can be acquired using an array of chemical sensors sensitive to different organic compounds, and concluding that odor scanning and its security and privacy need to be improved.


One experiment captured samples from 13 subjects over 28 sessions, and its authors suggested that odor biometrics should be used along with other biometric technologies to enhance its effectiveness. Shu et al 98 proposed a novel authentication scheme based on body odor, in which human body odor is collected through gas sensor arrays.

The human body odor was then detected and analyzed using identification technologies such as chromatography and mass spectrometry, neural networks, and principal component analysis to confirm identity.

The ear is a part of the human face that does not change dramatically over time.

Human ears are also distinct from one another; therefore, the ear can be used to identify and verify people. Unique characteristics of the ear used in ear biometric recognition are the richness and stability of the ear's structure, the immutability of its form under expressions, and its uniform distribution of color. Ear biometric features include the outer helix, scapha, antihelix, lobe, crus antihelicis, tragus, antitragus, and concha.

Ear biometric features include the outer helix, scapha, antihelix, lobe, crus antihelicis, tragus, antitragus, and concha. While the potential for using the human ear for identification has been highlighted since the s by Bertillon, it is only in the s that the first attempt to build an ear biometric recognition system was made, specifically by Burge and Burger. Moreno et al discussed the idea of using outer ear images to build an ear recognition system. In their experiment, a dataset of images was used. The proposed model relies on features such as outer ear points, information obtained from ear shape and wrinkles, and macro using compression network.

Yuizono et al developed an ear recognition system using genetic local search, a hybrid of local search and a genetic algorithm; in their experiment, the collected images were unequally divided into three categories. Hurley et al proposed a new force field transformation technique for ear biometric recognition that reduces the dimensionality of the original pattern space while maintaining discriminatory power for classification.

In their experiment, the XM2VTS face profile database, which consists of images collected from 63 subjects, was used. In a two-step 3D matching approach, the model ear helix is first aligned, and the transformation is then refined to determine whether the match is good; its experimental evaluation used a dataset of 30 3D ear images collected from 30 users and yielded an EER of about 6%. Yuan et al introduced an ear detection approach with two stages, offline cascaded classifier training and online ear detection, using an 18-layer cascaded classifier to train and detect ears.

They obtained an FRR on the order of 1%. More recently, Benzaoui et al proposed a model that implements a feature extraction approach for automated 2D ear recognition using local texture descriptors. They used two versions of the IIT Delhi ear database in their experiment, each consisting of images collected from a different set of users.

Arunachalam et al introduced an ear recognition system using the band-limited phase-only correlation (BLPOC) algorithm for feature extraction; the authors only stated that the results demonstrated powerful recognition performance, without reporting any concrete measures.

When the heart beats, it produces electrical currents that propagate within the heart and throughout the body.

The electrocardiogram (ECG) is the process of measuring and recording the various electrical activities of the heart; the anatomy of the human heart and body shapes the ECG signals. In the early 20th century, Willem Einthoven developed the first practical ECG recording device. In one early identification study, the feature set was reduced to 12 by relying on limb leads and removing highly correlated features.
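As a rough illustration of fiducial ECG features, the sketch below detects R peaks and derives RR intervals with SciPy; real systems measure PQ, QRS, and QT durations from carefully detected fiducial points, and the signal here is a random placeholder:

```python
import numpy as np
from scipy.signal import find_peaks

# ecg: a 1-D ECG signal sampled at fs Hz (synthetic placeholder)
fs = 250
ecg = np.random.randn(10 * fs)  # replace with a real recording

# Detect R peaks, the most prominent fiducial points; the distance
# constraint enforces a plausible minimum spacing between heartbeats.
r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=1.0)

# RR intervals in seconds: one simple family of fiducial features
rr = np.diff(r_peaks) / fs
print(rr)
```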

Around the same time, Kyoso and Uchiyama developed an ECG identification system by introducing a new way to discriminate between people based on a fiducial method using four features, namely the PQ interval, QRS duration, P wave duration, and QT duration; in their experiment, they collected data from nine subjects using ECGs. Shen et al presented an ECG biometric recognition method using the individual's electrocardiogram, with a dataset collected from 20 subjects. In a further experiment, ECG signals were recorded from 74 subjects.

They achieved an EER of about 2%. Zhao et al proposed an ECG identification system based on ensemble empirical mode decomposition, in which the noise of the ECG signal was removed using wavelet decomposition before extracting the subject's heartbeats. Salloum and Kuo proposed an effective ECG biometric model using recurrent neural networks and achieved, on average, a high accuracy rate.

The electroencephalogram (EEG) is the process of measuring and recording the brain's various electrical activities.

The EEG measures the voltage fluctuations resulting from ionic current flows within brain neurons. EEG waves are distinct and can thus be used to distinguish people from each other. EEG biometrics is considered confidential, impossible to steal, and hard to mimic, but its use for biometric recognition is relatively recent. Paranjape et al suggested the use of the electroencephalogram as a biometric identifier for humans.

They conducted an experiment in which EEG recordings were collected from 40 users. Palaniappan presented an EEG biometric system that identifies people by means of parametric classification of multiple mental thoughts; in the experiment, EEG signals were collected from four subjects while they performed up to five different mental thought tasks.

They calculated autoregressive features and classified them using a linear discriminant classifier, obtaining an average error rate of about 2%. Palaniappan and Mandic presented an EEG biometric model for identifying individuals with an enhanced feature extraction method; in the experiment, EEG signals were recorded from 61 active channels of 40 subjects. Sun presented an EEG biometric system using multitask learning, in which nine subjects were asked to visualize moving their right or left index finger in reaction to a foreseeable visual cue.

The author used the common spatial patterns method for feature extraction purpose. He achieved classification accuracy of They collected the EEG signals from 40 subjects using two electrodes. One channel and synchronicity features were extracted.

