Palm Print Features for Personal Authentication Based on Seven Moments

Biometric images are a major component in the field of personal authentication, and the palm print is one of the main approaches to personal identification, so the features extracted from palm print images must be studied to obtain a highly efficient recognition system. This research proceeds in two major phases. In the first phase, a database was built for 100 persons by acquiring four images of each hand (4 for the left hand and 4 for the right hand). Each image was then processed to extract the region of interest (ROI) by locating the palm centroid and fixing a square area around it; this pre-processing is an important step for obtaining stable features. The seven invariant moments were then evaluated for each of the 8 images and stored in the database file, so each person is represented by 56 values. This phase is called personal database preparation. The second phase is the detection phase, which applies the same steps to obtain 56 values and then searches the database for the person closest to the tested one. The system was evaluated with statistical metrics and achieved good results of up to 95.7% when applied to 50 persons under different conditions. The effect of the ROI dimension, for each hand individually and for both hands integrated, was also studied, and the recommended dimension is 192*192.


Introduction
The personal identification system is known as the biometric system, and it is of great importance in everyday life [1]. Based on unique individual features or characteristics, "biometrics" refers to the definition and authentication of individual identity [2]. The methods used to identify a person are of two types: knowledge-based methods and token-based methods. For authentication purposes, knowledge-based methods use a password or user-preset code, while token-based methods use keys or ID cards. The need for sophisticated new reliable methods of personal identification has become more pressing in research, since traditional methods have become unreliable, for example when the distinctive mark has been lost or forgotten [2]. By improving existing systems in the field of security maintenance or developing new ones, identification systems and biometric technology have become an important key to the protection of systems worldwide [16]. In the past few decades, fingerprint-based personal identification has attracted considerable attention in the area of biometrics, but because of physical work and the nature of the skin, older persons and manual workers may have unclear fingerprints. In the last few years, facial recognition, voice recognition and iris recognition have also been widely studied [1]. The unique features of the palm print, low-cost acquisition devices, fast feature extraction, and stable verification accuracy are the reasons behind successful palm print recognition systems [20]. The pattern of the palm print does not change throughout life, and palm prints are somewhat similar even in the case of twins, but are not exactly the same [16].

Palm features
To extract the biometric features of the palm print, the inner surface of the hand between the fingers and the wrist is used [13]. The inner surface of the palm has three main lines (flexion creases) that are clear and do not change over the life of any person, called the heart line, head line and life line. It also contains secondary lines, called wrinkles, which lie on the inner palm apart from the main lines and are irregular and thinner than them, as well as ridges located over the rest of the palm [21]. Figure (1) shows the main lines and their locations [8], while figure (2) shows other features, such as structural features, datum points and delta points, that cannot be extracted from low-resolution images [13]. Palm print patterns are not repeated throughout human life, even in the case of twins, so the palm print is a reliable biometric identifier. One of the most important advantages of palm print recognition systems is that they do not require high-resolution devices, because features of the palm, such as the primary and secondary lines, appear even in low-resolution images [15].

Literature review
In 2002, Wai Kin Kong and David Zhang published their research "Palmprint texture analysis based on low-resolution images for personal authentication", where they presented a way to extract features from low-resolution palm print images [8].
Chin-Chuan Han et al. in 2003 published their research "Personal authentication using palm-print features". Their proposal suggested two verification mechanisms: the first, a template matching method, achieved an accuracy rate of more than 91%, while the second, a neural network based method using back propagation, obtained 98% accuracy [7]. Ajay Kumar and Helen C. Shen, in their 2004 paper "Palmprint Identification using Palm Codes", identified people based on the palm print using a real Gabor filter. The experimental results, using 400 images of the palm print, reached an accuracy rate of 97.5% [9].
In 2006, Ajay Kumar and David Zhang published their research "Combining fingerprint, palmprint and hand-shape for user authentication", introducing an approach to authenticate persons by merging the unique biometric features that can be gained from hand images [10].
In 2007, Lei Zhang et al. published their research "Palmprint verification using complex wavelet transform", where they suggested a complex wavelet structural similarity index (CW-SSIM) to calculate the degree of congruence and identify the input palm print image. Experimental results showed that the proposed method achieves a lower false acceptance rate and a higher genuine acceptance rate at the same time [21].
Michael, G. et al. in 2008 presented their research "Touch-less palm print biometrics: Novel design and implementation", suggesting a system to recognize the palm print without the person touching the biometric scanner. An algorithm was also developed for tracking and detecting the region of interest (ROI) [15].
Azadeh Ghandehari and Reza Safabakhsh in 2011 published their research "A Comparison of Principal Component Analysis and Adaptive Principal Component Extraction for Palmprint Recognition", in which they seek to recognize the palm print using principal component analysis (PCA) and adaptive principal component extraction (APEX), a PCA technique that incorporates a neural network for feature extraction [6].
In 2012, Promila and V. Laxmi published their research "Palmprint Matching Using LBP", where they discussed a new way, with the relevant analyses, to verify the palm print based on the local binary pattern (LBP) to capture the texture of the palm print [12].
Feifei Lin and Lijian Zhou published their research "Palmprint feature extraction based on curvelet transform" in 2015, suggesting a way to extract features from the palm print based on the fast discrete curvelet transform. In this research, the 2nd frequency band coefficients were selected as features of the palm print, and PCA was then applied to the selected feature space to obtain a lower-dimensional representation [13].
In 2019, Satya Verma and Saravan Chandran published their research "Contactless palm print verification system using 2-D Gabor filter and principal component analysis", proposing a model to verify the palm print using principal component analysis, a 2-D Gabor filter, and Sobel edge detection [20].
In recent years, palm print recognition has come to be performed automatically using highly robust techniques; Poonam et al., in their research "Palm print recognition using robust template matching", yield better results in terms of Correct Recognition Rate (CRR) and Equal Error Rate (EER) of 95.4% and 0.37% respectively [18].
Palm print recognition has also moved toward 3-D matching; Fei, L., Zhang, B. et al., in their research "Feature extraction for 3-D palm print recognition", address this point and present a comprehensive overview of feature extraction and matching for 3-D palm print recognition [5].

Feature extraction
Images play an important role in various fields such as entertainment, medicine, science, advertising, design, journalism and education, and computer-aided analysis of images has become important in all areas of research. Image analysis includes many tasks, including image segmentation, image transformation, pattern classification, and feature extraction [17]. Texture is one of the most important features of images, and features derived from texture are widely used in image processing applications; texture refers to the arrangement of the structure or type of information in the image [19]. Features contain information related to the image that is used in image processing tasks such as search, storage, and retrieval [19]. Because large amounts of data are needed to represent an image, image analysis requires a large amount of memory and therefore a long processing time; to reduce this amount of data, images are represented by a set of features [19]. The main goal of feature extraction is to represent the information related to the image in a smaller space: when the input data is too large to be processed directly, it is converted into a feature vector [11]. Feature extraction can thus be defined as the process of converting the input data into a feature set [11]. For the purpose of distinguishing patterns, feature extraction is very important, whereby features such as color, shape, and texture are extracted to be analyzed; it is divided into two parts, feature selection and feature construction [19]. Features can be classified under two categories [11]:
- Local features: typically geometric, such as concave and convex parts, or the number of branches and joints.
- Global features, which are divided into two parts:
  1. Topological structure: such as the number of openings and projection profiles.
  2. Statistical: such as invariant moments.
Features can also be classified based on the type of properties they describe: texture, color, structure and shape [17][19].

Invariant moment
One of the main problems in identifying patterns is recognizing an object regardless of its size, orientation, or location. Based on algebraic invariants, Hu in 1962 derived a set of invariants, and the idea of using moments was very important at that time [14]. Moments have long been used in classical mechanics and in statistical theory, where they appear as the mean, variance, skewness and kurtosis of distributions. In image processing, moments are used as a feature vector for classification, for describing the shape of objects, and as features of image texture [4].
Invariant moments have been applied to character recognition by distinguishing 2-D patterns, and also to aircraft identification, ship recognition, and hand-printed character recognition [4]. Using non-linear combinations of regular moments, Hu provided a set of invariants. The regular moment of order $(p+q)$ of a continuous function $f(x, y)$ is defined by equation (1), and for a digitally sampled image by equation (2) [14]:

$m_{pq} = \iint x^{p} y^{q} f(x, y)\,dx\,dy, \quad p, q = 0, 1, 2, \ldots$  (1)

$m_{pq} = \sum_{x} \sum_{y} x^{p} y^{q} f(x, y), \quad p, q = 0, 1, 2, \ldots$  (2)

The moment of $f(x, y)$ translated by an amount $(a, b)$ is defined by equation (3):

$\mu_{pq} = \sum_{x} \sum_{y} (x + a)^{p} (y + b)^{q} f(x, y)$  (3)

where $m_{pq}$ is the moment of order $(p+q)$ of the function $f(x, y)$. Thus, the central moments can be calculated from the above equation by substituting $a = -\bar{x}$ and $b = -\bar{y}$, where $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$, as in equation (4):

$\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^{p} (y - \bar{y})^{q} f(x, y)$  (4)

The central moments are calculated using the center of the image, and are equivalent to the standard moments of an image whose center has been shifted to coincide with the origin; accordingly, the central moments are not affected by translation of the image [3]. By normalizing the central moments up to the third order, $\eta_{pq} = \mu_{pq} / \mu_{00}^{\,1 + (p+q)/2}$, Hu specified seven computed values that remain constant when the orientation, location, and scale are changed. These seven moments are given in equations (5)-(11):

$\phi_1 = \eta_{20} + \eta_{02}$  (5)

$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$  (6)

$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$  (7)

$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$  (8)

$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$  (9)

$\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$  (10)

$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$  (11)

These seven moments have the desirable property of being invariant under image rotation, scaling, and translation. Computing the highest orders among the seven invariant moments is complicated, and recovering the image from the result is complex [3].
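Hu's seven invariants can be computed directly from the raw, central, and normalized moments defined above. The following is a minimal pure-Python sketch (function names such as `hu_moments` are ours, not from the paper) that evaluates them for a grayscale image stored as a 2-D list:

```python
# Minimal sketch of Hu's seven invariant moments for a grayscale image
# given as a 2-D list of rows. Names are illustrative, not from the paper.

def raw_moment(img, p, q):
    """m_pq = sum_x sum_y x^p * y^q * f(x, y)."""
    return sum((x ** p) * (y ** q) * img[y][x]
               for y in range(len(img))
               for x in range(len(img[0])))

def hu_moments(img):
    """Return the seven Hu invariants [phi1..phi7] of a 2-D intensity array."""
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00   # centroid x
    ybar = raw_moment(img, 0, 1) / m00   # centroid y

    def mu(p, q):  # central moment mu_pq (translation-invariant)
        return sum(((x - xbar) ** p) * ((y - ybar) ** q) * img[y][x]
                   for y in range(len(img))
                   for x in range(len(img[0])))

    def eta(p, q):  # normalized central moment eta_pq (scale-invariant)
        return mu(p, q) / (m00 ** (1 + (p + q) / 2))

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    phi5 = ((n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    phi6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03))
    phi7 = ((3 * n21 - n03) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return [phi1, phi2, phi3, phi4, phi5, phi6, phi7]
```

Because the central moments subtract the centroid, translating a shape inside the frame leaves all seven values unchanged, which is the stability property the palm print features rely on.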

Proposed algorithm
The proposed algorithm runs in two phases: the first builds the database of palm print features for each person's hands, while the second applies the detection operation to find the closest record within the database to the tested images. The following steps summarize the proposed algorithm:
1. Import 8 gray-level images of the palm print, 4 for the left hand and 4 for the right hand.
2. Determine the region of interest (ROI) of size 192*192 for each image and remove the unimportant parts.
3. Apply the Sobel operator on the ROI of each image to highlight the important edges, then apply morphological filters to clarify the main lines.
4. Evaluate the seven moments of each image (28 values for each hand).
5. Store the obtained values in the database (each record holds 56 moment values).
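Step 2 above can be sketched as follows. This is a hedged illustration, assuming the ROI is a square window centred on the centroid of the segmented (non-zero) palm pixels as the abstract describes; the function names and the small `roi_size` parameter are ours (the paper uses a fixed 192*192 window):

```python
# Hedged sketch of step 2: fix a square ROI around the palm centroid.
# The paper uses a 192x192 ROI; roi_size is a parameter here so the
# sketch can run on a tiny synthetic image. All names are illustrative.

def palm_centroid(img):
    """Centroid (cx, cy) of the foreground (non-zero) pixels."""
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                xs += x
                ys += y
                n += 1
    return xs / n, ys / n

def crop_roi(img, roi_size):
    """Square roi_size x roi_size window centred on the palm centroid,
    clamped so it stays inside the image bounds."""
    cx, cy = palm_centroid(img)
    h, w = len(img), len(img[0])
    half = roi_size // 2
    x0 = min(max(int(cx) - half, 0), w - roi_size)
    y0 = min(max(int(cy) - half, 0), h - roi_size)
    return [row[x0:x0 + roi_size] for row in img[y0:y0 + roi_size]]
```

Centering the window on the centroid rather than on fixed image coordinates is what makes the extracted features tolerant to small shifts of the hand on the scanner.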
The classification phase goes through the same steps to obtain the 56 moment values, then finds the nearest record in the database and decides, based on a threshold value, whether the tested person is in the database or not; if the tested person is too far from every stored record, a new record is added to the database.
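The detection phase described above can be sketched as follows. This is our reading of the paper, not its exact code: we assume Pearson correlation between the tested 56-value vector and each stored record, with a match accepted when the best correlation is within the 0.1 threshold of a perfect match; all names are illustrative.

```python
# Hedged sketch of the detection phase: correlate the tested person's
# moment vector with every stored record and pick the best match.
# The correlation measure and acceptance rule are assumptions.

import math

def pearson(u, v):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def nearest_record(tested, database, threshold=0.1):
    """Return (person_id, correlation) of the closest stored vector, or
    (None, best) when no record correlates within the threshold, in which
    case the tested person would be added as a new record."""
    best_id, best_r = None, -1.0
    for person_id, stored in database.items():
        r = pearson(tested, stored)
        if r > best_r:
            best_id, best_r = person_id, r
    if best_r < 1.0 - threshold:   # too far from every record
        return None, best_r
    return best_id, best_r
```

In the full system each record would hold the 56 moment values per person; the sketch works for vectors of any length.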

Algorithm implementation
Step 1: import the 8 images, detect the region of interest for each, then apply the pre-processing operations. Step 2: figure (5) and figure (6) represent the effect of ROI selection and palm alignment; the algorithm handles this by acquiring an image of each hand while applying restrictions on the conditions of fixing the palm on the scanner instrument.

Result Discussion
The seven moments for the ROI of the left hand are shown in table (1) and plotted in figure (11), and the seven moments for the ROI of the right hand are shown in table (2) and plotted in figure (12). Table (3) and figure (13) represent the decision effect of each hand individually, and of both hands integrated, on the target person in the database; the proposed system selects the indexed person who has the highest correlation. Figure (14) clarifies the effect of the ROI dimension on the decision to select the right person from the database, and shows that the best dimension in most experimental tests is 192*192, because the left hand, the right hand, and both together give the same person in the database.

Conclusion
From the applied results it is seen that the lower moments are more sensitive than the higher moments, while their effects are smaller; adding them to the result nevertheless makes the decision more accurate. The results of the applied sample in table (1) and table (2), clarified in figure (11) and figure (12) for both hands, show that the effects of the whole set of moments are approximately similar to each other when looking for the closest person. The proposed algorithm first correlates the corresponding pairs of moments individually (tested person against the database), then correlates the integrated moments of the tested person with the integrated moments of the database to extract the closest person (based on a threshold value of not more than 0.1), which gave a correlation of up to 0.95.
When looking for the nearest person in the database (table (1) and table (2) show a sample of the seven moments for both hands), a test based on the first four moments shows lower detection than the integrated seven moments, but takes less time. Thus the number of moments is directly proportional to the detection level, while inversely proportional to the time spent on the same operation. After evaluating the seven moments of the palm, the maximum correlation is 0.8585, from person 21 in the database, and the maximum correlation of the right hand is 0.7586, from person 49 in the database. It can be concluded that the seven moments of the palm print are a good metric for determining the nearest person whose palm print is stored in the database, and can be adopted for personal authentication systems.

Future work
Based on the result discussion and the conclusion, this work can be developed by adopting image transform coefficients as input to the proposed algorithm, in addition to using the centroid of the ROI together with the moments, to obtain higher recognition. The proposed algorithm can also be introduced into applied techniques used for personal authentication.