Iris Recognition System Based on Wavelet Transform

Maha A. Hasso, Bayez K. Al-Sulaifanie and Kaydar M. Quboa

In order to provide accurate recognition of individuals, the most discriminating information present in an iris pattern must be extracted. Only the significant features of the iris must be encoded so that comparisons between templates can be made. Most iris recognition systems make use of a band-pass decomposition of the iris image to create a biometric template. In this paper, improved feature extraction techniques based on wavelet filters are implemented. The data encoded by the wavelet filters are converted to a binary code to represent the biometric template. The Hamming distance is used to classify the iris templates, and the False Accept Rate (FAR), False Reject Rate (FRR) and recognition rate (RR) are calculated [1]. The wavelet transform using the DAUB12 filter proves to be a good feature extraction technique: it gives equal FAR and FRR and a high recognition rate for the two databases used. When the DAUB12 filter is applied to the CASIA database, the FAR and FRR are both 1.053%, while the recognition rate is 97.89%. For the Bath database the recognition rate with the DAUB12 filter is 100%. The CASIA and Bath databases, obtained through personal communication, are used in this paper.


Introduction
All biometric systems essentially operate in the same fashion. First, the system captures a sample of the biometric characteristic. Then unique features are extracted and converted into a mathematical code. Depending upon the needs and the technology, several samples could be taken to build the confidence level of the initial data. These data are stored as the biometric template for that person. When identity needs to be checked, the person interacts with the biometric system, and features are extracted and compared with the stored information for validation [12].
The iris begins to form during the third month of gestation [15]. The structures creating its striking patterns are developed by the eighth month [9], although pigment accretion may continue into the first postnatal years. Its complex pattern can contain many distinctive features such as arching ligaments, furrows, ridges, crypts, rings, corona, freckles, and a zigzag collarette [9].
The iris, as shown in Figure (1), is a physiological biometric feature. It contains unique texture and is complex enough to be used as a biometric signature [5]. Lim et al. [10] presented several interesting ideas, proposing alternative approaches to both feature extraction and matching. For feature extraction they compared the use of the Gabor transform and the Haar wavelet transform, and their results indicated that the Haar transform is somewhat better. Using the Haar transform, the iris patterns can be stored using only 87 bits, compared with the 2048 bits required by Daugman's algorithm. The matching process used a Learning Vector Quantization (LVQ) competitive learning neural network, optimized by a careful selection of initial weight vectors [10].
Huang et al. [8] proposed a new iris recognition algorithm, which adopted Independent Component Analysis (ICA) to extract iris texture features and a competitive learning mechanism to recognize iris patterns. The proposed iris recognition system represented the iris pattern by ICA coefficients, determined the center of each class by the competitive learning mechanism, and finally recognized the pattern based on Euclidean distances.
Daouk et al. [4] developed techniques to create an iris recognition system, in addition to analysis results. The techniques use the Canny edge detector and a circular Hough transform to detect the iris boundaries in a digital image of the eye. The Haar wavelet is then applied in order to extract the deterministic patterns in a person's iris in the form of a feature vector. Finally, the quantized vectors are compared using the Hamming distance operator to determine whether two irises are similar.

Iris recognition system
An iris recognition system is composed of several stages. Firstly, an image containing the user's eye is captured by the system and preprocessed. Secondly, the image is localized to determine the iris boundaries. Thirdly, the iris boundary coordinates are converted to stretched polar coordinates to normalize the scale and illumination of the iris in the image. Fourthly, features representing the iris patterns are extracted based on texture analysis. Finally, a person is identified by comparing his iris with the iris database [5].

Segmentation
The circular Hough transform is used to deduce the radius and center coordinates of the iris region. Since the circular Hough transform requires a long execution time, an estimate of the radius of the searched circle is used to reduce the search space and the execution time.
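To illustrate the idea, the voting step of a circular Hough transform with a restricted search space can be sketched as follows. This is a minimal pure-Python sketch, not the paper's implementation; the function name `circular_hough` and the synthetic edge points are illustrative.

```python
import math

def circular_hough(edge_points, radius_range, grid):
    """Vote for the circle (cx, cy, r) that best explains the edge points.
    Restricting radius_range and the centre grid, as described above,
    shrinks the accumulator and the execution time."""
    votes = {}
    for (x, y) in edge_points:
        for r in radius_range:
            for cx in grid:
                for cy in grid:
                    # An edge point votes for a circle if it lies (nearly) on it.
                    if abs(math.hypot(x - cx, y - cy) - r) < 0.5:
                        key = (cx, cy, r)
                        votes[key] = votes.get(key, 0) + 1
    # The accumulator cell with the most votes is the detected circle.
    return max(votes, key=votes.get)

# Synthetic edge map: points on a circle of radius 5 centred at (10, 10).
pts = [(10 + 5 * math.cos(t / 10.0), 10 + 5 * math.sin(t / 10.0))
       for t in range(0, 63)]
best = circular_hough(pts, radius_range=range(3, 8), grid=range(8, 13))
print(best)  # (10, 10, 5)
```

In a real system the edge points would come from an edge detector applied to the eye image, and the radius estimate would bound `radius_range`.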

Normalization
Once the iris region is successfully segmented from an eye image, the next stage is mapping the iris region to fixed dimensions in order to allow comparisons. Another point to be noted is that the pupil region is not always concentric within the iris region; it is usually slightly nasal [16].
The remapping of the iris region from (x, y) Cartesian coordinates to the normalized non-concentric polar representation is modeled by:

I(x(r,θ), y(r,θ)) → I(r,θ)

with

x(r,θ) = (1 − r) x_p(θ) + r x_s(θ)
y(r,θ) = (1 − r) y_p(θ) + r y_s(θ)

where I(x,y) is the iris region image intensity, (x,y) are the original Cartesian coordinates, (r,θ) are the corresponding normalized polar coordinates, and (x_p, y_p) and (x_s, y_s) are the coordinates of the pupil and iris boundaries along the θ direction [16]. The rubber sheet model takes into account pupil dilation and size inconsistencies in order to produce a normalized representation with constant dimensions. In this way the iris region is modeled as a flexible rubber sheet anchored at the iris boundary with the pupil center as the reference point [6].
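A minimal sketch of this remapping is shown below. It assumes concentric circular boundaries and nearest-neighbour sampling for brevity (in general the pupil is slightly nasal, as noted above); the helper name `rubber_sheet` is illustrative.

```python
import math

def rubber_sheet(image, pupil, iris, radial_res, angular_res):
    """Remap the iris annulus between the pupil and iris boundaries onto a
    fixed rectangular grid of radial_res x angular_res samples.
    `image` is a 2-D list indexed [y][x]; `pupil` and `iris` are
    (cx, cy, radius) circles."""
    out = [[0] * angular_res for _ in range(radial_res)]
    for i in range(radial_res):
        r = i / (radial_res - 1)          # r runs from 0 (pupil) to 1 (iris)
        for j in range(angular_res):
            theta = 2 * math.pi * j / angular_res
            # Boundary points along direction theta.
            xp = pupil[0] + pupil[2] * math.cos(theta)
            yp = pupil[1] + pupil[2] * math.sin(theta)
            xs = iris[0] + iris[2] * math.cos(theta)
            ys = iris[1] + iris[2] * math.sin(theta)
            # Linear interpolation between the two boundaries:
            # x = (1 - r) x_p + r x_s, y = (1 - r) y_p + r y_s.
            x = (1 - r) * xp + r * xs
            y = (1 - r) * yp + r * ys
            out[i][j] = image[int(round(y))][int(round(x))]
    return out
```

In the paper the output grid is 48×448 (radial × angular resolution).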

Wavelet Encoding
Wavelets can be used to decompose the data in the iris region into components that appear at different resolutions. Wavelets have an advantage over the traditional Fourier transform in that the frequency data is localized, allowing features which occur at the same position and resolution to be matched up. A number of wavelet filters, also called a bank of wavelets, is applied to the 2D iris region, one for each resolution, with each wavelet considered as a scaled version of some basis function. The output of the applied wavelets is then encoded in order to provide a compact and discriminating representation of the iris pattern.
The wavelet transform uses two filters, a high pass filter H and a low pass filter L [7][11]. These filters are applied in both image directions (x,y) as shown in Figure (2); HH means that the high pass filter is applied in both directions. The wavelet transform produces four types of coefficients:
(a) Coefficients that result from a convolution with the high pass filter in both directions (HH), which represent diagonal features of the image.
(b) Coefficients that result from a convolution with the high pass filter on the columns after a convolution with the low pass filter on the rows (HL), corresponding to horizontal structures.
(c) Coefficients from high pass filtering on the rows, followed by low pass filtering of the columns (LH), reflecting vertical information.
(d) Coefficients from low pass filtering in both directions (LL), which are further processed in the next level [11].

Haar Wavelet
There are several design properties that one would want a wavelet basis to fulfil [3]. The Haar wavelet is the most fundamental of the wavelet systems and is also known as the length-2 Daubechies filter:

h(n) = (1/√2, 1/√2), g(n) = (1/√2, −1/√2)

where h(n) is the decomposition low pass filter and g(n) is the decomposition high pass filter [3]. It is important to note that the Haar wavelet system is the only one that is orthogonal, symmetric, and compactly supported. The Haar wavelet is easy to implement, so it may be considered as the mother wavelet [10] for extracting features from the iris region.
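One level of the decomposition with these two filters can be sketched in a few lines of pure Python, using the sub-band naming of the text; `haar_1d` and `haar_2d` are hypothetical helper names, not the paper's code.

```python
def haar_1d(signal):
    """One analysis step of the Haar (length-2 Daubechies) filter bank:
    h = (1/sqrt 2, 1/sqrt 2) low pass, g = (1/sqrt 2, -1/sqrt 2) high pass,
    followed by down-sampling by 2."""
    s = 2 ** 0.5
    low = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    high = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return low, high

def haar_2d(image):
    """One level of the 2-D transform: filter the rows, then the columns,
    yielding the LL, HL, LH and HH sub-bands described in the text."""
    rows = [haar_1d(row) for row in image]
    rows_low = [r[0] for r in rows]
    rows_high = [r[1] for r in rows]

    def filter_cols(mat):
        cols = list(zip(*mat))                       # transpose to columns
        filtered = [haar_1d(list(c)) for c in cols]
        lo = [f[0] for f in filtered]
        hi = [f[1] for f in filtered]
        # transpose back to row-major order
        return [list(r) for r in zip(*lo)], [list(r) for r in zip(*hi)]

    ll, hl = filter_cols(rows_low)   # HL: low on rows, high on columns
    lh, hh = filter_cols(rows_high)  # LH: high on rows, low on columns
    return ll, hl, lh, hh
```

For a constant image the HH band is all zeros, since there is no detail for the high pass filter to pick up; the LL band is a scaled, half-resolution copy that the next level processes.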

Daubechies Wavelet
Due to their approximation quality, redundancy, and numerical stability, the Daubechies wavelets became the foundation for the most popular techniques for signal analysis and representation in a wide range of applications [13]. The Daubechies wavelet is applied to the normalized iris template for feature extraction.
The Daubechies wavelets are localized basis functions which are translated and dilated versions of some fixed mother wavelet.

Wavelet Filters Implementation
A 4-level wavelet transform is applied. The template is obtained by combining the features in the HH sub-image (as shown in Figure (2)) of the high pass filter of the fourth-level transform (HH4) with the average value of each of the three remaining high pass filter level areas (HH1, HH2, HH3). Since each dimension has a real value ranging from -1.0 to +1.0, the feature vector is quantized by sign, so that a positive value is represented by 1 and a negative value by 0 [10].
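The encoding step above can be sketched as follows. This is a simplified illustration, assuming the four HH sub-bands have already been computed; the function name `binary_template` is hypothetical.

```python
def binary_template(hh_bands):
    """Build the binary template: keep every coefficient of the 4th-level
    HH sub-band, append the average value of each earlier HH sub-band
    (HH1..HH3), then quantise each feature by sign (>= 0 -> 1, < 0 -> 0)."""
    hh1, hh2, hh3, hh4 = hh_bands
    # All coefficients of the deepest sub-band.
    features = [c for row in hh4 for c in row]
    # One average value per remaining high pass level.
    for band in (hh1, hh2, hh3):
        flat = [c for row in band for c in row]
        features.append(sum(flat) / len(flat))
    # Sign quantisation to a bit string.
    return [1 if f >= 0 else 0 for f in features]
```

The resulting bit vector is the biometric template that the Hamming distance stage compares.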
To obtain the maximum features in the irises, the iris region is normalized to 48×448 for radial and angular resolution for the wavelet filters implementation.

Hamming Distance Matching Algorithm
The Hamming distance gives a measure of how many bits differ between two patterns [14]. Using the Hamming distance of two patterns, a decision can be made as to whether the two patterns are generated from different irises or from the same one.
In comparing the bit patterns X and Y, the Hamming distance, HD, is defined as the sum of disagreeing bits (the XOR of X and Y) over N, the total number of bits in the bit pattern:

HD = (1/N) Σ_{j=1..N} X_j ⊕ Y_j

If two patterns are derived from the same iris, the Hamming distance between them will be close to 0.0, since they are highly correlated.
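The definition translates directly into code; a minimal sketch (the function name `hamming_distance` is illustrative):

```python
def hamming_distance(x, y):
    """Fraction of disagreeing bits between two equal-length binary
    templates: HD = (1/N) * sum_j (x_j XOR y_j)."""
    assert len(x) == len(y), "templates must have the same length"
    return sum(a ^ b for a, b in zip(x, y)) / len(x)

print(hamming_distance([1, 0, 1, 1], [1, 0, 1, 1]))  # same iris -> 0.0
print(hamming_distance([1, 0, 1, 1], [0, 1, 0, 0]))  # all bits differ -> 1.0
```

Intra-class comparisons cluster near 0.0 and inter-class comparisons near 0.5, which is what makes a threshold-based decision possible.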

Segmentation
The experimental result of segmentation using the Hough transform on a sample from the CASIA database is shown in Figure (4). From Figure (4) it is clear that the eyelash and eyelid regions affect the iris region and cover some important parts of the iris.

Normalization
Normalization is the technique of rescaling the images to equivalent dimensions and converting them to a rectangular shape. In this paper the applied normalization factors, a radial resolution of 48 and an angular resolution of 448, are examined for different wavelet filters.

Wavelet Filters for Feature Extraction
In order to obtain the sub-images (i.e. features), the localized iris is normalized to 48×448 pixels of radial and angular resolution respectively. Then the wavelet transform is applied and the features are combined as mentioned in section (2.3.3). The resulting feature vector dimension is given in Table (1). It is clear from Table (1) that the code size increases with increasing filter length. This is because the convolution implementation of the wavelet transform theoretically extends the filter mask with zeros, depending on the filter length, in all directions.
The wavelet encoding filters generate various code sizes depending on the type of filter. The decidability and minimum difference for the wavelet filters applied to the CASIA and Bath databases are given in Table (2).
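Decidability here is Daugman's d′ index, which measures how well separated the intra-class and inter-class Hamming-distance distributions are: the further apart the two means are relative to their spreads, the easier the accept/reject decision. A sketch, assuming lists of precomputed distances (the function name and the toy values in the test are illustrative):

```python
import math

def decidability(intra, inter):
    """d' = |mu_intra - mu_inter| / sqrt((var_intra + var_inter) / 2),
    computed over lists of intra-class and inter-class Hamming distances."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return abs(mean(intra) - mean(inter)) / math.sqrt((var(intra) + var(inter)) / 2)
```

A larger d′ means the two distributions overlap less, so a single threshold separates genuine from impostor comparisons more reliably.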

From this table, if a choice is to be made, the DAUB12 filter with a decision criterion of 0.34 is the most suitable choice for the CASIA database, giving a recognition rate of 97.89% with equal FAR and FRR of 1.05%. For the Bath database, the DAUB12 filter with a decision criterion of 0.3 is the most suitable choice, giving a recognition rate of 100%.
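Given the intra-class and inter-class Hamming distances, the FAR and FRR at a candidate decision criterion follow directly from their definitions; a minimal sketch (`far_frr` and the sample distances are illustrative):

```python
def far_frr(intra, inter, threshold):
    """FRR: fraction of same-iris comparisons whose Hamming distance exceeds
    the decision threshold (falsely rejected).
    FAR: fraction of different-iris comparisons at or below the threshold
    (falsely accepted)."""
    frr = sum(1 for d in intra if d > threshold) / len(intra)
    far = sum(1 for d in inter if d <= threshold) / len(inter)
    return far, frr
```

Sweeping the threshold and picking the point where FAR equals FRR yields the equal error rate; the 0.34 criterion above is the value at which the two rates coincide for the CASIA database.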

Conclusion
The template that is generated in the feature encoding process needs a corresponding matching metric, which gives a measure of similarity between two iris templates.This metric gives one range of values when comparing templates generated from the same eye, intra-class comparisons, and another range of values when comparing templates created from different irises, inter-class comparisons.
The wavelet transform is an efficient technique for feature extraction. Therefore, it was decided to use the wavelet transform with the Daubechies filter (DAUB12) as the feature encoding technique. The wavelet transform encoding using the DAUB12 filter gives accurate recognition results. The template resolution used is 48×448 pixels, which is encoded to a code size of 1179. For the CASIA database, the achieved recognition rate is 97.89%; for the Bath database the recognition rate is 100%. The best feature extraction is achieved when using the Daubechies wavelet filter DAUB12, since it gives an equal error rate (EER) and a high recognition rate for the two implemented databases.

Acknowledgement
We should not fail to acknowledge the favor of the Institute of Automation, Chinese Academy of Sciences, and the Signal and Image Processing Group (SIPG), UK, for providing us with their iris databases (the CASIA and Bath databases).

Table (2) gives the decidability and minimum difference values for templates encoded by wavelet filters with various code sizes. It is clear that the decidability is less than 2.59 for the CASIA database and 2.16 for the Bath database. It is also clear from Table (2) that the DAUB12 filter gives the maximum decidability when applied to the two databases. If the decision criterion for maximum RR is chosen, FAR, FRR and RR are calculated and the results are given in Table (3) for each code size.