Image Compression Based on Clustering Fuzzy Neural Network

Abstract
The transmission of a digital image from one place to another is accompanied by problems and obstacles: it requires a large transmission bandwidth and also needs a large storage space. These obstacles have led to the search for improvements to compression algorithms that reduce the amount of transmitted data without affecting, or with the least possible effect on, the quality of the true image data. In this research, a new image compression method based on clustering is presented. The new compression method includes a new objective function whose value is minimized by the energy function of a two dimensional unsupervised fuzzy Hopfield artificial neural network. The new objective function is formed by combining the classification entropy function with the average distance between the image pixels and the cluster centroids. The new method was applied to gray scale sample images with various numbers of cluster centroids, and a better compression ratio was obtained. This new method is also considered a powerful new method for clustering image pixels.


Introduction
Image compression has been pushed to the forefront of the image processing field. This is largely a result of the rapid growth in computer power, the corresponding growth in the multimedia market, and the advent of the World Wide Web, which makes the Internet easily accessible to everyone. Additionally, advances in video technology, including high-definition television, are creating a demand for new, better, and faster image compression algorithms. The storage and transmission of such data require large capacity and bandwidth, which can be very expensive. Image data compression techniques are concerned with reducing redundancies in the data representation in order to decrease data storage requirements and hence communication costs. Reducing the storage requirements is equivalent to increasing the capacity of the storage medium and hence the communication bandwidth. Thus, the development of efficient compression techniques will continue to be a design challenge for future communication systems and advanced multimedia applications [1,2].
Clustering is a useful approach in several exploratory pattern-analysis, grouping, and machine-learning situations, including data mining, document retrieval, image segmentation, and pattern classification [3,4].
In image segmentation coding techniques, the image is segmented into different regions separated by contours, and the regions are coded with different coding techniques. Region growing, k-means, c-means, and split-and-merge methods are generally used for image segmentation. Besides these crisp classical segmentation methods, fuzzy logic segmentation methods have also proved very effective for coding [5,6].
The Hopfield neural network is a well-known technique for solving optimization problems based on an energy function [7]. In this study, a new image clustering and compression method based on a fuzzy Hopfield neural network is introduced for gray scale images. This new approach includes a new objective function and its minimization by the energy function of an unsupervised two dimensional fuzzy Hopfield neural network. After applying the new method to gray scale sample images with different numbers of clusters, a better compression ratio was observed.

Classification of compression algorithms
In an abstract sense, we can describe data compression as a method that takes an input data D and generates a shorter representation c(D) with fewer bits than D. The reverse process is called decompression; it takes the compressed data c(D) and generates or reconstructs the data D′, as shown in Figure 1. Sometimes the compression (coding) and decompression (decoding) systems together are called a CODEC.
The reconstructed data D′ could be identical to the original data D, or it could be an approximation of it, depending on the reconstruction requirements. If the reconstructed data D′ is an exact replica of the original data D, we call the algorithms applied to compress D and decompress c(D) lossless. On the other hand, we say the algorithms are lossy when D′ is not an exact replica of D. Hence, as far as reversibility of the original data is concerned, data compression algorithms can be broadly classified into two categories, lossless and lossy; we will focus our discussion on lossless coding [2,8].

Coding (Compression) Method
The neighboring pixels in a typical image are highly correlated with each other. Often the consecutive pixels in a smooth region of an image are identical, or the variation among neighboring pixels is very small.
Run length coding (RLC) is an image compression method that works by counting the number of adjacent pixels with the same gray level value. This count, called the run length, is then coded and stored.
Run length coding is a simple approach to source coding when there exist long runs of the same value, in a consecutive manner, in a data set. As an example, consider a data set d consisting of a run of seven 5s, followed by twelve 19s, eight 0s, a single 8, and six 23s. In this manner, the data d can be run length encoded as (5 7) (19 12) (0 8) (8 1) (23 6). For ease of understanding, we have shown one pair in each set of parentheses: the first value represents the pixel, while the second indicates the length of its run [1,2].
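The encoding described above can be sketched directly; this is a minimal illustration of run length coding on the example data, not the paper's exact implementation:

```python
def rle_encode(data):
    """Run length encode a 1-D sequence as (value, run_length) pairs."""
    pairs = []
    run_value, run_length = data[0], 1
    for v in data[1:]:
        if v == run_value:
            run_length += 1        # extend the current run
        else:
            pairs.append((run_value, run_length))
            run_value, run_length = v, 1
    pairs.append((run_value, run_length))
    return pairs

def rle_decode(pairs):
    """Expand (value, run_length) pairs back to the original sequence."""
    out = []
    for value, length in pairs:
        out.extend([value] * length)
    return out

# the example data set d from the text
d = [5] * 7 + [19] * 12 + [0] * 8 + [8] + [23] * 6
print(rle_encode(d))   # [(5, 7), (19, 12), (0, 8), (8, 1), (23, 6)]
```

Decoding simply replays each pair, so the scheme is exactly reversible (lossless), which is why the paper applies it after clustering rather than to raw pixel data.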

Decompression system
In some cases, the presence of runs of symbols may not be very apparent, but the data can often be processed in order to aid run length coding. Here we apply a classical clustering algorithm, a fuzzy clustering algorithm, and a fuzzy neural network to the gray level image. Then, after obtaining the cluster centroids, the clustered image is created and is coded by the run length coding algorithm in one and two dimensions. When applying run length coding in two dimensions, we use zig-zag ordering of the coefficients of the clustered image, as shown in the following figure.
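The precise traversal is defined by Fig. 2; assuming the common JPEG-style convention (right first, then down the anti-diagonals), the zig-zag ordering can be sketched as:

```python
def zigzag_order(block):
    """Return the elements of an n*n block in zig-zag (anti-diagonal) order."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):            # s indexes one anti-diagonal i+j = s
        indices = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:                    # alternate the traversal direction
            indices.reverse()
        out.extend(block[i][j] for i, j in indices)
    return out

print(zigzag_order([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]]))   # [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

Reading the clustered image in this order tends to keep pixels from the same region adjacent, lengthening the runs that the subsequent run length coder exploits.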

Fidelity Criteria
In some image transmission systems, some errors in the reconstructed image can be tolerated. In this case a fidelity criterion can be used as a measure of system quality [16]. After completing the decoding process, the root mean square error e_RMS, the root mean square signal to noise ratio SNR_RMS, and the peak signal to noise ratio SNR_PEAK should be calculated between the reconstructed image and the original image to verify the quality of the decoded image with respect to the original one. The root mean square error is found by taking the square root of the squared error divided by the total number of pixels in the image.
For an N*N image with original pixels I(r,c) and reconstructed (decompressed) pixels Î(r,c):

e_RMS = sqrt( (1/N²) Σ_r Σ_c [ Î(r,c) − I(r,c) ]² )

The smaller the value of the error metrics, the better the compressed image represents the original image. Alternately, with the signal to noise (SNR) metrics, a larger number implies a better image. The SNR metrics consider the decompressed image Î(r,c) to be the 'signal' and the error to be 'noise'. We can define the root mean square signal to noise ratio as:

SNR_RMS = sqrt( Σ_r Σ_c Î(r,c)² / Σ_r Σ_c [ Î(r,c) − I(r,c) ]² )

and the peak signal to noise ratio as:

SNR_PEAK = 10 log10( (L−1)² / ( (1/N²) Σ_r Σ_c [ Î(r,c) − I(r,c) ]² ) )

where L = the number of gray levels (e.g., for 8 bits, L = 256). To check the compression performance, the compression ratio (CR) and the bpp rate (bits per pixel rate) are calculated. The compression ratio is the amount of compression, while the bpp rate is the number of bits required to represent each pixel value of the compressed image. The compression ratio is defined by:

CR = (uncompressed file size) / (compressed file size)

and the bits per pixel for an N*N image is:

bpp = (number of bits in the compressed file) / N²
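The three fidelity metrics above can be computed together; a minimal sketch taking the two images as flat lists of pixel values (the function name is ours, not from the paper):

```python
import math

def fidelity_metrics(original, reconstructed, L=256):
    """Return (e_RMS, SNR_RMS, SNR_PEAK) between two equal-size
    grayscale images given as flat lists of pixel values."""
    n = len(original)
    # total squared error (the 'noise') and squared signal energy
    err_sq = sum((r - o) ** 2 for o, r in zip(original, reconstructed))
    sig_sq = sum(r ** 2 for r in reconstructed)
    if err_sq == 0:                       # identical images: lossless case
        return 0.0, float('inf'), float('inf')
    e_rms = math.sqrt(err_sq / n)
    snr_rms = math.sqrt(sig_sq / err_sq)
    snr_peak = 10 * math.log10((L - 1) ** 2 / (err_sq / n))
    return e_rms, snr_rms, snr_peak
```

For example, comparing [10, 10, 10, 10] against [12, 8, 12, 8] gives e_RMS = 2.0 and SNR_PEAK of roughly 42 dB.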

K-means Clustering Algorithm
The standard k-means clustering algorithm is a well known and well understood algorithm. Its computational complexity is O(n), where n is the number of data points (feature vectors) to be clustered. K-means is one of a group of algorithms that aim to minimize an objective function [2,9].
Although it can be proved that the procedure always terminates, the k-means algorithm does not necessarily find the optimal configuration corresponding to the global minimum of the objective function. The algorithm is also significantly sensitive to the initial randomly selected cluster centers. The k-means algorithm can be run multiple times to lessen this effect [10,11].
The k-means is a simple algorithm that has been adapted to many problem domains.It is a good candidate for extension to work with fuzzy feature vectors.
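For scalar gray values, the standard assign-then-update iteration can be sketched as follows (a minimal illustration, not the paper's implementation):

```python
import random

def kmeans(pixels, k, iters=50, seed=0):
    """Plain k-means on scalar gray values; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(pixels, k)     # random initial centers
    labels = [0] * len(pixels)
    for _ in range(iters):
        # assignment step: each pixel goes to its nearest centroid
        labels = [min(range(k), key=lambda i: (p - centroids[i]) ** 2)
                  for p in pixels]
        # update step: each centroid becomes the mean of its members
        new = []
        for i in range(k):
            members = [p for p, l in zip(pixels, labels) if l == i]
            new.append(sum(members) / len(members) if members else centroids[i])
        if new == centroids:              # centers stopped moving: converged
            break
        centroids = new
    return centroids, labels
```

Running it several times with different seeds, as the text suggests, mitigates the sensitivity to the random initial centers.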

Fuzzy Approach to Pixel Classification
The basic postulate of fuzzy clustering is that a sample may have partial membership grades in several fuzzy clusters. A membership value in the interval [0,1] is assigned to each sample in every cluster, based on certain measurements [2,10].
In fuzzy clustering, a pattern is assigned a degree of belongingness to each cluster in a partition. Here we present the most popular fuzzy clustering algorithm, known as the fuzzy c-means algorithm [12,13].

Fuzzy C-Means
Fuzzy c-means is a commonly used clustering approach. It is a natural generalization of the k-means algorithm, allowing for soft segmentation based on fuzzy set theory. As in the hard k-means algorithm, the fuzzy c-means algorithm is based on the minimization of a criterion function.
The following criterion function may be chosen, which differs from the k-means objective function by the addition of the membership values u_ik and the fuzzifier m:

J_m(U,V) = Σ_{k=1..n} Σ_{i=1..c} (u_ik)^m ||x_k − v_i||²

• U = {u_ik} is a c*n matrix, where u_ik is the membership value of the k-th input sample x_k in the i-th cluster. The membership values satisfy the following conditions:

0 ≤ u_ik ≤ 1,  for all i = 1,…,c and k = 1,…,n
Σ_{i=1..c} u_ik = 1,  for all k = 1,…,n
0 < Σ_{k=1..n} u_ik < n,  for all i = 1,…,c

• m ∈ (1, ∞) is an exponent weight factor. There is no fixed rule for choosing the exponent weight factor; however, in many applications m = 2 is a common choice.
The above three conditions imply the following: • The membership value of each sample x_k in a particular cluster lies between 0 and 1. • Each sample x_k must belong to at least one cluster. • Each cluster must contain at least one sample, and all the samples cannot belong to a single cluster.
The objective function in this case is the sum of the squared Euclidean distances between each input sample and its corresponding cluster center, weighted by the fuzzy membership values. The algorithm iteratively updates the cluster centers using the expression:

v_i = Σ_{k=1..n} (u_ik)^m x_k / Σ_{k=1..n} (u_ik)^m        (Eq. 1)

The fuzzy membership of the k-th sample x_k in the i-th cluster is given by:

u_ik = 1 / Σ_{j=1..c} ( ||x_k − v_i|| / ||x_k − v_j|| )^{2/(m−1)}        (Eq. 2)

It can be noted that the weight factor m reduces the influence of small membership values.
The fuzzy c-means algorithm is thus summarized as follows:
Step 1: Select the number of cluster centers c and choose the value of m. Initialize U(0) randomly or based on some approximation. Set the iteration counter t = 0.
Step 2: Compute the cluster centers: given U(t), calculate V(t) according to Eq. 1.
Step 3: Given V(t), update the membership values to U(t+1) according to Eq. 2.
Step 4: Stop the iteration if ||U(t+1) − U(t)|| ≤ ε, where ε is a small positive number.
Step 5: Otherwise, increment the iteration counter to t = t+1 and go to Step 2.
It may be noted that we are applying the fuzzy c-means to image data: the data points or sample points x_1, x_2, …, x_n are the pixel gray values. Thus n represents the total number of pixels in the image [2,10,12].
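The update equations and the iteration steps above can be sketched together for scalar gray values; this is a standard fuzzy c-means implementation, with a guard (our addition) for the degenerate case where a sample coincides with a center:

```python
import random

def fuzzy_c_means(pixels, c, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Standard FCM on scalar gray values.
    Returns cluster centers v and the c x n membership matrix u."""
    rng = random.Random(seed)
    n = len(pixels)
    # Step 1: random initial memberships, each column normalized to sum to 1
    u = [[rng.random() for _ in range(n)] for _ in range(c)]
    for k in range(n):
        s = sum(u[i][k] for i in range(c))
        for i in range(c):
            u[i][k] /= s
    for _ in range(max_iter):
        # Step 2 (Eq. 1): centers are membership-weighted means
        v = [sum((u[i][k] ** m) * pixels[k] for k in range(n)) /
             sum(u[i][k] ** m for k in range(n)) for i in range(c)]
        # Step 3 (Eq. 2): membership update from relative distances
        u_new = [[0.0] * n for _ in range(c)]
        for k in range(n):
            d = [abs(pixels[k] - v[i]) for i in range(c)]
            if 0.0 in d:                   # sample sits exactly on a center
                for i in range(c):
                    u_new[i][k] = 1.0 if d[i] == 0.0 else 0.0
            else:
                for i in range(c):
                    u_new[i][k] = 1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                            for j in range(c))
        # Steps 4-5: stop when memberships change less than eps
        delta = max(abs(u_new[i][k] - u[i][k])
                    for i in range(c) for k in range(n))
        u = u_new
        if delta < eps:
            break
    return v, u
```

With well-separated gray levels the centers converge near the group means, while every pixel keeps a nonzero membership in each cluster.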

Hopfield Neural Network
Artificial neural networks mimic the neurophysiology of the human brain. They have the ability to learn from examples in order to find patterns in data or to classify data, and once trained on training data they can make predictions on new data. They perform a global search on the data; however, their shortcoming is that they are a kind of black box, where a user can hardly understand the underlying principles used to classify the data. On the other hand, they can perform well on image recognition and similar tasks. One of the best known neural network models is the Hopfield network [14].
The Hopfield neural network can be used as a content addressable memory. Knowledge and information are stored in a single layer of interconnected neurons (nodes) and weighted synapses (links), as shown in the following figure, and can be retrieved by the network's parallel relaxation method: nodes are activated in parallel and updated until the network reaches a stable state (convergence). It has been used for various classification tasks and for global optimization [15].

The Proposed Method
The Hopfield neural network is a well-known technique for solving optimization problems based on the energy function. In this method, the two dimensional Hopfield neural network consists of N*c fully interconnected neurons. The total weighted input for neuron (x,i) is given as:

Net_{x,i} = Σ_{y=1..N} Σ_{j=1..c} W_{x,i;y,j} V_{y,j} + I_{x,i}

where N is the number of data points, c is the number of clusters, V_{y,j} denotes the binary state of neuron (y,j), W_{x,i;y,j} is the interconnection weight between neuron (x,i) and neuron (y,j), and I_{x,i} is the external bias for neuron (x,i). The energy function of the two dimensional Hopfield neural network is given as:

E = −(1/2) Σ_{x=1..N} Σ_{i=1..c} Σ_{y=1..N} Σ_{j=1..c} W_{x,i;y,j} V_{x,i} V_{y,j} − Σ_{x=1..N} Σ_{i=1..c} I_{x,i} V_{x,i}

The neural network reaches a stable state when the energy function is minimized. The optimization problem can be mapped onto a two dimensional fully interconnected Hopfield neural network with a fuzzy reasoning strategy. Instead of using the competitive learning strategy, the fuzzy Hopfield neural network uses the fuzzy reasoning algorithm to eliminate the need for finding weighting factors in the energy function.
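As a concrete illustration of these two formulas, a minimal sketch with the neuron grid (x,i) flattened into a single index a; the weights, biases, and states used in the example are illustrative, not taken from the paper:

```python
def hopfield_energy(W, I, V):
    """E = -1/2 * sum_{a,b} W[a][b] V[a] V[b] - sum_a I[a] V[a],
    where a and b index the flattened neuron grid (x, i)."""
    n = len(V)
    quad = sum(W[a][b] * V[a] * V[b] for a in range(n) for b in range(n))
    return -0.5 * quad - sum(I[a] * V[a] for a in range(n))

def net_input(W, I, V, a):
    """Total weighted input Net_a = sum_b W[a][b] V[b] + I[a]."""
    return sum(W[a][b] * V[b] for b in range(len(V))) + I[a]

# two neurons with a mutually excitatory (symmetric, zero-diagonal) weight
W = [[0, 1],
     [1, 0]]
I = [0, 0]
print(hopfield_energy(W, I, [1, 1]))   # -1.0: both neurons on is a low-energy state
```

Updating each neuron in the direction of its net input can only lower E, which is why a stable state of the network corresponds to a (local) minimum of the energy function.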

Iterative minimization of new objective function consists of the following steps:
Step 1: Choose the number of clusters c, the iteration criterion ε, the fuzzification parameter m (chosen to be 2), and the primary centroids v(0).
Step 2: Compute the initial membership values:

u_{x,i} = 1 / Σ_{j=1..c} ( ||z_x − v_i|| / ||z_x − v_j|| )^{2/(m−1)}

where m is the fuzzification parameter, the membership value u_{x,i} is the output state of neuron (x,i), and z_x is the value of pixel x of the image. A neuron (x,i) in a maximum membership state indicates that pixel z_x belongs to class i. The sum of the membership values of each pixel over the different classes equals 1, and the total membership over all N image pixels equals N. v_i represents the i-th cluster center.
Step 3: Compute new membership values from the updated cluster centroids, where the input to neuron (x,i) depends on the distance between pixel z_x and centroid v_i. The new objective function consists of an equally weighted combination of the classification entropy function and the average distance between image pixels and cluster centroids, for separate and compact clustering.
Step 8: The segmented (clustered) image is created from the membership values and the cluster centroids; after this, it is coded by run length coding in one or two dimensions.
The following block diagram explains the flow of the new image clustering and compression method based on the fuzzy Hopfield neural network.
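The iterative minimization above can be sketched end to end. The intermediate steps of the procedure are not fully reproduced in the text, so this sketch assumes FCM-style membership and centroid updates for the neuron states (x,i), and monitors the stated objective (classification entropy plus average pixel-to-centroid distance) as the stopping criterion; it is an illustration of the scheme, not the paper's exact algorithm:

```python
import math

def fuzzy_hopfield_cluster(pixels, c, m=2.0, eps=1e-4, max_iter=100):
    """Cluster scalar gray values; neuron (x, i) holds membership
    u[x][i] of pixel x in cluster i. Returns (centroids, labels)."""
    n = len(pixels)
    # Step 1: primary centroids spread evenly over the gray range
    lo, hi = min(pixels), max(pixels)
    v = [lo + (hi - lo) * (i + 0.5) / c for i in range(c)]
    u = [[1.0 / c] * c for _ in range(n)]
    J_prev = float('inf')
    for _ in range(max_iter):
        # Steps 2-3: membership update (outputs of neurons (x, i))
        for x in range(n):
            d = [abs(pixels[x] - v[i]) or 1e-12 for i in range(c)]
            for i in range(c):
                u[x][i] = 1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                    for j in range(c))
        # centroid update: membership-weighted means
        v = [sum((u[x][i] ** m) * pixels[x] for x in range(n)) /
             sum(u[x][i] ** m for x in range(n)) for i in range(c)]
        # objective J: classification entropy + average distance
        H = -sum(u[x][i] * math.log(max(u[x][i], 1e-12))
                 for x in range(n) for i in range(c)) / n
        D = sum(u[x][i] * abs(pixels[x] - v[i])
                for x in range(n) for i in range(c)) / n
        J = H + D
        if abs(J_prev - J) < eps:          # stop when J stabilizes
            break
        J_prev = J
    # Step 8: hard labels (maximum membership) give the clustered image,
    # which is then run length coded in one or two dimensions
    labels = [max(range(c), key=lambda i: u[x][i]) for x in range(n)]
    return v, labels
```

Replacing each pixel by the label (or centroid) of its cluster produces the clustered image that is handed to the run length coder.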

Results and Conclusions
The new fuzzy Hopfield neural network method was applied to four 256*256 sample grayscale images, and its results were compared with those of the k-means and fuzzy c-means algorithms, as well as with the original run length coding in one and two dimensions. The comparison parameters are the signal to noise ratio, the compression ratio, and bits per pixel. The comparison results are given in Table 1. The original images and the images reconstructed by the k-means, fuzzy c-means, and fuzzy Hopfield neural network methods are given in Figures 5, 6, 7, and 8, corresponding to the different sample images. According to the results, the fuzzy Hopfield neural network method provides better image compression than the other methods.
The importance of image clustering and compression methods is increasing nowadays. The new image clustering and compression method based on the fuzzy Hopfield neural network provides a better compression ratio. This method can additionally be used for pattern recognition, because it provides a good validity measure and is far less likely to reach incorrect results. In contrast, some methods such as k-means have a high probability of converging to a local minimum, depending on the selection of initial values, and may not give correct results. The fuzzy Hopfield neural network can also provide a more efficient mechanism. This new method is a good alternative for image clustering and compression.

Fig. 2: Zig-Zag Ordering of Pixels of Image

Fig. 7: Original image, reconstructed image using k-means, reconstructed image using fuzzy c-means, and reconstructed image using fuzzy Hopfield neural network