Main Subjects : Image Processing, Computer Vision, Pattern Recognition & Graphics

Image Coding Based on Contourlet Transformation

Sahlah Abd Ali Al-hamdanee; Eman Abd Elaziz; Khalil I. Alsaif

AL-Rafidain Journal of Computer Sciences and Mathematics, 2021, Volume 15, Issue 2, Pages 149-163
DOI: 10.33899/csmj.2021.170018

Interest in coding is high because it is widely relied upon for securing correspondence and information, in addition to data storage, since coding reduces the volume of information being stored. In this research, an image transform was used to encode gray or color images by adopting coefficients selected from the contourlet transform of the image. A color image is acquired by the algorithm and split into three slices (the primary color channels of the image), each of which is decomposed into its coefficients through the contourlet transform; the low-frequency subband, together with some of the high frequencies, is then selected to reconstruct the image. Selecting the low frequencies with only a small portion of the high frequencies discards some unnecessary information from the image components.
The performance of the proposed method was measured by the MSE and PSNR criteria to assess the discrepancy between the original image and the recovered image at different decomposition levels; the effect of the image type on the performance of the method was also studied. Practical application shows that the decomposition level is directly proportional to the mean squared error (MSE) and also strongly affects the correlation: the recovered image moves away from the original in direct proportion to the increase in the decomposition level. The method is also affected by the image type and variety: the highest PSNR value (58.0393) was obtained for natural images and the lowest (56.9295) for X-ray images, as shown in Table 4.
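As a reference, the two criteria used above can be sketched in a few lines (a minimal sketch assuming 8-bit grayscale images stored as nested lists; the function names are illustrative, not from the paper):

```python
import math

def mse(original, recovered):
    """Mean squared error between two equal-sized grayscale images."""
    flat = [(o - r) ** 2
            for row_o, row_r in zip(original, recovered)
            for o, r in zip(row_o, row_r)]
    return sum(flat) / len(flat)

def psnr(original, recovered, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(original, recovered)
    return float("inf") if err == 0 else 10.0 * math.log10(peak ** 2 / err)
```

Per the abstract, a deeper decomposition level raises MSE and therefore lowers PSNR.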

Embedded Descriptor Generation in Faster R-CNN for Multi-Object Tracking

Younis A. Younis; Khalil I. Alsaif

AL-Rafidain Journal of Computer Sciences and Mathematics, 2021, Volume 15, Issue 2, Pages 91-102
DOI: 10.33899/csmj.2021.170013

With the rapid growth of computer usage to extract required knowledge from huge amounts of information, such as video files, significant attention has been brought towards multi-object detection and tracking. Artificial Neural Networks (ANNs) have shown outstanding performance in multi-object detection, especially the Faster R-CNN network. In this study, a new method is proposed for multi-object tracking based on descriptors generated by a neural network embedded in the Faster R-CNN. This embedding allows the proposed method to directly output a descriptor for each object detected by the Faster R-CNN, based on the same features the Faster R-CNN uses to detect the object. Using these features allows the proposed method to output accurate values rapidly, as they are already computed during detection and have already proven their worth in the detection stage. The descriptors collected from the proposed method are then clustered into a number of clusters equal to the number of objects detected in the first frame of the video. For further frames, the number of clusters is increased until the distance between the centroid of the newly created cluster and the nearest centroid is less than the average distance among the centroids. Newly added clusters are considered new objects, whereas older ones are kept in case the object reappears in the video. The proposed method is evaluated on the UA-DETRAC (University at Albany DEtection and TRACking) dataset and achieves 64.8% MOTA and 83.6% MOTP, with a processing speed of 127.3 frames per second.
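The cluster-growing rule described above can be sketched as a single decision step (an illustrative reading of the rule, with hypothetical function names; the paper's exact procedure may differ):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def average_centroid_distance(centroids):
    """Mean pairwise distance among the existing cluster centroids."""
    pairs = [(i, j) for i in range(len(centroids))
             for j in range(i + 1, len(centroids))]
    return sum(euclidean(centroids[i], centroids[j]) for i, j in pairs) / len(pairs)

def assign_descriptor(centroids, descriptor):
    """Return (cluster_index, centroids): reuse the nearest cluster, or open a
    new one when the descriptor is farther from every centroid than the
    average inter-centroid distance (i.e. a new object has appeared)."""
    nearest = min(range(len(centroids)),
                  key=lambda i: euclidean(centroids[i], descriptor))
    if euclidean(centroids[nearest], descriptor) < average_centroid_distance(centroids):
        return nearest, centroids
    return len(centroids), centroids + [descriptor]
```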

Rapid Contrast Enhancement Algorithm for Natural Contrast-Distorted Color Images

Asmaa Y. Albakri; Zohair Al-Ameen

AL-Rafidain Journal of Computer Sciences and Mathematics, 2021, Volume 15, Issue 2, Pages 73-90
DOI: 10.33899/csmj.2021.170012

Digital images are often obtained with contrast distortions due to different factors that cannot be avoided on many occasions. Various research works have been introduced on this topic, yet no conclusive findings have been made. Therefore, a low-intricacy multi-step algorithm is developed in this study for rapid contrast enhancement of color images. The developed algorithm consists of four steps: in the first two, the input image is processed separately by the probability density function of the standard normal distribution and by the softplus function. In the third step, the outputs of these two approaches are combined using a modified logarithmic image processing approach. In the fourth step, a gamma-controlled normalization function is applied to fully stretch the image intensities to the standard interval and correct the gamma. The results obtained by the developed algorithm have improved contrast with preserved brightness and natural colors. The developed algorithm is evaluated on a dataset of various natural contrast-degraded color images, compared against six different techniques, and assessed using three specialized image evaluation methods; the proposed algorithm performed best among the comparators according to those evaluation methods, processing speed, and perceived quality.
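The four steps can be sketched on a single normalized channel (the step order follows the abstract, but the exact parameterization, the LIP addition form, and the constants are assumptions made here for illustration):

```python
import math

def enhance(channel, gamma=1.2, big_m=1.0):
    """Illustrative four-step pipeline on a [0,1]-normalized channel:
    (1) standard-normal PDF response, (2) softplus response,
    (3) a LIP-style combination, (4) gamma-controlled normalization."""
    # Step 1: probability density function of the standard normal distribution.
    a = [math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi) for x in channel]
    # Step 2: softplus function.
    b = [math.log(1 + math.exp(x)) for x in channel]
    # Step 3: LIP-style addition p (+) q = p + q - p*q/M (one common form).
    c = [p + q - p * q / big_m for p, q in zip(a, b)]
    # Step 4: stretch to [0,1] and apply gamma correction.
    lo, hi = min(c), max(c)
    return [((x - lo) / (hi - lo)) ** (1 / gamma) for x in c]
```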

Bresenham's Line and Circle Drawing Algorithm using FPGA

Areej H. Ali; Riyadh Z. Mahmood

AL-Rafidain Journal of Computer Sciences and Mathematics, 2021, Volume 15, Issue 2, Pages 39-53
DOI: 10.33899/csmj.2021.170007

In Bresenham's line drawing algorithm, the points of an n-dimensional raster that should be selected are determined so that they form a close approximation to the straight line between two endpoints. It is widely used for drawing line primitives in a bitmap image (for example, on a computer screen), since only integer addition, subtraction, and bit shifting are used; these three operations are cheap on standard computer architectures. In addition, it is an incremental error algorithm, and it is among the oldest algorithms developed in computer graphics. An extension of the original algorithm can draw circles. This research implements Bresenham's line and circle drawing algorithm on an FPGA hardware platform. The shapes are displayed on a VGA screen via the internal VGA port built into the device.
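The integer-only raster logic that the FPGA implements can be sketched in software (a standard formulation of Bresenham's line and midpoint circle algorithms, not the paper's HDL):

```python
def bresenham_line(x0, y0, x1, y1):
    """Integer-only Bresenham line rasterization (works in all octants)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                      # incremental error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

def bresenham_circle(cx, cy, r):
    """Midpoint/Bresenham circle: compute one octant, mirror to eight."""
    pts = set()
    x, y, err = 0, r, 3 - 2 * r
    while x <= y:
        for dx, dy in ((x, y), (y, x)):
            pts.update({(cx + dx, cy + dy), (cx - dx, cy + dy),
                        (cx + dx, cy - dy), (cx - dx, cy - dy)})
        if err < 0:
            err += 4 * x + 6
        else:
            err += 4 * (x - y) + 10
            y -= 1
        x += 1
    return pts
```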

Image Enhancement Based on Fan Filter Parameters Adjustment

Huda S. Mustafa

AL-Rafidain Journal of Computer Sciences and Mathematics, 2021, Volume 15, Issue 2, Pages 27-38
DOI: 10.33899/csmj.2021.170006

In the field of image processing, there is an urgent need to adopt image transforms. In this paper, work is done on image coefficients after decomposing the image through the Curvelet transform, obtained through the fan filter.
The research proceeds in two stages. The first studies the effect of the Fan filter, adopting angles (8, 16, 32), on the image (Lina.jpg) of size (256*256) after it is analyzed using the Curvelet transform at scales (2, 3, 4), by comparing a set of measurements (Contrast, Energy, Correlation, MSE, and PSNR) for the original and reconstructed images. It is found that the Contrast and Energy criteria remain the same for the original and reconstructed images across the different levels of analysis and directions, so the Correlation measure equals 1. The MSE criterion is very small and is almost unaffected by changing the number of angles within one scale, but it is slightly affected by increasing the analysis scale; the same applies to the PSNR criterion.
The second stage of the research decomposes the image into its coefficients, cancels the effect of one of these coefficients, and then reconstructs the image. The results proved that the Contrast and Energy criteria were unaffected, while the Correlation criterion fell from 1 to values ranging over (0.9987-0.9997), depending on the number of scales used in the Curvelet analysis and the number of angles used in the Fan filter (8, 16, 32). The results also showed an increase in the MSE value when some frequencies are dropped, with a corresponding decrease in the PSNR value. At a fixed scale, however, the MSE decreased as the number of angles in the Fan filter increased, with the opposite behavior for the PSNR.

Comparison Study for Three Compression Techniques (Wavelet, Contourlet and Curvelet Transformation)

Shahad M. Sulaiman; Hadia Saleh Abdullah

AL-Rafidain Journal of Computer Sciences and Mathematics, 2021, Volume 15, Issue 1, Pages 101-114
DOI: 10.33899/csmj.2021.168263

Research and studies on compressing digital images aim to make dealing with networks, communications, and the Internet easier by reducing the size of the transferred multimedia files and reducing execution and transmission time. In this research, lossy compression was adopted as one of the solutions that reduce the amount of data required to represent the image. Digital image data was compressed using the Discrete Wavelet Transform with the Haar filter, the Contourlet transform using Laplacian pyramid and directional filters, and the Curvelet transform using the FDCT-wrapping technique. The performance of the algorithms is evaluated using the compression ratio (CR), the peak signal-to-noise ratio (PSNR), the mean squared error (MSE), the signal-to-noise ratio (SNR), and the normalized correlation (NC) between the original image and the recovered image after compression, in order to choose the algorithm that achieves the best compression ratio while maintaining the quality of the recovered image. Based on these criteria (MSE, PSNR, SNR, COR, and CR) applied with the three algorithms, the results showed that the Curvelet transform achieved the best compression ratio, but at the expense of image quality.
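The evaluation criteria named above can be sketched as follows (one common definition of each; in particular, the normalized correlation formula varies across the literature, so the one used here is an assumption):

```python
import math

def compression_ratio(original_bytes, compressed_bytes):
    """CR: original size divided by compressed size."""
    return original_bytes / compressed_bytes

def metrics(original, recovered, peak=255.0):
    """MSE, PSNR, SNR and normalized correlation between two flattened
    grayscale images given as lists of pixel values."""
    n = len(original)
    err = sum((o - r) ** 2 for o, r in zip(original, recovered)) / n
    sig = sum(o * o for o in original) / n
    nc = sum(o * r for o, r in zip(original, recovered)) / sum(o * o for o in original)
    psnr = float("inf") if err == 0 else 10 * math.log10(peak ** 2 / err)
    snr = float("inf") if err == 0 else 10 * math.log10(sig / err)
    return err, psnr, snr, nc
```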

Improving Performance of Projector with the Protection of the Eyes while using a Smart Board

Abdulrafa H. Maree

AL-Rafidain Journal of Computer Sciences and Mathematics, 2020, Volume 14, Issue 2, Pages 53-62
DOI: 10.33899/csmj.2020.167338

One of the most important problems a teacher faces when using a smart board in teaching is the strong light beam the projector casts on their face and body. This focused light is harmful to the human eye, causing temporary blindness when it falls directly on the eye, in addition to other harmful side effects. The light falling on the presenter's body also makes the picture on the screen look unprofessional and unclear and distracts the students' attention. Solving this problem leads to better delivery of lectures for both teachers and students.
In this study, a system is designed to track the teacher's movement using an infrared transmitter attached to the teacher's headdress or head cap. Electronic signals directed to an infrared receiver installed on the front of the projector are sent to the computer for analysis according to the proposed algorithms to determine the position of the teacher's face. A black shaded square is then placed at the designated position and displayed on the smart board, so that the lighting falling on the teacher's face and eyes decreases; this shade moves with the movement of the transmitter. The method aims to protect the teacher's eyes from the harmful strong light.

Palm Print Features for Personal Authentication Based On Seven Moments

Khaleda B. Ali; Khalil I. Alsaif

AL-Rafidain Journal of Computer Sciences and Mathematics, 2020, Volume 14, Issue 2, Pages 63-74
DOI: 10.33899/csmj.2020.167339

Biometric images are considered one of the major components in the field of personal authentication, and one of the main approaches to personal identification is based on the palm print. The features extracted from palm print images were therefore studied to obtain a highly efficient recognition system. This research comprises two major phases. In the first phase, a database was built for 100 persons by acquiring four images of each hand (4 for the left hand and 4 for the right hand); each image was then processed to extract the ROI (region of interest) by locating the palm centroid and fixing a square area around it. This preprocessing is an important step for stable features. The seven moments were then evaluated for each of the 8 images and stored in the database file (so each person has 56 values); this phase is called personal database preparation. The second phase is detection, which applies the same steps to obtain 56 values and then searches the database for the person closest to the tested one. The system was evaluated by statistical metrics and achieved results up to 95.7% when applied to 50 persons under different conditions. The effect of the ROI dimension, for individual hands and for both hands combined, was also studied; the recommended dimension is 192*192.
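The "seven moments" are presumably Hu's seven invariant moments; a minimal sketch of their computation from a grayscale ROI follows (nested lists; the paper's preprocessing and 56-value packing are omitted):

```python
def hu_moments(img):
    """Hu's seven invariant moments of a grayscale image (nested lists),
    invariant to translation, scale, and rotation."""
    h, w = len(img), len(img[0])
    def raw(p, q):                       # raw moments m_pq
        return sum((x ** p) * (y ** q) * img[y][x]
                   for y in range(h) for x in range(w))
    m00 = raw(0, 0)
    xc, yc = raw(1, 0) / m00, raw(0, 1) / m00
    def mu(p, q):                        # central moments mu_pq
        return sum(((x - xc) ** p) * ((y - yc) ** q) * img[y][x]
                   for y in range(h) for x in range(w))
    def eta(p, q):                       # normalized central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ]
```

Because the moments are translation invariant, the same pattern placed at different positions in the ROI yields the same seven values, which is what makes them useful as stable features.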

A Comparative Study of Methods for Separating Audio Signals

Riham J. Issa; Yusra Mohammad

AL-Rafidain Journal of Computer Sciences and Mathematics, 2020, Volume 14, Issue 2, Pages 39-49
DOI: 10.33899/csmj.2020.167345

The process of separating a signal from a mixture of signals is an essential task for many applications, including sound signal processing and speech processing systems as well as medical signal processing. In this paper, a review of the sound source separation problem is presented, along with the methods used to extract features from the audio signal; we also define the blind source separation problem and compare some of the methods used to solve it.

Studying the Coefficient Curvelet for Aerial Image Segmentation

Nagham A. Sultan; Khalil I. Alsaif

AL-Rafidain Journal of Computer Sciences and Mathematics, 2020, Volume 14, Issue 1, Pages 83-95
DOI: 10.33899/csmj.2020.164880

Currently, many approaches to image transformation are being developed to cover the new technology involved in treating huge amounts of data. In this paper, a study of Curvelet transform coefficients was performed on an aerial image in order to apply segmentation. Many modifications are applied to the cut-off frequencies of the filters used to decompose the image in the curvelet transform. Two approaches are proposed and tested in search of the best segmentation result: the first designs the filters manually, while the second evaluates the filter coefficients depending on the selected shape of the filters. The first technique gives acceptable segmentation, and the second reaches the optimal result. One of the most important findings is that the cut-off frequency has a strong effect on the segmentation; in addition, the choice of filter parameters depends on the coefficient dimensions of the curvelet transform. Finally, the results show that the first approach underperformed the second.

Using Cohen Sutherland Line Clipping Algorithm to Generate 3D Models from 2D

Marah M. Taha

AL-Rafidain Journal of Computer Sciences and Mathematics, 2020, Volume 14, Issue 1, Pages 39-49
DOI: 10.33899/csmj.2020.164675

This paper provides an efficient algorithm to generate three-dimensional objects from a simple, uncomplicated 2D environment, reducing processor effort and limiting the use of complex mathematical operations. Most previous research used the idea of drawing with a vanishing point to generate 3D objects from a 2D environment. The algorithm designed in this paper instead shows how to draw three-dimensional shapes from two-dimensional drawings by applying the Cohen-Sutherland line clipping algorithm: a basic two-dimensional shape is inserted as a set of connected points, which must lie within the vision borders, together with a vanishing point outside the vision that is connected to all points of the basic shape to form a group of partially intersecting lines. Each point then has a specific, limited vision border that represents the depth coordinate of its vertex; finally, the 3D object is generated when all clipping processes are completed, yielding the remaining coordinates for all points.
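The clipping step at the core of the method can be sketched with the standard Cohen-Sutherland outcode formulation (a generic software version, not the paper's implementation):

```python
# Region outcodes: one bit per side of the clipping window.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def clip(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Cohen-Sutherland: return the clipped segment, or None if fully outside."""
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):              # both endpoints inside: trivially accept
            return (x0, y0, x1, y1)
        if c0 & c1:                    # shared outside zone: trivially reject
            return None
        c = c0 or c1                   # pick an endpoint outside the window
        if c & TOP:
            x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
        elif c & RIGHT:
            x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
        else:                          # LEFT
            x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
        if c == c0:
            x0, y0 = x, y
            c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        else:
            x1, y1 = x, y
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
```

In the paper's construction, the intersection point returned by the clip against a point's own vision border is what supplies the depth coordinate for that vertex.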

Medical Image Classification Using Different Machine Learning Algorithms

Sami H. Ismael; Shahab W. Kareem; Firas H. Almukhtar

AL-Rafidain Journal of Computer Sciences and Mathematics, 2020, Volume 14, Issue 1, Pages 135-147
DOI: 10.33899/csmj.2020.164682

The different types of white blood cells provide important data for diagnosing and identifying many diseases. Automating this task can save time and avoid errors in the identification process. In this paper, we explore whether the shape features of the nucleus are sufficient to classify white blood cells. Accordingly, an automatic system is implemented that can identify and analyze White Blood Cells (WBCs) in five categories (Basophil, Eosinophil, Lymphocyte, Monocyte, and Neutrophil). Four steps are required for such a system: the first step is the segmentation of the cell images, and the second step scans each segmented image to prepare its dataset. Extracting the shapes and textures from the scanned image is performed in the third step. Finally, different machine learning algorithms (K* classifier, Additive Regression, Bagging, Input Mapped Classifier, and Decision Table) are separately applied to the extracted shapes and textures to obtain the results. The results of each algorithm are compared to select the best one according to different criteria.

Applying Standard JPEG 2000 Part One on Image Compression

Maha Abdul Rahman Hasso; Sahlah Abed Ali

AL-Rafidain Journal of Computer Sciences and Mathematics, 2020, Volume 14, Issue 1, Pages 13-33
DOI: 10.33899/csmj.2020.164796

In this paper, an algorithm based on the JPEG2000 Part One standard for image compression is proposed. The proposed algorithm was implemented in the MATLAB 7.11 environment and applied to gray and color images of several types: natural, medical, graphics, and remote-sensing images. The peak signal-to-noise ratio (PSNR) was used to compare the results of the proposed algorithm with the biorthogonal Daubechies 5/3-tap and 9/7-tap filters. Another comparison was held against the results of the ModJPEG and Color-SPECK algorithms. The processing results proved the performance efficiency of the proposed algorithm.
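The reversible 5/3-tap filter named above is defined in JPEG2000 Part One by two integer lifting steps; a one-dimensional sketch (even-length signals, symmetric boundary extension):

```python
def fwd_53(x):
    """One level of the reversible JPEG2000 5/3 lifting transform.
    Predict: d[i] = x[2i+1] - floor((x[2i] + x[2i+2]) / 2)
    Update:  s[i] = x[2i] + floor((d[i-1] + d[i] + 2) / 4)
    Boundaries use symmetric extension."""
    n = len(x)
    d = [x[2*i + 1] - (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
         for i in range(n // 2)]
    s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(n // 2)]
    return s, d

def inv_53(s, d):
    """Inverse lifting: exact integer reconstruction (reversible)."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):                  # undo the update step first
        x[2*i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
    for i in range(len(d)):                  # then undo the predict step
        x[2*i + 1] = d[i] + (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
    return x
```

Because both steps use the same floored integer expressions, the inverse reproduces the input exactly, which is what makes the 5/3 filter the reversible path of the standard.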

Information Hiding Based on Chan-Vese Algorithm

Samia Sh. Lazar; Nadia M. Mohammed

AL-Rafidain Journal of Computer Sciences and Mathematics, 2011, Volume 8, Issue 2, Pages 99-110
DOI: 10.33899/csmj.2011.7876

The process of data transfer via the Internet has become easy as a result of great advances in networking technologies, and many people can now communicate with each other easily and quickly. Because the online environment is open and public, an unauthorized party can intercept information transmitted between any two parties and gain access to it. Hence there is an urgent need for steganography ("covered writing"), the science of hiding secret information in a digital cover such as an image so that ordinary and unauthorized observers cannot detect or perceive it. In this paper, a technique for hiding information in images is developed: first, the cover image (PNG, BMP) is segmented using the Chan-Vese algorithm, then the text is hidden in the segmented image depending on the clipped regions. The PSNR and BER standards are used to measure the efficiency of the technique, and its algorithm is implemented in Matlab.
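The abstract does not state the embedding operation itself; assuming the common LSB approach applied over the pixels selected by the segmentation, the hiding step might look like this (illustrative only):

```python
def embed_bits(pixels, bits):
    """Hide a bit stream in the least significant bits of the pixels chosen
    by the segmentation. LSB embedding is an assumption here; the paper
    only states that hiding depends on the segmented regions."""
    if len(bits) > len(pixels):
        raise ValueError("cover region too small for the message")
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]      # untouched remainder of the region

def extract_bits(pixels, n):
    """Recover the first n hidden bits from the stego pixels."""
    return [p & 1 for p in pixels[:n]]
```

Since only the lowest bit of each selected pixel changes, the distortion stays within one gray level per pixel, which keeps PSNR high and BER measurable against the original message.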

Comparison of Edge Detection Methods in Gray Images

Sobhi H. Hamdoun; Afzal A. Hassan

AL-Rafidain Journal of Computer Sciences and Mathematics, 2006, Volume 3, Issue 2, Pages 11-28
DOI: 10.33899/csmj.2006.164056

The methods of edge detection play an important role in many image processing applications, as edge detection is regarded as an important stage in image processing and in extracting certain information from the image.
Therefore, this subject has been the focus of many studies by many authors, and many new edge detection techniques have been suggested that search for discontinuities in the color intensity of the image, leading to the features of the image components.
Despite the presence of many edge detection methods that have proved their efficiency in certain fields and given good results in application, the performance of any one method differs from one application to another, so there is a need to evaluate the performance of each method to show its efficiency. The aim of this research is to evaluate the performance of edge detection by choosing five well-known methods (Canny, Laplacian of Gaussian, Prewitt, Sobel, Roberts), applying each method to grayscale images to find out its performance, and writing computer programs for each. In addition, the performance of the five methods is compared using Pratt's Figure of Merit, calculating the percentage increase in detected edges, the percentage decrease in edge points, and the correctness of the edge position in each method.
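Pratt's Figure of Merit used in the comparison has a standard definition; a minimal sketch (with the usual scaling constant alpha = 1/9 as an assumption):

```python
def pratt_fom(detected, ideal, alpha=1/9):
    """Pratt's Figure of Merit: (1 / max(N_I, N_D)) * sum over detected edge
    pixels of 1 / (1 + alpha * d^2), where d is the distance from a detected
    pixel to the nearest ideal edge pixel. Equals 1.0 only for a perfect match."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    total = sum(1.0 / (1.0 + alpha * min(dist2(p, q) for q in ideal))
                for p in detected)
    return total / max(len(ideal), len(detected))
```

The normalization by the larger of the two edge counts penalizes both missed edges and spurious detections, which is why the measure suits this kind of five-way comparison.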

Encryption Binary Images by Using Template Matching

Sundus Khaleel Ebraheem

AL-Rafidain Journal of Computer Sciences and Mathematics, 2006, Volume 3, Issue 2, Pages 43-69
DOI: 10.33899/csmj.2006.164058

Text encryption is a very important field in applications of data transmission through digital networks and the Internet, so it is necessary to encrypt text data to obtain more security during transmission.
In this paper, we present a method, Template Matching, to encrypt data represented as an image with the BMP extension using the Mono Digital Images method, with partial compression of the data by the RLE method, which increases the security of the method and reduces the file size.
The application results are efficient for printed or handwritten text in Arabic, English, or any other language, and for map or sketch images. The method gives good data encryption capability and is suitable for data transmission through the Internet.
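The RLE compression step mentioned above can be sketched generically (a plain run-length coder over a symbol sequence; the paper's exact packing into the BMP data is not specified):

```python
def rle_encode(symbols):
    """Run-length encode a sequence into (symbol, count) pairs; effective on
    binary images, where long runs of identical pixels are common."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Exact inverse of rle_encode."""
    return [s for s, count in runs for _ in range(count)]
```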

Fast Backpropagation Neural Network for VQ-Image Compression

Basil S. Mahmood; Omaima N. AL-Allaf

AL-Rafidain Journal of Computer Sciences and Mathematics, 2004, Volume 1, Issue 1, Pages 96-118
DOI: 10.33899/csmj.2004.164100

The problem inherent to any digital image is the large amount of bandwidth required for transmission or storage. This has driven the research area of image compression to develop algorithms that compress images to lower data rates with better quality.  Artificial neural networks are becoming very attractive in image processing where high computational performance and parallel architectures are required.
In this work, a three-layer backpropagation neural network (BPNN) is designed to compress images using the vector quantization (VQ) technique. The outputs of the hidden layer represent the codebook used in vector quantization, so this is a new method of generating the VQ codebook. A fast backpropagation algorithm (FBP) is built and tested on the designed BPNN, and is used to train it. Results show that, for the same compression ratio and signal-to-noise ratio as the ordinary backpropagation algorithm, FBP can speed up the neural system by a factor of more than 50. The system is used for both compression and decompression of any image. The efficiency of the designed BPNN comes from reducing the chance of errors occurring while the compressed image is transmitted through an analog channel (the BPNN can be used to enhance any noisy compressed image that was corrupted during such transmission). The simulation of the BPNN image compression system is performed using the Borland C++ 3.5 programming language. The compression system has been applied to well-known images such as Lena, Carena, and Car, and also handles BMP-format images.
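The vector quantization side of the system can be sketched independently of the network (a generic nearest-codeword lookup; in the paper the codebook itself comes from the BPNN hidden layer):

```python
def vq_encode(blocks, codebook):
    """Map each image block to the index of its nearest codeword
    (squared-distance nearest neighbour), as in vector quantization.
    Transmitting indices instead of blocks is what compresses the image."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: d2(b, codebook[i]))
            for b in blocks]

def vq_decode(indices, codebook):
    """Reconstruct the image blocks from indices and the shared codebook."""
    return [codebook[i] for i in indices]
```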

FID Fast Image Display for (.BMP & .PCX) Images

Rawaa P. Qasha; Ahmed S. Nori

AL-Rafidain Journal of Computer Sciences and Mathematics, 2004, Volume 1, Issue 1, Pages 66-87
DOI: 10.33899/csmj.2004.164098

Video display speed is a very important factor in modern software performance. The fastest display is achieved by accessing video RAM and programming the video card directly. In addition to speed, this method provides flexibility and high-performance video display operations; moreover, the 64K and 16.7M color modes can be used only through this method.
Fast Image Display (FID) software is developed to display two popular image types (BMP, PCX) using the direct VRAM access method with various SVGA modes differing in resolution and number of colors. Assembly instructions and the C++ language were used to write the software.