Mohammed Alsofey, Abdulghafor Al-Rozbayani (Author)
Abstract: This research combines the homotopy perturbation method with the Elzaki transform and its inverse to solve some nonlinear partial differential equations. Since the Elzaki transform alone cannot handle nonlinear terms, the homotopy perturbation method is applied to treat the nonlinear part of the problem. The combined method proved to be an effective and easy way to solve nonlinear partial differential equations of the Newell-Whitehead-Segel type, which belong to the class of homogeneous second-order partial differential equations. When compared with other well-known methods for the problems under consideration, the Elzaki transform combined with the homotopy perturbation method was shown to be a very powerful and successful integral-transform technique for solving some nonlinear equations.
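For reference, the Elzaki transform and the Newell-Whitehead-Segel equation take the following standard forms (the specific coefficients and initial conditions treated in the paper are not given in the abstract):

```latex
% Elzaki transform of f(t)
E[f(t)] = T(v) = v \int_{0}^{\infty} f(t)\, e^{-t/v}\, dt, \qquad v \in (-k_1, k_2),
% Newell-Whitehead-Segel equation, with k, a, b constants and q a positive integer
\frac{\partial u}{\partial t} = k\,\frac{\partial^{2} u}{\partial x^{2}} + a\,u - b\,u^{q}.
```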
Rand Alneamy, Nazar Shuker (Author)
Abstract: In this paper, we present the idea of strongly invo T-clean rings, which we define as rings in which every element a in R has the form a = t + v, where t is a tripotent and v is a unit of order two that commute with each other. Fundamental properties of such rings are given, and their connection with other related rings is obtained. We also consider strongly tri-nil clean rings, rings R in which every element a can be expressed as the sum of a tripotent and a nilpotent that commute, and we explore the Jacobson radical of strongly invo T-clean rings.
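In symbols, the decompositions described in the abstract read as follows (restating the definitions given above; the tri-nil clean case replaces the unit by a nilpotent):

```latex
% strongly invo T-clean: for every a \in R
a = t + v, \qquad t^{3} = t, \quad v^{2} = 1, \quad tv = vt.
% strongly tri-nil clean: for every a \in R
a = t + n, \qquad t^{3} = t, \quad n^{m} = 0 \ \text{for some } m \ge 1, \quad tn = nt.
```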
Shahad Mohammed, Nazar Hussein, Mohammed Alkahya (Author)
Abstract: The sine cosine algorithm (SCA) is a recently introduced population-based optimization technique used to solve optimization problems. This research proposes the LWSCA (Locally Weighted Sine Cosine Algorithm), a hybrid approach that enhances the performance of the original SCA and mitigates its limitations, which include limited solution accuracy, slow convergence, and difficulty in reaching the global optimum in complex, high-dimensional spaces. The fundamental idea underlying LWSCA is to combine the SCA algorithm with the locally weighted (LW) technique and a mutation scheme. The hybridization process has two stages: first, the algorithm is modified by altering its fundamental equations to ensure greater effectiveness and accuracy; second, the LW local approach is used to create a new dependent site, which increases randomness during the search process. This, in turn, raises the population variance of the proposed optimizer and ultimately enhances the overall effectiveness of the global search. The hybrid architecture of the proposed method is therefore expected to significantly increase its exploration and exploitation capabilities. The usefulness of the proposed algorithm is investigated by evaluating its performance on the IEEE CEC 2017 functions and contrasting it with a variety of other metaheuristic techniques. According to the experimental data gathered, LWSCA's convergence, exploration, and exploitation tendencies have all greatly improved. The results show that the proposed LWSCA method performs better than SCA and other competing algorithms on most functions.
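For context, the baseline position update of the original SCA that LWSCA builds on can be sketched as follows. This is the standard SCA from the literature, not the proposed hybrid (the locally weighted and mutation components are not shown), and the objective, bounds, and parameter values are illustrative.

```python
import numpy as np

def sca_minimize(objective, dim, bounds, pop_size=30, max_iter=500, a=2.0):
    """Minimal sketch of the original Sine Cosine Algorithm (SCA)."""
    lo, hi = bounds
    X = np.random.uniform(lo, hi, size=(pop_size, dim))   # candidate solutions
    fitness = np.apply_along_axis(objective, 1, X)
    best = X[np.argmin(fitness)].copy()                    # destination point P

    for t in range(max_iter):
        r1 = a - t * (a / max_iter)                        # linearly decreasing amplitude
        for i in range(pop_size):
            r2 = 2 * np.pi * np.random.rand(dim)
            r3 = 2 * np.random.rand(dim)
            r4 = np.random.rand(dim)
            step = np.where(
                r4 < 0.5,
                r1 * np.sin(r2) * np.abs(r3 * best - X[i]),   # sine branch
                r1 * np.cos(r2) * np.abs(r3 * best - X[i]),   # cosine branch
            )
            X[i] = np.clip(X[i] + step, lo, hi)
        fitness = np.apply_along_axis(objective, 1, X)
        if fitness.min() < objective(best):
            best = X[np.argmin(fitness)].copy()
    return best, objective(best)

# Usage example on the sphere function
best, value = sca_minimize(lambda x: np.sum(x**2), dim=10, bounds=(-100, 100))
```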
Tariq AL-Hadidi, Safwan Omar Hasoon (Author)
Abstract: Software applications have spread in our daily lives in an unprecedented manner, controlling some of the most sensitive and critical operations within institutions. Examples include automated systems such as traffic control, aviation, and self-driving cars, among many others. Identifying software defects in these systems poses a challenge for most software-producing companies. In order to develop high-quality and reliable software, companies have turned to defect prediction using machine learning, relying on historical project datasets. This study addresses software defect prediction using machine learning, specifically classification techniques. One of the classification techniques employed is eXtreme Gradient Boosting (XGBoost), a useful method for regression and classification analysis based on gradient-boosted decision trees. XGBoost incorporates several hyperparameters that can be fine-tuned to enhance the model's performance. The hyperparameter tuning method employed is grid search, validated using 10-fold cross-validation. The hyperparameters configured for XGBoost include n_estimators, max_depth, subsample, gamma, colsample_bylevel, min_child_weight, and learning_rate. The results of this study demonstrate that hyperparameter tuning can improve the performance of the XGBoost algorithm in classifying software defects with high precision.
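The tuning setup described above can be sketched as follows. The grid values, scoring metric, and dataset names (X, y) are illustrative assumptions, since the abstract does not state the actual search ranges.

```python
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Hypothetical defect dataset: X is a feature matrix of code metrics,
# y is a binary label (1 = defective module).  The grid below is illustrative.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
    "subsample": [0.8, 1.0],
    "gamma": [0, 1],
    "colsample_bylevel": [0.8, 1.0],
    "min_child_weight": [1, 5],
}

search = GridSearchCV(
    estimator=XGBClassifier(eval_metric="logloss"),
    param_grid=param_grid,
    scoring="f1",      # any defect-prediction metric could be used here
    cv=10,             # 10-fold cross-validation, as in the study
    n_jobs=-1,
)
# search.fit(X, y)
# print(search.best_params_, search.best_score_)
```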
Rusul Hasan, Inaam Salman Aboud, Rasha Majid Hassoo (Author)
Abstract: A Braille recognition system captures a Braille document image and converts its content into the equivalent natural-language characters. Cell transcription and Braille cell recognition are the system's two basic phases, which follow one another. A Braille recognition system is a technique for locating and recognizing a Braille document stored as an image, such as a jpeg, jpg, tiff, or gif file, and converting the text into a machine-readable format, such as a text file. The system translates an image's pixel representation into its character representation. Workers at schools and institutes for the visually impaired benefit from Braille recognition in a variety of ways. A Braille recognition system contains several stages, including image acquisition, image pre-processing, and character recognition. This review examines earlier studies on Braille cell recognition and transcription by other scholars, compares their results and detection techniques, and should be useful and illuminating for Braille recognition researchers, especially newcomers.
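As a concrete illustration of the cell-transcription phase, the sketch below maps detected dot patterns (standard six-dot numbering, dots 1-3 in the left column and 4-6 in the right) to characters. Only a few letters are shown, and the recognition stage is assumed to have already produced the dot sets.

```python
# Minimal sketch of the cell-transcription stage for a handful of letters.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def transcribe_cell(raised_dots):
    """raised_dots: iterable of dot numbers (1-6) detected in one Braille cell."""
    return BRAILLE_TO_CHAR.get(frozenset(raised_dots), "?")

print(transcribe_cell({1, 2}))   # -> "b"
```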
Mais Irreem Kamal, Laheeb Ibrahim (Author)
Abstract: Multilevel encryption is a system used to protect sensitive data and information with multiple overlapping encryption methods. It aims to increase the level of security by adding multiple layers of encryption, which makes the data more difficult to decrypt. In a multilevel encryption system, different encryption techniques are applied to the data in succession. For example, digital encryption algorithms can be used as initial layers to encrypt data, and the encrypted data is then encrypted again using other encryption techniques. This study reviews and analyzes research in the field of multilevel encryption and proposes a multilevel encryption system to enhance security and protection against cyber threats, making it difficult for attackers to break all of the overlapping layers of encryption.
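A minimal sketch of the layering idea follows, assuming the pycryptodome library. AES-GCM and ChaCha20-Poly1305 are used purely for illustration, since the abstract does not name the specific ciphers of the proposed system.

```python
from Crypto.Cipher import AES, ChaCha20_Poly1305
from Crypto.Random import get_random_bytes

def encrypt_two_layers(plaintext: bytes):
    """Multilevel encryption sketch: the output of one cipher feeds the next."""
    # Layer 1: AES-256-GCM
    k1 = get_random_bytes(32)
    c1 = AES.new(k1, AES.MODE_GCM)
    inner, tag1 = c1.encrypt_and_digest(plaintext)

    # Layer 2: ChaCha20-Poly1305 over the AES ciphertext (plus its nonce and tag)
    k2 = get_random_bytes(32)
    c2 = ChaCha20_Poly1305.new(key=k2)
    outer, tag2 = c2.encrypt_and_digest(c1.nonce + tag1 + inner)

    return (k1, k2), (c2.nonce, tag2, outer)
```

Decryption simply reverses the layers in the opposite order, so an attacker must break both layers to recover the plaintext.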
Abeer Alyoons, Sabren Altai (Author)
Abstract: Artificial intelligence and machine learning are used to detect various cyber-attacks, malicious activity, and unusual future threats in networks. They learn the patterns in which illegal or suspicious activities behave, recognize what is required, and identify proactive behaviours that indicate the presence of threats. In light of this, the two researchers provide an in-depth review of research on artificial intelligence in cyberspace and on the learning techniques used to train machines to enhance cyber security. The studies, their techniques, and their effectiveness and efficiency in achieving the desired goal are reviewed, detailed, sorted, and compared with one another, with particular attention to the data used to build each model and the diversity of the methods used to create it.
Esraa Alobaydi, Muna Jawhar (Author)
Abstract: Rapid digital development has led to a steady increase in the use of cloud storage as a primary means of saving and sharing data and files. This development has brought major challenges in the field of security and data protection, and concerns related to hacking and information theft are constantly escalating. Hence the importance of applying strong encryption techniques to protect data saved in cloud storage. For this reason, this study explores and presents an advanced encryption strategy that combines three of the most powerful known algorithms, Blowfish, Paillier, and AES, with the aim of increasing the level of security and privacy when uploading files to cloud storage. This research aims to provide an overview of how this process can be implemented using these nested hybrid algorithms and of the potential benefits of this multi-layered approach. These algorithms are diverse, effective, and strong in protection, which contributes significantly to increasing the level of security when storing sensitive data in the cloud. The obtained results showed that the hybrid algorithm makes it possible to combine the different advantages of the encryption algorithms and to achieve an ideal balance between strong protection and efficient performance for in-cloud data protection with minimal time consumption.
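One possible way to nest the three named algorithms is sketched below, assuming the pycryptodome and phe libraries. The exact order and roles of the ciphers in the paper's scheme are not specified in the abstract, so this layering (AES inside Blowfish, with Paillier wrapping the symmetric keys) is an assumption.

```python
from Crypto.Cipher import AES, Blowfish
from Crypto.Util.Padding import pad
from Crypto.Random import get_random_bytes
from phe import paillier

def hybrid_encrypt(data: bytes):
    """Illustrative hybrid layering of AES, Blowfish, and Paillier."""
    # Inner layer: AES-256 (EAX mode also authenticates the ciphertext)
    aes_key = get_random_bytes(32)
    aes = AES.new(aes_key, AES.MODE_EAX)
    inner, tag = aes.encrypt_and_digest(data)

    # Outer layer: Blowfish-CBC over the AES output
    bf_key = get_random_bytes(16)
    bf = Blowfish.new(bf_key, Blowfish.MODE_CBC)
    outer = bf.encrypt(pad(aes.nonce + tag + inner, Blowfish.block_size))

    # Key wrapping: Paillier encrypts the symmetric keys (as integers)
    pub, priv = paillier.generate_paillier_keypair()
    wrapped_keys = [pub.encrypt(int.from_bytes(k, "big")) for k in (aes_key, bf_key)]

    return (bf.iv, outer), wrapped_keys, priv
```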
Noor Gassan Abdullah, Shahd Abdulrhman Hasso (Author)
Abstract: One of the primary challenges in Internet data transmission lies in safeguarding data from unauthorized access by potential attackers. The goal of content-adaptive steganography is to conceal data inside the image's intricate texture. This research introduces an improved algorithm for concealing messages within color images. The developed method incorporates bit-plane slicing and the RSA algorithm as its foundation, aiming to achieve a heightened level of security for data hiding. The algorithm's uniqueness lies in its adaptability: the threshold is determined based on both the text and image characteristics. Subsequently, the public key is employed to encrypt the thresholds, while the private key is utilized to decrypt them. Performance criteria such as Mean Squared Error (MSE), Peak Signal to Noise Ratio (PSNR), and the Structural Similarity Index (SSIM) are used to assess the quality of the developed algorithm. The results indicate that when 90,000 bits are concealed in the image, the algorithm yields an acceptable PSNR value of 60.5749, an MSE of 0.0569, and an SSIM of 0.9996. The developed algorithm excels in data hiding, as evidenced by its favorable comparison with other studies.
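The quality metrics reported above can be computed as in the following sketch, assuming the cover and stego images are available as 8-bit NumPy arrays and that scikit-image supplies the SSIM routine (the paper's own implementation is not described).

```python
import numpy as np
from skimage.metrics import structural_similarity

def stego_quality(cover, stego):
    """MSE, PSNR, and SSIM between a cover image and its stego counterpart."""
    cover = cover.astype(np.float64)
    stego = stego.astype(np.float64)
    mse = np.mean((cover - stego) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    ssim = structural_similarity(
        cover, stego, data_range=255,
        channel_axis=-1 if cover.ndim == 3 else None,  # color vs. grayscale
    )
    return mse, psnr, ssim
```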
Walaa ALhadidiu, Shayma Mohi-Aldeen (Author)
Abstract: Technological development in all areas of life has played an effective role in the development of electronic systems. They have become one of the necessary modern systems used by government institutions to facilitate the work of employees and to reduce the effort and time required of both employees and citizens. This research proposes an electronic platform for private property matters between departments, so that all specialized procedures are transferred from the basic system to the electronic system. The system acts as a platform through which ownership transfer requests can be submitted easily and quickly via the Internet, without requiring in-person visits. It gives users the opportunity to follow the progress of their transactions on a regular and continuous basis, and it provides accurate information about the progress of their transactions and the expected time for completing them.
Abstract: This study demonstrates how to control fluid flow in inclined and horizontal cavities. The equations governing the problem are established, including the two-dimensional nonlinear partial differential equations of energy, motion, and continuity. We use the alternating direction implicit (ADI) method to numerically solve these equations. We found that the Rayleigh, Prandtl, Darcy, Eckert, and Reynolds numbers, in addition to the problem's inclination angle, influence the behavior of the motion and energy equations. We implemented the scheme in a MATLAB program and conclude that, with the ADI method, all equations can be solved to a stable state after a number of iterations and at the different inclination angles 0, 30, and 90 degrees, as shown in the paper's figures.
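The core idea of the ADI scheme mentioned above can be illustrated on a simple two-dimensional diffusion equation. This sketch (in Python rather than the paper's MATLAB) shows only the alternating implicit sweeps with tridiagonal solves, not the full coupled energy-momentum-continuity system of the study.

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_heat_step(u, alpha, dt, dx, dy):
    """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy)
    on a rectangular grid with fixed (Dirichlet) boundary values."""
    nx, ny = u.shape
    rx = alpha * dt / (2 * dx**2)
    ry = alpha * dt / (2 * dy**2)

    def tridiag_banded(n, r):
        # banded form of (I - r*D2) acting on the n-2 interior unknowns
        ab = np.zeros((3, n - 2))
        ab[0, 1:] = -r          # super-diagonal
        ab[1, :] = 1 + 2 * r    # main diagonal
        ab[2, :-1] = -r         # sub-diagonal
        return ab

    # Stage 1: implicit in x, explicit in y
    u_half = u.copy()
    ab_x = tridiag_banded(nx, rx)
    for j in range(1, ny - 1):
        rhs = u[1:-1, j] + ry * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        rhs[0] += rx * u[0, j]        # boundary contributions
        rhs[-1] += rx * u[-1, j]
        u_half[1:-1, j] = solve_banded((1, 1), ab_x, rhs)

    # Stage 2: implicit in y, explicit in x
    u_new = u_half.copy()
    ab_y = tridiag_banded(ny, ry)
    for i in range(1, nx - 1):
        rhs = u_half[i, 1:-1] + rx * (u_half[i - 1, 1:-1] - 2 * u_half[i, 1:-1]
                                      + u_half[i + 1, 1:-1])
        rhs[0] += ry * u_new[i, 0]
        rhs[-1] += ry * u_new[i, -1]
        u_new[i, 1:-1] = solve_banded((1, 1), ab_y, rhs)

    return u_new
```

Iterating this step until successive solutions stop changing is what "solving to a stable (steady) state after a number of iterations" refers to.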
Rana AL-jaheishi, Amir AL-siraj (Author)
Abstract: In this paper, new kinds of open sets in ideal topological spaces are introduced; they are called ic-I-open, icc-I-open, weakly ic-I-open, and weakly icc-I-open sets. Some properties of and relations between these new classes are studied with examples. The concept of continuity in ideal topological spaces is also presented for these new classes, and theorems that provide equivalence relations between them are proved. Also, for an ideal topological space (Ɲ, Ʈ, I) we show that every open set is ic-I-open, icc-I-open, weakly ic-I-open, and weakly icc-I-open. Finally, assume Z ⊂ Ɲ in an ideal topological space (Ɲ, Ʈ, I). Then 1) if Z is semi-I-open, then Zᶜ is ic-I-open; 2) if Z is open and ic-I-closed, then Z is semi-I-open.
Abstract: In this study, we generalize two known classes of open sets, semi-open and h-open sets, to obtain a new type called semi-h-open sets in topological spaces. We present the relationship of several well-known open sets to this new class, and we also study semi-h-open continuity in topological spaces.
Abstract: Accelerated life testing is a fundamental practice in reliability engineering, since evaluating component or device performance over extended lifetimes is impractical during design. This study delves into the application of the Weibull distribution to model lifetime data, showcasing its versatility in real-world scenarios. The evaluation includes critical metrics such as Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the coefficient of determination, and the standard error for distribution comparison. Using maximum likelihood estimation (MLE) for parameter estimation, a simulation study is conducted with varying sample sizes, and the R programming language is employed for in-depth analysis. Real-data analysis involves fitting the Weibull distribution using goodness-of-fit criteria. Maximum likelihood estimates (MLEs) are obtained, and the likelihood ratio test demonstrates the Weibull model's superior alignment with the data. The study also notes the simplicity of producing quick fit plots for analysis using R software. The presented approach provides a comprehensive understanding of reliability characteristics, combining theoretical insights with practical applications and numerical analyses. The estimated parameters (β = 0.973725, η = 14167.5) and statistical measures (Kolmogorov-Smirnov, AIC, BIC, Anderson-Darling, Cramér-von Mises) underscore the thoroughness of the evaluation process. The likelihood ratio test further substantiates the fitted Weibull distribution's closer alignment with the input data compared to the standard two-parameter Weibull distribution. These findings offer a significant methodology for accelerated life testing and model selection, providing essential practical insights into reliability engineering.
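The parameter estimation and model-comparison steps described above can be reproduced in outline as follows. The study uses R, so this Python/SciPy sketch is only an illustration of the same MLE and information-criterion computations, with no censoring assumed.

```python
import numpy as np
from scipy import stats

def fit_weibull(lifetimes):
    """Two-parameter Weibull fit by maximum likelihood, with AIC/BIC and a KS test.
    lifetimes: 1-D array of observed times to failure."""
    # Fix the location at 0 so only shape (beta) and scale (eta) are estimated
    shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)
    loglik = np.sum(stats.weibull_min.logpdf(lifetimes, shape, loc, scale))
    k = 2                                   # number of estimated parameters
    n = len(lifetimes)
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    ks = stats.kstest(lifetimes, "weibull_min", args=(shape, loc, scale))
    return {"beta": shape, "eta": scale, "AIC": aic, "BIC": bic, "KS": ks}
```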
Alaa AbdulRaheem, Shahd Abdulrhman Hasso (Author)
Abstract: Creating and testing a biometric key is a critical process used for security and identity verification. When biometric traits such as fingerprints, facial features, iris patterns, earprints, and voice patterns are used, a unique key is created and linked to the individual's biometric identity. These biometrics provide inherent uniqueness, resulting in a higher level of security compared to traditional methods. In addition, biometric authentication eliminates the need for users to memorize complex passwords or carry physical tokens, thus enhancing convenience and user experience. Iris recognition systems have received significant attention in biometrics for their ability to provide robust criteria for identifying individuals, thanks to the rich texture of the iris. In this research, the key generation process converts a biometric (the iris) into a digital representation (a set of binary numbers derived from the two irises) that can be used in the encryption process. This is done by using digital image processing algorithms to extract unique features from the two irises. After the key is generated, it is tested using randomness metrics. If the key meets the criteria, it is accepted as random; otherwise, the key is generated again.
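The abstract does not name the specific randomness metrics, so the sketch below shows one standard check, the NIST SP 800-22 frequency (monobit) test, applied to a candidate key of the kind described above.

```python
import math
import secrets

def monobit_test(key_bits, alpha=0.01):
    """NIST frequency (monobit) test: checks that 0s and 1s are balanced.
    key_bits: string or sequence of 0/1 values.  Returns (passed, p_value)."""
    bits = [int(b) for b in key_bits]
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    p_value = math.erfc(s / math.sqrt(2))
    return p_value >= alpha, p_value   # False -> regenerate the key

# Usage example with a randomly generated 256-bit candidate key
candidate = bin(secrets.randbits(256))[2:].zfill(256)
ok, p = monobit_test(candidate)
```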
Mohammed Al-Neima, Raida Mahmood (Author)
Abstract: In this paper, three types of rings were reviewed: invo-clean, invo-t-clean, and invo-k-clean rings. Every invo-clean ring is invo-t-clean and invo-k-clean, and an invo-t-clean ring is invo-k-clean when k = 3. Various examples representing elements of the three kinds of rings were presented, as were rings Zn that satisfy the three types; in particular, for invo-k-clean rings, examples were given at different values of k. There are properties that all of these rings share and others that differ among them, the most important of which is the characteristic: for the invo-clean ring it is 24 and for the invo-t-clean ring it is 120, while for the invo-k-clean ring it is difficult to prove.
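For orientation, the decompositions behind these names can be written as follows. This is inferred from the abstract's remark that k = 3 recovers the tripotent case; the paper's precise definitions should be consulted.

```latex
% invo-clean:    a = v + e,  v^2 = 1,  e^2 = e  (idempotent)
% invo-t-clean:  a = v + t,  v^2 = 1,  t^3 = t  (tripotent)
% invo-k-clean:  a = v + p,  v^2 = 1,  p^k = p  (k-potent)
a = v + p, \qquad v^{2} = 1, \quad p^{k} = p
\quad (k = 2 \text{ gives invo-clean},\; k = 3 \text{ gives invo-t-clean}).
```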
Omar Abed Najm, Ahmed Nori (Author)
Abstract: Steganography is one of the important topics in the field of data security, owing to the exponential progress of computing and the growing need for secret communication. In image steganography, secret communication is accomplished by inserting a message into a cover image (the carrier into which the message is embedded) to create a stego-image. More precisely, steganography hides secret data inside non-secret data, such as image, text, voice, or multimedia content, in a way that cannot easily be removed, and serves copyright protection, military communication, authentication, and many other purposes. Its job is therefore to hide data in the bits that represent identical color pixels repeated in a row of an image file. This research aims to critically analyze various steganographic techniques and covers the steganography literature. It also helps interested developers understand the limitations of the most popular steganographic techniques.
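As a concrete example of the pixel-bit hiding described above, here is a minimal sketch of classic least-significant-bit (LSB) embedding, one of the techniques such surveys cover. The cover image is assumed to be an 8-bit NumPy array large enough to hold the message.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, message_bits):
    """Embed message_bits (iterable of 0/1) into the lowest bit of each pixel."""
    stego = cover.copy().ravel()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit     # overwrite the least significant bit
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int):
    """Recover the first n_bits hidden by lsb_embed."""
    return [int(p) & 1 for p in stego.ravel()[:n_bits]]
```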