Name : Hussein Ismail Al-Bahadili

Academic Rank: Associate Professor

Administrative Position : Faculty Academic Member

Office 7311       Ext No 7311

Email : Hbahadili@uop.edu.jo

Specialization: Parallel Computers

Graduate Of: University of London

Qualification

    Qualification    University               Country           Year
    Bachelor         University of Baghdad    Iraq              1986
    Master's         University of London     United Kingdom    1988
    Ph.D.            University of London     United Kingdom    1991



  • Book





      Hussein Al-Bahadili, "Simulation in Computer Network Design and Modeling: Use and Analysis", IGI-Global, Amman, Jordan, 01/01/2012. URL: http://www.igi-global.com/book/simulation-computer-network-design-modeling/58282


  • Journal Paper





      Hussein Al-Bahadili, "Network Security Using Hybrid Port Knocking", International Journal of Computer Science and Network Security (IJCSNS), Vol.10, No.8, 10/11/2010. Abstract:
      The main objective of this work is to develop and evaluate the performance of a new port knocking (PK) technique that can avert all types of port attacks and meet all network security requirements. The new technique combines three well-known concepts: PK, steganography, and mutual authentication; it is therefore referred to as the hybrid PK (HPK) technique. It can be used for host authentication to make local services invisible to port scanning, provide an extra layer of security that attackers must penetrate before accessing or breaking anything important, act as a stop-gap security measure for services with known unpatched vulnerabilities, and provide a wrapper for legacy or proprietary services with insufficient integrated security. The performance of the proposed technique was evaluated by measuring the average authentication time, which was also compared with that of a number of currently used port authentication techniques.
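The basic PK idea underlying the paper can be sketched as follows. This is an illustrative toy, not the paper's HPK technique (it omits the steganography and mutual-authentication layers); the sequence, class name, and client identifiers are hypothetical.

```python
class KnockTracker:
    """Toy port-knocking tracker: a client is authenticated only after
    hitting a secret sequence of ports in order (no real sockets involved)."""

    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = {}  # client id -> index of next expected port

    def knock(self, client, port):
        """Record one knock; return True once the full sequence is completed."""
        i = self.progress.get(client, 0)
        if port == self.sequence[i]:
            i += 1
        else:
            # wrong port: restart, counting this knock if it begins the sequence
            i = 1 if port == self.sequence[0] else 0
        if i == len(self.sequence):
            self.progress[client] = 0  # reset after success
            return True
        self.progress[client] = i
        return False
```

A firewall daemon built on this idea would keep the guarded port closed until `knock` returns True for a client.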




      Hussein Al-Bahadili, "Performance Evaluation of an OMPR Algorithm for Route Discovery in Noisy MANETs", International Journal of Computer Networks and Communications (IJCNC), Vol.2, No.1, 01/01/2010. Abstract:
      In this paper, we present and evaluate the performance of an optimal multipoint relaying (OMPR) algorithm for route discovery in noisy MANETs. The main feature of this new algorithm is that it computes all possible sets of multipoint relays (MPRs) and then selects the set with the minimum number of nodes. The algorithm demonstrated excellent performance when compared with other route discovery algorithms, achieving the highest cost-effective reachability.
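The core step described above — enumerating candidate MPR sets and keeping the smallest — can be sketched by brute force. This is a minimal illustration of the selection criterion, not the paper's OMPR algorithm; the function name and data layout are assumptions (each one-hop neighbour must appear as a key in `two_hop_of`).

```python
from itertools import combinations

def minimal_mpr_set(one_hop, two_hop_of):
    """Smallest multipoint relay set: the fewest one-hop neighbours whose
    own neighbours cover every two-hop neighbour of the source.
    one_hop: set of neighbour ids; two_hop_of: neighbour id -> its neighbours."""
    targets = set().union(*two_hop_of.values()) - one_hop
    for size in range(len(one_hop) + 1):  # smallest sets first
        for subset in combinations(sorted(one_hop), size):
            covered = set().union(*(two_hop_of[n] for n in subset)) if subset else set()
            if targets <= covered:
                return set(subset)
    return set(one_hop)  # fallback: relay through everyone
```

Enumerating all subsets is exponential in the neighbourhood size, which is why practical MPR selection (e.g. in OLSR) uses greedy heuristics instead.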




      Hussein Al-Bahadili, "Analyzing the Performance of Probabilistic Algorithm in Noisy MANETs", International Journal of Wireless & Mobile Networks (IJWMN), Vol.2, No.3, 10/17/2010. Abstract:
      In this paper, we propose a simulation model that can be used to evaluate the performance of probabilistic broadcast for flooding in a noisy environment. The proposed model is implemented on a MANET simulator, namely, MANSim. The effect of noise on the performance of the probabilistic algorithm was investigated in four scenarios. The main conclusions are: the performance of the probabilistic algorithm suffers in the presence of noise; however, the degradation is smaller in high-density networks, or when nodes have a high retransmission probability or a large radio transmission range. Node speed has little or no effect on performance.




      Ghassan F. Issa et al., "A Framework for Building an Interactive Satellite TV Based M-Learning Environment", International Journal of Interactive Mobile Technologies (iJIM), Vol.4, No.3, 10/17/2010. Abstract:
      This paper presents a description of an interactive satellite TV based mobile learning (STV-ML) framework, in which a satellite TV station is used as an integral part of a comprehensive interactive mobile learning (M-Learning) environment. The proposed framework assists in building a reliable, efficient, and cost-effective environment to meet the growing demands of M-Learning all over the world, especially in developing countries. It utilizes recent advances in satellite reception, broadcasting technologies, and interactive TV to facilitate the delivery of large volumes of learning material. The paper also proposes a simple and flexible three-phase implementation methodology comprising construction of an earth station, expansion of broadcasting channels, and development of true user interactivity. The proposed framework and implementation methodology ensure the construction of a reliable and cost-effective M-Learning system that can be used efficiently and effectively by a wide range of users and educational institutions to deliver ubiquitous learning.




      Hussein Al-Bahadili, Shakir M. Hussain, Ghassan Issa, and Khalid Al-Zayyat, "Performance Evaluation of the TSS Node Authentication Scheme in Noisy MANETs", International Journal of Network Security (IJNS), Vol.12, No.1, pp. 88-96, 01/01/2011. URL: http://ijns.femto.com.tw/




      Hussein Al-Bahadili, "A Bit-Level Text Compression Scheme Based on the HCDC Algorithm", International Journal of Computers and Applications, Vol.32, No.3, Actapress, 10/17/2010. Abstract:
      In this paper we propose and evaluate the performance of a new bit-level text compression scheme based on the HCDC algorithm. The scheme consists of six steps, some of which are applied repetitively to achieve a higher compression ratio. The repetition loops continue until inflation is detected, and the accumulated compression ratio is the product of the compression ratios of the individual loops; we therefore refer to the new scheme as HCDC(k), where k is the number of repetition loops. To enhance the compression power of the HCDC(k) scheme, a new adaptive encoding format was proposed in which a character is encoded to binary according to its probability. This encoding reduces the entropy of the binary sequence and so yields a higher compression ratio. A number of text files from standard corpora were compressed, and the results demonstrate that the proposed scheme has higher compression power than many widely used compression algorithms and competitive performance with respect to state-of-the-art programs.




      Hussein Al-Bahadili, "A Bit-Level Text Compression Scheme Based on ACW Algorithm", The International Journal of Automation and Computing (IJAC), Vol.7, No.1, 10/17/2010. Abstract:
      This paper presents a description and performance evaluation of a new bit-level, lossless, adaptive, and symmetric data compression scheme that is based on the adaptive character wordlength (ACW(n)) algorithm. The proposed scheme enhances the compression ratio of the ACW(n) algorithm by dividing the binary sequence into a number of subsequences (s), each of them satisfying the condition that the number of decimal values (d) of the n-bit length characters is equal to or less than 256. Therefore, the new scheme is referred to as ACW(n, s), where n is the adaptive character wordlength and s is the number of subsequences. The new scheme was used to compress a number of text files from standard corpora. The obtained results demonstrate that the ACW(n,s) scheme achieves higher compression ratio than many widely used compression algorithms and it achieves a competitive performance compared to state-of-the-art compression tools.




      Ghassan F. Issa et al., "Economic Efficiency Analysis for Information Technology in Developing Countries", Journal of Computer Science, Vol.5, No.10, Science Publications, USA, 10/17/2010. Abstract:
      Problem statement: The introduction of Information Technology (IT) to government institutions in developing countries bears a great deal of risk of failure. The lack of qualified personnel, lack of financial support, and lack of planning and proper justification are just a few of the causes of project failure. The study presented here focused on the justification of IT projects through the application of Cost Benefit Analysis (CBA) as part of a comprehensive Economic Efficiency Analysis (EEA) of IT projects, thus providing management with a decision-making tool which highlights existing and future problems and reduces the risk of failure. Approach: CBA based on EEA was performed on selected IT projects from ministries and key institutions in the government of Jordan, using a well-established approach employed by the Federal Government of Germany (the KBSt approach). The approach was then modified and refined to suit the needs of developing countries, so that it captured all relevant elements of cost and benefit, both quantitatively and qualitatively, and included a set of guidelines for the data collection strategy. Results: When IT projects were evaluated using CBA, most cases yielded a negative Net Present Value (NPV), even though some cases showed a reduction in operating cost starting from the third year of project life. However, when CBA was applied as part of a comprehensive EEA by introducing qualitative aspects and urgency criteria, proper justification of new projects became feasible. Conclusion: The modified EEA represented a systematic approach well suited to the government of Jordan as a developing country. It was capable of dealing with the justification issue, the evaluation of existing systems, and the urgency of replacing legacy systems. The study explored many of the challenges and inherited problems in the public sectors of developing countries which cannot simply be resolved by the introduction of IT projects, but rather require more comprehensive solutions.
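The NPV criterion used in the evaluation above is the standard discounted cash-flow formula; a minimal sketch (the function name and example figures are illustrative, not the study's data):

```python
def npv(rate, cashflows):
    """Net Present Value of a cash-flow series.
    cashflows[0] is the initial (usually negative) investment at t = 0;
    later entries are discounted by (1 + rate) ** t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
```

A project is justified on pure CBA grounds when its NPV is positive; the paper's point is that IT projects failing this test may still be justified once qualitative benefits and urgency are folded into the wider EEA.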




      Khalid Kaabneh, Azmi Halasa, and Hussein Al-Bahadili, "An Effective Location-Based Power Conservation Scheme for MANETs", American Journal of Applied Sciences (AJAS), Vol.6, No.9, pp. 1708-1713, Science Publications, USA, 01/01/2009. URL: http://www.scipub.org/scipub/back_issue.php?j_id=ajas




      Hussein Al-Bahadili, "Enhancing the Performance of Adjusted Probabilistic Broadcast in MANETs", The Mediterranean Journal of Computers and Networks, Vol.6, No.4, SoftMotor Ltd, Sheffield, UK, 10/30/2010. Abstract:
      Probabilistic broadcast has been widely used as a flooding optimization mechanism to alleviate the broadcast storm problem during route discovery and other services in mobile ad hoc networks (MANETs). In current dynamic probabilistic algorithms, the retransmission probability of an intermediate node is expressed as a function of its first-hop neighbors. Usually, two neighborhood densities are identified (low and high), and each is assigned a constant probability regardless of the actual number of neighbors. Such a model makes it hard to adjust the probability efficiently to ensure optimal network performance. In this paper, in order to enhance the performance of probabilistic algorithms, we develop a new probability-adjusting model in which the neighborhood densities are divided into three regions (low, medium, and high). The performance of the new model was evaluated and compared with pure flooding and other probabilistic algorithms. The model enhances the performance of probabilistic broadcast by reducing the number of transmissions while keeping almost the same network reachability.
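The three-region idea can be sketched as a step function of the first-hop neighbor count. The thresholds and probabilities below are illustrative placeholders, not the paper's calibrated values:

```python
def retransmission_probability(k, low=8, high=16,
                               p_low=0.9, p_mid=0.7, p_high=0.5):
    """Three-region probability adjustment: the fewer first-hop
    neighbours (k) a node has, the higher its retransmission probability.
    Thresholds and probabilities here are illustrative assumptions."""
    if k <= low:        # sparse neighborhood: retransmit almost always
        return p_low
    if k <= high:       # medium density
        return p_mid
    return p_high       # dense neighborhood: many others will rebroadcast
```

A two-region scheme, by contrast, would collapse `p_mid` into one of the other constants, which is exactly the coarseness the paper aims to remove.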




      Hussein Al-Bahadili, "A Novel Dynamic Noise-Dependent Probabilistic Algorithm for Route Discovery in MANETs", The International Journal of Business Data Communication and Networking (IJBDCN), Vol.7, No.1, USA, 01/01/2011. Abstract:
      In mobile ad hoc networks (MANETs), broadcasting is widely used in route discovery and other network services. The most widely used broadcasting algorithm is simple flooding, which generates a high number of redundant packet retransmissions, causing contention and collisions. Proper use of a dynamic probabilistic algorithm significantly reduces the number of retransmissions, which reduces the chance of contention and collisions. In current dynamic probabilistic algorithms, the retransmission probability (pt) is formulated as a linear or non-linear function of a single variable, the number of first-hop neighbors (k). However, such algorithms suffer in the presence of noise due to increased packet loss. In this paper, the authors propose a new dynamic probabilistic algorithm in which pt is determined locally by the retransmitting nodes, considering both k and the noise level. This algorithm is referred to as the dynamic noise-dependent probabilistic (DNDP) algorithm. The performance of the DNDP algorithm is evaluated through simulations using the MANET simulator (MANSim). The simulation results show that the DNDP algorithm provides higher network reachability than the dynamic probabilistic algorithm, at a reasonable increase in the number of retransmissions, over a wide range of noise levels. The effects of node density and node speed on the performance of the DNDP algorithm are also investigated.
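The shape of a two-variable pt can be sketched as below. The functional form here is entirely hypothetical — the paper's actual DNDP model is not reproduced in this abstract — but it captures the stated behavior: pt falls with neighbor count k and rises as the probability of reception pc drops:

```python
def dndp_probability(k, pc, p_min=0.4, p_max=1.0):
    """Illustrative (NOT the paper's) noise-dependent retransmission
    probability: decreases with first-hop neighbour count k, and is
    boosted toward p_max as pc drops (more noise -> retransmit more)."""
    density_term = min(p_max, p_min + (p_max - p_min) / k)
    noise_boost = (1.0 - pc) * (p_max - density_term)
    return density_term + noise_boost
```

At pc = 1 (noiseless) this reduces to a plain density-based probability; as pc falls toward 0 it approaches p_max, compensating for lost packets.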




      Hussein Al-Bahadili, "A Distributed Dynamic Channel Allocation Scheme in Cellular Communication Networks", Journal of Information Technology Research, Vol.2, No.1, IGI-Global, USA, 01/01/2009. Abstract:
      This article presents a description and performance evaluation of an efficient distributed dynamic channel allocation (DDCA) scheme, which can be used for channel allocation in cellular communication networks (CCNs), such as the global system for mobile communication (GSM). The scheme utilizes a well-known distributed artificial intelligence (DAI) algorithm, namely, the asynchronous weak-commitment (AWC) algorithm, in which a complete solution is established through extensive communication among a group of neighbouring collaborative cells forming a pattern, where each cell in the pattern uses a unique set of channels. To minimize communication overhead among cells, a token-based mechanism was introduced. The scheme achieved excellent average allocation efficiencies of over 85% in a number of simulations.




      Hussein Al-Bahadili, "A Compressed Index-Query Web Search Engine Model", International Journal of Computer Information Systems (IJCIS), Vol.1, No.4, Silicon Valley Publishers, 11/01/2010. Abstract:
      In this paper, we propose a new web search engine model based on index-query bit-level compression. The model incorporates two bit-level compression layers, both implemented at the back-end processor (server) side: one layer resides after the indexer, acting as a second compression layer to generate a doubly compressed index, and the second layer is located after the query parser for query compression, enabling bit-level compressed index-query search. This reduces the size of the index file as well as disk I/O overheads, and consequently yields a higher retrieval rate and better performance. The data compression scheme used in this model is the adaptive character wordlength (ACW(n,s)) scheme, which is an asymmetric, lossless, bit-level scheme that permits compressed index-query search. Results investigating the performance of the ACW(n,s) scheme are presented and discussed.




      Hussein Al-Bahadili, "Investigating the Effect of Noise-Level on the Performance of Probabilistic Broadcast in Noisy MANETs", The Arab Journal of Statistical Sciences, Vol.1, No.3, Arab Institute for Training and Research in Statistics, Amman, Jordan, 12/01/2010. Abstract:
      Probabilistic broadcast has been widely used as a flooding optimization mechanism to alleviate the broadcast storm problem (BSP) in mobile ad hoc networks (MANETs). Many research studies have been carried out to develop and evaluate the performance of this mechanism in an error-free (noiseless) environment. In reality, wireless communication channels in MANETs are error-prone and suffer from high packet loss due to the presence of noise. In this paper, we propose a simulation model that can be used to evaluate the performance of probabilistic broadcast for flooding in a noisy environment. In the proposed model, the noise level is represented by a generic parameter, the probability of reception (pc) (0 ≤ pc ≤ 1), where pc = 1 for a noiseless channel and pc < 1 for a noisy one. The performance of the algorithm was investigated and analyzed through a number of simulations using the MANET simulator (MANSim).




      Hussein Al-Bahadili, "Enhancing the Performance of the DNDP Algorithm", International Journal of Wireless & Mobile Networks, Vol.3, No.2, AIRCCSE, 04/18/2011. Abstract:
      A novel dynamic noise-dependent probabilistic (DNDP) route discovery algorithm was recently developed to enhance the performance of the dynamic probabilistic algorithm in noisy mobile ad hoc networks (MANETs). In this algorithm, the node retransmission probability (pt) is calculated as a function of two independent variables: the number of first-hop neighbors (k) and the probability of reception (pc). The model also involves another independent variable, namely, the maximum retransmission probability that can be assigned to the transmitting node (pt,pcmin), which is assumed to be a fixed value. In this paper, we propose a new mathematical model for calculating pt in which pt,pcmin is calculated as a function of k. The performance of the DNDP algorithm using fixed and k-dependent pt,pcmin is evaluated through simulations. The simulation results showed that the new model enhances the performance of the DNDP algorithm, as it significantly reduces the number of retransmissions at an insignificant reduction in network reachability.




      Shakir M. Hussain et al., "Investigating the Effect of Noise-Level on the Performance of Probabilistic Broadcast in Noisy MANETs", The International Journal of Computer Networks and Communications (IJCNC), Vol.3, No.4, AIRCC, 08/03/2011. Abstract:
      Wireless communication channels in mobile ad hoc networks (MANETs) suffer from high packet loss due to the presence of noise. This paper presents a detailed description of a simulation model that can be used to evaluate the performance of probabilistic broadcast for flooding in a noisy environment. In this model, the noise level is represented by a generic parameter, the probability of reception (pc) (0 ≤ pc ≤ 1), where pc = 1 for a noiseless channel and pc < 1 for a noisy one. The effect of noise is determined randomly by generating a random number r (0 ≤ r < 1); if r ≤ pc, the packet is successfully delivered to the receiving node; otherwise, delivery fails. The proposed model is implemented on a MANET simulator, namely, MANSim. To investigate the effect of noise on the performance of probabilistic broadcast in noisy MANETs, four scenarios were simulated. The main conclusions are: the performance of probabilistic broadcast decreases with decreasing pc; the percentage relative change in performance decreases with increasing node retransmission probability, number of nodes, and node radio transmission range; and node speed has an insignificant effect on performance.
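The Bernoulli noise model described above is direct to sketch; the helper names and the Monte-Carlo estimator are illustrative additions around the paper's stated rule (draw a random number, deliver iff it does not exceed pc):

```python
import random

def packet_delivered(pc, rng=random.random):
    """Noise model from the paper: draw r in [0, 1); the packet is
    delivered iff r <= pc (pc = 1 models a noiseless channel)."""
    return rng() <= pc

def estimate_delivery_rate(pc, trials=10_000, seed=1):
    """Monte-Carlo check that the empirical delivery rate approaches pc."""
    r = random.Random(seed)
    return sum(packet_delivered(pc, r.random) for _ in range(trials)) / trials
```

In a broadcast simulation this test is applied once per transmitted packet per receiver, which is how decreasing pc translates into reduced reachability.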




      Hussein Al-Bahadili, "A Novel Compressed Index-Query Web Search Engine Model", The Research Bulletin of Jordan ACM, Vol.2, No.4, ACM, Amman, Jordan, 11/19/2011. Abstract:
      In this paper we present a description of a new Web search engine model, namely, the compressed index-query (CIQ) Web search engine model, which incorporates two bit-level compression layers implemented at the back-end processor (server) side: one layer resides after the indexer, acting as a second compression layer to generate a doubly compressed index (the index compressor), and the second layer resides after the query parser for query compression (the query compressor), enabling bit-level compressed index-query search. The data compression algorithm used in this model is the Hamming codes-based data compression (HCDC) algorithm, an asymmetric, lossless, bit-level algorithm that permits CIQ search. The components of the new model are implemented in a prototype CIQ test tool (CIQTT), which is used as a test bench to validate the accuracy and integrity of the retrieved data and to evaluate the performance of the new model. The test results demonstrate that the new CIQ model reduces disk space requirements and searching time by more than 24%, and attains 100% agreement with an uncompressed model.




      Hussein Al-Bahadili, "A Novel Data Compression Scheme Based on the Error Correcting Hamming Code", Journal of Computers & Mathematics with Applications, Vol.56, No.1, Elsevier, 11/19/2008. Abstract:
      This paper introduces a novel lossless binary data compression scheme based on the error correcting Hamming codes, namely the HCDC scheme. In this scheme, the binary sequence to be compressed is divided into blocks of length n bits. To utilize the Hamming codes, each block is treated as a Hamming codeword consisting of p parity bits and d data bits (n = d + p). Each block is then tested to determine whether it is a valid or a non-valid Hamming codeword. For a valid block, only the d data bits, preceded by a 1, are written to the compressed file; for a non-valid block, all n bits, preceded by a 0, are written. These additional 1 and 0 flag bits distinguish the valid and non-valid blocks during decompression. An analytical formula is derived for the compression ratio as a function of the block size and the fraction of valid blocks in the sequence. The performance of the HCDC scheme is analyzed, and the results are presented in tables and graphs. Finally, conclusions and recommendations for future work are given.
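The compression step described above can be sketched concretely for the (7,4) Hamming code (n = 7, d = 4, p = 3). The function names are assumptions, and this toy handles only whole blocks, but the valid/non-valid flagging follows the scheme as stated:

```python
def hamming7_syndrome(block):
    """Syndrome of a 7-bit block under the (7,4) Hamming code
    (parity bits at positions 1, 2, 4); syndrome 0 means a valid codeword."""
    s = 0
    for pos, bit in enumerate(block, start=1):
        if bit:
            s ^= pos
    return s

def hcdc_compress(bits):
    """HCDC-style pass over a bit list whose length is a multiple of 7:
    valid codeword -> flag 1 + the 4 data bits; otherwise -> flag 0 + all 7 bits."""
    assert len(bits) % 7 == 0, "sketch handles whole blocks only"
    out = []
    for i in range(0, len(bits), 7):
        block = bits[i:i + 7]
        if hamming7_syndrome(block) == 0:
            out.append(1)
            out.extend(block[p - 1] for p in (3, 5, 6, 7))  # data-bit positions
        else:
            out.append(0)
            out.extend(block)
    return out
```

A valid block costs 5 output bits instead of 7, a non-valid block costs 8 instead of 7, which is exactly why the achievable ratio depends on the fraction of valid blocks.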




      Hussein Al-Bahadili, "An Adaptive Character Wordlength Algorithm for Data Compression", Journal of Computers & Mathematics with Applications, Vol.55, No.6, Elsevier, 11/19/2008. Abstract:
      This paper presents a new and efficient data compression algorithm, namely, the adaptive character wordlength algorithm, which can be used as a complementary algorithm to statistical compression techniques. In such techniques, the characters in the source file are converted to binary codes, where the most common characters have the shortest codes and the least common the longest; the codes are generated from the estimated probability of each character within the file. The binary coded file is then compressed using an 8-bit character wordlength. In the new algorithm, an optimum character wordlength, b, is calculated, where b ≥ 8, so that the compression ratio is increased by a factor of b/8. To validate the algorithm, it is used as a complement to Huffman coding to compress a source file of 10 characters with different probabilities, randomly distributed within the file. The results obtained and the factors that affect the optimum value of b are discussed, and conclusions are presented.




      Hussein Al-Bahadili, "Analytical Modeling of a Multi-Queue Nodes Network Router", International Journal of Automation and Computing (IJAC), Vol.8, No.4, Springer, UK, 11/20/2011. Abstract:
      This paper presents the derivation of an analytical model for a multi-queue nodes network router, referred to as the mQN model. In this model, expressions were derived to calculate two performance metrics, namely, the queue node and system utilization factors. To demonstrate the flexibility and effectiveness of the mQN model in analyzing the performance of a multi-queue nodes network router, two scenarios were performed. These scenarios investigated the variation of the queue node and system utilization factors against queue node dropping probability for various system sizes and packet arrival routing probabilities. The scenarios demonstrated that the mQN analytical model is more flexible and effective than experimental tests and computer simulations in assessing the performance of a multi-queue nodes network router.




      Hussein Al-Bahadili, "Performance Evaluation of the LAR-1P Route Discovery Algorithm", International Journal of Computer Networks & Communications (IJCNC), Vol.3, No.6, Academy & Industry Research Collaboration Center (AIRCC), Victoria, Australia, 11/29/2011. Abstract:
      The location-aided routing scheme 1 (LAR-1) and probabilistic algorithms were combined into a new algorithm for route discovery in mobile ad hoc networks (MANETs) called LAR-1P [1]. Simulation results demonstrated that, on the network scale, for a uniform random node distribution and a specific simulation setup, the LAR-1P algorithm reduces the number of retransmissions compared to LAR-1 at the cost of an insignificant reduction in average network reachability. On the zone scale, the algorithm provides excellent performance in high-density zones, while in low-density zones it almost preserves the performance of LAR-1. This paper provides a detailed analysis of the performance of the LAR-1P algorithm through various simulations, where the actual numerical values for the number of retransmissions and reachability in high- and low-density zones are estimated to demonstrate the effectiveness and significance of the algorithm and how it outperforms LAR-1 in high-density zones. Furthermore, the effect of the total number of nodes on average network performance is also investigated.




      Ghassan Issa et al., "A Scalable Framework to Quantitatively Evaluate Success Factors of Mobile Learning Systems", International Journal of Mobile Learning and Organisation, Vol.5, No.3/4, Inderscience, 12/31/2011. Abstract:
      There has been an enormous increase in the use of mobile learning (m-learning) systems in many fields due to the tremendous advancement in information and communication technologies. Although many frameworks have been developed for identifying and categorising the components of m-learning systems, most have limitations and drawbacks, and none supports quantitative assessment of the success factors (global weights) of the system criteria. In this paper, a new scalable hierarchical framework is developed, which identifies and categorises all components that may affect the development and deployment of cost-effective m-learning. Furthermore, due to the hierarchical structure of the framework, any of the analytic hierarchy process techniques can be used to quantitatively estimate the success factors of the system criteria. To demonstrate the benefits and flexibility of the new framework, we develop an interactive software tool for computing the success factors of the different system criteria. The tool is referred to as SFacts, and it is used to compute success factors for different sets of preferences.




      Ghassan Issa et al., "Development and Performance Evaluation of a LAN-Based Edge-Detection Tool", International Journal on Soft Computing (IJSC), Vol.3, No.1, AIRCC Publisher, Victoria, Australia, 02/20/2012. Abstract:
      This paper presents a description and performance evaluation of an efficient and reliable edge-detection tool that utilizes the growing computational power of local area networks (LANs); it is therefore referred to as the LAN-based edge detection (LANED) tool. The processor-farm methodology is used to port the sequential edge-detection calculations to run efficiently on the LAN. In this methodology, each computer on the LAN executes the same program independently of the others, each operating on a different part of the total data; it requires no data communication other than that involved in forwarding input data and results between the LAN computers. LANED uses the Java parallel virtual machine (JPVM) data communication library to exchange data between computers. For equivalent calculations, the computation times on a single computer and on LANs of various sizes were estimated, and the resulting speedup and parallelization efficiency were computed. The results demonstrate that the parallelization efficiency varies between 87% and 60% as the number of computers on the LAN varies between 2 and 5, connected through a 10/100 Mbps Ethernet switch.
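The two metrics reported above are the classic parallel-computing definitions; a minimal sketch (the function name and the sample timings are illustrative, not the paper's measurements):

```python
def speedup_and_efficiency(t1, tp, p):
    """Speedup S = T1 / Tp (single-machine time over p-machine time)
    and parallelization efficiency E = S / p."""
    s = t1 / tp
    return s, s / p
```

For example, a job taking 100 s on one machine and 25 s on five yields a speedup of 4 and an efficiency of 80%; the paper's observed drop from 87% to 60% as machines are added reflects growing communication overhead relative to computation.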




      Farid Al-Zboun and Hussein Al-Bahadili, "Hamming Correction Code Based Compression for Speech Linear Prediction Reflection Coefficients", International Journal of Mobile & Adhoc Network, Vol.1, No.2, IFRSA, 08/01/2011. Abstract:
      In this paper, we exploit the Hamming Correction Code based Compressor (HCDC) to compress Linear Prediction Coefficients (LPC) in their reflection form. We started with a CELP system of order 12 with Discrete Cosine Transform (DCT) based residual excitation, using 40 coefficients at a transmission rate of 5.14 kbps. For each frame of the test signals we applied a multistage HCDC and tested the compression performance for parities from 2 to 7. Compression was achieved only at parity 4; the rate was reduced from 5.14 kbps to 3.7 kbps in the best case and to 4.2 kbps in the worst case. This rate reduction was achieved with no compromise in the original CELP signal quality, since the compression is lossless.




      Hussein Al-Bahadili, "Analyzing the Effect of Node Density on the Performance of the LAR-1P Algorithm", The International Journal of Information Technology and Web Engineering (IJITWE), Vol.7, No.2, IGI-Global, USA, 08/13/2012. Abstract:
      The location-aided routing scheme 1 (LAR-1) and probabilistic algorithms are combined into a new algorithm for route discovery in mobile ad hoc networks (MANETs) called LAR-1P. Simulation results demonstrated that the LAR-1P algorithm reduces the number of retransmissions compared to LAR-1 without sacrificing network reachability. Furthermore, on a sub-network (zone) scale, the algorithm provides excellent performance in high-density zones, while in low-density zones it preserves the performance of LAR-1. This paper provides a detailed analysis of the performance of the LAR-1P algorithm through various simulations, where the actual numerical values for the number of retransmissions and reachability in high- and low-density zones were computed to demonstrate the effectiveness and significance of the algorithm and how it provides better performance than LAR-1 in high-density zones. In addition, the effect of the total number of nodes on average network performance is also investigated.




      Hussein Al-Bahadili, "Modeling and Analysis of Cloud Collaborative Commerce", International Journal on Cloud Computing: Services and Architecture (IJCCSA), Vol.3, No.1, AIRCC, International, 02/26/2013. Abstract:
      Cloud computing has the potential to be particularly suitable for collaborative commerce (c-commerce) because it generally requires less extensive customization, development, integration, operation, and maintenance than other computing resources. However, upgrading c-commerce IT infrastructure to run on cloud computing faces a number of challenges, such as the lack of effective design and implementation models. This paper describes and evaluates the performance of a new c-commerce model that utilizes evolving cloud computing technologies; it is therefore referred to as the cloud collaborative commerce (cc-commerce) model. The model consists of six main components: client, provider, auditor, broker, security and privacy, and communications network. The new cc-commerce model is used to develop a simple and flexible Web-based test tool, namely, the cc-commerce test (3CT) tool. The performance of the new model is evaluated by measuring the response times for four different configurations using the 3CT tool. The results obtained demonstrate that the cc-commerce model performs faster than equivalent c-commerce models. URL: http://airccse.org/journal/ijccsa/current2013.html




      Hussein Al-Bahadili, " A New Route Discovery Algorithm in MANETs Combining Location-Aided Routing and Probabilistic Algorithms " , "The Mediterranean Journal of Computers and Networks ",Vol.9,No.2, SoftMotor Ltd, UK, 04/30/2013 Abstract:
      A variety of flooding optimization algorithms have been developed to alleviate the effects of the broadcast storm problem during route discovery in mobile ad hoc networks (MANETs), such as the location-aided routing scheme 1 (LAR-1) and probabilistic algorithms. In this paper, we propose a new route discovery algorithm that combines these two algorithms; it is therefore referred to as LAR-1P. In this new algorithm, when receiving a message, a node within the request zone rebroadcasts the message probabilistically with a dynamically adjusted retransmission probability (pt). LAR-1P combines the best of the two algorithms: in a low-density request zone, pt is 1 or close to 1, so that the algorithm acts as LAR-1, while in a high-density zone, pt is dynamically adjusted so that the algorithm acts probabilistically. The performance of the new algorithm is evaluated through simulations using the MANET simulator (MANSim). The simulation results demonstrate that LAR-1P provides better average performance than either of the two algorithms.
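      The density-dependent rebroadcast decision described above can be sketched in a few lines. This is a minimal sketch assuming an illustrative linear decay for pt; the paper's actual pt formula and parameter values are not given in the abstract, so `k_threshold` and `p_min` below are invented placeholders.

```python
import random

def retransmission_probability(k, k_threshold=8, p_min=0.4):
    """Illustrative pt(k): 1.0 in sparse request zones (acting as plain
    LAR-1), decaying with neighbor count k in dense zones.
    k_threshold and p_min are invented placeholder parameters."""
    if k <= k_threshold:
        return 1.0                          # low-density zone: act as LAR-1
    return max(p_min, k_threshold / k)      # high-density zone: probabilistic

def should_rebroadcast(k, rng=random.random):
    """A node inside the request zone rebroadcasts with probability pt(k)."""
    return rng() < retransmission_probability(k)

print(retransmission_probability(4))    # 1.0  (sparse: always rebroadcast)
print(retransmission_probability(20))   # 0.4  (dense: rebroadcast sometimes)
```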




      Hussein Al-Bahadili, " A Secure Block Permutation Image Steganography Algorithm " , "International Journal on Cryptography and Information Security (IJCIS)",Vol.3,No.3, AIRCCSE, , 09/30/2013 Abstract:
      Steganography is the art of hiding confidential information (a secret) within a media file (the cover media) to produce an amalgamated secret-cover media called the stego media, so that the secret cannot be recognized or recovered by unauthorized recipients. Many steganalysis techniques have been developed that enable recognizing the existence of a secret within a stego media and recovering it. Therefore, it is necessary to develop more secure steganography algorithms. This paper presents a detailed description of a new secure Block Permutation Image Steganography (BPIS) algorithm. The algorithm converts the secret message to a binary sequence, divides the binary sequence into blocks, permutes each block using a key-based randomly generated permutation, concatenates the permuted blocks to form a permuted binary sequence, and then utilizes the Least-Significant-Bit (LSB) approach to embed the permuted binary sequence into a BMP image file. The algorithm's performance is investigated through a number of experiments, and for each experiment the PSNR (Peak Signal-to-Noise Ratio) between the stego and cover images is calculated. The results show that the algorithm provides high image quality and invisibility, and, most importantly, higher security, as the secret cannot be recovered without knowing the permutation, which has a complexity of O(N!), where N is the length of the permutation.
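      The embedding pipeline described above (binary secret, block-wise key-based permutation, LSB embedding) can be sketched as follows; the permutation generator, block size, key handling, and pixel values here are simplified assumptions, not the paper's exact construction.

```python
import random

def permute_bits(bits, block_size, key):
    """Key-seeded block permutation (simplified: one permutation is
    generated from the key and reused for every block)."""
    rng = random.Random(key)
    perm = list(range(block_size))
    rng.shuffle(perm)
    out = []
    for i in range(0, len(bits), block_size):
        block = bits[i:i + block_size]
        out.extend(block[p] for p in perm if p < len(block))
    return out

def lsb_embed(pixels, bits):
    """Embed each payload bit into the least-significant bit of one
    pixel byte, leaving the upper seven bits untouched."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b
    return stego

secret = [1, 0, 1, 1, 0, 0, 1, 0]
payload = permute_bits(secret, block_size=4, key="shared-key")
cover = [100, 101, 102, 103, 104, 105, 106, 107]   # toy pixel bytes
stego = lsb_embed(cover, payload)
print(stego)  # differs from cover only in least-significant bits
```

      A receiver holding the same key regenerates the permutation, inverts it block by block, and reads the secret back out of the LSBs; without the key an attacker faces the O(N!) search the abstract mentions.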




      Hussein Al-Bahadili, " The Architecture and Analysis of a New Cloud Collaborative Commerce Model " , "International Journal of Cloud Applications and Computing",Vol.3,No.3, IGI-Global, USA, 11/14/2013 Abstract:
      Cloud computing IT infrastructure has the potential to be particularly suitable for collaborative commerce (c-commerce) applications, because it generally requires less effort and interference for development, customization, integration, operation, and maintenance than traditional IT infrastructures (e.g., on-premises and data centers). However, upgrading c-commerce applications running on traditional IT infrastructures to run efficiently on cloud computing infrastructure faces a number of challenges, mainly the lack of an effective and reliable architectural model. This paper presents a description of a new architectural model for developing cloud-computing-based c-commerce applications, which is denoted the cc-commerce model. The model is basically based on the standard cloud computing model, and it consists of six main components: client, provider, auditor, broker, security and privacy, and communications network. The new model is implemented in a simple and flexible Web-based test tool, namely, the cc-commerce test (3CT) tool, which is used to evaluate the performance of the model by measuring the response times for four different configurations. The analysis of the obtained results demonstrates that the cc-commerce model can provide better response time than equivalent c-commerce models. Download




      Ghassan Issa, Shakir M. Hussain, Hussein Al- Bahadili, " Competition-Based Learning: A Model for the Integration of Competitions with Project-Based Learning using Open Source LMS " , "International Journal of Information and Communication Technology Education (IJICTE)",Vol.10,No.1, , USA, 01/01/2014 Abstract:
      In an effort to enhance the learning process in higher education, a new model for Competition-Based Learning (CBL) is presented. The new model utilizes two well-known learning models, namely, Project-Based Learning (PBL) and competitions. The new model is also applied in a networked environment with emphasis on collective learning as well as collective outcomes. The model, which is referred to as CBL, provides educators with an alternative solution for overcoming many student deficiencies associated with traditional learning practices, such as lack of motivation, lack of self-esteem, insufficient practical and real-life experience, and inadequate teamwork practices. The CBL model makes a clear distinction between CBL on the one hand and PBL and competitions on the other. It avoids the disadvantages of competitions while gaining from the many benefits of PBL. The identifying features of CBL, its components, and its advantages are presented. An open-source Learning Management System (LMS), namely, Moodle, is used to implement a networked environment supporting CBL.




      Hadi Al-Saadi, Reyad Al-Sayed, M. Al-Sheikh Hasan, and Hussein Al-Bahadili, " Simulation of Maximum Power Point Tracking for Photovoltaic Systems " , "Journal of Energy and Power Engineering (JEPE)",Vol.8,No.4, David Publisher, USA, 03/01/2014 Abstract:
      PV (photovoltaic) solar panels exhibit non-linear current-voltage characteristics, and according to the MPT (maximum power transfer) theory, a panel can produce maximum power at only one particular OP (operating point), namely, when the source impedance matches the load impedance, a match which cannot be guaranteed spontaneously. Furthermore, the MPP (maximum power point) changes with temperature and light intensity variations. Therefore, different algorithms based on offline and online methods have been developed for MPPT (maximum power point tracking). Evaluating the performance of these algorithms for various PV systems operating under highly dynamic environments is essential to ensure reliable, efficient, cost-effective, and high-performance systems. One possible approach for system evaluation is to use computer simulation. This paper addresses the use of Matlab software as a simulation tool for evaluating the performance of PV solar systems and finding the MPP.
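      One of the standard online MPPT methods alluded to above is the perturb-and-observe hill-climber; the abstract does not name the specific algorithms simulated, so the following is a generic sketch with an invented toy power curve.

```python
def perturb_and_observe(pv_power, v0=30.0, dv=0.5, steps=50):
    """Generic perturb-and-observe MPPT: nudge the operating voltage,
    keep the direction while power rises, reverse it when power falls.
    pv_power(v) returns the panel output power at voltage v."""
    v, step = v0, dv
    p_prev = pv_power(v)
    for _ in range(steps):
        v += step
        p = pv_power(v)
        if p < p_prev:      # power dropped: reverse the perturbation
            step = -step
        p_prev = p
    return v

# Toy power curve with its maximum power point at 35 V (invented values).
mpp = perturb_and_observe(lambda v: 100.0 - (v - 35.0) ** 2)
print(mpp)  # settles and oscillates within one step of 35 V
```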




      Ali Maqousi and Hussein Al-Bahadili, " Evaluating the Performance of the Location-Aided Routing-1P Route Discovery Algorithm " , "Journal of Computer Science",Vol.10,No.3, Science Publications, USA, 04/01/2014 Abstract:
      Dynamic Routing Protocols (DRPs) are widely used for routing information among mobile nodes in a Mobile Ad Hoc Network (MANET) and for establishing and maintaining connectivity within the network. A DRP comprises two main phases: route discovery and route maintenance. The route discovery phase involves transmission of a large number of redundant control packets, consuming a significant portion of the nodes' power and increasing communication overhead. Recently, a new efficient and effective route discovery algorithm has been developed, namely, the LAR-1P algorithm, which combines two well-known algorithms: the Location-Aided Routing scheme 1 (LAR-1) and probabilistic algorithms. This study evaluates and compares the performance of the LAR-1P algorithm against a number of route discovery algorithms through simulation. For each simulation, the number of retransmissions and the reachability are estimated and compared. The simulation results demonstrate that LAR-1P provides better performance than all the other algorithms it is compared with, as it significantly reduces communication overhead while maintaining almost the same network connectivity.




      Hussein Al-Bahadili, " Speeding up the Web Crawling Process on a Multi-Core Processor Using Virtualization " , "International Journal on Web Service Computing (IJWSC)",Vol.4,No.1, AIRCCSE, Australia, 06/01/2013 Abstract:
      A Web crawler is an important component of a Web search engine. It demands a large amount of hardware resources (CPU and memory) to crawl data from the rapidly growing and changing Web, and crawling must be repeated from time to time to keep the crawled data up to date. This paper develops and investigates the performance of a new approach that speeds up the crawling process on a multi-core processor through virtualization. In this approach, the multi-core processor is divided into a number of virtual machines (VMs) that run in parallel (concurrently), performing different crawling tasks on different data. The paper presents a description, implementation, and evaluation of a VM-based distributed Web crawler. In order to estimate the speedup factor achieved by the VM-based crawler over a non-virtualized crawler, extensive crawling experiments were carried out to estimate the crawling times for various numbers of documents. Furthermore, the average crawling rate in documents per unit time is computed, and the effect of the number of VMs on the speedup factor is investigated. For example, on an Intel® Core™ i5-2300 CPU @ 2.80 GHz with 8 GB of memory, a speedup factor of ~1.48 is achieved when crawling 70000 documents on 3 and 4 VMs.
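      The speedup factor quoted above is simply the single-process crawl time divided by the VM-based crawl time. The tiny sketch below uses an invented round-robin split standing in for the paper's task assignment, which the abstract does not detail, and hypothetical timings.

```python
def partition(doc_ids, n_vms):
    """Round-robin split of the crawl workload across n_vms virtual
    machines (an assumed policy, for illustration only)."""
    return [doc_ids[i::n_vms] for i in range(n_vms)]

def speedup_factor(t_single, t_vm_based):
    """Speedup = crawling time without virtualization / VM-based time."""
    return t_single / t_vm_based

chunks = partition(list(range(12)), 4)
print([len(c) for c in chunks])              # [3, 3, 3, 3]
print(round(speedup_factor(14.8, 10.0), 2))  # 1.48 (hypothetical timings)
```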




      Ghassan F. Issa, Hussein Al-Bahadili, " Development and Performance Evaluation of a LAN-Based Edge-Detection Tool " , "International Journal of Soft Computing (IJSC)",Vol.3,No.1, AIRCCSE, Australia, 12/01/2012




      Ghassan Issa, Hussein Al-Bahadili, " A Scalable Framework to Quantitatively Evaluate Success Factors of Mobile Learning Systems " , "International Journal of Mobile Learning and Organisation",Vol.5,No.3-4, Inderscience, , 05/05/2011




      Hussein Al-Bahadili, " Development of a Novel Compressed Index-Query Web Search Engine Model " , "International Journal of Information Technology and Web Engineering (IJITWE)",Vol.6,No.3, IGI-Global, USA, 07/07/2011




      Abstract:
      Mobile ad-hoc networks (MANETs) are susceptible to attacks by malicious nodes that could easily bring down the whole network. Therefore, it is important to have a reliable mechanism for detecting and isolating malicious nodes before they can do any harm to the network. One of the possible mechanisms




      Abstract:
      This paper presents the description and performance evaluation of a new adaptive-quality image compression (AQIC) algorithm. The compression ratio (C) and Peak Signal to Noise Ratio (PSNR) achieved by the new algorithm are evaluated through a number of experiments, in which a number of widely-used i


  • Chapter in a Book





      Hussein Al-Bahadili, " Effects of Packet-Loss and Long Delay Cycles on the Performance of the TCP Protocol in Wireless Networks " , "Technology Engineering and Management in Aviation: Advancements and Discoveries",Vol.,No., Information Science Reference, PA, USA, 07/28/2011 Abstract:
      Many analytical models have been developed to evaluate the performance of the transport control protocol (TCP) in wireless networks. This chapter presents a description, derivation, implementation, and comparison of two well-known analytical models, namely, the PFTK and PLLDC models. The first one is a relatively simple model for predicting the performance of the TCP protocol, while the second model is a comprehensive and realistic analytical model. The two models are based on the TCP Reno flavor, as it is one of the more popular implementations on the Internet. These two models were implemented in a user-friendly TCP performance evaluation package (TCP-PEP). The TCP-PEP was used to investigate the effect of packet-loss and long delay cycles on the TCP performance measured in terms of sending rate, throughput, and utilization factor. The results obtained from the PFTK and PLLDC models were compared with those obtained from equivalent simulations carried out on the widely used NS-2 network simulator. The PLLDC model provides more accurate results (closer to the NS-2 results) than the PFTK model.
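      The PFTK model mentioned above has a well-known closed-form approximation for the TCP Reno send rate. The sketch below implements that simplified published formula (loss rate p, round-trip time rtt, retransmission timeout t0, b segments acknowledged per ACK); it is not the chapter's full TCP-PEP implementation, which also covers long delay cycles.

```python
from math import sqrt

def pftk_send_rate(p, rtt, t0, b=2, wmax=None):
    """Simplified PFTK approximation of the TCP Reno steady-state send
    rate in segments per second, combining the fast-retransmit term
    with the retransmission-timeout term."""
    rate = 1.0 / (rtt * sqrt(2.0 * b * p / 3.0)
                  + t0 * min(1.0, 3.0 * sqrt(3.0 * b * p / 8.0))
                  * p * (1.0 + 32.0 * p * p))
    if wmax is not None:                 # receiver-window cap, if given
        rate = min(rate, wmax / rtt)
    return rate

r_light = pftk_send_rate(0.01, rtt=0.1, t0=0.4)
r_heavy = pftk_send_rate(0.10, rtt=0.1, t0=0.4)
print(r_light > r_heavy)  # True: lighter loss sustains a higher rate
```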




      Hussein Al-Bahadili, " Simulation of a Dynamic-Noise-Dependent Probabilistic Algorithm in MANETs " , "Simulation in Computer Network Design and Modeling: Use and Analysis",Vol.,No., IGI-Global, Amman, Jordan, 01/01/2012 Abstract:
      In current dynamic probabilistic algorithms, the retransmission probability (pt) has always been formulated as a linear/non-linear function of a single variable, namely, the number of first-hop neighbors (k), and is therefore denoted pt(k). The performance of the probabilistic algorithm suffers severely in the presence of noise due to the reduction in the probability of reception (pc) of route request packets by receiving nodes. This chapter presents a detailed description of a new dynamic probabilistic algorithm in which pt is determined as a function of both k and pc; therefore, it is referred to as the Dynamic Noise-Dependent Probabilistic (DNDP) algorithm. The DNDP algorithm is implemented using the Mobile Ad Hoc Network (MANET) Simulator (MANSim), which is used to simulate a number of scenarios to evaluate and compare the performance of the algorithm with pure flooding and with fixed and dynamic probabilistic algorithms. The simulation results demonstrated that the DNDP algorithm provides excellent performance in various network conditions: it almost maintains the same network reachability in noiseless and noisy environments, with noisy environments inflicting only an insignificant increase in the number of redundant retransmissions.
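      The idea of making pt depend on both k and pc can be illustrated as below; the chapter's actual pt(k, pc) formula is not reproduced in the abstract, so this particular shape (a density term divided by pc) and its parameters are only plausible assumptions.

```python
def dndp_pt(k, pc, k_max=25, pt_min=0.3):
    """Illustrative pt(k, pc): a density term that falls as the number
    of first-hop neighbors k grows, divided by the probability of
    reception pc so that noisier links rebroadcast more aggressively.
    k_max and pt_min are invented placeholder parameters."""
    density_term = max(pt_min, 1.0 - k / k_max)   # classic pt(k) behavior
    return min(1.0, density_term / pc)            # noise compensation

print(dndp_pt(k=10, pc=1.0))  # noiseless: plain pt(k)
print(dndp_pt(k=10, pc=0.7))  # noisy: pt is raised to keep reachability
```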




      Hussein Al-Bahadili, " A Location-Based Power Conservation Scheme for MANETs: A Step towards Green Communications " , "Simulation in Computer Network Design and Modeling: Use and Analysis",Vol.,No., IGI-Global, Amman, Jordan, 01/01/2012 Abstract:
      In a Mobile Ad Hoc Network (MANET), a mobile node consumes its power on message communication, message processing, and other operational missions. The amount of power a mobile node consumes for communication is the highest and most dominant compared to what it consumes for other tasks. The power consumed in communication is proportional to the square of the node's radio transmission range (R); therefore, minimizing R contributes to a significant reduction in power consumption and consequently increases node battery lifetime. This chapter presents a description and performance evaluation of a new efficient power conservation scheme, namely, the Location-Based Power Conservation (LBPC) scheme. It is based on the concept of reducing R by utilizing locally available node location information to adjust R according to one of three proposed radius adjustment criteria: farthest, average, and random. Thus, instead of transmitting at full power to cover up to its maximum radio transmission range (Rmax), the transmitting node adjusts R to less than Rmax, which provides a power conservation factor equivalent to (R/Rmax)^2.
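      The radius adjustment and the (R/Rmax)^2 conservation factor above can be sketched directly; the 'farthest' and 'average' criteria follow the description, the 'random' criterion is omitted, and all distances are invented toy values.

```python
def adjusted_range(neighbor_distances, criterion="farthest", r_max=100.0):
    """Shrink the radio range R to just cover the chosen neighbors,
    per the 'farthest' or 'average' radius adjustment criterion."""
    if criterion == "farthest":
        r = max(neighbor_distances)
    elif criterion == "average":
        r = sum(neighbor_distances) / len(neighbor_distances)
    else:
        raise ValueError(f"unsupported criterion: {criterion}")
    return min(r, r_max)

def power_fraction(r, r_max=100.0):
    """Fraction of full transmit power needed at range r: (R/Rmax)^2,
    since communication power grows with the square of the range."""
    return (r / r_max) ** 2

r = adjusted_range([20.0, 45.0, 60.0], criterion="farthest")
print(r, power_fraction(r))  # 60.0 0.36 -> only 36% of full power needed
```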




      Hussein Al-Bahadili, " Comparing Various Route Discovery Algorithms in Ad Hoc Wireless Networks " , "Simulation in Computer Network Design and Modeling: Use and Analysis",Vol.,No., IGI-Global, Amman, Jordan, 01/01/2012 Abstract:
      Dynamic (reactive or on-demand) routing protocols used in wireless ad hoc networks suffer from transmitting a huge number of control packets during the route discovery phase, which increases the overhead significantly. Therefore, a number of optimization protocols have been developed throughout the years. This chapter compares the performance of various route discovery algorithms in ad hoc wireless networks, namely, pure flooding, probabilistic, Location-Aided Routing scheme 1 (LAR-1), LAR-1-Probabilistic (LAR-1P), and Optimal Multipoint Relaying (OMPR). The results obtained through the different simulations are analyzed and compared. This chapter will help practitioners of various kinds (academics, professionals, researchers, and students) grasp a solid understanding of the behavior of ad hoc wireless network route discovery algorithms and develop an appreciation for flooding optimization mechanisms. It also substantiates the case for experimenting with such models via simulation and shows how the different simulation parameters interplay.




      Hussein Al-Bahadili, " Modeling of TCP Reno with Packet-Loss and Long Delay Cycles " , "Simulation in Computer Network Design and Modeling: Use and Analysis",Vol.,No., IGI-Global, Amman, Jordan, 01/01/2012 Abstract:
      The Transport Control Protocol (TCP) is the dominant transport layer protocol in the Internet Protocol (IP) suite, as it carries a significant amount of the Internet traffic, such as Web browsing, file transfer, e-mail, and remote access. Therefore, huge efforts have been devoted by researchers to develop suitable models that can help with evaluating its performance in various network environments. Some of these models are based on analytical or simulation approaches. This chapter presents a description, derivation, implementation, and comparison of two well-known analytical models, namely, the PFTK and PLLDC models. The first one is a relatively simple model for predicting the performance of the TCP protocol, while the second model is a comprehensive and realistic analytical model. The two models are based on the TCP Reno flavor, as it is one of the most popular implementations on the Internet. These two models were implemented in a user-friendly TCP Performance Evaluation Package (TCP-PEP). The TCP-PEP was used to investigate the effect of packet-loss and long delay cycles on the TCP performance, measured in terms of sending rate, throughput, and utilization factor. The results obtained from the PFTK and PLLDC models were compared with those obtained from equivalent simulations carried out on the widely used NS-2 network simulator. The PLLDC model provides more accurate results (closer to the NS-2 results) than the PFTK model.




      Hussein Al-Bahadili, " Investigating the Performance of the TSS Scheme in Noisy MANETs " , "Simulation in Computer Network Design and Modeling: Use and Analysis",Vol.,No., IGI-Global, Amman, Jordan, 01/01/2012 Abstract:
      A Mobile Ad Hoc Network (MANET) suffers from high packet-loss due to various transmission impairments, such as wireless signal attenuation, free space loss, thermal noise, atmospheric absorption, multipath effect, and refraction. All of these impairments are represented by a generic name, noise, and therefore such a network is referred to as a noisy network. For modeling and simulation purposes, the noisy environment is described by introducing a probability function, namely, the probability of reception (pc), which is defined as the probability that transmitted data is successfully delivered to its destination despite the presence of noise. This chapter describes the implementation and investigates the performance of the Threshold Secret Sharing (TSS) node authentication scheme in noisy MANETs. A number of simulations are performed using the MANET Simulator (MANSim) to estimate the authentication success ratio for various threshold secret shares, numbers of nodes, node speeds, and noise levels. Simulation results demonstrate that, for a certain threshold secret share, the presence of noise inflicts a significant reduction in the authentication success ratio, while node mobility inflicts no or an insignificant effect. The outcomes of these simulations are important to facilitate efficient network management.
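      Threshold secret sharing itself can be illustrated with the classic Shamir (k, n) construction over a small prime field. This is a generic sketch of the underlying primitive, not the chapter's specific authentication scheme, and the field size and secret are toy values.

```python
import random

PRIME = 2087  # small prime field, for illustration only

def make_shares(secret, k, n, rng=random.Random(7)):
    """Shamir (k, n) threshold sharing: build a degree k-1 polynomial
    with the secret as constant term, then hand out n evaluations."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    k shares; fewer than k shares reveal nothing."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = make_shares(secret=1234, k=3, n=5)
print(reconstruct(shares[:3]))  # 1234
```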




      Ali H. Hadi and Hussein Al-Bahadili, " A Hybrid Port-Knocking Technique for Host Authentication " , "Simulation in Computer Network Design and Modeling: Use and Analysis",Vol.,No., IGI-Global, Amman, Jordan, 01/01/2012 Abstract:
      This chapter presents a detailed description of a Port-Knocking (PK) technique that should avert all types of port attacks and meet all other network security requirements. The new technique utilizes four well-known concepts: PK, cryptography, steganography, and mutual authentication; therefore, it is referred to as the Hybrid Port-Knocking (HPK) technique. It is implemented as two separate modules: one is installed and run on the server computer, either behind the network firewall or on the firewall itself, and the other is installed and run on the client computer. The first module is referred to as the HPK server, while the second is the HPK client. In terms of data processing, the technique consists of five main processes: request packetization and transmission, traffic monitoring and capturing, mutual authentication, request extraction and execution, and port closing. The HPK technique demonstrates immunity against two vital attacks, namely, the TCP replay and Denial-of-Service (DoS) attacks.
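      The plain PK ingredient of the technique can be sketched as a per-client state machine over a secret knock sequence; the ports below are hypothetical, and the HPK-specific additions (cryptography, steganographic payloads, mutual authentication) are beyond this sketch.

```python
KNOCK_SEQUENCE = (7000, 8000, 9000)  # hypothetical secret port sequence

def knock_listener():
    """Track per-client progress through the secret knock sequence;
    only a client that hits the ports in order gets the service
    opened. A wrong knock resets that client's progress."""
    progress = {}
    def on_packet(client, port):
        step = progress.get(client, 0)
        if port == KNOCK_SEQUENCE[step]:
            step += 1
        else:
            step = 0                      # wrong knock: start over
        progress[client] = step
        if step == len(KNOCK_SEQUENCE):
            progress[client] = 0
            return "open"                 # e.g. add a firewall rule here
        return "closed"
    return on_packet

knock = knock_listener()
print([knock("10.0.0.5", p) for p in (7000, 8000, 9000)])
# ['closed', 'closed', 'open']
```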




      Ali H. Hadi and Hussein Al-Bahadili, " A Hybrid Port-Knocking Technique for Host Authentication " , "IT Policy and Ethics: Concepts, Methodologies, Tools, and Applications",Vol.,No., IGI-Global, Hershey, PA, 03/01/2012 Abstract:
      This chapter presents a detailed description of a Port-Knocking (PK) technique that should avert all types of port attacks and meet all other network security requirements. The new technique utilizes four well-known concepts: PK, cryptography, steganography, and mutual authentication; therefore, it is referred to as the Hybrid Port-Knocking (HPK) technique. It is implemented as two separate modules: one is installed and run on the server computer, either behind the network firewall or on the firewall itself, and the other is installed and run on the client computer. The first module is referred to as the HPK server, while the second is the HPK client. In terms of data processing, the technique consists of five main processes: request packetization and transmission, traffic monitoring and capturing, mutual authentication, request extraction and execution, and port closing. The HPK technique demonstrates immunity against two vital attacks, namely, the TCP replay and Denial-of-Service (DoS) attacks.




      Abstract:
      Wireless ad hoc networks are susceptible to attacks by malicious nodes that could easily bring down the whole network. Therefore, it is important to have a reliable mechanism for detecting and isolating malicious nodes before they can do any harm to the network. Trust-based routing protocols are one Download


  • Conference paper





      Hussein Al-Bahadili, " A Web Search Engine Model Based on Index-Query Bit-Level Compression " , "International Conference on Intelligence and Semantic Web: Services and Application (ISWSA 2010)",Vol.,No., , Amman, Jordan, 06/01/2010 Abstract:
      This paper describes a new Web search engine model based on index-query bit-level compression. The model incorporates two bit-level compression layers, both implemented at the backend processor (server) side: one layer resides after the indexer, acting as a second compression layer to generate a double-compressed index, and the other is located after the query parser, compressing the query to enable bit-level compressed index-query search. This contributes to reducing the size of the index file as well as disk I/O overheads, and consequently yields a higher retrieval rate and performance. The data compression scheme used in this model is the adaptive character wordlength (ACW(n,s)) scheme, which is an asymmetric, lossless, bit-level scheme that permits compressed index-query search. Results investigating the performance of the ACW(n,s) scheme are presented and discussed. Download
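      The key idea, compressing the query with the same codec as the index so that matching happens entirely in the compressed domain, can be illustrated with a toy fixed-width codebook; the real ACW(n,s) scheme is adaptive and bit-level, so everything below is a simplified stand-in.

```python
def build_codebook(terms):
    """Toy stand-in for a bit-level codec: assign each distinct index
    term a fixed-width binary code (the real ACW(n,s) codes are
    adaptive and variable in structure)."""
    return {t: format(i, "012b") for i, t in enumerate(sorted(set(terms)))}

def compress(terms, codebook):
    """Concatenate the codes of the given terms into one bit string."""
    return "".join(codebook[t] for t in terms)

index_terms = ["apple", "banana", "cherry", "banana"]
book = build_codebook(index_terms)
compressed_index = compress(index_terms, book)

# The query is compressed with the same codebook, so matching runs
# directly on compressed data (a real scheme would also align matches
# to code boundaries to avoid false positives).
query_code = compress(["banana"], book)
print(query_code in compressed_index)  # True
```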




      Hussein Al-Bahadili, " A Hierarchical Framework for Evaluating Success Factors of M-Learning " , "The 2011 Conference on Innovations in Computing and Engineering Machinery (CICEM 2011)",Vol.,No., Jordan ACM Professional Chapter-ISWSA, Amman, Jordan, 09/06/2011 Abstract:
      There has been an enormous increase in the use of mobile learning (M-Learning) systems in many fields due to the tremendous advancement in information and communication technologies (ICTs). Although many frameworks have been developed for identifying and categorizing the different components of M-Learning systems, most of them have limitations and drawbacks, and they provide no support for evaluating the success factors (global weights) of the system criteria. In this paper, a comprehensive hierarchical framework is developed for identifying and categorizing all components that may affect the development and deployment of cost-effective M-Learning. Furthermore, due to the hierarchical structure of the framework, analytic hierarchy process (AHP) techniques can be used to quantitatively estimate the success factors of the system criteria. In order to demonstrate the benefits and flexibility of the framework, the success factors of the different system criteria are evaluated for different sets of preferences using an interactive software tool, namely, SFacts, which was developed for calculating the success factors of the criteria of any hierarchical framework. Download
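      The AHP step mentioned above turns pairwise criterion comparisons into global weights. A minimal sketch using the geometric-mean approximation follows; the SFacts tool may use the exact eigenvector method instead, and the comparison matrix here is hypothetical.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric mean of each
    row of the pairwise comparison matrix, normalized to sum to 1."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical 3-criterion matrix: criterion 1 is 3x as important as
# criterion 2 and 5x as important as criterion 3; reciprocals fill the
# lower triangle.
m = [[1.0,     3.0, 5.0],
     [1.0/3.0, 1.0, 2.0],
     [1.0/5.0, 0.5, 1.0]]
w = ahp_weights(m)
print([round(x, 3) for x in w])  # weights sum to 1, criterion 1 dominates
```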




      Hussein Al-Bahadili , " On the Effect of Nodes Density on the Performance of the LAR-1P Route Discovery Algorithm " , "2011 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT)",Vol.,No., IEEE Xplore, Amman, Jordan, 12/06/2011 Abstract:
      The location-aided routing scheme 1 (LAR-1) and probabilistic algorithms are combined into a new algorithm for route discovery in mobile ad hoc networks (MANETs) called LAR-1P [1]. Simulation results in [1] demonstrated that, on the network scale, for a uniform random node distribution and a specific simulation setup, the LAR-1P algorithm reduces the number of retransmissions as compared to LAR-1 without sacrificing network reachability. Furthermore, on the zone scale, the algorithm provides excellent performance in high-density zones, while in low-density zones it preserves the performance of LAR-1. This paper provides a detailed analysis of the performance of the LAR-1P algorithm through various simulations, where the actual numerical values for the number of retransmissions and reachability in high- and low-density zones were estimated to demonstrate the effectiveness and significance of the algorithm and how it provides better performance than LAR-1 in high-density zones. In addition, the effect of the total number of nodes on the average network performance is also investigated.




      Hussein Al-Bahadili, " A Cloud Computing Collaborative Commerce Model " , "Proceedings of the 4th International Conference on Information and Communication Systems (ICICS 2013)",Vol.,No., Faculty of Computer & Information Technology, JUST, Irbid, Jordan, 04/25/2013 Abstract:
      Cloud computing has the potential to be particularly suitable for collaborative commerce (c-commerce) because it generally requires less customization, development, integration, operation, and maintenance costs than other computing resources. However, upgrading c-commerce IT infrastructure to run on cloud computing faces a number of challenges, such as the lack of effective design and implementation models. This paper presents a description and performance evaluation of a new c-commerce model that utilizes the evolving cloud computing technologies; therefore, it is referred to as the cloud collaborative commerce (cc-commerce) model. The model consists of six main components: client, provider, auditor, broker, security and privacy, and communications network. The new cc-commerce model is used to develop a simple and flexible Web-based test tool, namely, the CC-Commerce Test (3CT) tool. The performance of the new model is evaluated using the 3CT tool to estimate the response times for four different configurations. The obtained results demonstrate that the new model performs faster than equivalent c-commerce models. Download




      Hussein Al-Bahadili , " Developing a High-Performance VM-Based Distributed Web Crawler Utilizing Multi-Core Processors " , "The 12th International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES 2013)",Vol.,No., IEEE Computer Society, London, UK, 09/04/2013




      Hussein Al-Bahadili, " Simulation of Maximum Power Point Tracking for Photovoltaic Systems " , "Application of Information Technology in Developing Renewable Energy Processes and Systems (IT-DREPS 2013)",Vol.,No., IEEE Xplore, Amman, Jordan, 05/30/2013 Abstract:
      Photovoltaic (PV) solar panels exhibit non-linear current-voltage characteristics, and according to the maximum power transfer (MPT) theory, a panel can produce maximum power at only one particular operating point (OP), namely, when the source impedance matches the load impedance, a match which cannot be guaranteed spontaneously. Furthermore, the maximum power point (MPP) changes with temperature and light intensity variations. Therefore, different algorithms based on offline and online methods have been developed for maximum power point tracking (MPPT). Evaluating the performance of these algorithms for various PV systems operating under highly dynamic environments is essential to ensure reliable, efficient, cost-effective, and high-performance systems. One possible approach for system evaluation is to use computer simulation. This paper addresses the use of MATLAB software as a simulation tool for evaluating the performance of MPPT for PV systems.