Virtual Machines
Recently Published Documents


TOTAL DOCUMENTS: 2586 (FIVE YEARS: 778)

H-INDEX: 58 (FIVE YEARS: 8)

Author(s):  
Noha G. Elnagar ◽  
Ghada F. Elkabbany ◽  
Amr A. Al-Awamry ◽  
Mohamed B. Abdelhalim

Load balancing is crucial in cloud computing to ensure scalability and reliability, minimize response and processing times, and maximize resource utilization. However, the load fluctuation that accompanies the distribution of a huge number of requests among a set of virtual machines (VMs) is challenging and calls for effective, practical load balancers. In this work, a two-listed throttled load balancer (TLT-LB) algorithm is proposed and simulated using the CloudAnalyst simulator. The TLT-LB algorithm modifies the conventional TLB algorithm to improve the distribution of tasks between different VMs. The performance of the TLT-LB algorithm has been evaluated against the TLB, round robin (RR), and active monitoring load balancer (AMLB) algorithms using two different configurations. Interestingly, the TLT-LB significantly balances the load between the VMs, reducing the loading gap between the heaviest and lightest loaded VMs to 6.45%, compared to 68.55% for the TLB and AMLB algorithms. Furthermore, the TLT-LB algorithm considerably reduces the average response time and processing time compared to the TLB, RR, and AMLB algorithms.
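The abstract gives no pseudocode for the two-listed modification; as a rough sketch of the idea, the Python below keeps separate "available" and "busy" lists so a free VM is handed out in O(1) instead of scanning a single allocation table as the conventional throttled balancer does. All names and data structures here are hypothetical reconstructions, not the paper's implementation.

```python
from collections import deque

class TwoListThrottledLB:
    """Sketch of a two-listed throttled balancer: one queue of available
    VMs, one set of busy VMs (hypothetical; the paper's exact structures
    are not given in the abstract)."""

    def __init__(self, vm_ids):
        self.available = deque(vm_ids)  # VMs ready to accept a task
        self.busy = set()               # VMs currently processing a task

    def allocate(self):
        if not self.available:
            return None                 # caller queues the request
        vm = self.available.popleft()
        self.busy.add(vm)
        return vm

    def release(self, vm):
        self.busy.discard(vm)
        self.available.append(vm)       # re-queue at the tail to spread load
```

Because a released VM rejoins the tail of the queue, successive requests rotate across the pool, which is one plausible way to narrow the gap between the heaviest and lightest loaded VMs.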


2022 ◽  
Vol 15 (3) ◽  
pp. 1-32
Author(s):  
Nikolaos Alachiotis ◽  
Panagiotis Skrimponis ◽  
Manolis Pissadakis ◽  
Dionisios Pnevmatikatos

Disaggregated computer architectures eliminate resource fragmentation in next-generation datacenters by enabling virtual machines to employ resources such as CPUs, memory, and accelerators that are physically located on different servers. While this paves the way for highly compute- and/or memory-intensive applications to potentially deploy all CPU and/or memory resources in a datacenter, it poses a major challenge to the efficient deployment of hardware accelerators: input/output data can reside on different servers than the ones hosting accelerator resources, thereby requiring time- and energy-consuming remote data transfers that diminish the gains of hardware acceleration. Targeting a disaggregated datacenter architecture similar to the IBM dReDBox disaggregated datacenter prototype, the present work explores the potential of deploying custom acceleration units, implemented in FPGA technology, adjacent to the disaggregated-memory controller on memory bricks (in dReDBox terminology) to reduce data movement and improve performance and energy efficiency when reconstructing large phylogenies (evolutionary relationships among organisms). A fundamental computational kernel is the Phylogenetic Likelihood Function (PLF), which dominates the total execution time (up to 95%) of widely used maximum-likelihood methods. Numerous efforts to boost PLF performance over the years focused on accelerating computation; since the PLF is a data-intensive, memory-bound operation, performance remains limited by data movement, and memory disaggregation only exacerbates the problem. We describe two near-memory processing models: one that addresses the problem of workload distribution to memory bricks, which is particularly tailored toward larger genomes (e.g., plants and mammals), and one that reduces overall memory requirements through memory-side data interpolation transparently to the application, thereby allowing the phylogeny size to scale to a larger number of organisms without requiring additional memory.
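To see why the PLF is memory-bound, consider the per-node kernel: each alignment site combines two large conditional likelihood vectors (CLVs) through small per-branch transition matrices, so every site costs two big reads and one big write against only a handful of flops. The NumPy sketch below is an illustrative simplification (no per-site scaling or rate categories, which production maximum-likelihood codes add); shapes and names are assumptions, not the paper's kernel.

```python
import numpy as np

def plf_node(left_clv, right_clv, p_left, p_right):
    """Conditional likelihoods at an inner node.
    left_clv, right_clv: (sites, states) child CLVs;
    p_left, p_right: (states, states) branch transition matrices.
    Entry [s, i] = (sum_j P_left[i, j] * L[s, j]) * (sum_k P_right[i, k] * R[s, k])."""
    return (left_clv @ p_left.T) * (right_clv @ p_right.T)

sites, states = 100_000, 4              # DNA alignments: 4 states
L = np.random.rand(sites, states)
R = np.random.rand(sites, states)
P = np.full((states, states), 0.25)     # toy transition matrix
clv = plf_node(L, R, P, P)              # traffic dominated by CLV movement
```

With only a 4x4 matrix multiply per site against three site-sized array streams, the arithmetic intensity is low, which is why placing the unit next to the memory controller pays off.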


2022 ◽  
Vol 22 (1) ◽  
pp. 1-35
Author(s):  
Muhammad Junaid ◽  
Adnan Sohail ◽  
Fadi Al Turjman ◽  
Rashid Ali

Over the years, cloud computing has seen significant evolution in terms of improved infrastructure and resource provisioning. However, the continuous emergence of new applications, such as the Internet of Things (IoT), with thousands of users puts a significant load on cloud infrastructure. Load balancing of resource allocation in cloud-oriented IoT is a critical factor that has a significant impact on the smooth operation of cloud services and customer satisfaction. Several load balancing strategies for cloud environments have been proposed in the past. However, the existing approaches mostly consider only a few parameters and ignore many critical factors that play a pivotal role in load balancing, leading to less optimized resource allocation. Load balancing is a challenging problem, and the research community has therefore recently focused on employing machine learning-based metaheuristic approaches for load balancing in the cloud. In this paper, we propose a metaheuristic scheme, Data Format Classification using Support Vector Machine (DFC-SVM), to deal with the load balancing problem. The proposed scheme aims to reduce online load balancing complexity through offline pre-classification of raw data from diverse sources (such as IoT) into different formats, e.g., text, images, and media. An SVM is utilized to classify "n" types of data formats, including audio, video, text, digital images, and maps. A one-to-many classification approach has been developed so that data formats from the cloud are initially classified into their respective classes and assigned to virtual machines through the proposed modified version of Particle Swarm Optimization (PSO), which schedules the data of a particular class efficiently. The experimental results, compared with the baselines, show a significant improvement in the performance of the proposed approach. Overall, an average classification accuracy of 94% is achieved, along with 11.82% less energy, 16% less response time, and 16.08% fewer SLA violations.
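The "one-to-many" classification the abstract describes corresponds to a standard one-vs-rest SVM setup. The sketch below shows that stage only, in Python with scikit-learn; the feature vectors are random placeholders (real features would be derived from the raw data), and the downstream modified-PSO scheduler is out of scope here.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

FORMATS = ["audio", "video", "text", "image", "map"]   # assumed class set

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # placeholder format-feature vectors
y = rng.integers(0, len(FORMATS), 200)  # placeholder format labels

# One binary SVM per format class, i.e., one-vs-rest ("one-to-many")
clf = OneVsRestClassifier(LinearSVC()).fit(X, y)
pred = clf.predict(X[:3])               # predicted class index per item
# Each predicted class would then be handed to the PSO-based scheduler,
# which assigns items of that class to suitable VMs (not shown here).
print([FORMATS[i] for i in pred])
```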


2022 ◽  
Vol 54 (8) ◽  
pp. 1-38
Author(s):  
Alexandre H. T. Dias ◽  
Luiz. H. A. Correia ◽  
Neumar Malheiros

Virtual machine consolidation has been a widely explored topic in recent years due to Cloud Data Centers' effect on global energy consumption. Thus, academia and industry have made efforts to achieve green computing, reducing energy consumption to minimize environmental impact. By consolidating Virtual Machines onto fewer Physical Machines, resource provisioning mechanisms can shut down idle Physical Machines to reduce energy consumption and improve resource utilization. However, there is a tradeoff between reducing energy consumption and assuring the Quality of Service established in the Service Level Agreement. This work introduces a Systematic Literature Review of one year of advances in virtual machine consolidation. It provides a discussion of the methods used in each step of virtual machine consolidation, a classification of papers according to their contribution, and a quantitative and qualitative analysis of datasets, scenarios, and metrics.
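Consolidation is, at its core, a bin-packing problem. As a point of reference for the heuristics such a review covers, here is a first-fit-decreasing baseline in Python; it is a generic textbook heuristic rather than an algorithm from the surveyed papers, and the single-dimension CPU demand is a simplifying assumption.

```python
def consolidate_ffd(vm_demands, pm_capacity):
    """First-fit decreasing: place each VM (largest demand first) on the
    first PM with room, opening a new PM only when none fits.
    vm_demands: {vm_id: normalized CPU demand}; returns one VM group per
    active PM. PMs that are never opened stay idle and can be shut down."""
    pms = []  # each entry: [remaining_capacity, [vm_ids]]
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for pm in pms:
            if pm[0] >= demand:
                pm[0] -= demand
                pm[1].append(vm)
                break
        else:
            pms.append([pm_capacity - demand, [vm]])
    return [vm_ids for _, vm_ids in pms]

# Three VMs fit on two PMs instead of three -> one machine stays off
print(consolidate_ffd({"vm1": 0.6, "vm2": 0.5, "vm3": 0.3}, pm_capacity=1.0))
```

Packing tighter saves energy but raises the risk of SLA violations under load spikes, which is exactly the energy/QoS tradeoff the review discusses.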


2022 ◽  
Vol 22 (1) ◽  
pp. 1-26
Author(s):  
Zakaria Benomar ◽  
Francesco Longo ◽  
Giovanni Merlino ◽  
Antonio Puliafito

In Cloud computing deployments, specifically in the Infrastructure-as-a-Service (IaaS) model, networking is one of the core enabling facilities provided to users. The IaaS approach ensures significant flexibility and manageability, since networking resources and topologies are entirely under the users' control. In this context, considerable efforts have been devoted to promoting the Cloud paradigm as a suitable solution for managing IoT environments. Deep and genuine integration between the two ecosystems, Cloud and IoT, may only be attainable at the IaaS level. In light of extending the IoT domain's capabilities with Cloud-based mechanisms akin to the IaaS Cloud model, network virtualization is a fundamental enabler of infrastructure-oriented IoT deployments. Indeed, an IoT deployment without networking resilience and adaptability is unsuitable for meeting user-level demands and services' requirements. Such a limitation confines IoT-based services to very specific and statically defined scenarios, thus limiting the plurality and diversity of use cases. This article presents a Cloud-based approach for network virtualization in an IoT context using the de facto standard IaaS middleware, OpenStack, and its networking subsystem, Neutron. OpenStack is extended to enable the instantiation of virtual/overlay networks between Cloud-based instances (e.g., virtual machines, containers, and bare metal servers) and/or geographically distributed IoT nodes deployed at the network edge.
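For orientation, stock Neutron already lets a tenant create the overlay networks that the article's extension would stretch out to edge IoT nodes. The Python snippet below uses standard openstacksdk calls to create a network and subnet; the cloud name and CIDR are placeholders, credentials are assumed to be configured (e.g., in clouds.yaml), and attaching remote IoT nodes to such a network is the article's contribution, not stock behavior.

```python
import openstack

# Connect using credentials configured for a cloud named "mycloud" (placeholder)
conn = openstack.connect(cloud="mycloud")

# A tenant overlay network plus an IPv4 subnet, via plain Neutron APIs
net = conn.network.create_network(name="iot-overlay")
subnet = conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="10.20.0.0/24",           # placeholder address range
    name="iot-overlay-subnet",
)
# VMs/containers attach ports on this network as usual; the article's
# extension would let geographically distributed IoT edge nodes do the same.
```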


2022 ◽  
pp. 1-22
Author(s):  
Vhatkar Kapil Netaji ◽  
G.P. Bhole

Efficient allocation of resources in the cloud environment is vital, as it directly impacts versatility and operational expenses. Containers, as a virtualization technology, are gaining popularity due to their low overhead compared to traditional virtual machines and their portability. Resource allocation methodologies in the containerized cloud are intended to dynamically or statically allocate the available pool of resources, such as CPU, memory, and disk, to users. Despite the enormous popularity of containers in cloud computing, no systematic survey of container scheduling techniques exists. This survey outlines the present works on resource allocation in the containerized cloud. In this work, 64 research papers are reviewed for a better understanding of resource allocation, management, and scheduling. Further, to add extra worth to this research work, the performance reported in the collected papers is investigated in terms of various performance measures. Along with this, the weaknesses of the existing resource allocation algorithms are discussed, encouraging researchers to investigate novel algorithms and techniques.
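As a concrete instance of the static end of that spectrum, a scheduler's placement decision ultimately becomes per-container CPU and memory caps. The sketch below sets such caps with the Docker SDK for Python; it assumes a reachable local Docker daemon and uses a stock busybox image as a stand-in for a real workload.

```python
import docker

client = docker.from_env()  # assumes a local Docker daemon is running

# Statically cap one container at half a CPU core and 256 MiB of memory --
# the kind of per-container allocation the surveyed schedulers compute.
container = client.containers.run(
    "busybox", "sleep 60",
    detach=True,
    nano_cpus=500_000_000,   # 0.5 CPU (in units of 1e-9 CPUs)
    mem_limit="256m",        # hard memory cap
)
print(container.id)
```

Dynamic allocation would instead adjust these limits at runtime (e.g., via `container.update(...)` in the same SDK) in response to observed load.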


Author(s):  
R. Y. Sharykin

The article discusses the Java implementation of the stochastic collaborative virus defense model developed within the framework of the Distributed Object-Based Stochastic Hybrid Systems (DOBSHS) model, and its analysis. The goal of the work is to test the model in conditions close to the real world, as a step toward its use in practical environments. We propose a method for translating a system specification in the SHYMaude language, intended for the specification and analysis of DOBSHS models in the rewriting logic framework, into a corresponding Java implementation. The resulting Java system is deployed on virtual machines; the virus and the group virus alert system are modeled stochastically. To analyze the system, we use several metrics, such as the saturation time of virus propagation, the proportion of infected nodes upon reaching saturation, and the maximal virus propagation speed. We use the Monte Carlo method with computation of confidence intervals to obtain estimates of the selected metrics. We perform the analysis on the basis of the sigmoid virus propagation graph over time in the presence of the defense system. We implemented two versions of the system using two protocols for transmitting messages between nodes, TCP/IP and UDP, and measured the influence of the protocol type and its associated costs on the defense system's effectiveness. To assess the potential for cost reduction associated with different message transmission protocols, we analyzed the original DOBSHS model modified to model message transmission delays. We also measured the influence of other model parameters important for the next steps toward practical use of the model. To address system scalability, we propose a hierarchical approach to the system design to enable its use with a large number of nodes.
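The statistical procedure here, repeating the stochastic run many times and reporting each metric with a confidence interval, is ordinary Monte Carlo estimation. A minimal sketch in Python follows; the normal-approximation interval and the placeholder "saturation time" samples are illustrative assumptions, not the article's data.

```python
import math
import random

def mc_estimate(samples, z=1.96):
    """Sample mean with a ~95% normal-approximation confidence interval."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

# Placeholder: saturation times (s) from 1000 independent stochastic runs
random.seed(1)
runs = [random.gauss(120.0, 15.0) for _ in range(1000)]
mean, ci = mc_estimate(runs)
print(f"saturation time: {mean:.1f}s, 95% CI {ci[0]:.1f}-{ci[1]:.1f}s")
```

The same estimator applies to the infected-node proportion at saturation and the maximal propagation speed, one set of runs per metric.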


2022 ◽  
Author(s):  
Arezoo Ghasemi ◽  
Abolfazl Toroghi Haghighat ◽  
Amin Keshavarzi

The process of mapping Virtual Machines (VMs) to Physical Machines (PMs), known as VM placement, affects Cloud Data Center (DC) performance. To enhance performance, optimal placement of VMs with respect to conflicting objectives has been proposed in recent research, such as Multi-Objective VM reBalance (MOVMrB) and Reinforcement Learning VM reBalance (RLVMrB). The MOVMrB algorithm is based on the BBO meta-heuristic algorithm, and the RLVMrB algorithm is inspired by reinforcement learning; in both, the non-dominance method is used to evaluate generated solutions. Although this approach reaches acceptable results, it fails to consider other solutions that are optimal with respect to all objectives once it meets the best solution based on one of these objectives. In this paper, we propose two enhanced multi-objective algorithms, Fuzzy-RLVMrB and Fuzzy-MOVMrB, that are able to consider all objectives when evaluating candidate solutions in the solution space. All four algorithms aim to balance the load between VMs in terms of processor, bandwidth, and memory, as well as horizontal and vertical load balance. We simulated all algorithms using the CloudSim simulator and compared them in terms of horizontal and vertical load balance and execution time. The simulation results show that Fuzzy-RLVMrB and Fuzzy-MOVMrB outperform RLVMrB and MOVMrB in terms of vertical and horizontal load balancing. Also, RLVMrB and Fuzzy-RLVMrB are better in execution time than MOVMrB and Fuzzy-MOVMrB.
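The abstract does not spell out the fuzzy evaluation, but a common way to "consider all objectives" is to map each objective onto a [0, 1] membership and aggregate with min(), so a candidate is only as good as its worst objective; unlike pure non-dominance ranking, no single strong objective can mask the rest. The Python below is a hypothetical reconstruction along those lines, not the papers' actual membership functions.

```python
def fuzzy_score(objectives, mins, maxs):
    """Map each objective value (lower is better, e.g., CPU/bandwidth/memory
    imbalance) to a membership in [0, 1], then take the min across objectives."""
    memberships = [
        (mx - v) / (mx - mn) if mx > mn else 1.0
        for v, mn, mx in zip(objectives, mins, maxs)
    ]
    return min(memberships)

# Candidate placements scored on (cpu, bandwidth, memory) imbalance
candidates = [(0.30, 0.20, 0.10), (0.18, 0.25, 0.08), (0.10, 0.40, 0.12)]
mins = tuple(min(c[i] for c in candidates) for i in range(3))
maxs = tuple(max(c[i] for c in candidates) for i in range(3))
best = max(candidates, key=lambda c: fuzzy_score(c, mins, maxs))
print(best)  # the balanced candidate wins, not one that excels at a single objective
```

In the full algorithms, a score like this would replace the non-dominance comparison when ranking candidate rebalancing moves.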


2022 ◽  
Author(s):  
Luca Abeni ◽  
Alessandro Biondi ◽  
Enrico Bini

2022 ◽  
Vol 71 (2) ◽  
pp. 3019-3033
Author(s):  
Tahir Alyas ◽  
Iqra Javed ◽  
Abdallah Namoun ◽  
Ali Tufail ◽  
Sami Alshmrany ◽  
...  
