Benchmarking and Performance Evaluations on Various Configurations of Virtual Machine and Containers for Cloud-Based Scientific Workloads

2021 ◽  
Vol 11 (3) ◽  
pp. 993
Author(s):  
Syed Asif Raza Shah ◽  
Ahmad Waqas ◽  
Moon-Hyun Kim ◽  
Tae-Hyung Kim ◽  
Heejun Yoon ◽  
...  

Cloud computing manages system resources such as processing, storage, and networking by providing users with multiple virtual machines (VMs) as needed. It is one of the rapidly growing fields and offers huge computational power for scientific workloads. The scientific community is now ready to work on the cloud, which is considered a resource-rich paradigm. The traditional way of executing scientific workloads on the cloud is to use virtual machines, but the emerging concept of containerization is growing rapidly and has gained popularity because of its unique features; containers are lightweight compared with virtual machines. At the same time, VMs and containers suffer performance and throughput problems introduced by the middleware technologies of virtualization and containerization. In this paper, we introduce configurations of VMs and containers for cloud-based scientific workloads so that these technologies can be used to solve scientific problems and handle their workloads. The paper also tackles the throughput and efficiency problems of VMs and containers in the cloud environment and explores efficient resource provisioning by combining four techniques: hyperthreading (HT), vCPU core selection, vCPU affinity, and isolation of vCPUs. The HEP-SPEC06 benchmark suite is used to evaluate the throughput and efficiency of VMs and containers. The proposed solution applies these four techniques to reduce the overhead of virtualization and containerization and to make virtual machines and containers more effective for scientific workloads. The results show that enabling hyperthreading, isolating CPU cores, and properly numbering and allocating vCPU cores improve the throughput and performance of both virtual machines and containers.
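A minimal sketch of the vCPU-affinity and core-isolation idea on a Linux/KVM host, not the authors' exact configuration: it keeps one logical CPU per physical core (skipping hyperthread siblings) and pins a libvirt domain's vCPUs to those cores with the real `virsh vcpupin` command. The domain name and vCPU count are illustrative.

```python
# Hedged sketch: pin a VM's vCPUs to dedicated host cores (one logical CPU per
# physical core) to approximate the paper's vCPU-affinity/isolation techniques.
# Assumes a Linux host with libvirt/KVM; "sci-vm-01" is a hypothetical domain.
import subprocess
from pathlib import Path

def primary_threads():
    """Return one logical CPU per physical core (first sibling in each group)."""
    primaries = set()
    for f in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/topology/thread_siblings_list"):
        first = f.read_text().split(",")[0].split("-")[0].strip()
        primaries.add(int(first))
    return sorted(primaries)

def pin_vcpus(domain, host_cpus):
    """Pin vCPU i of a libvirt domain to host CPU host_cpus[i] via virsh vcpupin."""
    for vcpu, cpu in enumerate(host_cpus):
        subprocess.run(["virsh", "vcpupin", domain, str(vcpu), str(cpu)], check=True)

if __name__ == "__main__":
    cores = primary_threads()
    pin_vcpus("sci-vm-01", cores[:4])   # illustrative domain name and vCPU count
```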

2019 ◽  
Vol 15 (4) ◽  
pp. 13-29
Author(s):  
Harvinder Chahal ◽  
Anshu Bhasin ◽  
Parag Ravikant Kaveri

The cloud environment is a large pool of virtually available resources that performs thousands of computational operations in real time for resource provisioning. Allocation and scheduling are the two major pillars of such provisioning with quality of service (QoS), and they involve complex modules such as identification of task requirements, resource availability, allocation decisions, and scheduling operations. In the present scenario, managing cloud resources is intricate, as service providers aim to deliver resources to users at a productive cost and time. This article presents an optimized technique for efficient resource allocation and scheduling. The proposed policy uses heuristic-based ant colony optimization (ACO) for well-ordered allocation. The algorithm, implemented in simulation, shows better results in terms of cost, time, and utilization than other algorithms.
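A minimal sketch of the general ACO idea for task-to-VM allocation, not the authors' exact policy: ants build assignments biased by pheromone and by the inverse of execution cost, and the pheromone is reinforced on low-makespan solutions. The cost matrix and parameters are illustrative.

```python
# Hedged sketch: generic ant colony optimization (ACO) for assigning tasks to
# VMs; illustrates the technique only, not the paper's heuristic or QoS model.
import random

def aco_schedule(cost, n_ants=20, n_iter=50, alpha=1.0, beta=2.0, rho=0.5, q=100.0):
    n_tasks, n_vms = len(cost), len(cost[0])
    tau = [[1.0] * n_vms for _ in range(n_tasks)]          # pheromone per (task, VM)
    best_assign, best_cost = None, float("inf")
    for _ in range(n_iter):
        solutions = []
        for _ in range(n_ants):
            assign = []
            for t in range(n_tasks):
                weights = [(tau[t][v] ** alpha) * ((1.0 / cost[t][v]) ** beta)
                           for v in range(n_vms)]
                assign.append(random.choices(range(n_vms), weights=weights)[0])
            loads = [0.0] * n_vms
            for t, v in enumerate(assign):
                loads[v] += cost[t][v]
            total = max(loads)                               # makespan of this assignment
            solutions.append((total, assign))
            if total < best_cost:
                best_cost, best_assign = total, assign
        # evaporate, then reinforce pheromone along this iteration's solutions
        tau = [[(1 - rho) * tau[t][v] for v in range(n_vms)] for t in range(n_tasks)]
        for total, assign in solutions:
            for t, v in enumerate(assign):
                tau[t][v] += q / total
    return best_assign, best_cost

if __name__ == "__main__":
    cost = [[8, 4, 6], [5, 7, 3], [6, 2, 9], [4, 4, 4]]      # illustrative task x VM times
    print(aco_schedule(cost))
```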


2020 ◽  
Vol 17 (4) ◽  
pp. 1990-1998
Author(s):  
R. Valarmathi ◽  
T. Sheela

Cloud computing is a powerful computing technology that renders flexible services to users anywhere. Resource management and task scheduling are essential aspects of cloud computing, and task scheduling remains one of its main problems. Task scheduling and resource management in the cloud become a tough optimization issue once quality-of-service requirements are considered. Much of the existing work on task scheduling focuses only on deadlines and cost optimization and neglects the significance of availability, robustness, and reliability. The main purpose of this study is to develop an optimized algorithm for efficient resource allocation and scheduling in a cloud environment using PSO and the R-factor algorithm. With PSO, tasks are scheduled to virtual machines (VMs) to reduce waiting time and improve system throughput. PSO is a technique inspired by the social and collective behavior of animal swarms in nature, in which particles search the problem space to find an optimal or near-optimal solution. A hybrid algorithm combining PSO and the R-factor has been developed to reduce processing time, makespan, and the cost of task execution simultaneously. Simulation results reveal that the proposed method offers better efficiency than previously prevalent approaches.
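A minimal PSO sketch for the scheduling part only: continuous particle positions are rounded to VM indices and the swarm minimizes makespan. The paper's R-factor term and cost weighting are not reproduced; parameters are illustrative.

```python
# Hedged sketch: basic particle swarm optimization (PSO) mapping tasks to VMs
# by rounding positions to VM indices; illustrates the PSO component only.
import random

def pso_schedule(exec_time, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    n_tasks, n_vms = len(exec_time), len(exec_time[0])

    def fitness(pos):
        loads = [0.0] * n_vms
        for t, x in enumerate(pos):
            v = min(n_vms - 1, max(0, int(round(x))))
            loads[v] += exec_time[t][v]
        return max(loads)                                    # makespan

    pos = [[random.uniform(0, n_vms - 1) for _ in range(n_tasks)] for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(n_vms - 1, max(0.0, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return [int(round(x)) for x in gbest], gbest_fit

if __name__ == "__main__":
    exec_time = [[7, 3], [4, 6], [5, 5], [2, 8]]             # illustrative task x VM times
    print(pso_schedule(exec_time))
```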


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 735-745
Author(s):  
V. Lavanya ◽  
M. Saravanan ◽  
E.P. Sudhakar

In this paper, a self-adaptive load balancing technique is proposed that uses live migration of heterogeneous virtual machines (VMs) in a Hyper-V based cloud environment. The technique is realized as a cloud-supported plugin that acts as a management activity within the infrastructure-as-a-service layer. It assists the load-balancing process so that all hypervisors are almost equally loaded once an overload status is triggered. In a cloud computing environment, load balancing plays a major role when a large number of triggered events has a high impact on system performance. The efficiency of cloud computing therefore depends on efficient, self-adjusting load balancing using live migration of VMs across clusters of nodes. The proposed load-balancing model improves performance through efficient resource utilization and helps avoid server hangs caused by server overload within an infrastructure of multiple Microsoft Hyper-V hypervisors.
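A minimal sketch of a trigger-and-select rule in the spirit of the paper, not its actual algorithm: when a hypervisor crosses an overload threshold, pick the smallest VM whose removal clears the overload and target the least-loaded host. The actual move would be issued through Hyper-V tooling (e.g., the Move-VM PowerShell cmdlet), which is outside this snippet; host/VM names and the threshold are illustrative.

```python
# Hedged sketch: decide which VM to live-migrate when a host is overloaded.
def plan_migration(hosts, threshold=0.80):
    """hosts: {host: {vm: cpu_share}}; returns (vm, src, dst) or None."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    overloaded = [h for h, l in load.items() if l > threshold]
    if not overloaded:
        return None
    src = max(overloaded, key=load.get)                      # busiest overloaded host
    dst = min(load, key=load.get)                            # least-loaded target
    # smallest VM whose move clears the source without overloading the target
    for vm, share in sorted(hosts[src].items(), key=lambda kv: kv[1]):
        if load[src] - share <= threshold and load[dst] + share <= threshold:
            return vm, src, dst
    return None

if __name__ == "__main__":
    cluster = {"hv-01": {"vm-a": 0.5, "vm-b": 0.4}, "hv-02": {"vm-c": 0.2}}
    print(plan_migration(cluster))                           # ('vm-b', 'hv-01', 'hv-02')
```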


2014 ◽  
Vol 509 ◽  
pp. 182-188
Author(s):  
Bin Chen ◽  
Zhi Jian Wang ◽  
Rong Zhi Qi ◽  
Xin Lv

Cloud computing has become another buzzword in recent years, and as research on and use of cloud systems have grown, performance has become the bottleneck of this newcomer. More and more researchers are turning their attention to analyzing the performance of cloud services. However, it is hard to extract accurate information from the different types of cloud components, such as the datacenter, hosts, and virtual machines (VMs). It is therefore important to collect sufficient raw data from cloud systems for performance analysis. In this paper, we describe an analysis framework to evaluate comprehensive performance guidelines for a cloud computing center. The analysis architecture is built on the performance agent and server interface method (PASI), which consists of a performance client (PMC), performance agent (PMA), and performance server (PMS), and we put forward a mathematical model based on the PASI information and queuing theory to forecast the idle rate and availability of the cloud environment. The results show that the PASI architecture correctly and effectively evaluates the performance of individual cloud components and of the whole cloud environment.
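The abstract does not spell out the queueing model, so the following is only a minimal sketch of one common choice, an M/M/c queue, for forecasting an idle probability and a utilisation figure from an arrival rate, a service rate, and c homogeneous servers. The variable names and the M/M/c choice are assumptions, not the authors' formulation.

```python
# Hedged sketch: M/M/c idle probability P0 and per-server utilisation rho.
from math import factorial

def mmc_metrics(lam, mu, c):
    rho = lam / (c * mu)                          # per-server utilisation
    assert 0 < rho < 1, "system must be stable (lam < c*mu)"
    a = lam / mu                                  # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    return {"idle_probability": p0, "utilisation": rho}

print(mmc_metrics(lam=8.0, mu=3.0, c=4))          # illustrative rates
```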


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jitendra Kumar Samriya ◽  
Subhash Chandra Patel ◽  
Manju Khurana ◽  
Pradeep Kumar Tiwari ◽  
Omar Cheikhrouhou

Cloud computing is the most prominent established framework; it offers access to resources and services based on large-scale distributed processing. An intensive management system is required for the cloud environment: it should gather information about all phases of task processing and ensure fair resource provisioning at the required levels of Quality of Service (QoS). Virtual machine allocation is a major issue in the cloud environment that affects energy consumption and asset utilization in distributed cloud computing. Accordingly, in this paper a multiobjective Emperor Penguin Optimization (EPO) algorithm is proposed to allocate virtual machines with attention to power utilization in a heterogeneous cloud environment. The proposed method is evaluated against the Binary Gravity Search Algorithm (BGSA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO); compared with these strategies, EPO is more energy-efficient, and the differences are significant. The proposed system has been evaluated on a Java simulation platform. The experimental outcomes show that the proposed EPO-based system is very effective in limiting energy consumption and SLA violations (SLAV) while meeting QoS requirements for capable cloud service.
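The Emperor Penguin Optimization algorithm itself is not reproduced here; the sketch below only illustrates the kind of multiobjective fitness (energy plus SLA violation) that such a VM-placement metaheuristic would minimize, evaluated with a simple random-search stand-in. The linear power model and weights are assumptions, not the authors' formulation.

```python
# Hedged sketch: a multiobjective placement fitness (energy + SLA violation)
# and a random-search stand-in for the metaheuristic that would minimise it.
import random

def fitness(placement, vm_util, idle_w=100.0, peak_w=200.0, w_energy=0.7, w_slav=0.3):
    """placement[i] = host of VM i; vm_util[i] = CPU demand of VM i (0..1)."""
    hosts = {}
    for vm, host in enumerate(placement):
        hosts[host] = hosts.get(host, 0.0) + vm_util[vm]
    energy = sum(idle_w + (peak_w - idle_w) * min(u, 1.0)
                 for u in hosts.values())                     # linear power model
    slav = sum(max(0.0, u - 1.0) for u in hosts.values())     # overcommitted capacity
    return w_energy * energy + w_slav * slav * 1000.0

def random_search(vm_util, n_hosts, n_iter=2000):
    best, best_f = None, float("inf")
    for _ in range(n_iter):
        cand = [random.randrange(n_hosts) for _ in vm_util]
        f = fitness(cand, vm_util)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

print(random_search([0.3, 0.6, 0.2, 0.5, 0.4], n_hosts=3))    # illustrative demands
```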


2020 ◽  
Vol 13 (4) ◽  
pp. 156-170
Author(s):  
Suliman Mohamed Fati ◽  
Ayman Kamel Jaradat ◽  
Ibrahim Abunadi ◽  
Ahmed Sameh Mohammed

Cloud computing, as a trending technology, has stemmed from the concept of virtualization. Virtualization makes resources available to the public without any concern for ownership or maintenance cost. In addition, the applications hosted on cloud computing platforms are highly interactive and require intensive resources. The new trend is to duplicate these applications across multiple virtual machines based on demand. Such a scheme needs efficient resource provisioning to manage the assignment of resources to multiple virtual machines properly. One issue in current resource provisioning techniques is that resources are assigned without predicting the workload of hosted applications, which causes load imbalance and wasted resources. Thus, this paper proposes a new model to predict the application workload. The experimental results show the ability of the proposed model to allocate more virtual machines and to balance the workload among the physical machines.
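The abstract does not give the prediction model, so the sketch below only shows the general idea of prediction-driven provisioning: an exponentially weighted moving average of recent request rates sizes the number of VM replicas. The smoothing factor, per-VM capacity, and headroom are illustrative assumptions.

```python
# Hedged sketch: forecast the next workload level and size the VM pool from it.
import math

def predict_next(history, alpha=0.5):
    """Exponentially weighted moving average of an observed workload series."""
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def replicas_needed(history, per_vm_capacity=100.0, headroom=1.2):
    forecast = predict_next(history)
    return max(1, math.ceil(forecast * headroom / per_vm_capacity))

print(replicas_needed([220, 260, 310, 290, 350]))   # requests/s, illustrative
```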


2021 ◽  
Author(s):  
Marta Chinnici ◽  
Asif Iqbal ◽  
ah lian kor ◽  
colin pattinson ◽  
eric rondeau

Cloud computing has seen rapid growth, and environments now provide multiple physical servers with several virtual machines running on each of them. Networks have grown larger and more powerful in recent years, and a vital problem related to this advancement is that managing networks has become increasingly complex. SNMP is one standard applied as a solution to this network management problem. This work uses SNMP to explore the protocol's capabilities and features for monitoring, controlling, and automating virtual machines and hypervisors. To this end, a stage-wise solution has been formed: the first stage uses SNMPv3, and its experimental results are fed into the second stage for further processing and advancement. The goal of the control experiments is to explore the extent of SNMP's capability to control virtual machines running in a hypervisor, also in terms of energy efficiency. The core contribution is a set of real experiments that provide empirical evidence for the relation between power consumption and virtual machines.
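A minimal sketch of an SNMPv3 GET against a hypervisor's SNMP agent using the pysnmp library, in the spirit of the paper's monitoring stage. The host address, credentials, and polled OIDs (sysDescr.0, sysUpTime.0) are illustrative; the paper's actual MIBs and control workflow are not reproduced.

```python
# Hedged sketch: poll two standard scalar OIDs from a hypervisor over SNMPv3.
from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity, getCmd,
                          usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

def poll(host, user, auth_key, priv_key):
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        UsmUserData(user, authKey=auth_key, privKey=priv_key,
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),    # sysDescr.0
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0'))))   # sysUpTime.0
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return {str(name): str(value) for name, value in var_binds}

# Hypothetical agent address and credentials:
print(poll('hypervisor-01.example.org', 'monitor', 'authpass123', 'privpass123'))
```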


Author(s):  
Marcus Tanque

Cloud computing consists of three fundamental service models: infrastructure-as-a-service, platform-as-a-service, and software-as-a-service. The technology comprises four deployment models: public cloud, private cloud, hybrid cloud, and community cloud. This chapter describes these cloud service and deployment models and the association each of them has with physical/virtual networks. Cloud service models are designed to power storage platforms, infrastructure solutions, provisioning, and virtualization. Cloud computing services are developed to support shared network resources provisioned between physical and virtual networks. These solutions are offered to organizations and consumers as utilities to support dynamic, static, network, and database provisioning processes. Vendors offer these resources to support day-to-day resource provisioning amid physical and virtual machines.


Author(s):  
Christos Stergiou ◽  
Kostas E. Psannis

Mobile cloud computing (MCC) provides an opportunity to reduce reliance on huge hardware infrastructure and to provide access to data, applications, and computational power from any place and at any time with the use of a mobile device. MCC offers a number of possibilities but also creates several challenges and issues that need to be addressed. In this work, the authors try to define the most important issues and challenges in the field of MCC by surveying the most significant works related to MCC in recent years. Given the huge benefits offered by MCC, the authors aim to achieve a safer and more trusted environment in which MCC users can operate and transfer, edit, and manage data and applications, proposing a new method based on the existing AES encryption algorithm, which is, according to the study, the encryption algorithm most relevant to a cloud environment. In conclusion, the authors suggest as future work a focus on finding new ways to achieve better integration of MCC with other technologies.
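For context, the sketch below shows standard AES usage (AES-256-GCM via the `cryptography` package), illustrating the kind of client-side encryption MCC data would receive before being sent to the cloud. This is plain AES-GCM, not the authors' modified method.

```python
# Hedged sketch: encrypt/decrypt mobile-client data with AES-256-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                       # unique nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt(key: bytes, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

key = AESGCM.generate_key(bit_length=256)
token = encrypt(key, b"document synced from a mobile device")
assert decrypt(key, token) == b"document synced from a mobile device"
```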


2014 ◽  
Vol 4 (4) ◽  
pp. 1-6 ◽  
Author(s):  
Manisha Malhotra ◽  
Rahul Malhotra

As cloud-based services become more assorted, resource provisioning becomes more challenging, and how resources should be allocated is an important issue. The cloud environment offers distinct types of virtual machines, and cloud providers distribute those services; it is necessary to adjust the allocation of services to user demand. This paper presents an adaptive resource allocation mechanism for efficient parallel processing in the cloud. Using this mechanism, the provider's job becomes easier, with the least chance of wasting resources and time.
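The abstract describes the mechanism only at a high level, so the sketch below merely illustrates an adaptive allocation loop that grows or shrinks a VM pool as measured demand moves outside a target utilisation band. Thresholds and capacities are illustrative, not from the paper.

```python
# Hedged sketch: adapt the VM pool size to observed demand each control cycle.
def adapt(pool_size, demand, vm_capacity=100.0, low=0.4, high=0.8, min_vms=1):
    utilisation = demand / (pool_size * vm_capacity)
    if utilisation > high:
        pool_size += 1                           # add a VM before queues build up
    elif utilisation < low and pool_size > min_vms:
        pool_size -= 1                           # release an under-used VM
    return pool_size

size = 2
for observed in [150.0, 190.0, 210.0, 90.0, 60.0]:   # illustrative demand samples
    size = adapt(size, observed)
    print(size)
```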

