Integrated Solving Strategy for Cloud Computing

2010 ◽  
Vol 44-47 ◽  
pp. 3299-3303 ◽  
Author(s):  
Ming Ye ◽  
Jun Zhou ◽  
Da Fei Xia ◽  
Wei Yao Jia

Cloud computing, an emerging computing model in which machines in large data centers deliver services in a scalable manner, has become popular with corporations in need of inexpensive, large-scale computing. However, studies of integrated solving strategies remain rare. In this paper, we propose a novel integrated solving strategy for cloud computing. To this end, we present the cloud computing architectures, platforms, and applications that deliver services, meet the needs of their constituents, and support the information and services covered by this integrated strategy. We also argue that an integrated solving strategy for cloud computing is an essential part of the government IT environment.

2021 ◽  
Vol 12 (1) ◽  
pp. 74-83
Author(s):  
Manjunatha S. ◽  
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world with huge numbers of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs. Thus, optimizing the energy consumption of servers and networks in data centers can reduce operational costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is therefore to consolidate computation and communication onto a smaller number of servers and network devices and then power off as many of the unneeded servers and network devices as possible.
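
A minimal sketch of the consolidation idea described above, assuming normalized CPU demands and uniform server capacity; first-fit decreasing bin packing is one common heuristic for this, not necessarily the method studied in the paper.

```python
# Minimal consolidation sketch: pack VM loads onto as few servers as possible
# (first-fit decreasing), so the remaining servers can be powered off.
# VM demands and the server capacity are illustrative assumptions.

def consolidate(vm_loads, server_capacity):
    """Return a list of servers, each a list of VM loads assigned to it."""
    servers = []                                  # each entry: loads placed on one server
    for load in sorted(vm_loads, reverse=True):   # place the largest VMs first
        for placed in servers:
            if sum(placed) + load <= server_capacity:
                placed.append(load)
                break
        else:                                     # no existing server fits this VM
            servers.append([load])
    return servers

if __name__ == "__main__":
    vms = [0.3, 0.5, 0.2, 0.7, 0.1, 0.4]          # normalized CPU demands
    active = consolidate(vms, server_capacity=1.0)
    print(f"active servers: {len(active)} -> {active}")
```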


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. Cloud computing provides access to distant computing resources via Web services, while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone adds to the energy consumption of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to energy-efficient data centers, which can be mainly grouped into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


2015 ◽  
pp. 648-659
Author(s):  
Yale Li ◽  
Yushi Shen ◽  
Yudong Liu

Cloud Computing has the potential to trigger a major computing model transformation for the IT industry. This chapter briefly describes the business and technical benefits of Cloud Computing and explains the technical challenges in Cloud Computing, such as the network bottleneck. One of the solutions to address the network problem is the Content Delivery Network (CDN). Here, the basics of Akamai's CDN technology are reviewed. Then, the authors conduct a CDN experiment in the Microsoft public cloud, Windows Azure, to demonstrate the benefits of CDN integration with the cloud. The results show a significant gain in large-data downloads when a CDN is used. Finally, a couple of academic research ideas for future improvements to the CDN model are summarized.
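
The chapter's exact measurement setup is not reproduced here; the sketch below only illustrates the kind of comparison the experiment implies, timing the same large file fetched from an origin storage endpoint and from a CDN edge endpoint. Both URLs are placeholders, not the ones used in the chapter.

```python
# Rough sketch of a CDN-vs-origin download comparison.
# Both URLs are placeholders; substitute real storage/CDN endpoints before running.
import time
import urllib.request

ORIGIN_URL = "https://example-account.blob.core.windows.net/data/large.bin"  # placeholder
CDN_URL    = "https://example-endpoint.azureedge.net/data/large.bin"         # placeholder

def timed_download(url):
    """Fetch the whole object and return (bytes received, elapsed seconds)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        size = len(resp.read())
    return size, time.perf_counter() - start

for label, url in [("origin", ORIGIN_URL), ("cdn", CDN_URL)]:
    size, elapsed = timed_download(url)
    print(f"{label}: {size / 1e6:.1f} MB in {elapsed:.2f} s")
```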


2018 ◽  
Vol 7 (4.6) ◽  
pp. 13
Author(s):  
Mekala Sandhya ◽  
Ashish Ladda ◽  
Dr. Uma N Dulhare ◽  

In this generation of the Internet, information and data grow continuously across a wide range of Internet services and applications; hundreds of billions, even trillions, of web indexes exist. Such large data gives people a wealth of information while at the same time making it harder to discover useful knowledge within it. Cloud computing can provide the infrastructure for large data. Cloud computing has two significant characteristics of distributed computing: scalability and high availability. Scalability means the platform can seamlessly extend to large-scale clusters. High availability means cloud computing can tolerate node errors, so node failures do not prevent a program from running correctly. Combining cloud computing with data mining enables significant data processing on high-performance machines. Mass data storage and distributed computing provide a new method for mass data mining and become an effective solution to the distributed storage and efficient computation that data mining requires.


Author(s):  
Paul T. Jaeger ◽  
Jimmy Lin ◽  
Justin M. Grimes ◽  
Shannon N. Simmons

Cloud computing – the creation of large data centers that can be dynamically provisioned, configured, and reconfigured to deliver services in a scalable manner – places enormous capacity and power in the hands of users. As an emerging new technology, however, cloud computing also raises significant questions about resources, economics, the environment, and the law. Many of these questions relate to geographical considerations related to the data centers that underlie the clouds: physical location, available resources, and jurisdiction. While the metaphor of the cloud evokes images of dispersion, cloud computing actually represents centralization of information and computing resources in data centers, raising the specter of the potential for corporate or government control over information if there is insufficient consideration of these geographical issues, especially jurisdiction. This paper explores the interrelationships between the geography of cloud computing, its users, its providers, and governments.


2017 ◽  
Vol 27 (3) ◽  
pp. 605-622 ◽  
Author(s):  
Marcin Markowski

In recent years, elastic optical networks have been perceived as a prospective choice for future optical networks due to better adjustment and utilization of optical resources than in traditional wavelength division multiplexing networks. In the paper we investigate the elastic architecture as the communication network for distributed data centers. We address the problem of optimizing routing and spectrum assignment for large-scale computing systems based on an elastic optical architecture; in particular, we concentrate on optimizing anycast user-to-data-center traffic. We assume that the computational resources of data centers are limited. For this offline problem we formulate an integer linear programming model and propose a few heuristics, including a meta-heuristic algorithm based on a tabu search method. We report computational results, presenting the quality of approximate solutions and the efficiency of the proposed heuristics, and we also analyze and compare some data center allocation scenarios.
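
The ILP model and heuristics themselves are not reproduced here; the sketch below is a generic tabu-search skeleton of the kind such a meta-heuristic builds on, with placeholder cost and neighborhood functions standing in for the routing-and-spectrum assignment objective and moves.

```python
# Generic tabu-search skeleton. cost() and neighbors() are placeholders: in the
# paper they would score a routing-and-spectrum assignment and perturb the
# data-center / path / spectrum choice of a single anycast demand.
import random
from collections import deque

def tabu_search(initial, cost, neighbors, iterations=200, tabu_len=20):
    best = current = initial
    best_cost = cost(best)
    tabu = deque(maxlen=tabu_len)              # recently visited solutions
    for _ in range(iterations):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)    # best non-tabu neighbor
        tabu.append(current)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

if __name__ == "__main__":
    # Toy usage: minimize the sum of a non-negative integer vector.
    cost = lambda s: sum(s)
    neighbors = lambda s: [tuple(max(0, x + random.choice((-1, 1))) if i == j else x
                                 for i, x in enumerate(s))
                           for j in range(len(s))]
    print(tabu_search((5, 7, 3), cost, neighbors))
```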


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Xiaoying Wang ◽  
Xiaojing Liu ◽  
Lihua Fan ◽  
Xuhan Jia

As cloud computing offers services to many users worldwide, pervasive applications from customers are hosted by large-scale data centers. On such platforms, virtualization technology is employed to multiplex the underlying physical resources. Since the incoming loads of different applications vary significantly, it is important and critical to manage the placement and resource allocation schemes of the virtual machines (VMs) in order to guarantee the quality of service. In this paper, we propose a decentralized virtual machine migration approach inside the data centers of cloud computing environments. The system models and power models are defined and described first. Then, we present the key steps of the decentralized mechanism, including the establishment of load vectors, load information collection, VM selection, and destination determination. A two-threshold decentralized migration algorithm is implemented to further reduce energy consumption while maintaining the quality of service. We examine the effect of our approach through performance evaluation experiments, in which the thresholds and other factors are analyzed and discussed. The results illustrate that the proposed approach can efficiently balance the loads across different physical nodes and can also reduce the power consumption of the entire system.
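
A minimal sketch of a two-threshold migration decision of the kind described: hosts above an upper utilization threshold offload VMs, and hosts below a lower threshold try to hand off all of their VMs so they can be powered down. The thresholds, loads, and host names are illustrative assumptions, not the values or full decentralized mechanism of the paper.

```python
# Two-threshold migration sketch: relieve hosts above UPPER, then drain hosts
# below LOWER so they can be switched off. All numbers are illustrative.

LOWER, UPPER = 0.2, 0.8            # utilization thresholds (fractions of capacity)

def best_target(hosts, util, src, vm, prefer):
    """Pick a destination (by the `prefer` rule) that can absorb vm without overload."""
    fits = [h for h in hosts if h != src and util[h] + vm <= UPPER]
    return prefer(fits, key=lambda h: util[h]) if fits else None

def plan_migrations(hosts):
    """hosts: dict host -> list of VM CPU loads. Returns a list of (vm, src, dst) moves."""
    util = {h: sum(v) for h, v in hosts.items()}
    moves = []
    # Phase 1: relieve overloaded hosts, preferring the least-loaded destination.
    for src, vms in hosts.items():
        while util[src] > UPPER and vms:
            vm = min(vms)
            dst = best_target(hosts, util, src, vm, min)
            if dst is None:
                break
            vms.remove(vm); hosts[dst].append(vm)
            util[src] -= vm; util[dst] += vm
            moves.append((vm, src, dst))
    # Phase 2: drain lightly loaded hosts onto busier ones so they can power off.
    for src in [h for h, u in util.items() if 0 < u <= LOWER]:
        for vm in list(hosts[src]):
            dst = best_target(hosts, util, src, vm, max)
            if dst is None:
                break
            hosts[src].remove(vm); hosts[dst].append(vm)
            util[src] -= vm; util[dst] += vm
            moves.append((vm, src, dst))
    return moves

if __name__ == "__main__":
    cluster = {"pm1": [0.45, 0.4], "pm2": [0.1], "pm3": [0.3], "pm4": [0.2]}
    print("moves:", plan_migrations(cluster))
    print("can power off:", [h for h, vms in cluster.items() if not vms])
```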


2014 ◽  
Vol 513-517 ◽  
pp. 1406-1413
Author(s):  
Tan Shuang ◽  
Jian Feng Zhang ◽  
Zhi Kun Chen

Several trends are opening up the era of cloud computing. It moves application software and databases to centralized large data centers, where the management of data and services may not be fully trustworthy. This work studies the problem of ensuring the integrity of data storage in cloud computing. We use RSA's homomorphic property to construct a protocol of provable data possession. In our protocol, we can aggregate multiple provable-data-possession proofs into one, reducing the communication overhead. While prior work on ensuring remote data integrity often lacks specific implementations, this paper achieves an effective proof-of-storage protocol. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.
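
The protocol itself is not reproduced here; the toy sketch below only illustrates the RSA property it relies on: because textbook RSA signing is multiplicative, the product of per-block tags verifies against the product of the blocks, which is what allows multiple possession proofs to be aggregated into one. Tiny parameters, no hashing or padding, so this is illustrative only and not a secure implementation.

```python
# Toy illustration of the RSA multiplicative homomorphism behind proof
# aggregation: sig(m1) * sig(m2) mod N equals sig(m1 * m2 mod N).
# Tiny parameters, no padding or hashing -- NOT secure, not the paper's protocol.

p, q = 61, 53                    # toy primes
N = p * q                        # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                           # public exponent
d = pow(e, -1, phi)              # private exponent (Python 3.8+ modular inverse)

def sign(m):                     # "tag" a data block m
    return pow(m, d, N)

def verify(sig, m):
    return pow(sig, e, N) == m % N

blocks = [42, 1337, 7]           # toy data blocks
tags = [sign(m) for m in blocks]

# Aggregate: the product of the tags is a valid tag for the product of the blocks.
agg_tag, agg_msg = 1, 1
for m, t in zip(blocks, tags):
    agg_tag = (agg_tag * t) % N
    agg_msg = (agg_msg * m) % N

print("aggregated proof verifies:", verify(agg_tag, agg_msg))   # True
```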


2019 ◽  
Vol 5 ◽  
pp. e211
Author(s):  
Hadi Khani ◽  
Hamed Khanmirza

Cloud computing technology has been a game changer in recent years. Cloud computing providers promise cost-effective, on-demand computing resources to their users. They run user workloads as virtual machines (VMs) in large-scale data centers consisting of a few thousand physical servers. Cloud data centers face highly dynamic workloads varying over time, with many short tasks that demand quick resource management decisions. These data centers are large in scale, and workload behavior is unpredictable. Each incoming VM must be assigned to a suitable physical machine (PM) in order to keep a balance between power consumption and quality of service. The scale and agility of cloud computing data centers are unprecedented, so previous approaches fall short. We suggest an analytical model for cloud computing data centers when the number of PMs in the data center is large. In particular, we focus on the assignment of VMs onto PMs regardless of their current load. For exponential VM arrivals with generally distributed sojourn times, we calculate the mean power consumption. We then show that the minimum power consumption under a quality-of-service constraint is achieved by randomized assignment of incoming VMs onto PMs. Extensive simulation supports the validity of our analytical model.
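
A small Monte-Carlo sketch in the spirit of the setting described: Poisson VM arrivals, a general (here lognormal) sojourn-time distribution, uniformly random VM-to-PM assignment, and a linear per-PM power model. All parameters and the power model are illustrative assumptions and do not reproduce the paper's analytical model.

```python
# Monte-Carlo sketch: estimate mean power under random VM-to-PM assignment with
# Poisson arrivals and a general sojourn-time distribution. Illustrative only.
import random

random.seed(1)

M = 100                          # physical machines
LAMBDA = 40.0                    # VM arrival rate (per unit time)
T = 200.0                        # simulated horizon
P_IDLE, P_DYN = 100.0, 150.0     # watts: idle power, extra power per unit load
VM_LOAD = 0.1                    # CPU fraction one VM uses on its PM

def sojourn():                   # general (here lognormal) sojourn time, mean ~1
    return random.lognormvariate(-0.5, 1.0)

# Generate VMs: (arrival, departure, PM chosen uniformly at random).
vms, t = [], 0.0
while True:
    t += random.expovariate(LAMBDA)
    if t > T:
        break
    vms.append((t, t + sojourn(), random.randrange(M)))

# Sample time points in steady state and average the instantaneous power.
samples, total = 1000, 0.0
for _ in range(samples):
    when = random.uniform(T * 0.25, T)           # skip the warm-up period
    load = [0.0] * M
    for arr, dep, pm in vms:
        if arr <= when < dep:
            load[pm] += VM_LOAD
    total += sum(P_IDLE + P_DYN * min(l, 1.0) for l in load if l > 0)

print(f"estimated mean power: {total / samples:.0f} W")
```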

