Efficient virtual data center request embedding based on row-epitaxial and batched greedy algorithms

Author(s):  
Sivaranjani Balakrishnan, Surendran Doraiswamy

Data centers are becoming the main backbone of and centralized repository for all cloud-accessible services in on-demand cloud computing environments. In particular, virtual data centers (VDCs) facilitate the virtualization of all data center resources such as computing, memory, storage, and networking equipment as a single unit. It is necessary to use the data center efficiently to improve its profitability. The essential factor that significantly influences efficiency is the average number of VDC requests serviced by the infrastructure provider, and the optimal allocation of requests improves the acceptance rate. In existing VDC request embedding algorithms, data center performance factors such as resource utilization rate and energy consumption are not taken into consideration. This motivated us to design a strategy for improving the resource utilization rate without increasing the energy consumption. We propose novel VDC embedding methods based on row-epitaxial and batched greedy algorithms inspired by bioinformatics. These algorithms embed new requests into the VDC while reembedding previously allocated requests. Reembedding is done to consolidate the available resources in the VDC resource pool. The experimental testbed results show that our algorithms boost the data center objectives of high resource utilization (by improving the request acceptance rate), low energy consumption, and short VDC request scheduling delay, leading to an appreciable return on investment.
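Since the abstract does not give the algorithms themselves, the following minimal Python sketch only illustrates the general idea of greedy VDC embedding combined with periodic reembedding to consolidate the resource pool. The data structures, the best-fit rule, and all names (Server, embed_request, reembed) are assumptions for illustration, not the paper's row-epitaxial or batched greedy procedures.

```python
# Hypothetical sketch of greedy VDC embedding with consolidation-driven
# reembedding, inspired by (but not reproducing) the algorithms in the abstract.
from dataclasses import dataclass, field

@dataclass
class Server:
    sid: int
    cpu: float
    mem: float
    allocated: list = field(default_factory=list)  # (request_id, cpu, mem) tuples

    def free_cpu(self):
        return self.cpu - sum(c for _, c, _ in self.allocated)

    def free_mem(self):
        return self.mem - sum(m for _, _, m in self.allocated)

def embed_request(servers, req_id, cpu_need, mem_need):
    """Greedily place a VDC request on the feasible server with the least
    residual capacity after placement (best fit), which consolidates load."""
    candidates = [s for s in servers if s.free_cpu() >= cpu_need and s.free_mem() >= mem_need]
    if not candidates:
        return False
    target = min(candidates, key=lambda s: (s.free_cpu() - cpu_need) + (s.free_mem() - mem_need))
    target.allocated.append((req_id, cpu_need, mem_need))
    return True

def reembed(servers):
    """Periodically migrate allocations off lightly loaded servers onto
    heavily loaded ones, freeing contiguous capacity for new VDC requests."""
    donors = sorted(servers, key=lambda s: len(s.allocated))
    for donor in donors:
        for item in list(donor.allocated):
            req_id, c, m = item
            donor.allocated.remove(item)
            if not embed_request([s for s in servers if s is not donor], req_id, c, m):
                donor.allocated.append(item)  # roll back if nothing else fits
```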

Energies, 2020, Vol 13 (11), pp. 2880
Author(s):
Abbas Akbari, Ahmad Khonsari, Seyed Mohammad Ghoreyshi

In recent years, a large and growing body of literature has addressed the energy-efficient resource management problem in data centers. Because cooling still accounts for the major portion of total data center energy cost, thermal-aware resource management techniques have been employed to achieve additional energy savings. In this paper, we formulate the problem of minimizing the total energy consumption of a heterogeneous data center (MITEC) as a non-linear integer optimization problem. We consider both computing and cooling energy consumption and provide a thermal-aware Virtual Machine (VM) allocation heuristic based on a genetic algorithm. Experimental results show that, using the proposed formulation, up to 30% energy saving is achieved compared to thermal-aware greedy algorithms and power-aware VM allocation heuristics.
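As a rough illustration of the kind of objective such a formulation minimizes, the sketch below scores a candidate VM placement by summing an assumed linear server power model and a temperature-dependent cooling term. The power model, the COP curve, and all parameter values are assumptions for illustration, not the paper's MITEC formulation.

```python
# Hypothetical fitness function for a thermal-aware, GA-based VM placement:
# total energy proxy = computing power + cooling power. Values are assumed.
import random

IDLE_POWER, PEAK_POWER = 100.0, 250.0   # watts per server (assumed)
COP_BASE = 4.0                           # baseline cooling COP (assumed)

def computing_power(utilization):
    """Linear server power model: idle power plus utilization-proportional dynamic power."""
    return IDLE_POWER + (PEAK_POWER - IDLE_POWER) * utilization if utilization > 0 else 0.0

def cooling_power(it_power, inlet_temp):
    """Cooling power grows as the inlet temperature rises and the COP degrades."""
    cop = max(COP_BASE - 0.1 * max(inlet_temp - 20.0, 0.0), 1.0)
    return it_power / cop

def fitness(placement, vm_loads, inlet_temps):
    """placement[i] = server hosting VM i; lower total power is better."""
    util = {}
    for vm, server in enumerate(placement):
        util[server] = util.get(server, 0.0) + vm_loads[vm]
    total = 0.0
    for server, u in util.items():
        it = computing_power(min(u, 1.0))
        total += it + cooling_power(it, inlet_temps[server])
    return total

# Example: 6 VMs randomly assigned to 3 servers
loads = [0.2, 0.3, 0.1, 0.4, 0.25, 0.15]
temps = [22.0, 25.0, 27.0]
candidate = [random.randrange(3) for _ in loads]
print(fitness(candidate, loads, temps))
```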


Author(s):
Burak Kantarci, Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. Cloud computing provides access to distant computing resources via Web services, while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, the transportation of IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center, involving aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches for achieving cool data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


Author(s):
N. Fumo, V. Bortone, J. C. Zambrano

Data centers are facilities that primarily contain electronic equipment used for data processing, data storage, and communications networking. Regardless of their use and configuration, most data centers are more energy intensive than other buildings. The continuous operation of Information Technology equipment and power delivery systems generates a significant amount of heat that must be removed from the data center for the electronic equipment to operate properly. Since data centers spend up to half their energy on cooling, the cooling system becomes a key factor in energy consumption reduction strategies and alternatives for data centers. This paper presents a theoretical analysis of an absorption chiller driven by solar thermal energy as a cooling plant alternative for data centers. Source primary energy consumption is used to compare the performance of different solar cooling plants with a standard cooling plant. The solar cooling plants correspond to different combinations of solar collector arrays and a thermal storage tank, with a boiler as an auxiliary source of energy to ensure continuous operation of the absorption chiller. The standard cooling plant uses an electric chiller. Results suggest that the solar cooling plant with flat-plate solar collectors is a better option than the solar cooling plant with evacuated-tube solar collectors. However, although the solar cooling plants can decrease primary energy consumption compared with the standard cooling plant, the net present value of the cost to install and operate the solar cooling plants is higher than that of the standard cooling plant.
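The following back-of-the-envelope sketch, with assumed efficiencies, solar fraction, and site-to-source factor, shows how source primary energy might be compared between an electric-chiller plant and a solar-assisted absorption-chiller plant with boiler backup. It only illustrates the comparison metric; it is not the paper's model or data.

```python
# Illustrative source primary energy comparison (all values assumed).
def primary_energy_electric(cooling_load_kwh, cop=3.5, grid_factor=3.0):
    """Electric chiller: electricity use converted to source energy with an
    assumed site-to-source factor for grid power."""
    return cooling_load_kwh / cop * grid_factor

def primary_energy_solar(cooling_load_kwh, solar_fraction=0.6, cop_abs=0.7, boiler_eff=0.85):
    """Absorption chiller: only the non-solar fraction of the driving heat is
    supplied by the backup boiler and counted as primary (fuel) energy."""
    driving_heat = cooling_load_kwh / cop_abs
    return driving_heat * (1.0 - solar_fraction) / boiler_eff

load = 500_000  # assumed annual cooling load in kWh
print(primary_energy_electric(load), primary_energy_solar(load))
```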


Information, 2019, Vol 10 (3), pp. 113
Author(s):
Joao Ferreira, Gustavo Callou, Albert Josua, Dietmar Tutsch, Paulo Maciel

Due to the high demands of new technologies such as social networks, e-commerce and cloud computing, more energy is being consumed in order to store all the data produced and provide the high availability required. Over the years, this increase in energy consumption has brought about a rise in both environmental impacts and operational costs. Some companies have adopted the concept of a green data center, which is related to electricity consumption and CO2 emissions, according to the utility power source adopted. In Brazil, almost 70% of electrical power is derived from clean electricity generation, whereas in China 65% of generated electricity comes from coal. In addition, the value per kWh in the US is much lower than in the other countries surveyed. In the present work, we conducted an integrated evaluation of the costs and CO2 emissions of the electrical infrastructure in data centers, considering the different energy sources adopted by each country. We used a multi-layered artificial neural network, which could forecast consumption over the following months based on the energy consumption history of the data center. All these features were supported by a tool, the applicability of which was demonstrated through a case study that computed the CO2 emissions and operational costs of a data center using the energy mix adopted in Brazil, China, Germany and the US. China presented the highest CO2 emissions, with 41,445 tons per year in 2014, followed by the US and Germany, with 37,177 and 35,883 tons, respectively. Brazil, with 8,459 tons, proved to be the cleanest. This study also estimated the operational costs assuming that the same data center consumes energy as if it were located in China, Germany and Brazil; China presented the highest annual energy cost. Therefore, the best choice according to operational costs, considering the price of energy per kWh, is the US, and the worst is China. Considering both operational costs and CO2 emissions, Brazil would be the best option.
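To illustrate the kind of calculation behind such figures, the sketch below multiplies an assumed annual consumption by assumed grid emission factors and tariffs for each country. None of the numbers are the paper's data; real factors vary by year and source.

```python
# Illustrative sketch (values assumed): annual CO2 emissions and operating cost
# of one data center under different national energy mixes,
# i.e. emissions = consumption * grid emission factor, cost = consumption * tariff.
ANNUAL_CONSUMPTION_KWH = 40_000_000  # assumed data center consumption

GRID = {
    "Brazil":  {"kg_co2_per_kwh": 0.10, "usd_per_kwh": 0.12},
    "China":   {"kg_co2_per_kwh": 0.90, "usd_per_kwh": 0.10},
    "Germany": {"kg_co2_per_kwh": 0.45, "usd_per_kwh": 0.30},
    "US":      {"kg_co2_per_kwh": 0.45, "usd_per_kwh": 0.07},
}

for country, g in GRID.items():
    tons_co2 = ANNUAL_CONSUMPTION_KWH * g["kg_co2_per_kwh"] / 1000.0
    cost_usd = ANNUAL_CONSUMPTION_KWH * g["usd_per_kwh"]
    print(f"{country}: {tons_co2:,.0f} t CO2/year, ${cost_usd:,.0f}/year")
```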


2020, Vol 16 (6), pp. 155014772093577
Author(s):
Zan Yao, Ying Wang, Xuesong Qiu

With the rapid development of data centers in smart cities, reducing energy consumption while improving economic benefits and network performance has become an important research subject. In particular, data center networks do not always run at full load, which leads to significant energy waste. In this article, we focus on the energy-efficient routing problem in software-defined network-based data center networks. For the in-band control mode of software-defined data centers, we formulate a dual optimization objective of energy saving and load balancing between controllers. To cope with the large solution space, we design a deep Q-network-based energy-efficient routing algorithm that finds energy-efficient data paths for traffic flows and control paths for switches. The simulation results reveal that the algorithm trains on only part of the state space yet achieves a good energy-saving effect and load balancing in the control plane. Compared with an exact solver and the CERA heuristic algorithm, its energy-saving effect is almost the same as the heuristic's, while its calculation time is greatly reduced, especially in scenarios with a large number of flows, and it is more flexible for formulating and solving the multi-objective optimization problem.
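A full DQN implementation is beyond the scope of this summary; the sketch below only illustrates plausible reward shaping for such an agent, penalizing newly activated links (energy) and controller load imbalance, together with standard epsilon-greedy selection over candidate paths. The weights, data structures, and function names are assumptions, not the paper's algorithm.

```python
# Hypothetical reward shaping for a DQN-style energy-efficient routing agent
# in an SDN data center (all weights and structures assumed).
import random

def reward(active_links_before, active_links_after, controller_loads,
           w_energy=1.0, w_balance=0.5):
    """Higher reward for routing choices that power on fewer extra links and
    keep controller loads even."""
    newly_activated = len(active_links_after - active_links_before)
    mean_load = sum(controller_loads) / len(controller_loads)
    imbalance = max(controller_loads) - min(controller_loads)
    return -(w_energy * newly_activated + w_balance * imbalance / (mean_load + 1e-9))

def epsilon_greedy(q_values, epsilon=0.1):
    """Standard exploration rule when selecting among candidate paths."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

# Example: two candidate paths for one flow
before = {("s1", "s2"), ("s2", "s3")}
after_path_a = before | {("s3", "s4")}                 # activates one extra link
after_path_b = before | {("s3", "s5"), ("s5", "s4")}   # activates two extra links
loads = [40, 55]                                        # flows handled per controller
q = [reward(before, after_path_a, loads), reward(before, after_path_b, loads)]
print(q, "chosen path:", epsilon_greedy(q, epsilon=0.0))
```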


2014, Vol 644-650, pp. 2961-2964
Author(s):
Xiao Long Tan, Wen Bin Wang, Yu Qin Yao

With the rapid growth in data volume and Internet applications, the data center has been widely deployed as an efficient and promising infrastructure. Data centers provide a platform for network services and applications such as video streaming and cloud computing. All of these services and applications place demands on storage, computing, bandwidth, and latency. Existing data centers lack flexibility, so they provide poor support for QoS, deployability, manageability, and defense against attacks. Virtualized data centers are a good solution to these problems: compared with existing data centers, they offer better resource utilization, scalability, and flexibility.


Author(s):
Bhupesh Kumar Dewangan, Amit Agarwal, Venkatadri M., Ashutosh Pasricha

Cloud computing is a platform where services are provided through the internet, either free of charge or on a rental basis. Many cloud service providers (CSPs) offer cloud services on a rental basis. Due to the increasing demand for cloud services, the existing infrastructure needs to be scaled. However, scaling comes at the cost of heavy energy consumption due to the addition of more data centers and servers. The extra power consumption raises operating costs, which in turn affects users. In addition, CO2 emissions affect the environment as well. Moreover, inadequate allocation of resources such as servers, data centers, and virtual machines increases operational costs. This may ultimately drive customers away from the cloud service. In all, optimal usage of the resources is required. This paper proposes calculating several multi-objective functions to find an optimal solution for resource utilization and allocation through an improved Antlion optimization (ALO) algorithm. The proposed method is simulated in the CloudSim environment, computing energy consumption for different workload quantities, and it improves the performance of the multi-objective functions to maximize resource utilization. It is compared with existing frameworks, and the experimental results show that the proposed framework performs best.
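As an illustration of the kind of multi-objective fitness such a metaheuristic could minimize, the sketch below combines an assumed linear host power model with mean resource utilization in a weighted sum. The model, the weights, and the function names are assumptions for illustration, not the paper's improved ALO algorithm.

```python
# Illustrative multi-objective fitness for VM-to-host allocation:
# weighted sum of estimated energy and (inverse) mean utilization. Values assumed.
def energy(host_utils, idle_w=120.0, peak_w=300.0):
    """Linear power model summed over active hosts."""
    return sum(idle_w + (peak_w - idle_w) * u for u in host_utils if u > 0)

def mean_utilization(host_utils):
    active = [u for u in host_utils if u > 0]
    return sum(active) / len(active) if active else 0.0

def objective(placement, vm_demands, n_hosts, w_energy=0.6, w_util=0.4):
    """placement[i] = host index of VM i; lower objective is better."""
    utils = [0.0] * n_hosts
    for vm, host in enumerate(placement):
        utils[host] += vm_demands[vm]
    return w_energy * energy(utils) + w_util * (1.0 - mean_utilization(utils)) * 100.0

# Example: four VMs packed onto two of three hosts
print(objective([0, 0, 1, 1], [0.3, 0.4, 0.2, 0.5], n_hosts=3))
```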


Author(s):
Dan Comperchio, Sameer Behere

Data center cooling systems have long been burdened by high levels of redundancy requirements, resulting in inefficient system designs that satisfy a risk-averse operating environment. As attitudes, technologies, and sustainability awareness change within the industry, data centers are beginning to realize higher levels of energy efficiency without sacrificing operational security. By exploiting the increased temperature and humidity tolerances of information technology equipment (ITE), data center mechanical systems can leverage ambient conditions to operate in economization mode for longer periods of the year. Economization provides one of the most significant opportunities for data centers to reduce their energy consumption and carbon footprint. As outside air temperatures and conditions become more favorable for cooling the data center, mechanical cooling through vapor-compression cycles is reduced or entirely eliminated. One favorable method for utilizing low outside air temperatures without sacrificing indoor air quality is to deploy rotary heat wheels that transfer heat between the data center return air and outside air without introducing outside air into the white space. A corrugated metal wheel is rotated through two opposing airstreams with differing thermal gradients to provide a net cooling effect at significantly lower electrical energy than traditional mechanical cooling topologies. To further extend the impact of economization, data centers are also able to raise operating temperatures significantly beyond what is traditionally found in comfort cooling applications. Increasing the dry-bulb temperature supplied to the inlet of the ITE, together with an elevated temperature rise across the equipment, significantly reduces the energy use within a data center.
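A rough sketch of the sensible cooling a rotary heat wheel can recover, using the standard effectiveness relation Q = effectiveness x mass flow x cp x temperature difference; the effectiveness and operating conditions below are assumed values, not figures from this work.

```python
# Back-of-the-envelope estimate of heat-wheel cooling recovery (values assumed).
def heat_wheel_cooling_kw(airflow_kg_s, t_return_c, t_outside_c, effectiveness=0.75):
    """Sensible heat transferred from the data center return-air stream to the
    outside-air stream without mixing the two airstreams."""
    cp_air = 1.006  # kJ/(kg*K), specific heat of air
    delta_t = max(t_return_c - t_outside_c, 0.0)  # useful only when outside air is cooler
    return effectiveness * airflow_kg_s * cp_air * delta_t  # kW

# Example: 20 kg/s of return air at 35 C against 15 C outside air
print(heat_wheel_cooling_kw(20.0, 35.0, 15.0))  # ~302 kW of mechanical cooling avoided
```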


Author(s):
Adrienne B. Little, Srinivas Garimella

Of the total electricity consumed by the United States in 2006, more than 1% was used by data centers alone, a share that continues to rise rapidly. Of the total amount of electricity a data center consumes, at least 30% is used to cool server equipment. The present study conceptualizes and analyzes a novel paradigm consisting of integrated power, cooling, and waste heat recovery and upgrade systems that considerably lowers the energy footprint of data centers. On-site power generation equipment supplies the primary electricity needs of the data center. The microturbine-derived waste heat is recovered to run an absorption chiller that supplies the entire cooling load of the data center, essentially providing the requisite cooling without any additional expenditure of primary energy. Furthermore, the waste heat rejected by the data center itself is boosted to a higher temperature with a heat transformer, with the upgraded thermal stream serving as an additional output of the data center with no additional electrical power input. Such upgraded heat can be used for district heating in neighboring residential buildings, or as process heat for commercial end uses such as laundries, hospitals, and restaurants. With such a system, the primary energy usage of the data center as a whole can be reduced by about 23% while still addressing the high-flux cooling loads, in addition to providing a new income stream through the sale of upgraded thermal energy. Given the large and fast-escalating energy consumption patterns of data centers, this novel, integrated approach to electricity and cooling supply, and waste heat recovery and upgrade, will substantially reduce primary energy consumption for this important end use worldwide.
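The sketch below reproduces the flavor of this argument with assumed round numbers: if cooling overhead is served by an absorption chiller running on recovered microturbine heat, the primary energy of the integrated plant can be compared against a grid-supplied baseline. The efficiencies and factors are assumptions, not the paper's data or model.

```python
# Rough illustrative arithmetic for the integrated concept (all values assumed).
IT_LOAD_KW = 1000.0            # electrical load of the servers
COOLING_FRACTION = 0.30        # cooling share of conventional consumption (per abstract)
GRID_PRIMARY_FACTOR = 3.0      # site-to-source factor for grid electricity
MICROTURBINE_EFF = 0.30        # electrical efficiency of on-site generation

# Conventional case: grid electricity covers the IT load plus cooling overhead.
conventional_site_kw = IT_LOAD_KW / (1.0 - COOLING_FRACTION)
conventional_primary_kw = conventional_site_kw * GRID_PRIMARY_FACTOR

# Integrated case: fuel burned on site for the IT load only; cooling comes from
# recovered microturbine waste heat, so no extra primary energy is expended.
integrated_primary_kw = IT_LOAD_KW / MICROTURBINE_EFF

savings = 1.0 - integrated_primary_kw / conventional_primary_kw
print(f"primary energy reduction ~ {savings:.0%}")
```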


2020, Vol 3 (3), pp. 272-282
Author(s):
Yanan Liu, Xiaoxia Wei, Jinyu Xiao, Zhijie Liu, Yang Xu, ...
