
J.Refonaa

Assistant Professor: School of Computing


Sathyabama Institute Of Science And Technology

Chennai, Tamil Nadu                       

[email protected]

 

Dr. M. Lakshmi

Professor: School of Computing

Sathyabama Institute Of Science And Technology

Chennai, Tamil Nadu

[email protected]

Shilpa Roy

Student: School of Computing

Sathyabama Institute Of Science And Technology

Chennai, Tamil Nadu

[email protected]

Ria Roy

Student: School of Computing

Sathyabama Institute Of Science And Technology

Chennai, Tamil Nadu

[email protected]

 

Abstract

 Green computing is the environmentally responsible and eco-friendly use of computers and their resources. In broader terms, it is also defined as the study of designing, engineering, manufacturing, using and disposing of computing devices in a way that reduces their environmental impact. Many IT manufacturers and vendors are continuously investing in designing energy-efficient computing devices, reducing the use of dangerous materials and encouraging the recyclability of digital devices. Green computing practices came into prominence in 1992, when the Environmental Protection Agency (EPA) launched the Energy Star program.

Green computing is also known as green information technology (green IT).

Keywords: Green Computing, Power and Energy saving.

 

Introduction

Cloud computing has been gaining more and more traction in the past few years, and it is changing the way we access and retrieve information. The recent emergence of the virtual desktop has further elevated the importance of computing clouds. As a crucial technique in modern computing clouds, virtualization enables one physical machine (PM) to host many performance-isolated virtual machines (VMs). It greatly benefits a computing cloud, where VMs running various applications are aggregated together to improve resource utilization. It has been shown in previous work that the cost of energy consumption, e.g., power supply and cooling, accounts for a significant fraction of the total operating costs of a cloud. Therefore, making optimal use of the underlying resources to reduce energy consumption is becoming an important issue. To cut back energy consumption in clouds, server consolidation has been proposed to tightly pack VMs and reduce the number of running PMs; however, VM performance may be seriously affected if VMs are not appropriately placed, especially in a highly consolidated cloud. We observe that variability and burstiness of VM workloads are widespread in modern computing clouds, as evidenced in prior studies.
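To make the consolidation idea concrete, the sketch below packs VM demands onto as few PMs as possible using a simple first-fit-decreasing heuristic. This is only an illustrative baseline, not the placement algorithm proposed in this paper, and the capacity and demand figures are made-up examples.

```python
# Minimal sketch of server consolidation viewed as bin packing
# (first-fit decreasing). Capacities and demands are illustrative only.

def consolidate(vm_demands, pm_capacity):
    """Pack VM demands onto as few PMs as possible (first-fit decreasing)."""
    pms = []          # remaining capacity of each powered-on PM
    placement = {}    # VM name -> PM index
    for vm, demand in sorted(vm_demands.items(), key=lambda x: -x[1]):
        for i, free in enumerate(pms):
            if demand <= free:
                pms[i] -= demand
                placement[vm] = i
                break
        else:  # no existing PM fits, power on a new one
            pms.append(pm_capacity - demand)
            placement[vm] = len(pms) - 1
    return placement, len(pms)

if __name__ == "__main__":
    demands = {"vm1": 0.6, "vm2": 0.3, "vm3": 0.5, "vm4": 0.2}  # CPU fractions
    placement, active_pms = consolidate(demands, pm_capacity=1.0)
    print(placement, active_pms)
```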

Fig: General Diagram of Green Computing Process.

Related Works

Most prior studies [3], [15], [16] on server consolidation focused on optimizing the number of active VMs from the perspective of bin packing (BP). A heterogeneity-aware resource management system for dynamic capacity provisioning in clouds was developed in [17]. Stable resource allocation in geographically distributed clouds was considered in [18]. Network-aware virtual machine placement was considered in [19]. Scalable virtual network models were designed in [8], [20] to allow cloud tenants to explicitly specify computing and networking requirements and thus achieve predictable performance.

In a computing cloud, burstiness of workload widely exists in real applications and has become an inevitable characteristic in server consolidation [1], [4], [6], [7], [21]. Some recent works [22], [23] used stochastic bin-packing (SBP) techniques to deal with variable workloads, where the workload is modeled as a random variable. Other research [10], [24], [25] studied the SBP problem assuming that VM workload follows a normal distribution. Several other studies [26], [27] focused on workload prediction while the application runs. Different from them, in our model a lower limit of provisioning is set at the normal workload level, which effectively prevents VM interference caused by unpredictable behavior from co-located VMs.

A Markov chain was used to inject burstiness into a traditional benchmark in [7]. Several works [5], [28], [29] studied modeling and dynamic provisioning of bursty workloads in cloud computing. A previous study [30] proposed reserving a constant level of hardware resources on each PM to tolerate workload fluctuation, but it did not specify how much resource should be reserved. To the best of our knowledge, we are the first to quantify the amount of reserved resources with consideration of various, but distinct, workload burstiness.

Materials and Methodologies

The use of cloud computing is becoming widespread. Data centres have growing energy needs for power and cooling. In 2012, for every $1 spent on equipment, another $1 was spent to power and cool it. Reducing consumption therefore has a strong economic impact. There is also an ecological impact, because the environmental footprint is not negligible: in 2008, data centres emitted 116 million tons of carbon dioxide, which is more than the total emissions of Nigeria.

With this awareness, there is a growing demand for clean products and services. The green cloud thus appeared, with initiatives to reduce the energy consumption of data centres and their CO2 emissions. Such efforts have become important marketing tools.

The challenge of Green Cloud computing is therefore to minimize the use of resources while continuing to satisfy service quality and robustness requirements.


Green cloud computing is the practice of optimizing IT resources to minimize their environmental footprint. This includes the control of materials, energy, water, and other scarce resources, as well as the limitation of electronic waste from manufacturing through to the recycling of components.

Fig: Chart of CO2 emissions from data centres.

 

Working process of green computing in computer networks

There are different methods to evaluate the environmental footprint of cloud computing, each associated with a metric; here we present the most commonly used ones.

 

4.1. Power consumption

PUE (Power Usage Effectiveness) is a widely used metric for assessing power consumption; it is the ratio between the total energy consumption of the infrastructure and the consumption of the computer equipment (processors, memory, storage).

$\mathrm{PUE} = \dfrac{\text{total consumption}}{\text{computer consumption}}$

This widely used metric should not, however, be used to compare two data centres, because the way these measurements are taken is not standardized and may vary according to what is taken into account, in particular for buildings not dedicated solely to the data centre (shared energy resources). It is used only to take two measurements over time and estimate the impact of a modification.
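As a quick illustration of the ratio above, the snippet below computes PUE from two metered readings; the function name and the kWh figures are illustrative assumptions, not measured values.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy (dimensionless, >= 1)."""
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=180_000, it_equipment_kwh=100_000))  # 1.8
```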

 

4.2. CO2 emissions

Although the PUE is a fairly accurate measure, it does not take into account how the energy is produced, and therefore its carbon footprint. The Carbon Usage Effectiveness (CUE) calculates the amount of CO2 emitted per kWh used. This value depends on the technology used to generate the electricity. If this energy is not produced by the data centre itself, then all the technologies used by the country to supply electricity must be taken into account. This value also varies with the time of day and the period of the year, and these variations must be taken into account in the calculation.

$\mathrm{CUE} = \dfrac{\text{kg CO}_2}{\text{kWh}} \times \mathrm{PUE}$
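A minimal sketch of the CUE calculation, reusing the example PUE of 1.8 from above; the carbon-intensity value is an assumed example, not a measured figure.

```python
def cue(kg_co2_per_kwh, pue_value):
    """CUE = carbon intensity of the electricity (kg CO2/kWh) multiplied by PUE."""
    return kg_co2_per_kwh * pue_value

print(cue(kg_co2_per_kwh=0.4, pue_value=1.8))  # 0.72 kg CO2 per kWh of IT load
```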

4.3. Renewable energy coefficient

To determine the environmental impact of a data centre, the renewable energy coefficient is used:

$\mathrm{CER} = \dfrac{\text{renewable energy used}}{\text{total energy used}}$

It is this metric that has been used since 2012 to estimate the environmental impact of grids.
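The renewable energy coefficient is again a simple ratio; the sketch below assumes illustrative kWh values.

```python
def cer(renewable_kwh, total_kwh):
    """Renewable energy coefficient: share of total energy from renewable sources."""
    return renewable_kwh / total_kwh

print(cer(renewable_kwh=30_000, total_kwh=180_000))  # ~0.17
```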

 

4.4. Other mathematical indicators

Other indicators exist to calculate the energy efficiency of a data centre:

Data Centre infrastructure efficiency

$\mathrm{DCiE} = \dfrac{1}{\mathrm{PUE}}$
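Since DCiE is simply the inverse of PUE, a one-line helper suffices; the example PUE of 1.8 is carried over from the earlier illustration.

```python
def dcie(pue_value):
    """DCiE = 1 / PUE, i.e. the fraction of energy that reaches IT equipment."""
    return 1.0 / pue_value

print(dcie(1.8))  # ~0.56, i.e. about 56 %
```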

Thermal Design Power

The TDP is the maximum amount of power that a component can actually use.

4.5. Performance per watt

Quantifies the maximum computing capacity of a processor per watt used. This metric should be as high as possible.
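A hedged example of the performance-per-watt ratio; the throughput and power figures below are hypothetical.

```python
def performance_per_watt(operations_per_second, watts):
    """Computing capacity delivered per watt of power drawn; higher is better."""
    return operations_per_second / watts

print(performance_per_watt(operations_per_second=2.0e12, watts=250))  # 8e9 ops/W
```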

4.5.1. Compute Power efficiency

In addition to the performance per watt, the CPE takes into account the consumption outside of computation time and the percentage of use of these components: $\mathrm{CPE} = \dfrac{\text{usage percentage}}{\mathrm{PUE}}$.
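Following the formula above, a small sketch of the CPE calculation with an assumed average utilisation.

```python
def cpe(average_utilisation, pue_value):
    """CPE = average utilisation (0..1) divided by PUE, as in the formula above."""
    return average_utilisation / pue_value

print(cpe(average_utilisation=0.35, pue_value=1.8))  # ~0.19
```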

4.5.2. Energy Reuse Factor 

Calculates the amount of energy reused in data centres: $\mathrm{ERF} = \dfrac{\text{reused energy}}{\text{total energy consumed}}$.
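The ERF ratio can be computed the same way; the reused-energy figure below is illustrative.

```python
def erf(reused_kwh, total_kwh):
    """Energy Reuse Factor: reused energy as a share of total energy consumed."""
    return reused_kwh / total_kwh

print(erf(reused_kwh=9_000, total_kwh=180_000))  # 0.05
```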

4.5.3. Water Usage Effectiveness 

Measures the amount of water consumed by a data centre: $\mathrm{WUE} = \dfrac{\text{amount of water used}}{\text{total energy used}}$.
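And a matching sketch for WUE, again with made-up water and energy figures.

```python
def wue(litres_of_water, total_kwh):
    """Water Usage Effectiveness: litres of water per unit of energy used."""
    return litres_of_water / total_kwh

print(wue(litres_of_water=320_000, total_kwh=180_000))  # ~1.8 L/kWh
```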

Fig: Power Consumption in the Data centre

 

Conclusion

In a highly consolidated computing cloud, VM performance is prone to degradation without an appropriate VM placement strategy when various and distinct burstiness exists. To alleviate this problem, we have to activate more PMs, leading to more energy consumption. To balance performance and energy consumption with respect to bursty workloads, we propose to reserve a certain amount of resources on each PM, forming a queueing system to accommodate burstiness. Quantifying the amount of reserved resources is not a trivial problem. In this paper, we propose a burstiness-aware server consolidation algorithm based on a two-state Markov chain. We use a probabilistic performance constraint and show that the proposed algorithm is able to guarantee this constraint.
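To illustrate the two-state Markov chain workload model referred to above, the sketch below simulates a VM that alternates between a normal and a burst state and compares the simulated burst fraction with the analytic steady-state probability. The transition probabilities and load levels are illustrative assumptions, and this sketch is not the paper's full consolidation algorithm.

```python
import random

# Two-state Markov chain workload model: a VM alternates between a "normal"
# and a "burst" state. The transition probabilities and load levels are
# illustrative assumptions, not parameters from the paper.
P_NB = 0.1   # P(normal -> burst) per time slot
P_BN = 0.4   # P(burst -> normal) per time slot
NORMAL_LOAD, BURST_LOAD = 0.2, 0.8

def simulate(slots, seed=0):
    """Generate a bursty CPU-demand trace driven by the two-state chain."""
    rng, state, trace = random.Random(seed), "normal", []
    for _ in range(slots):
        trace.append(NORMAL_LOAD if state == "normal" else BURST_LOAD)
        flip = rng.random()
        if state == "normal" and flip < P_NB:
            state = "burst"
        elif state == "burst" and flip < P_BN:
            state = "normal"
    return trace

trace = simulate(100_000)
burst_frac = sum(load == BURST_LOAD for load in trace) / len(trace)
# Steady-state probability of the burst state for a two-state chain:
steady_burst = P_NB / (P_NB + P_BN)
print(f"simulated burst fraction {burst_frac:.3f} vs analytic {steady_burst:.3f}")
# A PM provisioned only at the normal level would be overloaded roughly this
# fraction of the time unless extra capacity is reserved for burstiness.
```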

References

[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A view of cloud computing,” Commun. ACM, vol. 53, no. 4, pp. 50–58, 2010.

[2] M.-H. Oh, S.-W. Kim, D.-W. Kim, and S.-W. Kim, “Method and architecture for virtual desktop service,” U.S. Patent 20 130 007 737, 2013.

[3] M. Marzolla, O. Babaoglu, and F. Panzieri, “Server consolidation in clouds through gossiping,” in Proc. IEEE Int. Symp. World Wireless, Mobile Multimedia Netw., pp. 1–6.

[4] W. Vogels, “Beyond server consolidation,” ACM Queue, vol. 6, no. 1, pp. 20–26, 2008.

[5] N. Bobroff, A. Kochut, and K. Beaty, “Dynamic placement of virtual machines for managing SLA violations,” in Proc. IFIP/IEE