
We are generating astounding amounts of data. Estimates show that the amount of data worldwide will grow by roughly 60% per year, reaching 175 zettabytes by 2025, up from 33 zettabytes in 2018. Roughly half of it will be stored in public clouds and the rest in other types of data centers.

The digital economy requires a massive infrastructure

Today's digital economy comprises services such as Uber, Office365 and Netflix, built on top of the data we store in the cloud. But what is the cloud? Storing, processing and transporting information across all the digital services we use requires a massive infrastructure. The cloud is nothing more than a vast number of data centers scattered around the world, covering large areas of land and operating millions of servers that process data on behalf of those services, in turn serving billions of users.


Aerial view of a large-scale data center in San Diego (Source: Wordpress)

The need to store and process this exponentially increasing amount of data has led to a golden period of development in the data center industry. The number of hyperscale data centers powering the cloud has doubled in the last five years alone. Today there are 8 million data centers and 500 hyperscale data centers in the world, but three times as many hyperscale data centers will be needed to manage the expected amounts of data in the years to come.

Why is that a problem?

There are 5 billion Google searches performed each day, each activating 6-8 servers. In terms of greenhouse gases, each search is equivalent to about 0.2 grams of CO2. Despacito going viral, reaching 6.3 billion views, alone burned as much energy as 40,000 U.S. homes use in a year.
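A quick back-of-the-envelope calculation, using only the figures quoted above, shows what search alone adds up to:

```python
# Back-of-the-envelope estimate based on the figures quoted above.
searches_per_day = 5e9      # Google searches per day
co2_per_search_g = 0.2      # grams of CO2 per search

daily_co2_tonnes = searches_per_day * co2_per_search_g / 1e6   # grams -> tonnes
print(f"{daily_co2_tonnes:,.0f} tonnes of CO2 per day")        # ~1,000 tonnes
print(f"{daily_co2_tonnes * 365:,.0f} tonnes of CO2 per year") # ~365,000 tonnes
```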

Add on top of this all the other services powered by the cloud, such as internet banking, ticketing services, Netflix, Spotify, Uber and Office365, and you'll soon realize that the data centers powering the cloud have become as mission critical and essential as water and electricity.

We depend on it at work to send and receive email, and to share and collaborate on documents, code, schematics and files. We use it to buy shoes and other merchandise online, and we communicate with colleagues, friends and family both near and far via services that run on the cloud. We've become dependent on the cloud both at work and at home; when cloud services go down, the world halts. This raises a number of questions about environmental impact.

What is the problem?

In addition to the actual housing of the data centers, you need to fill each data center with lots and lots of hardware, hundreds of miles of cable, refrigerant and water for cooling, batteries and diesel generators to mitigate power outages, and last but not least, an enormous amount of energy to operate the hardware. I'll describe some of the issues below.

Hardware

It is hard to imagine how much data 175 ZB represents, but if you were able to store it all on Blu-ray discs, you'd have a stack of discs that could reach the moon 23 times. To store it on hard drives, you would need 11 billion drives(!). Imagine the resources necessary to manufacture just the disk drives, and you will begin to realize that data centers consume a lot of resources.
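These figures check out roughly if we assume single-layer 25 GB Blu-ray discs and 16 TB hard drives (the capacities are my assumptions, not part of the original estimate):

```python
# Rough sanity check of the storage figures above. Capacities are assumptions:
# 25 GB per single-layer Blu-ray disc, 1.2 mm disc thickness, 16 TB per hard drive.
total_data_bytes = 175e21                       # 175 ZB

discs = total_data_bytes / 25e9                 # ~7 trillion discs
stack_km = discs * 1.2e-3 / 1000                # ~8.4 million km of stacked discs
print(f"Stack reaches the moon {stack_km / 384_400:.0f} times")   # ~22

drives = total_data_bytes / 16e12
print(f"Hard drives needed: {drives / 1e9:.0f} billion")          # ~11 billion
```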

Since data centers require so much hardware, it is natural to assess the lifecycle impact of the hardware used in them. Hardware needs to be manufactured, shipped, installed, operated and decommissioned, and each of these processes requires resources. The manufacturing process for computer electronics, for instance, requires the mining and extraction of both common and rare earth metals, such as neodymium, which is used in traditional hard drives, and terbium, which is used in solid state electronics. Although "rare earth metals" is a somewhat misleading term, the extraction and processing of ore to produce concentrates usually involves heavy machinery and chemicals, and leaves a significant mark on the environment through the open pits the mines carve into the landscape.

Hardware also has a fairly limited life span; it is usually replaced every 3-5 years to keep up with performance requirements and reliability concerns. Decommissioning hardware involves handling hazardous materials such as the refrigerants used in cooling systems, disposing of electronic waste, and recycling electronics. Recycling in particular has a big impact on the life cycle assessment of the hardware.

Real estate

Data centers require a lot of space. Hyperscale data centers need hundreds of acres of land to accommodate the hardware. One of the issues is that cloud providers usually want data centers to be as close as possible to most end users, in order to provide the best performance and lowest latency. This has led to a competition for real estate between data centers and humans, and in Amsterdam, which houses 30% of all hyperscale data centers in Europe, it led to a temporary ban on building new ones.

Energy

Servers in data centers are on 24/7/365. As each data center can house hundreds of thousands of servers, they naturally consume vast amounts of energy. According to estimates from the International Energy Agency, the world's data centers account for about 200 TWh of energy consumption per year, roughly 1% of total worldwide electricity consumption.

1% might not sound like a big deal, but that is equivalent to the total electricity consumption of Indonesia, a country of 250 million citizens and the 4th most populous country in the world. More importantly, this energy might come from fossil fuels such as coal. If all of it were produced by coal plants, it would result in annual emissions of 1.2 billion metric tonnes of CO2. In comparison, the entire aviation industry emits roughly 0.9 billion metric tonnes of CO2 per year.
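As a sanity check on the 1% figure, assuming a total worldwide electricity consumption of roughly 23,000 TWh per year (my assumption, not from the IEA estimate above):

```python
# Sanity check: data centers' share of global electricity consumption.
data_center_twh = 200            # IEA estimate quoted above
world_electricity_twh = 23_000   # assumed global annual electricity consumption

share = data_center_twh / world_electricity_twh
print(f"{share:.1%}")            # ~0.9%, i.e. roughly 1%
```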

Although many cloud providers have pledged to decarbonize their data centers, none have ditched fossil fuels entirely, and most of them rely on renewable energy credits rather than directly using renewable energy sources such as solar or wind power. Greenpeace has been following up on the cloud providers' pledges, and there are big differences, as laid out in this recent Wired article.

Some alarmist predictions indicate that, due to the increasing number of data centers in the world, their total energy consumption could rise to as much as 8% of worldwide electricity consumption by 2030.

That's why the environmental impact of the cloud is first and foremost dictated by the amount and source of the energy it consumes.

Is there some light at the end of the tunnel?

Maybe. In order to answer that, we need to look at what has been done until now.

Increased energy efficiency on component level

Although various sources report increased energy consumption due to the growing number of data centers, the International Energy Agency disputes this, estimating that the total energy consumption of data center operations worldwide will remain stable for at least the next three years, despite a projected 80% increase in data center traffic and a 50% increase in data center workloads.

This requires some explanation. Enter Moore's Law. For the last 50 years, driven by improvements in manufacturing processes, we've been able to shrink transistors at a steady exponential rate, roughly every 18 months. In turn, that has resulted in an exponentially larger number of transistors in each new processor generation, substantially improving computing power. The same scaling has also reduced power consumption at a comparable rate. So even though we've seen a rapid increase in the number of data centers, these technology improvements have offset much of the growth in power consumption.

However, these gains have lately come to a halt. We are getting close to physical limits on how small a transistor can be manufactured. The smallest transistors today measure 7 nm; in comparison, a strand of human DNA is about 2.5 nm wide. The processor industry has responded by increasing the number of cores in each processor and improving computing power through parallelization. Today you can find processors with up to 64 cores, and GPUs with hundreds of cores, enabling parallelization of specific tasks. But not all computations are parallelizable, and this approach too will hit a ceiling in the future.

A third technology improvement has been dynamic scaling based on demand. Since servers must always be on, being able to automatically scale down processor speed can have a tremendous impact on power consumption when the need for computing is low. Most processors and servers nowadays can idle or throttle down when not in use, consuming only a fraction of the energy.
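To illustrate why this matters, here is a small sketch with made-up numbers: a server drawing 400 W under load and 100 W when idle, busy 30% of the time.

```python
# Illustration with assumed numbers: 400 W under load, 100 W idle, 30% busy.
busy_fraction = 0.3
power_busy_w, power_idle_w = 400, 100

avg_with_throttling = busy_fraction * power_busy_w + (1 - busy_fraction) * power_idle_w
avg_without_throttling = power_busy_w  # always running at full speed

print(f"{avg_with_throttling:.0f} W vs {avg_without_throttling} W")  # 190 W vs 400 W
```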

Lastly, replacing older hard drives with SSD drives reduces the energy consumption by half.

Increased energy efficiency on data center level

Energy is the single largest expense in data center operations, and this is particularly true for the hyperscale operators of public clouds. These companies have invested heavily in improving their infrastructure in order to reduce power bills. A standard industry measure is the power usage effectiveness (PUE) of a data center: the ratio of the total power required to run the entire facility to the power consumed directly by compute and storage. While smaller data centers still commonly measure PUE values greater than 2, large hyperscale cloud data centers have decreased this value over the past 10 years and are beginning to record PUE values of 1.1 or less, very close to the theoretically perfect PUE of 1.0.
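In code, the metric is simply a ratio; the facility numbers below are hypothetical and only meant to show how the two PUE levels mentioned above compare.

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    return total_facility_power_kw / it_equipment_power_kw

# Hypothetical facilities, for illustration only.
print(pue(total_facility_power_kw=2_000, it_equipment_power_kw=1_000))  # 2.0, typical smaller facility
print(pue(total_facility_power_kw=1_100, it_equipment_power_kw=1_000))  # 1.1, hyperscale facility
```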

What will the future bring?

In the past decade, manufacturing improvements and targeted efforts to improve energy efficiency at both the component level and the data center level, via reductions in PUE, have offset the growth in total data center energy consumption, despite strong growth in both hyperscale and regular data centers.

No more low-hanging fruits

However, now that we are starting to hit physical and theoretical limits, these low-hanging fruits are gone. The shift away from small, inefficient data centers towards much larger cloud and hyperscale data centers seems evident. The Lawrence Berkeley National Laboratory estimated that if 80 percent of servers in the U.S. were moved to optimized hyperscale facilities, their energy usage would drop by 25 percent. The IEA predicts that this trend is already under way, as illustrated in the chart below.

IEA, "Global data centre energy demand by data centre type", IEA, Paris https://www.iea.org/data-and-statistics/charts/global-data-centre-energy-demand-by-data-centre-type

Continued efforts on improving energy efficiency

Meanwhile, these hyperscale operators continue to innovate. Google, for instance, entered into a collaboration with DeepMind to improve data center cooling via machine learning, and recently launched a fully automated solution for their data centers, achieving a PUE of 1.06 at certain facilities.

A typical day of PUE (power usage effectiveness) with ML turned on and off. Source: DeepMind

The cloud vendors also continue to improve the runtimes, virtualization, compression and software that run our workloads on top of their hardware, improving overall computation density. For instance, Google recently launched a new task scheduler which assigns resources dynamically, increasing hardware utilization in massively parallel environments. Microsoft has done substantial work to improve performance in their .NET Core libraries for the same reasons.

Why is this important and what can you do?

Resistance to both the cloud and the Borg is futile. Trying to keep people from watching viral videos, running Google searches or using online services obviously serves no purpose.

However, for those of us in the industry of building such services, there lies a responsibility to inform our leaders, customers and decision makers about the environmental impact of their decisions and of where our workloads run.

As part of this advent calendar, our CTO wrote an article about private PaaS being considered harmful. He mainly argued the benefits of public clouds over private clouds and data centers. I hope this article has contributed a new perspective: private clouds are not only harmful from an innovation perspective, but from an environmental one as well, since hyperscale clouds continue to innovate not only on the breadth of services, but also on energy efficiency at scale.

However, there also lies a responsibility on all of us developers to use libraries, coding techniques and compression algorithms that consume less storage and less energy. Mobile developers are well aware of the power constraints imposed by batteries. It's time the rest of us followed suit and contributed by lowering the data storage and processing demands of our workloads. The benefit? Reducing storage, memory and CPU for your workloads also has an economic upside: you pay less money to the cloud vendor. Win-win.
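As one small, concrete example of the kind of low-effort win most of us have available, here is a sketch that compresses a JSON payload before storing or shipping it; the data is made up, but the ratio is typical for repetitive text.

```python
# Compressing a JSON payload before storing or transferring it.
import gzip
import json

# Made-up records; real payloads with repetitive structure compress similarly well.
records = [{"id": i, "name": f"user-{i}", "active": i % 2 == 0} for i in range(10_000)]
raw = json.dumps(records).encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw):,} bytes, gzipped: {len(compressed):,} bytes "
      f"({len(compressed) / len(raw):.0%} of the original)")
```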

