A look at how Docker and containerisation helped Ember reduce carbon emissions and move us towards net-zero hosting.
“Internet infrastructure is definitely not green,” says Max Schulze, chairman of the Sustainable Digital Infrastructure Alliance, a nonprofit seeking to decarbonise the global digital economy. “The average utilisation rate of a server in a data centre is 11 percent. But the infrastructure around data centres is not dynamic. A low server utilisation rate still means 100 percent power consumption, as if the server is running at 100 percent utilisation.”
Coordinated, purposeful action by tech companies large and small is needed to enact the operational and systemic changes required to reduce the sector’s greenhouse gas emissions. The drivers for this change vary, from complying with local environmental laws and regulations to pursuing sustainability as a way to foster a culture of innovation. Along the way, companies like Ember have discovered that adopting container technologies can play a part in reducing their environmental footprint.
According to an analysis of Gartner data by ParkMyCloud, companies will spend $26.6 billion on idle cloud resources in 2021. Helping data centres reduce carbon emissions could have a significant and lasting impact, and containers allow Ember to do just that. We have migrated a significant number of services to containers; the servers they previously ran on had an average utilisation rate of just 4 percent. Adopting Amazon’s ECS orchestration service allowed Ember to scale back to 8 VM hosts with an average CPU utilisation rate of 50 percent. Cost savings aside, Ember’s adoption of containers equates to a 70 percent reduction in carbon emissions from electricity, and less machine wastage.
Furthermore, our ECS container hosts run on Amazon EC2 Spot Instances. Spot Instances let us take advantage of unused EC2 capacity in the AWS cloud, again meaning more efficient infrastructure utilisation and greatly reduced costs.
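As a rough sketch, a Spot-backed ECS cluster can be wired up with a capacity provider pointing at a Spot Auto Scaling group. The cluster, group, provider names, and account details below are illustrative placeholders, not Ember's actual configuration:

```shell
# Assumes an existing Auto Scaling group ("ember-spot-asg") whose launch
# template requests Spot capacity; all names and ARNs here are placeholders.

# Create a capacity provider backed by the Spot Auto Scaling group,
# letting ECS manage scaling of the underlying instances.
aws ecs create-capacity-provider \
  --name ember-spot-provider \
  --auto-scaling-group-provider \
      "autoScalingGroupArn=arn:aws:autoscaling:eu-west-1:123456789012:autoScalingGroup:example-uuid:autoScalingGroupName/ember-spot-asg,managedScaling={status=ENABLED,targetCapacity=100},managedTerminationProtection=DISABLED"

# Attach the provider to the cluster as its default capacity strategy.
aws ecs put-cluster-capacity-providers \
  --cluster ember-cluster \
  --capacity-providers ember-spot-provider \
  --default-capacity-provider-strategy capacityProvider=ember-spot-provider,weight=1
```

With `managedScaling` enabled, ECS grows and shrinks the Spot group to match the tasks scheduled on it, so hosts are only running when there is work for them.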
Automatic scaling is the ability to increase or decrease the desired count of tasks in an Amazon ECS service automatically. Amazon ECS leverages the Application Auto Scaling service to provide this functionality. For more information, see the Application Auto Scaling User Guide.
Amazon ECS publishes metrics with the service’s average CPU and memory usage. We use these metrics to scale out a service (add more tasks) to deal with high demand at peak times, and to scale in a service (run fewer tasks) to reduce costs during periods of low utilisation.
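Before a service can scale on those metrics, its desired count has to be registered with Application Auto Scaling. A minimal sketch, with placeholder cluster, service names, and task bounds:

```shell
# Register the service's DesiredCount as a scalable target with
# Application Auto Scaling, bounded between 2 and 10 tasks.
# "ember-cluster" and "ember-api" are illustrative names.
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/ember-cluster/ember-api \
  --min-capacity 2 \
  --max-capacity 10
```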
Ember’s implementation is based on target tracking scaling policies: the number of tasks a service runs is increased or decreased to keep a specific metric, in our case average CPU utilisation, at a target value. This works much like a thermostat maintaining the temperature of a home: select the desired temperature and the thermostat does the rest.
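The thermostat-style arithmetic can be sketched in a few lines. This is a simplified model of proportional target tracking, not AWS's exact internal algorithm, and the numbers are illustrative:

```python
import math

def desired_tasks(current_tasks: int, metric_value: float,
                  target_value: float, min_tasks: int, max_tasks: int) -> int:
    """Simplified target-tracking step: scale the task count in proportion
    to how far the observed metric sits from the target, then clamp to the
    service's configured bounds."""
    proposed = math.ceil(current_tasks * metric_value / target_value)
    return max(min_tasks, min(max_tasks, proposed))

# CPU at 75% against a 50% target: scale out from 4 to 6 tasks.
print(desired_tasks(4, 75.0, 50.0, 2, 10))  # 6
# CPU at 20% against a 50% target: scale in from 4 to 2 tasks.
print(desired_tasks(4, 20.0, 50.0, 2, 10))  # 2
```

When the metric sits at the target, the proposed count equals the current count and no scaling happens, which is exactly the thermostat behaviour described above.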
However, we can also apply different policies where required, for example scheduled scaling, which increases or decreases the number of tasks a service runs based on the date and time.
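A scheduled action can be sketched like this; the service name, action name, schedule, and capacities are placeholders rather than a real policy of ours:

```shell
# Raise the minimum task count ahead of a daily 08:00 UTC peak.
# Names, schedule, and capacities are illustrative only.
aws application-autoscaling put-scheduled-action \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/ember-cluster/ember-api \
  --scheduled-action-name morning-peak \
  --schedule "cron(0 8 * * ? *)" \
  --scalable-target-action MinCapacity=4,MaxCapacity=10
```

A matching evening action can then lower `MinCapacity` again, so the service only holds extra capacity during the window it is actually needed.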