Information Technology has become so much a part of our world that it would be virtually impossible to even imagine life without it. It has penetrated and integrated with literally every facet of human existence and endeavor. One of the most remarkable features of the growth of IT is the way it gathers momentum.
The more we use IT, the greater the demand on IT infrastructure. This leads to the development of newer technologies and techniques that make computing and communication more efficient and powerful, which in turn drives the growth of infrastructure that puts even more computing muscle within the reach of more and more people. This in turn leads to greater and newer uses for it, which creates even more demand, prompting the creation of even more robust and powerful infrastructure, and so it goes on, gathering mass and pace, apparently without limits.
Picture Credit: Flickr, Daniel X. O'Neil
One of the breakthroughs that facilitated the growth and spread of IT was the development of communication and networking technologies that enabled virtually unlimited numbers of relatively inexpensive computers to work as a team. This resulted in the sort of computing power that was, earlier, the exclusive domain of powerful, expensive high-end computers, but at a fraction of the cost. Moreover, these clusters are scalable; when more computing capacity is needed, just plug more machines into the network. This computing model lies at the very foundation of the most recent spurt in IT penetration worldwide. It is also the enabler of phenomena such as cloud computing.
For all the growth and evolution of infrastructure, however, IT still wouldn’t be as all-pervasive as it is today if it weren’t for the simultaneous evolution of our skills in optimally using its capabilities to store, sort, transport, process and manage data. Although huge strides have been made on this front, much remains to be achieved.
Picture Credit: Flickr, Danard Vincente
For example, take the case of the enormous volumes of data that have to be processed by, say, a search engine such as Google. This data is generated and stored on computers and networks spread around the globe. If all of it were transferred to a central location for processing, the costs incurred in using communication networks for the transfer would be crippling.
In order to avoid this, the process itself, i.e. the program required to process the data, is moved to the machine or cluster of machines that holds the data. The results from all the processing machines are then collected and correlated through a process referred to as “reducing” to derive the final solution. Known as MapReduce for the two parts of the process, Mapping the computation over the data and Reducing the results of the distributed processing, this programming and implementation model has come to be widely adopted for processing large data sets using equally large, distributed clusters of relatively inexpensive computers.
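The model can be illustrated with a minimal, single-machine word-count sketch in Python. This is only a toy stand-in for a real framework such as Hadoop; in practice the map function runs in parallel on the machines holding the data, and the function names here are illustrative:

```python
from collections import defaultdict

def map_phase(documents):
    """Each mapper emits (word, 1) pairs for its local chunk of data."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """The reducer collects the emitted pairs and sums the counts per key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
result = reduce_phase(map_phase(docs))
print(result["the"])  # 3
print(result["fox"])  # 2
```

In a real deployment, the pairs emitted by the mappers are shuffled across the network so that all counts for a given word reach the same reducer; that shuffle is the data-transfer step whose cost the findings below touch on.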
However, while it helps minimize the cost of data transfer for processing, it creates another problem. In order to ensure that sufficient computing power is always available for any requirement, vast arrays of machines have to be maintained on standby. The overwhelming majority of these machines would not be needed most of the time, but they nevertheless have to be maintained ready for any sudden spurt in demand for processing power.
Since the average computer in standby mode consumes about 70% of the power it uses when fully active, this results in enormous consumption of energy that is, in effect, wasted since no work is actually being done. Given our dependence on non-renewable, highly polluting hydrocarbon resources for power generation, this has damning implications for the fragile planet that we live on.
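The scale of this waste can be sketched with some back-of-the-envelope arithmetic. The per-machine power draw and cluster size below are hypothetical figures chosen only to illustrate the 70% idle factor mentioned above:

```python
ACTIVE_POWER_W = 300      # hypothetical draw per machine when fully active
IDLE_FRACTION = 0.70      # a standby machine draws ~70% of its active power
CLUSTER_SIZE = 10_000     # hypothetical number of machines kept on standby
HOURS_PER_YEAR = 24 * 365

idle_power_w = ACTIVE_POWER_W * IDLE_FRACTION  # 210 W per idle machine
wasted_kwh = idle_power_w * CLUSTER_SIZE * HOURS_PER_YEAR / 1000
print(f"{wasted_kwh:,.0f} kWh per year with no work done")
```

Under these assumed figures, a single 10,000-machine standby pool burns on the order of 18 million kWh a year while doing no useful work, which is why even modest efficiency gains matter at cluster scale.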
Given that the number of computers in clusters spread across the globe runs into the hundreds of thousands, ways to minimize the number of idle machines consuming power on standby urgently need to be found. Nidhi Tiwari, a Research Scholar at the IITB-Monash Research Academy in Mumbai, has been making significant progress on this front. Under the guidance of Prof. Umesh Bellur (IITB), Prof. Maria Indrawan (Monash) and Dr. Santonu Sarkar (Infosys), Nidhi has been working to find ways to reduce the energy consumed by huge map-reduce clusters without impacting their performance and other quality attributes.
In the first phase of her project, Nidhi collected the requisite empirical data by conducting comprehensive energy and performance characterizations of map-reduce systems. This phase revealed that:
- Increases in the number of machines in the cluster do not always yield corresponding increases in energy efficiency or data processing speed, especially if the job is I/O intensive and the reduce phase involves a lot of data-transfer.
- Network bandwidth has a significant impact on the energy efficiency of the map-reduce jobs. The larger the data size, the higher the impact.
- Deliberate use of power management features can improve the energy efficiency of different types of map-reduce jobs.
- The energy efficiency of map-reduce jobs improves when less energy is wasted in re-executing killed or failed tasks.
Picture Credit: Flickr, stwn
In the second phase, predictive models were created using the data generated in the first phase. These models forecast the performance and energy consumption of a map-reduce system from its degree of parallelism, cluster size and CPU frequency, for a given workload type and size, and will be used to configure map-reduce systems in an energy-efficient manner. In the third phase, the models and the heuristics derived from the experiments will be used to design an energy-aware scheduling algorithm. Together, model-based configuration and energy-aware scheduling will make map-reduce systems more energy efficient, and the use of such clusters will help reduce the operational cost of data centres deploying large map-reduce systems.
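A rough sketch of how such a predictive model might drive configuration is given below. The coefficients and candidate configurations are invented purely for illustration; Nidhi's actual models are fitted to the empirical measurements described above:

```python
def predict_energy_kj(cluster_size, cpu_freq_ghz, data_gb):
    """Hypothetical linear model of job energy, standing in for a model
    fitted to measured (parallelism, cluster size, frequency) data."""
    # invented coefficients, for illustration only
    return 5.0 * data_gb + 0.8 * cluster_size + 120.0 / cpu_freq_ghz

def best_config(candidates, data_gb):
    """Choose the (cluster_size, cpu_freq) candidate predicted to use
    the least energy for the given workload size."""
    return min(candidates,
               key=lambda c: predict_energy_kj(c[0], c[1], data_gb))

# candidate configurations: (number of machines, CPU frequency in GHz)
candidates = [(8, 1.6), (8, 2.4), (16, 1.6), (16, 2.4)]
size, freq = best_config(candidates, data_gb=100)
print(size, freq)
```

Under these invented coefficients, the smaller, faster-clocked configuration wins for this workload; a real energy-aware scheduler would make an analogous model-driven choice per job, using measured rather than assumed coefficients.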
Says Nidhi, "We are dependent on Nature and its resources for our existence and well-being. So it is our responsibility to save them for future generations. The concept of sustainability, to my mind, involves doing more with less. The improvement in energy efficiency of map-reduce clusters will, I believe, contribute significantly to the sustainability of data centers."
The IITB-Monash Research Academy is a pioneering joint-venture research partnership between leading institutions in India and Australia. The Academy, as it is commonly referred to, offers research scholars the opportunity to study for a dually-badged PhD from both IIT Bombay in India and Monash University in Australia. Students spend time in both countries over the course of their research, and many of them work on projects that are strongly interdisciplinary in nature, with an applied research focus.
Research Scholar: Nidhi Tiwari, IITB-Monash Research Academy
Improving Energy Efficiency of MapReduce systems
Prof. Umesh Bellur, Prof. Maria Indrawan, Dr. Santonu Sarkar
Contact email@example.com for more information on this, and other projects.
The above story was written by Chhavi Sachdev based on inputs from the research student and IITB-Monash Research Academy.
Copyright IITB-Monash Research Academy