We are delighted to share that Dr. Pratik Gholkar (our alumnus), along with his supervisors Prof. Yogendra Shastri (IIT Bombay) and Prof. Akshat Tanksale (Monash University), has devised a new way to produce hydrogen and methane with a significantly reduced carbon footprint using microalgae. The study, titled 'Renewable hydrogen and methane production from microalgae: A techno-economic and life cycle assessment study', can be read here: https://linkinghub.elsevier.com/retrieve/pii/S0959652620337719
The research group used reactive flash volatilisation (RFV) gasification technology to produce hydrogen from microalgae, giving rise to newer and cleaner forms of energy. The findings show that the greenhouse gas emissions of hydrogen production using RFV on microalgae are 36% lower than those of steam methane reforming, the current best practice for hydrogen production. With additional renewable energy sources, such as hydro-electricity, integrated into the researchers' hydrogen production process, carbon emissions could drop by as much as 87%.
Currently, the production of microalgae does not meet commercial demand. However, microalgae cultivation for energy applications could also provide additional revenue streams for rural communities, potentially making them self-sufficient, researchers say.
Dr Pratik Gholkar said, "Assuming a market price of $10/kg for hydrogen compressed to 700 bar, the payback period for hydrogen production was 3.78 years, with a return on investment of nearly 25%. Moreover, the life cycle climate change impact was 7.56 kg of carbon dioxide for every kilogram of hydrogen produced."
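The figures in the quote can be sanity-checked with simple arithmetic. The snippet below uses only the numbers quoted above; the paper's own cash-flow model is far more detailed than this back-of-the-envelope inversion.

```python
# Sanity check of the quoted figures (illustrative only; derived solely
# from the numbers in Dr Gholkar's quote, not from the paper's model).
payback_years = 3.78            # quoted payback period
simple_annual_return = 1 / payback_years
print(f"Implied simple annual return: {simple_annual_return:.1%}")  # ~26.5%, close to the quoted ~25%

price_per_kg_usd = 10.0         # assumed market price, from the quote
co2_per_kg_h2 = 7.56            # life-cycle kg CO2 per kg H2, from the quote
print(f"CO2 intensity: {co2_per_kg_h2 / price_per_kg_usd:.3f} kg CO2 per dollar of hydrogen sold")
```

The small gap between the implied ~26.5% and the quoted ~25% is expected: a real return-on-investment figure accounts for operating costs and discounting, which a simple payback inversion ignores.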
“This is an exciting look into the resources and technology available to the world in our quest to reduce the use of fossil fuels and drastically cut the amount of carbon emissions.”
Using India-based JSW Steel (the funding agency for this research) as a case-study source of CO2 for microalgae cultivation, the research team estimated that just under 12,800 kg of microalgae per hour would be available, supporting hydrogen production at a rate of 1,240 kg/h.
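Those two throughput figures imply a mass yield of hydrogen from the algae feed, computable in one line. This is a rough ratio only; the paper's process model tracks the full mass balance.

```python
algae_feed_kg_per_h = 12800   # microalgae availability from the case study
h2_out_kg_per_h = 1240        # hydrogen production rate
mass_yield = h2_out_kg_per_h / algae_feed_kg_per_h
print(f"Hydrogen mass yield: {mass_yield:.1%}")  # roughly 9.7% of the algae feed by mass
```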
While the costs of developing infrastructure to cultivate microalgae and then refine it into hydrogen and methane are high, the long-term return on investment could make hydrogen and methane cost-effective and environmentally friendly fuel sources.
Dr Yogendra Shastri from the Department of Chemical Engineering at IIT Bombay said climate change concerns have led to an increasing push for cleaner energy options, and microalgae could be a potential candidate for producing renewable fuel. He said, "Hydrogen is acknowledged as a clean fuel since it doesn't lead to the emission of greenhouse gases when used. However, the production of hydrogen also needs to be sustainable. Biodiesel production from microalgae is limited by low lipid extraction efficiency (less than 20%) and the high cost of harvesting and drying microalgae. Furthermore, microalgae-based hydrogen and methane production haven't yet been commercialised due to expensive pre-treatment steps such as harvesting, drying and lipid extraction; low carbon conversion efficiency; and tar accumulation."
Prof. Akshat Tanksale from Monash University said, "Hydrogen and methane are clean fuels and green chemical feedstocks only if they are produced from renewable resources. At present, 96% of hydrogen and all methane is produced using non-renewable resources. Microalgae is attractive as a feedstock due to its high carbon dioxide fixation efficiency, growth rate and photosynthetic efficiency, its ability to grow in brackish water, and the ability to cultivate it on land not suitable for agriculture. Integrating water and renewable electricity with microalgae harvesting can bring down the costs and increase the sustainability of hydrogen production from this process."
*Image Source: https://www.theguardian.com/education/gallery/2015/jan/23/a-language-family-tree-in-pictures. Image Credits: Minna Sundberg
Natural Language Processing (NLP) is a research area that marries linguistics with machine learning. My supervisor, Prof Pushpak Bhattacharyya, is fond of saying, "In NLP, linguistics is the eye, while computation is the body." Our laboratory has been dedicated to language processing research for around 20 years and has pushed the boundaries of NLP research for Indian languages. For the past five years, we have been exploring the shared vocabulary among Indian languages, especially in terms of Cognates and False Friends.
Cognates are word pairs which share the same meaning and a similar spelling across languages; for example, the French and English word pair Liberté / Liberty. In some cases, similar words share a common meaning only in some contexts, and such word pairs are called partial cognates. For instance, the word "police" in French can translate to "police", "policy" or "font", depending on the context.
On the other hand, False Friends are word pairs which share a similar spelling but have different meanings. Such phenomena are commonly studied by scholars of diachronic, or historical, linguistics.
Cognates have a special place in Indian languages, as many Indian languages borrow words from Sanskrit. For such borrowed words, traditional grammar provides us with the categories of Tatsama and Tadbhava words, and there are dictionaries which provide a plethora of such word-sets across Indian languages.
'Tatsama' words are easy to identify as they retain identical spelling, whereas 'Tadbhava' words have drifted from how they were originally spelt. Computational algorithms have traditionally relied on character sets or phonemes to identify such word pairs, but Tadbhava words differ precisely in spelling and phonetics, so those approaches fall short. This is where our work provides a more accurate alternative: we incorporate similarity based on word meaning (semantics) and the context in which a word occurs, which lets us identify such words with much better precision (and, to be honest, higher recall too!).
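As a rough illustration of the two signals involved, here is a minimal sketch combining orthographic similarity (normalized edit distance) with semantic similarity (cosine over word vectors). This is not our actual system, which uses learned cross-lingual embeddings and context; the thresholds and vectors below are hypothetical.

```python
import math

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def orthographic_similarity(a: str, b: str) -> float:
    # 1.0 means identical spelling; 0.0 means maximally different.
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

def looks_cognate(a, b, vec_a, vec_b, ortho_t=0.5, sem_t=0.6):
    # A candidate cognate must be similar both in form and in meaning;
    # spelling alone would wrongly admit false friends.
    return orthographic_similarity(a, b) >= ortho_t and cosine(vec_a, vec_b) >= sem_t
```

The key point the sketch makes is the conjunction in `looks_cognate`: a false friend passes the orthographic test but fails the semantic one, which is exactly why spelling-only methods mislabel such pairs.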
NLP research has recently experienced a boom in this sub-area of modelling word semantics. Cross-lingual NLP research (across multiple languages), however, is still at a nascent stage, where the modelling of semantics across languages is slowly maturing. Computational algorithms require a lot of data to train on before they can accurately model and predict the correct word meaning, even within a single language.
We, graduate research scholars of the IITB-Monash Research Academy, study for a dually-badged PhD from IIT Bombay and Monash University, spending time at both institutions to enrich our research experience. The Academy is a collaboration between India and Australia that endeavours to strengthen relationships between the two countries. Its CEO, M S Unnikrishnan, says, “The IITB-Monash Research Academy represents an extremely important collaboration between Australia and India. Established in 2008, it is now a strong presence in the context of India-Australia collaborations.”
For Indian languages, research in this area, where models must correctly predict similarity in meaning across languages, remains a difficult problem. The lack of structured data for the task makes it a tough nut to crack. However, because most Indian languages are closely related to each other, the task becomes relatively more tractable. We use knowledge graphs like WordNet, together with available Indian language texts, to generate such word pairs in abundance. Using the context of such word pairs, we are able to say with a certain probability that these words carry the same meaning across languages. Our work also shows improvement when applied to the task of automatic translation. The results show promise that cognate detection and shared vocabulary can indeed improve NLP for Indian languages.
Both classical machine learning algorithms and modern deep learning techniques help us achieve better performance than previous approaches to cognate detection. Because previous approaches take only spelling and/or phonetics into account, they miss the inherent linguistic need to model the semantics of words across languages. We apply the same approach to detect false friends among Indian languages.
False friends hurt the accuracy of translation and cross-lingual search, as computational algorithms predominantly take the spelling of words into account. Using our approach, we can classify false friends and generate a list of such word pairs. A direct translation of such words should be avoided, as it can sometimes lead to disastrous results. For example, the word "Gift" in German means "poison", and I am sure you do not want to get an anniversary "gift" from your spouse. Interestingly, "gift" in Swedish also means "poison", but in a different context it could mean "marriage" as well. Now, we do not want our machines to tell us that "marriage = poison"; unfortunately, that is what they currently do.
Our research for both cognates and false friends achieves more than 90% accuracy for more than 12 Indian languages.
We also use the notion of word semantics and apply it to historical texts in Sanskrit grammar. Our research findings help us gain an insight into how these ancient texts have been passed on in a grammatical tradition and can be traced back to a hypothetical root. Our insights also show us how, with time, multiple variants of the same document are generated.
Prof Malhar Kulkarni, my co-supervisor, says, "Texts are important sources of intellectual history, and establishing a particular text using extant available resources is an important task for the historical linguistics community." We create such a base for the most popular commentary on Panini's Sanskrit grammar, known as the Kāśikāvṛtti. With data accumulated by philologists and computational semantic modelling, we generate a fairly accurate version of the descent of this text. Given my overall experience in the area of semantics and the long-standing interest of the NLP research community, we hope to make further strides in helping computers understand language.
Research scholar: Diptesh Kanojia, IITB-Monash Research Academy
Project title: Computational Phylogenetics for Variant Manuscripts in Sanskrit
Supervisors: Prof Pushpak Bhattacharyya, Prof Malhar Kulkarni, Prof Reza Haffari
Contact details: email@example.com
This story was written by Diptesh Kanojia.
Copyright IITB-Monash Research Academy
Early on in his 2016 book, The Curse of Cash, Harvard economist Ken Rogoff makes a startling statement: “Though most people are aware of the hygiene problems associated with handling cash, one can imagine paper currency being an agent of transmission in some future pandemic.” That is indeed some prescient thinking, given that we know little about the potential transmission risk of COVID-19 from handling paper currency around the world.
This analysis shows that there is at best a weak correlation between the amount of paper currency in circulation per resident and COVID-19 transmission rates, and that the relationship varies widely between countries. This article then reviews emerging evidence on the survival of the virus on different surfaces and finds no difference between paper and plastic. This is not to discount the risks from digital methods of payment such as touchscreen phones or point-of-sale (PoS) machines, but to emphasize that central banks around the world must adapt to changes in payment methods in a post-COVID world. Indeed, in the latest Bulletin from the Bank for International Settlements (BIS), authors Raphael Auer, Giulio Cornelli, and Jon Frost find that the pandemic presents a significant challenge for central banks around the world: bolstering trust in cash.
"There are two measures economists use to study currency demand, the currency to Gross Domestic Product (GDP) ratio and the currency in circulation per capita," says Professor Pushpa L. Trivedi, a Professor of Economics at the Department of Humanities and Social Sciences, Indian Institute of Technology, Bombay (IITB). India, alongside economies such as Hong Kong and Japan, is more cash-reliant than others by the former measure, but not necessarily by the latter, owing to differences in exchange rates. India's cash usage has also been linked to the size of its large informal sector, which contributes more than 50% to India's national income.
To investigate the link between currency in circulation and the spread of COVID-19, we look at correlations across countries. The figure below plots currency per capita (in hundreds of US dollars) against the average number of new cases per million persons in each country. Note that this does not permit any comment on causality, and it relies heavily on reported infection rates, which may be underestimates owing to limited testing. Indeed, when we look at an alternate measure, the currency-to-GDP ratio, against total cases per million, the relationship suggests no correlation. It is also worth noting that in areas worst affected by the pandemic (such as the United States of America and the Eurozone), a large fraction of cash is actually held outside these countries and thus may not be associated with domestic infection rates.
Figure 1: Currency held per capita and COVID-19 spread (top); Currency-to-GDP ratio and COVID-19 spread (bottom)
Dr. Mehmet Özmen, Lecturer in Economics at the University of Melbourne's Faculty of Business and Economics, says that this (lack of) association could be due to differences in attitudes to cash across countries. "Countries like the USA, Canada, Australia, and Britain regularly conduct detailed surveys on how their citizens use cash and other payment methods like debit and credit cards," said Özmen. "There's not much evidence on what drives the preferences for payments in India though," he continues, sharing that in an ongoing project in Mumbai, a whopping 94% of nearly 15,000 recorded transactions were carried out in cash, typically for amounts below Rs. 500 (approx. USD 7.2). In examining the factors that affect the demand for cash in India, recent studies point toward the availability of alternate payment instruments as a key determinant. Thus, advisories in COVID-19 times (even from the Government of India) suggesting that cash be avoided where digital alternatives are available may be well-intentioned but ultimately speculative.
Aggregate data on payment systems for May and June are yet to be released by the Reserve Bank of India, but data up to April 2020 suggest a sharp decline in digital payments and a further reduction in cash withdrawals at automated teller machines (ATMs). Furthermore, recently released data from the National Payments Corporation of India (NPCI) suggest that overall mobile and digital payments also declined in March 2020 compared to earlier months. Trivedi suggests that the former could be explained by an overall drop in economic activity since March 24 (when the nationwide lockdown was announced by Prime Minister Narendra Modi), whereas the latter might more intuitively be due to the mobility restrictions put in place during the lockdown. Cash withdrawals at ATMs (largely using debit cards) fell by more than half, from Rs. 6,569 lakh (approx. USD 9.2 million) in January 2020 to Rs. 2,866 lakh (approx. USD 3.7 million) in April 2020.
Figure 2: Change in digital transactions during the pandemic in India
Source: Reserve Bank of India Weekly Statistical Bulletin, 2020
What happens to cash in a post-COVID world? In the United States, for example, many credit card companies see in the pandemic an opportunity to make a more structural shift away from cash. But this is predicated on the assumption that the novel coronavirus lasts longer on paper currency notes than it does on credit/debit cards or touchscreen surfaces. Although there is no information available on the newest strain of the coronavirus, scientific studies compiled by a team of medical researchers in Germany suggest that the time the virus survives on paper, plastic, and glass surfaces is similar (between four and five days, depending on the strain). Thus, it remains entirely plausible that COVID-19 could also transmit through card usage or other touch-based payment methods (e.g., mobile applications).
Indeed, communication around cash usage becomes a bigger problem when official advisories from the World Health Organization (WHO) suggest that cash could be a carrier of the virus. That advice was quickly clarified, and Mike Orcutt, in his article in the MIT Technology Review, argues that you are more likely to contract the disease from others in the aisles of grocery markets than at the checkout counters.
Stray incidents aside, COVID-19 has some important implications for currency management policies in India. First, given the large share of the informal sector and low adoption of cashless payment methods (despite demonetization), restricting cash use may create more problems than solutions. This is especially true as recent reports suggest a substantial fraction of this workforce consists of migrants who face severe uncertainty around their livelihood in light of continuing lockdown restrictions.
Finally, as a counterpoint, studies suggest that ethanol is one of the most effective disinfectants for surfaces potentially contaminated by COVID-19. As India still uses paper-based currency notes, it may be difficult in practice to disinfect the large volume of notes in circulation and with the public. What is easier, and therefore more likely to be preferred, are cashless modes of transaction, whose devices can be cleaned with alcohol-based sanitizers. Countries such as Canada, the United Kingdom, and Switzerland, along with several European countries (but not the US), use polymer banknotes, which are perhaps easier to disinfect without affecting their longevity. Although India has explored this option in the past, it will perhaps truly take a pandemic to test the idea out.
Anirudh Tagat is a PhD Scholar at the IITB-Monash Research Academy, Mumbai. The views expressed in this article do not represent that of IIT Bombay, Monash University, the Academy, or the sponsors of this research.
This article was published in the online data journalism portal IndiaSpend: https://www.indiaspend.com/no-clear-link-to-currency-notes-and-covid-19-spread/
- Fears of cash usage being associated with #COVID-19 spread don’t appear to be backed up by the data. We explore what is happening to cash in a post-COVID-19 world, in India and elsewhere.
- Cash usage remains persistent in India even after #demonetisation: nearly 94% of all transactions in a recent survey were made in cash. So what does this mean for a country continuing to deal with the pandemic?
- In the US and elsewhere, credit card companies see an opportunity during the pandemic to move to cashless payments. But this assumes that coronavirus lasts longer on paper currency notes than it does on credit/debit cards, or touchscreen surfaces.
- Indians have been withdrawing less cash from ATMs and making fewer digital payments from January to May; both are much lower than during the same period last year.
- Advisories from organizations like WHO and the Government of India in COVID-19 times to avoid cash where digital alternatives are available may have good intentions but are ultimately speculative.
- Is cash usage really associated with #COVID-19 spread across the world? In our analysis with cross-country data, we don’t find sufficient evidence to support this claim. We explore what is happening to cash in a post-COVID-19 world in India and elsewhere.
- Indians have been withdrawing less cash from ATMs and making fewer digital payments from January to May; both are much lower than during the same period last year. Part of this could be due to lower economic activity overall, but also on account of the lockdown imposed in various parts of India.
Early on in the battle against COVID-19, advisories from bodies like the WHO and various governments cautioned against the usage of cash, arguing that banknotes might be virus carriers. In this analysis, we examine whether COVID-19 spread is associated with the amount of cash held in a country. The claim finds little support in the data, but central banks around the world need to be more careful about their policies on cash usage and on promoting digital payments.
I grew up in the coastal state of Kerala and have spent a large part of my life close to the sea. This probably triggered in me a desire to study how ocean basins are formed. Hence, after completing my Masters in Marine Geophysics, I enrolled for a PhD with the IITB-Monash Research Academy. My project is titled, ‘A thermo-mechanical study of the southern Red Sea – Afar triple junction region: implications on the rift evolution’.
The IITB-Monash Research Academy is a collaboration between India and Australia that endeavours to strengthen scientific relationships between the two countries. Graduate research scholars like me study for a dually-badged PhD from both IIT Bombay and Monash University, spending time at both institutions to enrich our research experience.
The earth has a radius of about 6,371 km. Interestingly, the crust, the uppermost thin layer on which we live, is just 30-70 km thick beneath continents and 5-10 km thick beneath oceans. This thin peel of the earth, together with its underlying layer (the upper mantle), is called the lithosphere; it moves, evolves, and drives almost all significant geological processes on the earth's surface, such as volcanic eruptions and earthquakes. The lithosphere is not a single shell but is broken up into moving plates. These plates interact along their boundaries, giving rise to mountains like the Himalayas, deep cavities like the Mariana trench (the deepest spot in the world), and the formation and closure of oceans. A place where three such plates meet, called a triple junction, is extremely significant.
Afar — where the African, Arabian, and Somali plates meet — is one such triple junction. The three arms of the Afar triple junction, the East African rift, Red Sea and Gulf of Aden, represent present-day examples of rupturing of the continental lithosphere to form ocean basins.
Oceans continuously form, and gradually disappear, on the earth's surface as plates move away from and towards each other, respectively. The rupturing of the lithosphere is the first stage of ocean basin formation. As the plates continue to drift apart, small ocean basins form between the newly broken continents, floored by fresh hot rock that upwells from below the plate and eventually cools. This early stage of ocean formation is called the juvenile stage, and the Red Sea is a perfect example of it. The Red Sea, one arm of the Afar triple junction, is forming because the Arabian and African plates are drifting away from each other.
The Red Sea, being an incipient ocean basin, provides a unique opportunity to understand the process of ocean formation through continental rupture. Moreover, the southern part of the Red Sea is morphologically and geologically distinct from its northern part because it lies close to the Afar mantle plume near Ethiopia. The Afar plume is a large column of hot rock rising from the deeper mantle; it interacts with the base of the lithosphere beneath the Afar region and forms a large volcanic mass at the surface. I am using geophysical data sets (gravity, magnetic, and seismic data, among others) and geophysical modelling methods to investigate ocean basin formation processes and the influence of the Afar plume on the evolution of the Red Sea.
So why is this work important?
The Red Sea has an important place in present-day plate tectonics because it is the only place where stretched and thinned continental lithosphere is transforming into an oceanic basin, pushing the African and Arabian plates away from each other. The Red Sea is thus a natural laboratory that allows geologists and geophysicists to test recent concepts of the development of sea-floor spreading.
In the northern part of the Red Sea, the continent is in an extensional stage: the African and Arabian plates are still attached, with no oceanic crust forming in between. The southern Red Sea, however, has developed active sea-floor spreading, like any other oceanic spreading centre, with continuous oceanic crust formation. Why the two parts differ like this is still unclear to the geophysical community. The two important factors behind the divergence of plates are tectonism and magmatism, and fierce debate continues over which of them dominates the evolution of the Red Sea. Through my research, I hope to provide some inputs to the scientific community interested in the evolution of ocean basins, particularly of the Red Sea.
Says Prof Murali Sastry, CEO of the Academy, “The IITB-Monash Research Academy represents an extremely important collaboration between Australia and India. Established in 2008, the Academy now is a strong presence in the context of India-Australia scientific collaborations. Sreenidhi’s project targets an area where limited research has been carried out. We wish her all success.”
The earth not only provides us a home but also presents itself as a puzzle to enthusiasts through its countless wonderful secrets. The urge to solve these puzzles is rooted in our unending love for the earth. Geophysics is the physics of the Earth, and we geophysicists study geological processes on the earth, whether their origin is shallow or deep, using geophysical methods. I hope my research will add value to this vital and constantly evolving body of work on the evolution of the Red Sea.
Research scholar: Sreenidhi K. S., IITB-Monash Research Academy
Project title: A thermo-mechanical study of the southern Red Sea – Afar triple junction region: implications on the rift evolution
Supervisors: Prof. M. Radhakrishna, Prof. Peter Betts
Contact details: firstname.lastname@example.org
This story was written by Mr Krishna Warrier based on inputs from the research student, her supervisors, and the IITB-Monash Research Academy. Copyright IITB-Monash Research Academy.
Children at Anganwadi Centre in rural West Bengal
Children are our future. For any nation to develop, it is important to ensure their well-being. However, child malnutrition is widespread in several countries, including India. Child malnutrition is the outcome of a complex interaction of different factors. It is time this interaction was identified, problematized, and analyzed, so that child malnutrition can hopefully be eradicated. This is what got me interested in my PhD project, titled 'Culture and Malnutrition: An Analysis of the Socio-Cultural Dimension of Child Malnutrition in Rural India'.
We, graduate research scholars of the IITB-Monash Research Academy, study for a dually-badged PhD from IIT Bombay and Monash University, spending time at both institutions to enrich our research experience. The Academy is a collaboration between India and Australia that endeavors to strengthen relationships between the two countries.
India is ranked 102 out of 117 countries on the Global Hunger Index 2019. The rate of malnourishment here is abnormally high. Child malnutrition is, without doubt, one of the biggest social problems that the country is facing today.
My project attempts to provide a nuanced understanding of the socio-cultural dimension of child malnutrition in rural India.
Child malnutrition is also one of the biggest health issues the world is facing today. While the importance of socio-cultural factors — for example, gender, religion, caste, family and kinship on matters of dietary practices, childcare, infant health care, and nutritional outcome — has always been known, there is hardly any rigorous research on this in India, especially from an ethnographic perspective. Through my research I aim to gain fresh insights into child malnutrition by engaging with the key cultural aspects of Indian society that have a strong bearing on child health, including malnutrition.
The participants of my ethnographic study are the primary caregivers of children aged between three months and six years. I have used ethnographic techniques like thick fieldwork, participant observation, narrative interviews and focus-group discussion for data collection. The fieldwork was conducted in a village named Bhangar-1 in West Bengal (India). The participants were selected through a combination of random and snowball sampling from the Anganwadi centres. To explore the socio-cultural determinants of child malnutrition, I observed and interviewed caregivers of children for eight months.
So, what has emerged from this research so far?
The narratives of the participants reflect the complexities of the socio-cultural dimension of child malnutrition. The findings of the study suggest that nutritional vulnerability stems not simply from the unavailability of resources but also from limited access to them, which involves an interplay of power and hierarchy. The structured norms, values and beliefs of a community influence kinship practices, intra-household decision making, the position of women within the family, and their autonomy. These practices significantly influence the mother-child dyad and the nutritional health of the child.
The single factor that motivated me to take up this project was that though there is a vast corpus of research on child malnutrition, it lacks theoretical consolidation. There is a clear need to provide a sociological analysis of child malnutrition.
This study has significant policy implications. It highlights the need to consider the socio-cultural dimensions of the community while framing the social protection programs and policies. It contributes to the understanding of the intra-household contexts where important practices and strategies take place to ensure nutritional well-being of both the mother and the child. Apart from the mother-child dyad, it highlights the need to integrate the wider household and community environment, social structure and institutions related to kinship, family, gender, hierarchical patterns of authority and normative systems of values and beliefs to present a sociological understanding of child malnutrition in rural India.
Prof Murali Sastry, CEO of the IITB-Monash Research Academy, often says, “The Academy was conceived as a unique model for how two leading, globally focused academic organisations can come together in the spirit of collaboration to deliver solutions and outcomes to grand challenges facing industry and society.”
I am confident that this research project will add a significant brick or two to the edifice we are trying to build to eradicate child malnutrition.
Research scholar: Pragati Dubey, IITB-Monash Research Academy
Project title: Culture and Malnutrition: An Analysis of the Socio-Cultural Dimension of Child Malnutrition in Rural India
Supervisors: Professor Devanathan Parthasarathy, Assoc Professor Dharma Arunachalam
Contact details: email@example.com
This story was written by Pragati Dubey. Copyright IITB-Monash Research Academy.
If you collide with Dhairya Vyas, he will apologise, break into a smile, and then give you a lecture on why collisions are important!
“From a cricket pitch where a batsman hits a ball, to a construction site where rocks are crushed,” says this mild-mannered research scholar with the IITB-Monash Research Academy, “contacts and collisions are present everywhere in our daily lives. Some occur at high speeds like a bullet hitting a target; others are slower like the dropping of a mobile phone.”
No prizes for guessing that Dhairya analyses collisions and his research project is titled, ‘Modeling Frictional Collisions with SPH’.
In industries, collisions between granular bodies are used in applications like shot peening, milling, crushing, and mixing. Since it is difficult and often expensive to use experimental techniques to analyse such applications, numerical methods like the Discrete Element Method (DEM) are used instead, reveals Dhairya.
“DEM can model frictional collisions between the interacting objects and has proven useful in analysing the bulk flow behaviour of granular systems. However, while analysing the flow of granules, we also need to identify how the interacting bodies deform and break, and this is not easily possible using DEM, especially in more intricate applications involving complex geometries,” he adds. “So numerical methods which can accurately model both frictional interactions and deformation and breakage need to be identified. One such method is Smoothed Particle Hydrodynamics (SPH), which has been widely used to analyse high velocity collisions like ballistic impacts. However, it lacks accurate friction models and hasn’t been tested for analysing low velocity impacts. Therefore, in this project, we incorporate accurate friction models in SPH and test its performance in modeling low velocity collisions.”
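The kind of contact law that DEM implementations typically use, and that friction models in SPH aim to reproduce, can be sketched as follows. This is an illustrative spring-dashpot normal force with a Coulomb friction cap; the function name and all parameter values are hypothetical, not taken from Dhairya's project.

```python
import math

def contact_force(overlap, v_n, v_t, k=1e4, c=5.0, mu=0.5):
    """Illustrative spring-dashpot contact force with Coulomb friction.

    overlap : normal overlap between the two colliding bodies (m)
    v_n     : relative normal velocity (m/s)
    v_t     : relative tangential (sliding) velocity (m/s)
    k, c    : spring stiffness and damping coefficient (hypothetical values)
    mu      : Coulomb friction coefficient
    """
    # Normal force: linear spring plus dashpot, clamped so it never attracts
    f_n = max(k * overlap - c * v_n, 0.0)
    # Tangential force: viscous estimate, capped at the Coulomb limit mu * f_n
    f_t_trial = c * v_t
    f_t = math.copysign(min(abs(f_t_trial), mu * f_n), f_t_trial)
    return f_n, f_t

# A grazing contact: sliding fast enough that friction saturates at mu * f_n
f_n, f_t = contact_force(overlap=1e-3, v_n=0.1, v_t=2.0)
print(f_n, f_t)  # here f_t equals mu * f_n because the Coulomb cap is active
```

The Coulomb cap is what makes a collision "frictional": below the cap the contact resists sliding, at the cap the bodies slip.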
The IITB-Monash Research Academy is a collaboration between India and Australia that endeavours to strengthen scientific relationships between the two countries. Graduate research scholars like Dhairya study for a dually-badged PhD from both IIT Bombay and Monash University, spending time at both institutions to enrich their research experience.
What got Dhairya interested in this subject?
“Computational modeling, the process by which we transform real-world occurrences like water flowing in a river or a meteorite hitting the Earth’s surface into equations and numbers, has always fascinated me. What is even more interesting is that, since we were initially unable to solve most such equations, we developed computers, which transform these numbers and equations into 1s and 0s and solve them for us. This fascination and curiosity is why I find my research exciting.”
Why should this project matter?
We hope to provide a powerful predictive tool for engineers who design equipment used to handle granular material. With the help of computer simulations, they will be able to compare the durability and performance of different designs and select the most suitable ones. This will not only minimise the cost of designing (by reducing experimental tests) but will also lead to the development of durable and efficient components, which will eventually reduce the price of the finished product, says Dhairya.
Says Prof Murali Sastry, CEO, IITB-Monash Research Academy, “The handling of particulate materials in industry often involves equipment subject to highly abrasive conditions leading to progressive wear of the equipment and reduced process efficiencies. Despite the significant costs of wear and erosion, there has been little work done in its numerical simulation. This project will help shed light on this relatively unexplored area.”
Yes, there is lots to learn from researchers like Dhairya Vyas. We hope you collide into him soon!
Research scholar: Dhairya Vyas, IITB-Monash Research Academy
Project title: Modeling Frictional Collisions with SPH
Supported by: Data61, CSIRO
Supervisors: Prof. Devang Khakhar, Prof. Murray Rudman, Dr. Sharen Cummins, Dr. Gary Delaney
Contact details: firstname.lastname@example.org
This story was written by Mr Krishna Warrier based on inputs from the research student, his supervisors, and the IITB-Monash Research Academy. Copyright IITB-Monash Research Academy.
The growing demand for energy, along with its limited supply from fossil fuels, is a global concern. This has led to a tremendous increase in research across various energy disciplines.
Presently, crystalline Si (silicon) dominates the market for solar cells with an efficiency of 26%. However, its energy-intensive fabrication process gives silicon panels a long energy payback time. Therefore, to keep up with growing energy demand while offering a comparatively lower energy payback time, organic solar cells, which are flexible and easy to process, are gaining popularity for a range of applications.
With the deployment of organic light emitting diodes (OLEDs) in televisions and phones, the future of organic solar cells appears bright. This field is interesting as organic materials and solar energy are both abundant and can be used for various direct applications. However, commercialising organic solar cells involves the challenge of film uniformity at large scale, so that they can be printed roll to roll. My research work involves studying bulk heterojunction morphology for different polymers and small molecules, and at different processing conditions, to understand its effect on device performance using various structural and spectroscopic characterisations at the nanoscale.
Organic solar cells are devices that produce electricity when photons are absorbed by the active layer, which is formed by polymers and small molecules. Mixing a p-type semiconductor (donor) with an n-type semiconductor (acceptor) forms a bulk heterojunction morphology, which opens a wide space for synthesising materials that can absorb the maximum of the solar spectrum. With the development of novel non-fullerene acceptors, organic solar cells have reached a maximum efficiency of 17.6% by covering a more substantial part of the solar spectrum. Organic solar cells can be used in low-power devices such as flexible screen chargers, electronic clothing, and transparent window films on office buildings. One commercially available OSC is the HeLi-on solar panel by Infinity PV, a flexible solar panel with a battery to charge electrical gadgets.
To increase the efficiency of organic solar cells, it is important to choose the absorber material wisely, so that it forms an optimum bulk heterojunction morphology for efficient charge generation, separation and collection. As the active layer of these organic solar cells is either amorphous or semi-crystalline in nature, it is imperative to study the film morphology using various microscopic techniques to improve efficiency.
The outcome of my work on bulk heterojunction morphology can directly help chemists who design new materials to improve the efficiency of OSCs, and industries that want to process roll-to-roll printable solar cells. The larger impact of the work will be to benefit the community by helping to fulfil energy demands and make life more comfortable a few years down the line. Another example of an OSC application is building-integrated photovoltaics (BIPV), where solar panels can be installed on roofs, walls and even windows; this is feasible thanks to the transparent and flexible properties of OSCs. HeliaSol, a flexible solar film developed by Heliatek, is already installed on the façade of a warehouse in Germany and is expected to generate 6.7 kWh of electricity per year.
The IITB-Monash Research Academy is a collaboration between India and Australia that endeavors to strengthen scientific relationships between the two countries. Graduate research scholars like me study for a dually-badged PhD from both IIT Bombay and Monash University, spending time at both institutions to enrich their research experience.
As stated by Prof Murali Sastry, CEO of the Academy, “The Academy represents an extremely important collaboration between Australia and India. Established in 2008, it is now a strong presence in the context of India-Australia scientific collaborations. Urvashi’s work involves studying the effect of solvent additive on the morphology of a polymer and fullerene blend, and correlation of different morphology with charge separation and charge transport. It can be a step towards the commercialization of OSCs which will be very helpful to the community in future. We wish her all success.”
Research scholar: Urvashi Bothra, IITB-Monash Research Academy
Title: Micro-structural and micro-spectroscopic investigation of bulk heterojunction organic solar cells
Contact Details: email@example.com
Supervisors: Prof. Dinesh Kabra, Prof. Christopher R. McNeill
This story was written by Mr Krishna Warrier based on inputs from the research student, his supervisors, and the IITB-Monash Research Academy. Copyright IITB-Monash Research Academy.
I first walked into a chemistry laboratory in Grade 8, and instantly fell in love with the Round Bottom Flask (RBF) and the varied smells, colours, and textures emerging from it. No matter what you put into the flask you would invariably get a new product each time you placed it on a Bunsen flame. This was pure magic!
Over time, I realized that this ‘multi-talented’ RBF was not that great after all. When extrapolated into a vessel of a larger size – say an Industrial Tank Reactor – the RBF transformed into a dangerous weapon!
Why? When you put a large amount of reagents A and B in a tank reactor, plenty of heat is generated with very cramped space for dissipation. This invariably leads to an explosion. One way to prevent this is to slow down the reaction by adding a large amount of solvent(s). However, this could lead to two problems — decreased yield and excess effluent generation. What is needed is a balance between speed and control, and this is where I am hoping to make a difference.
The IITB-Monash Research Academy, where I have enrolled for a PhD, is a collaboration between India and Australia that endeavours to strengthen scientific relationships between the two countries. Graduate research scholars study for a dually-badged PhD from both IIT Bombay and Monash University, spending time at both institutions to enrich their research experience.
My project is an earnest attempt to make the current industrial chemical manufacturing better — using a combination of Continuous Flow Chemistry and Heterogeneous Catalysis.
In flow chemistry, a reaction is performed by pumping in the reactive starting materials through tubes or coils as seen in Figure (1) instead of a flask.
In conventional RBFs and industrial tank reactors, the volume that can be held at a time is fixed, and repeated batches of reactions are needed to produce a large yield. In contrast, a reaction can be performed continuously in flow: the process does not stop as long as the starting reagents are pumped in. This not only enables on-demand production but also reduces batch-to-batch variability. Depending on the number of reactions needed to reach the final product, multiple steps can be conveniently performed by simply tailoring the number, nature, type and dimensions of the reactor coils involved. This makes the overall process quicker and safer, and improves the yield, with added advantages such as the ability to characterise the reaction progress in-line (during the flow of the reagents) and to automate the entire series of reactions, no matter how large the process.
To cut a long story short, a reaction that would take 24 hours in a conventional round-bottom reactor in a lab-scale, could be completed within less than 30 minutes in a continuous flow reactor; that too in a safe, efficient and continuous (scalable) fashion.
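The arithmetic behind that speed-up is worth making explicit. In a flow reactor the residence time is simply reactor volume divided by flow rate, and product keeps accumulating for as long as the pumps run. The reactor dimensions and concentration below are illustrative, not figures from this project.

```python
def residence_time_min(volume_ml, flow_ml_min):
    """Mean residence time of a coil reactor, in minutes."""
    return volume_ml / flow_ml_min

def product_g(conc_g_per_ml, flow_ml_min, hours):
    """Product collected by running the reactor continuously for `hours`."""
    return conc_g_per_ml * flow_ml_min * 60 * hours

# A hypothetical 10 mL coil fed at 0.5 mL/min: each slug of reagent spends
# only 20 minutes inside, yet a 24 h run keeps delivering product throughout.
tau = residence_time_min(10, 0.5)
mass = product_g(0.05, 0.5, 24)
print(tau, mass)  # 20.0 minutes residence time, 36.0 g collected
```

This is why "scaling up" in flow often just means running longer, or in parallel, rather than building a bigger vessel.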
So how does heterogeneous catalysis help? This is a type of catalysis where the phase of the catalyst differs from the phase of the reactants or products; in homogeneous catalysis, by contrast, the reactants, products and catalyst exist in the same phase. Heterogeneous catalysis offers many advantages: an uncatalysed reaction can take hours or even days longer to complete than its catalysed counterpart. So both these methods are excellent in their own individual ways at making a process faster, easier and more feasible. Imagine what the result would be if the two could be combined!
My project focusses on the concept of process intensification through continuous flow. In simplified terms — a way to make reactions easier, safer and more efficient for both humans and nature.
How well a heterogeneous catalyst functions in a reaction depends on various factors, of which the two most important are morphology (the structural shape or size of a material) and yield. It is in these areas that continuous flow chemistry can be employed to simultaneously achieve both speed of synthesis and control of the process, without the use of any highly technical resources or equipment.
To demonstrate this, we synthesised two materials through continuous flow; their morphology is depicted in Figure (2). The results were not just comparable in morphology but also quicker (needing half or less of the time required by traditional batch techniques) and scalable. For instance, KCC-1 could be produced within 0.5-1 hour through continuous flow, where traditional batch techniques need 1-4 hours; and PANI within 5 minutes at a throughput of 17-30 g/h, compared to 24 hours with a maximum throughput of only 3 g/h in batch. These studies show how continuous flow synthesis can provide a controlled and scalable route to crucial catalytic materials.
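To put those throughputs in perspective, here is a back-of-the-envelope comparison using the PANI figures quoted above, taking the flow lower bound of 17 g/h against the batch maximum of 3 g/h for a hypothetical 1 kg production target.

```python
def hours_to_make(target_g, rate_g_per_h):
    """Hours needed to produce target_g of material at a given throughput."""
    return target_g / rate_g_per_h

batch_h = hours_to_make(1000, 3)   # conventional batch, max ~3 g/h
flow_h = hours_to_make(1000, 17)   # continuous flow, lower bound 17 g/h
print(round(batch_h), round(flow_h, 1))  # ~333 h in batch vs ~58.8 h in flow
```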
Further, polymeric emulsion foams, called PolyHIPEs or PHPs, have been synthesised through conventional batch techniques (their morphology is also included in Figure (2)). They can be employed to demonstrate reaction efficiency in scalable, high-throughput, dynamically stirred continuous flow reactors, in various industrially important processes like Suzuki coupling, which forms the synthetic base of a plethora of pharmaceutical molecules and commercially important products.
Now if I get a chance to go back to school, I will definitely take my Grade 8 chemistry teacher out to lunch!
Research scholar: Karuna Veeramani, IITB-Monash Research Academy
Project title: Design, Synthesis and Applications of Heterogeneous Catalysts for Continuous Flow Chemistry
Supervisors: Prof Anil Kumar (IIT Bombay), Prof Neil Cameron (Monash University)
Contact details: Karuna.Veeramani@monash.edu; firstname.lastname@example.org
This story was written by Karuna Veeramani.
Copyright IITB-Monash Research Academy.
The first stage of a typical tumour’s growth is called the avascular stage. At this point it possesses no blood vessels, and absorbs the nutrients needed for its growth from the inter-cellular fluid.
Gopikrishnan C R, a research scholar at the IITB-Monash Research Academy, is working on a project that does the modelling, numerical simulation and mathematical analysis of this stage of tumour growth. He is hopeful that his research will one day be able to save lives!
The project stands on three pillars: modelling of tumour growth in different circumstances, numerical simulations of the models, and mathematical analysis of the numerical methods employed to simulate the models.
Says Gopikrishnan, “What got me most interested in this project is that it links two mathematical communities, those who focus on the modelling part and those who do the analytical work. Both look at the same problem and understand the dynamics of tumour growth theoretically, using two different perspectives. The modelling community focuses on ‘how’, while the analysis community tackles ‘why’, and both are equally important.”
So how has his research progressed this far? “We have devised a method which addresses the moving boundary problem in tumour growth models. We have theoretically proved and illustrated the reliability and cost-effectiveness of the method. So, we now have a generic framework by which we can address tumour growth problems. This method has significant theoretical advantages as well. It helps answer deeper questions like whether the problem has a solution, and, if yes, whether our computer simulations correctly approximate the solution.”
But Gopikrishnan has no plans to stop here.
“Since we have developed a generic framework for basic tumour growth problems, we are now in a position to add complexities to the model. We can study the effect of an external cancer drug, the depletion of nutrients, or the development of blood vessels and the passage to malignancy. In a laboratory, testing all this takes many weeks and is expensive; with modelling and computer simulations it can be reduced to hours, if not minutes.
“Basically, we observe the starting stage of a growing tumour and compare it with mathematically well-studied natural phenomena. A tumour is like a bunch of cells embedded in intercellular fluid. In turn, the cells too behave like a fluid, one more viscous than the intercellular fluid owing to their rough cell membranes. So a tumour can be imagined as a mixture of two fluids, a viscous one and an inviscid one (see Figure 1).
“A lot of research has been conducted in physics and mathematics on the theory of mixtures. Therefore, we model a tumour as a mixture of two fluids interacting with each other. The next question is: what are the important interactions? When a cell dies, its organelles disintegrate and become part of the fluid; and cells absorb fluid to divide and grow. In summary, cells and intercellular fluid constantly exchange matter with each other. This leads to a model based on mass conservation laws, which we solve numerically, study the solutions minutely, and then develop ways to improve the model.”
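A drastically simplified, illustrative version of such a mass-conservation model (a sketch under stated assumptions, far simpler than the actual moving-boundary PDE system in Gopikrishnan's work) treats the cell phase and the fluid phase as two well-mixed compartments exchanging mass:

```python
def simulate(c0=0.2, f0=0.8, growth=0.5, death=0.1, dt=0.01, steps=1000):
    """Toy two-phase mass-exchange model, stepped with forward Euler.

    Cells absorb fluid to divide and grow (rate ~ growth * c * f), and dead
    cells disintegrate back into the fluid (rate ~ death * c). Because every
    unit of mass leaving one phase enters the other, the total c + f is
    conserved at every step, mirroring the conservation laws in the text.
    """
    c, f = c0, f0
    for _ in range(steps):
        exchange = (growth * c * f - death * c) * dt
        c += exchange
        f -= exchange
    return c, f

c, f = simulate()
print(c, f)  # the cell phase has grown, while c + f remains 1.0
```

The real model replaces these two numbers with fields over space and tracks the tumour's moving boundary, which is where the numerical analysis becomes hard.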
The IITB-Monash Research Academy is a collaboration between India and Australia that endeavours to strengthen scientific relationships between the two countries. Graduate research scholars like Gopikrishnan study for a dually-badged PhD from both IIT Bombay and Monash University, spending time at both institutions to enrich their research experience.
Prof Murali Sastry, CEO, IITB-Monash Research Academy, says, “We wish Gopikrishnan the very best. India loses approximately 700,000 lives every year to cancer. What could be better than saving some of these!”
Please click here to watch a one minute animation of Gopikrishnan’s thesis.
Research scholar: Gopikrishnan C R, IITB-Monash Research Academy
Project title: Numerical methods for free boundary problems in three dimensions with applications in biology
Supervisors: A/Prof Jerome Droniou, Dr Jennifer Flegg, and Prof Neela Nataraj
Contact details: email@example.com
This story was written by Mr Krishna Warrier based on inputs from the research student, his supervisors, and the IITB-Monash Research Academy. Copyright IITB-Monash Research Academy.
“Will it rain today?”
“How long do I have to wait for my bus?”
“Is the road from the bus stop to my home well lit?”
We are increasingly exposed to sensing and prediction in our daily lives. Uncertainty is both inherent to these systems and usually poorly communicated. To design data presentations that non-experts can understand and take decisions on, we must study how users interpret their data and what goals they have for it. This informs the way that we should communicate results from our models, and visualise qualitative features of the data, which in turn determines what models we must use in the first place.
Visualisation is the actual process of mapping the data to visuals for easy communication. The viewer’s interpretation of a visual is the final stage of visualisation, after which the viewer may decide how to consume the visualisation.
Most viewers consume the visualisation with either of the following two goals in mind:
– gaining new insight into the data represented in the visual, or
– gaining a better understanding of the real phenomenon itself.
Often a trial-and-error approach leads to finding the most expressive and effective (graphically articulate) visualisation. This trial-and-error design process involves developing the visualisation in accordance with established theory and principles, combined with user studies in an iterative design process where the actual user is kept in the loop.
However, the value of a visual for the purpose of a particular interpretation is not obvious to the viewer before its use for interpretation. The same visual might bring about new insights to one user, but not to another; the same visual might be effective for one problem, but not for another; the same animation might be adequate to understand a problem on one type of hardware, but not on another.
In order to generate the most meaningful visualisation for a specific instance, a careful mapping process from “data to visuals” is necessary. And it will vary a great deal depending upon the preconceived knowledge of the users, their mental models, and the design of the visualisation, among other factors.
The “user model” describes the collective information the system has of a particular user.
A visualisation is interpreted subjectively by the viewer, depending on past experiences, education, gender, culture, situation, and individual limitations, abilities, and requirements. For instance, colour-deficient viewers are limited in interpreting colour pictures; a person with impaired fine motor skills will have problems accurately pointing at small objects on the screen. In order to create a user model, the system needs to learn facts about the user. Most of these facts can be extracted by observing the user perform special tasks.
A complete user model evolves in several stages, with a different style of user modelling used at each. Typically, the extraction of information starts with explicit modelling to inquire about gender, age, or education. Subsequently, the user completes special tasks that reveal the limitations of his/her vision and/or preferences. By continuously observing the user’s use of the visualisation system, the user model can be improved over time; the most significant information, however, is expected to come from the completion of special tasks.
I work on a research project titled ‘Deep User Models for Visual Analytics’. With the aim of understanding how to communicate uncertainty to non-experts with no technical background, while maintaining the project’s relevance for domain experts, we built our first study around public transport in Melbourne, Australia. Through this project, we are trying to understand the perception of visual uncertainty representations by non-experts. The motivation is that understanding and communicating uncertainty and sensitivity information is difficult; uncertainty is part of everyday life for any kind of decision-making process, and some previous studies in this area are unclear and could be improved.
The question we tried to answer is: can we build visualisations of uncertainty distributions (specifically, of public transport arrival times) that people understand? More specifically, our study investigated whether a particular visualisation of uncertainty information in the predicted arrival time of one bus and the departure of another could help people make a transfer, a task that involves a more complex visualisation.
We are looking at how to tune models to people’s error preferences in a simple, lightweight way. It is not enough to add an effective visualisation on the existing models. Even an effective representation of uncertainty, in this case, might not be optimal if the model is not tuned to reflect people’s error preferences. Given known costs for each type of error, cost-sensitive classification can be employed to fit a model that makes predictions that reflect error preferences.
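As a hypothetical illustration of how error preferences shape the decision (the scenario, function names and numbers here are mine, not from the study): given a predictive distribution over bus arrival times, the best moment to leave for the stop depends on how much worse the user considers missing the bus than waiting for it.

```python
def best_departure(arrival_probs, cost_miss=10.0, cost_wait_per_min=1.0):
    """Pick the leave-time (minutes from now) minimising expected cost.

    arrival_probs: dict {minute: probability the bus arrives at that minute}.
    Leaving at minute t means missing every bus that arrives before t,
    and waiting (arrival - t) minutes otherwise.
    """
    best_t, best_cost = None, float("inf")
    for t in range(max(arrival_probs) + 1):
        cost = sum(
            p * (cost_miss if m < t else cost_wait_per_min * (m - t))
            for m, p in arrival_probs.items()
        )
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Bus most likely at minute 5, with some chance it is early or late
probs = {3: 0.1, 5: 0.6, 7: 0.3}
t, c = best_departure(probs)
print(t, c)
```

Raising `cost_miss` pushes the recommended leave-time earlier; this kind of asymmetric penalty is what cost-sensitive classification encodes when tuning a model to people's error preferences.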
Our first study, related to designing a transit mobile application for public transport in Melbourne, tries to help commuters make transfers among various modes (bus, train, and tram) by using visualisation to communicate the associated uncertainty in arrival and departure times. The findings from this study will help in designing user-facing applications that leverage the power of visualisation to communicate uncertainty information to non-experts.
Our planned second study will try to build user models in an attempt to understand how non-experts and experts perceive visualisations in their daily lives. The findings from this study will help us come up with guidelines for designing visualisations that people can understand and use to make effective decisions.
We, graduate research scholars of the IITB-Monash Research Academy, study for a dually-badged PhD from IIT Bombay and Monash University, spending time at both institutions to enrich our research experience. The Academy is a collaboration between India and Australia that endeavours to strengthen relationships between the two countries. According to its CEO, Prof Murali Sastry, “The IITB-Monash Research Academy was conceived as a unique model for how two leading, globally focused academic organisations can come together in the spirit of collaboration to deliver solutions and outcomes to grand challenge research questions facing industry and society.”
He is right! Visualisations are often targeted at experts in a domain. I have always been fascinated by how a good visualisation design can help us understand the underlying information, trigger an emotion, and guide us in taking an informed decision. This project offered me a chance to develop a deep understanding of how visualisations are perceived by people, which will help designers leverage the power of visualisations to communicate complex phenomena.
Research scholar: Amit Jena, IITB-Monash Research Academy
Project title: Deep User Models for Visual Analytics
Supervisors: Prof. Venkatesh Rajamanickam, Prof. Tim Dwyer, Dr. Ulrich Engelke, Dr. Cecile Paris
Industry Supervisors: Dr. Ulrich Engelke and Dr. Cecile Paris, Data61 CSIRO
Contact details: firstname.lastname@example.org
The above story was written by Amit Jena. Copyright IITB-Monash Research Academy.