MIT News - Environment - Climate - Climate change
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Machine learning picks out hidden vibrations from earthquake data
Technique may help scientists more accurately map vast underground geologic structures.
Fri, 28 Feb 2020 13:00:46 -0500 | Jennifer Chu | MIT News Office
<p>Over the last century, scientists have developed methods to map the structures within the Earth’s crust, in order to identify resources such as oil reserves, geothermal sources, and, more recently, reservoirs where excess carbon dioxide could potentially be sequestered. They do so by tracking seismic waves that are produced naturally by earthquakes or artificially via explosives or underwater air guns. The way these waves bounce and scatter through the Earth can give scientists an idea of the type of structures that lie beneath the surface.</p> <p>There is a narrow range of seismic waves — those that occur at low frequencies of around 1 hertz — that could give scientists the clearest picture of underground structures spanning wide distances. But these waves are often drowned out by Earth’s noisy seismic hum, and are therefore difficult to pick up with current detectors. Specifically generating low-frequency waves would require pumping in enormous amounts of energy. For these reasons, low-frequency seismic waves have largely gone missing in human-generated seismic data.</p> <p>Now MIT researchers have come up with a machine learning workaround to fill in this gap.</p> <p>In a paper appearing in the journal <em>Geophysics</em>, they describe a method in which they trained a neural network on hundreds of different simulated earthquakes. 
When the researchers presented the trained network with only the high-frequency seismic waves produced from a new simulated earthquake, the neural network was able to imitate the physics of wave propagation and accurately estimate the quake’s missing low-frequency waves.</p> <p>The new method could allow researchers to artificially synthesize the low-frequency waves that are hidden in seismic data, which can then be used to more accurately map the Earth’s internal structures.</p> <p>“The ultimate dream is to be able to map the whole subsurface, and be able to say, for instance, ‘this is exactly what it looks like underneath Iceland, so now you know where to explore for geothermal sources,’” says co-author Laurent Demanet, professor of applied mathematics at MIT. “Now we’ve shown that deep learning offers a solution to be able to fill in these missing frequencies.”</p> <p>Demanet’s co-author is lead author Hongyu Sun, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences.</p> <p><strong>Speaking another frequency</strong></p> <p>A neural network is a set of algorithms modeled loosely after the neural workings of the human brain. The algorithms are designed to recognize patterns in data that are fed into the network, and to cluster these data into categories, or labels. A common example of a neural network involves visual processing; the model is trained to classify an image as either a cat or a dog, based on the patterns it recognizes between thousands of images that are specifically labeled as cats, dogs, and other objects.</p> <p>Sun and Demanet adapted a neural network for signal processing, specifically, to recognize patterns in seismic data. 
They reasoned that if a neural network was fed enough examples of earthquakes, and the ways in which the resulting high- and low-frequency seismic waves travel through a particular composition of the Earth, the network should be able to, as they write in their paper, “mine the hidden correlations among different frequency components” and extrapolate any missing frequencies if the network were only given an earthquake’s partial seismic profile.</p> <p>The researchers looked to train a convolutional neural network, or CNN, a class of deep neural networks that is often used to analyze visual information. A CNN very generally consists of an input and output layer, and multiple hidden layers between, that process inputs to identify correlations between them.</p> <p>Among their many applications, CNNs have been used as a means of generating visual or auditory “deepfakes” — content that has been extrapolated or manipulated through deep-learning and neural networks, to make it seem, for example, as if a woman were talking with a man’s voice.</p> <p>“If a network has seen enough examples of how to take a male voice and transform it into a female voice or vice versa, you can create a sophisticated box to do that,” Demanet says. “Whereas here we make the Earth speak another frequency — one that didn’t originally go through it.”</p> <p><strong>Tracking waves</strong></p> <p>The researchers trained their neural network with inputs that they generated using the Marmousi model, a complex two-dimensional geophysical model that simulates the way seismic waves travel through geological structures of varying density and composition. &nbsp;</p> <p>In their study, the team used the model to simulate nine “virtual Earths,” each with a different subsurface composition. For each Earth model, they simulated 30 different earthquakes, all with the same strength, but different starting locations. In total, the researchers generated hundreds of different seismic scenarios. 
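The extrapolation task can be sketched with a deliberately simplified stand-in. The study trains a convolutional neural network on simulated shots; the toy below instead fits a linear least-squares map, which suffices when every trace is a combination of a few shared broadband wavelets. The wavelet shapes, band cutoff, and trace length here are all invented for illustration and are not from the MIT study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_basis, cutoff_bin = 128, 5, 8  # trace length, number of wavelets, low/high split

# Fixed "wavelets": broadband signatures, each spanning low AND high frequencies.
t = np.arange(n_t)
basis = np.array([np.exp(-((t - c) / 9.0) ** 2) * np.cos(2 * np.pi * f * t / n_t)
                  for c, f in [(20, 3), (45, 11), (64, 7), (90, 17), (110, 5)]])

def band_split(traces):
    """Split traces into low- and high-frequency parts with an FFT mask."""
    spec = np.fft.rfft(traces, axis=-1)
    low = spec.copy()
    low[..., cutoff_bin:] = 0
    high = spec - low
    return np.fft.irfft(low, n_t), np.fft.irfft(high, n_t)

# Training set: random "events" = random combinations of the shared wavelets.
amps = rng.standard_normal((200, n_basis))
traces = amps @ basis
low_tr, high_tr = band_split(traces)

# "Train": least-squares map from the observed high band to the hidden low band.
W, *_ = np.linalg.lstsq(high_tr, low_tr, rcond=None)

# New event: only its high band is "recorded"; extrapolate the low band.
test_trace = rng.standard_normal(n_basis) @ basis
low_true, high_obs = band_split(test_trace)
low_pred = high_obs @ W
rel_err = np.linalg.norm(low_pred - low_true) / np.linalg.norm(low_true)
print(f"relative error of extrapolated low band: {rel_err:.2e}")
```

A real seismic workflow replaces the linear map with a deep CNN and the toy wavelets with full wave-equation simulations; the point is only that the low band is recoverable because both bands are generated by the same underlying events.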
They fed the information from almost all of these simulations into their neural network and let the network find correlations between seismic signals.</p> <p>After the training session, the team introduced to the neural network a new earthquake that they simulated in the Earth model but did not include in the original training data. They only included the high-frequency part of the earthquake’s seismic activity, in hopes that the neural network learned enough from the training data to be able to infer the missing low-frequency signals from the new input.</p> <p>They found that the neural network produced the same low-frequency values that the Marmousi model originally simulated.</p> <p>“The results are fairly good,” Demanet says. “It’s impressive to see how far the network can extrapolate to the missing frequencies.”</p> <p>As with all neural networks, the method has its limitations. Specifically, the neural network is only as good as the data that are fed into it. If a new input is wildly different from the bulk of a network’s training data, there’s no guarantee that the output will be accurate. To contend with this limitation, the researchers say they plan to introduce a wider variety of data to the neural network, such as earthquakes of different strengths, as well as subsurfaces of more varied composition.</p> <p>As they improve the neural network’s predictions, the team hopes to be able to use the method to extrapolate low-frequency signals from actual seismic data, which can then be plugged into seismic models to more accurately map the geological structures below the Earth’s surface. The low frequencies, in particular, are a key ingredient for solving the big puzzle of finding the correct physical model.</p> <p>“Using this neural network will help us find the missing frequencies to ultimately improve the subsurface image and find the composition of the Earth,” Demanet says.</p> <p>This research was supported, in part, by Total SA and the U.S. 
Air Force Office of Scientific Research.</p>
MIT researchers have used a neural network to identify low-frequency seismic waves hidden in earthquake data. The technique may help scientists more accurately map the Earth’s interior.
Image: Christine Daniloff, MIT
Tags: EAPS, Earthquakes, Environment, Geology, Mathematics, Research, School of Science, Machine learning, Artificial intelligence, Earth and atmospheric sciences

Deep cuts in greenhouse emissions are tough but doable, experts say
Speakers at MIT climate symposium outline the steps needed to achieve global carbon neutrality by midcentury.
Thu, 27 Feb 2020 10:55:57 -0500 | David Chandler | MIT News Office
<p>How can the world cut its greenhouse gas emissions in time to avert the most catastrophic impacts of global climate change? It won’t be easy, but there are reasons to be optimistic that the problems can still be solved if the right kind of significant actions are taken within the next few years, according to panelists at the latest MIT symposium on climate change.</p> <p>The symposium, the fourth in a series of six this academic year, was titled “Economy-wide deep decarbonization: Beyond electricity.” Symposium co-chair Ernest Moniz explained in his introductory remarks that while most efforts to curb greenhouse gas emissions tend to focus on electricity generation, which produces 28 percent of the total emissions, “72 percent of the emissions we need to address are outside the electricity sector.” These sectors include transportation, which produces 29 percent; industry, which accounts for 22 percent; commercial and residential buildings, at 12 percent; and agriculture, at 9 percent, according to 2017 figures.</p> <p>While many commitments have been made by nations, states, and cities to zero out or drastically cut their electricity-related emissions, Moniz pointed out that in recent years many places, including Boston, have expanded those commitments beyond electricity. 
“We’re now seeing economy-wide net-zero goals in cities, including Boston,” said Moniz, who is the Cecil and Ida Green Professor of Physics and Engineering Systems Emeritus at MIT and a former U.S. Secretary of Energy.</p> <p>As the generation of electricity continues to get cleaner, he said, the next step will be to extend electrification to other sectors such as home heating and heavy transport. Then, to deal with the remaining sources that are too difficult or expensive to decarbonize, technologies to remove carbon from power plant emissions or directly out of the air will be needed. Such carbon dioxide removal technology will be essential, he said, to provide enough flexibility in planning for climate change mitigation.</p> <p>The symposium, held Tuesday in MIT’s Wong Auditorium and webcast live, was divided into three panels, addressing decarbonization of the transportation system and industry, development of low-carbon fuels, and large-scale carbon management including carbon removal from the air.</p> <p>While electrification of passenger cars has been accelerating in recent years and is expected to increase dramatically over the coming decade, other parts of the transportation system such as aircraft and heavy trucks will be more difficult and take longer to address.</p> <p>MIT professor of mechanical engineering Yang Shao-Horn described progress in increasing the amount of energy that can be stored in batteries of a given weight, a technology that will be crucial to enabling solar and wind power to produce an increasingly large share of electricity. With many new models of electric vehicles entering the market now, that industry “is experiencing explosive growth,” she said; the number of electric vehicles on the road is expected to grow a hundredfold over the next decade.</p> <p>Lithium-ion batteries have become today’s standard for energy storage, and the amount of power they can store per pound has improved tenfold over the last 10 years, Shao-Horn said. 
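The growth figures quoted above imply steep compound annual rates. A quick arithmetic check (the per-year rates are derived here, not stated at the symposium):

```python
# Implied compound annual growth rates for a tenfold improvement in
# battery storage per pound over 10 years, and a hundredfold growth
# in electric vehicles over the next decade. Pure arithmetic.
battery_rate = 10 ** (1 / 10) - 1    # tenfold over 10 years
ev_rate = 100 ** (1 / 10) - 1        # hundredfold over 10 years
print(f"battery storage per pound: {battery_rate:.1%}/yr")
print(f"EV fleet growth:           {ev_rate:.1%}/yr")
```

That works out to roughly 26 percent per year for batteries and about 58 percent per year for the EV fleet, sustained for a decade in each case.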
But further progress will require new battery chemistries, which are being pursued in many labs, including her own. Researchers are exploring a variety of promising avenues, including metal-air batteries using Earth-abundant metals. For some applications such as aircraft, however, batteries may never be sufficient. Instead, cost-effective ways of using carbon-free technology to make a liquid or gas fuel, such as hydrogen, will be needed. “Development of such fuels is still in its infancy,” she said, and requires more research.</p> <p>John Wall, former chief technology officer for Cummins, one of the world’s leading makers of diesel engines for heavy vehicles, said that after 100 years in business, that company last year introduced its first electric truck. But what’s really needed, at least in the near term, he said, are carbon-neutral “drop-in” fuels that can be used in existing vehicles with little or no modification.</p> <p>Wall said that battery technology has reached or will soon reach a point where electrification of heavy vehicles “is credible up to urban class-7 trucks,” which encompasses most vehicles smaller than 18-wheeler tractor-trailers and heavy dumptrucks. But there are limitations, he said, such as the fact that city buses must be able to complete their daily scheduled routes without needing to be recharged, which at this point means many of them would require a backup power source such as a fuel cell.</p> <p>Symposium co-chair Kristala Prather, the Arthur D. Little Professor of Chemical Engineering at MIT, addressed what is needed to develop low-carbon alternative fuels from biomass. She pointed out that biofuels have been controversial, and many pilot programs for biofuels, such as incentives for ethanol made from corn, have had disappointing results and fallen well short of their production goals. 
Given that poor track record, “Why are we still talking about biofuels?” she asked.</p> <p>She is still optimistic about the potential of biofuels, she said, even though there remain many challenges. For one thing, the raw materials to produce fuels from biomass are abundant and widely distributed. “We have the biomass to be able to make this transition” away from petroleum-based fuel, she said. “You can’t make something out of nothing, but we have the something.”</p> <p>She said that the tools of biotechnology can be applied to improving or developing new processes for harnessing microbes to generate fuel from agricultural products. These products can be grown on marginal lands that would not be suitable for food crops and thus would not be in competition with food production.</p> <p>But there are still challenges to be worked out, such as the fact that many of these processes produce toxic byproducts that require disposal or that may interfere with the production process itself. Nevertheless, with active research ongoing around the world, she said, “I do remain optimistic that we will be able to produce biofuels at scale, but it’s going to take a lot of ingenuity.”</p> <p>Francis O’Sullivan, an adjunct professor at MIT’s Sloan School of Management and senior vice president for strategy at the wind energy company Orsted Onshore North America, said hydrogen could provide an important bridge fuel as the U.S. and the world work to decarbonize transportation. But he pointed out that not all hydrogen is created equal. Most of what’s produced currently is made from fossil fuels through a process that releases carbon dioxide. Efficient, scalable electrolysis systems will be needed to produce hydrogen using just water and electricity produced from clean sources.</p> <p>In the power sector, he said, “there is a significant role for hydrogen, in concert with renewables,” for example in transportation and in industrial processes. 
Though there are many issues to be solved in terms of efficient storage and transportation of hydrogen, “it does allow us a lot of flexibility, and therefore is a pathway worth exploring.” And there is progress in that direction, O’Sullivan said. For example, the U.K. is currently building a 100-megawatt electrolysis plant to produce hydrogen, powered by offshore wind turbines. But currently such projects would not be feasible without government subsidies.</p> <p>Howard Herzog, a senior research engineer at the MIT Energy Initiative, said that about 30 percent of the world’s total greenhouse gas emissions comes from sources that can be classified as “difficult to eliminate.” Therefore, developing ways to capture and store carbon, either at the emissions source or directly out of the air, will be essential for meeting decarbonization targets. The easiest way to do that is at the emissions-producing plants themselves, where the gas is much more highly concentrated.</p> <p>But direct air capture may be the only way to clean up those emissions that come not from energy sources themselves but from certain production processes. For example, cement production releases as much carbon dioxide from the limestone being heated as it does from the power to provide that heating. But though direct air capture is “a very seductive concept,” he said, achieving it “is not that easy.”</p> <p>“The question is not whether we can get carbon dioxide out of air — we do it today. The real question is the cost,” Herzog said. While estimates vary, he says the true cost today is around $1,000 per ton of carbon dioxide removed, and to be truly competitive it would need to be about a tenth of that. Still, some pilot plants have been built, including one in Texas that can capture 1.6 million tons of carbon dioxide per year.</p> <p>Ruben Juanes, a professor of civil and environmental engineering at MIT, discussed ways of dealing with the carbon dioxide that gets captured by these methods. 
A number of different processes have been proposed and some have been implemented, including the use of depleted oil and gas wells, and deep underground saline aquifers — formations deep enough and salty enough that nobody would ever want to use them as water sources.</p> <p>“They are ubiquitous. They provide a gigantic capacity that is available at scale,” he said.</p> <p>But because the scale of the problem is so big, there still remain challenges, such as getting the carbon dioxide from its source to the underground storage location. The amount of carbon dioxide involved is comparable to the total amount of petroleum currently distributed worldwide through pipelines and supertankers, and so would require an enormous creation of new infrastructure to move.</p> <p>While that may not be an ultimate solution, “we can think of this as a bridge technology” to use until better systems are developed, he said. “If we want to make good on our efforts” to eliminate global greenhouse gas emissions, “we need to have that bridge.”</p> <p>Arun Majumdar, a professor at Stanford University&nbsp;and formerly the founding director of the Advanced Research Projects Agency for Energy (ARPA-E), said that overall, “this is a gigaton-scale problem,” and that in order to have any chance of meeting the international target of keeping global warming below 2 degrees Celsius, we would have to limit total global emissions from now on to the equivalent of 800 gigatons of carbon dioxide. That means that at emissions rates of 40 gigatons a year, “we only have 20 years left” to use fossil fuels. So any solutions, to be viable, need to be capable of working at gigaton levels.</p> <p>That’s still just a small fraction of the amount of carbon going in and out of the air through natural carbon cycles that have been “in balance for millions of years,” he pointed out. “They’re now thrown out of balance.” But therein may lie some potential solutions. 
For example, the amount of carbon that gets sequestered in the ground by growing plants is strongly dependent on their root depth, and developing crops with deeper roots could provide food and carbon sequestration at the same time. “I want to grow mega-carrots!” he said, putting a humorous spin on a serious proposal that he outlined in detail in a recent research paper.</p> <p>But predicting the outcome of any of these possible countermeasures is daunting, partly because so many aspects of the climate system remain poorly understood. For example, melting of permafrost in the northern landmasses could result in sudden, large releases of methane, a very potent greenhouse gas. “We really need to look into it,” he said, because so far, “none of the climate models capture it,” and thus they could be understating the severity of the climate challenge. He suggests an urgent need for more research on potential materials that could selectively absorb methane.</p> <p>Majumdar said that the target increase of 2 degrees “is kind of baked in” already, and that we should be prepared for the possibility of an actual average temperature rise this century of something like 3 to 3.5 degrees. “We should be looking at options” for dealing with such extremes, including the controversial possibility of geoengineering projects to try to limit the amount of sunlight reaching Earth’s surface.</p> <p>Herzog added that any measures we can take today will be far more cost-effective than what we may have to do in later decades. “It costs $20, $30 or $40 a ton to keep carbon dioxide out” of the atmosphere today, he said, but if we leave the task to future generations, “to take it out of the air will cost hundreds of dollars a ton.”</p> <p>Majumdar said that though the challenges are daunting, they also represent a golden opportunity for research. “I do believe science and engineering and technology can play a role” in solving the problems, he said. 
In fact, he said, he wishes he were a student just starting out today, with so many areas where research could play a major role in addressing these global needs. “I wish I was a freshman,” he said. “You want to solve problems? This is a big one!”</p>
Ernest Moniz, former U.S. Secretary of Energy and founding director of the MIT Energy Initiative, introduced the fourth MIT symposium on climate change.
Photo: Jake Belcher
Tags: ESI, MIT Energy Initiative, Climate, Climate change, Special events and guest speakers, Sustainability, Batteries, Global Warming, Renewable energy, Energy storage, Automobiles, Policy, Environment, Emissions

MIT Solve announces 2020 global challenges
Tech-based solutions sought for challenges in work environments, education for girls and women, maternal and newborn health, and sustainable food.
Tue, 25 Feb 2020 16:15:01 -0500 | Claire Crowther | MIT Solve
<p>On Feb. 25, MIT Solve launched its <a href="">2020 Global Challenges</a>: Good Jobs and Inclusive Entrepreneurship, Learning for Girls and Women, Maternal and Newborn Health, and Sustainable Food Systems, with&nbsp;over $1 million in prize funding&nbsp;available across the challenges.</p> <p>Solve seeks tech-based solutions from social entrepreneurs around the world that address these four challenges. Anyone, anywhere can apply by the June 18 deadline. This year, to guide applicants, Solve created a course with <em>MITx</em> entitled “<a href="">Business and Impact Planning for Social Enterprises</a>,” which introduces core business-model and theory-of-change concepts to early-stage entrepreneurs.</p> <p>Finalists will be invited to attend Solve Challenge Finals on Sept. 20 in New York City during U.N. General Assembly week. At the event, they will pitch their solutions to Solve’s Challenge Leadership Groups, judging panels composed of industry leaders and MIT faculty. 
The judges will select the most promising solutions as Solver teams.</p> <p>“Based all over the world, our Solver teams are incredibly diverse and have innovative solutions that turn air pollution into ink, recycle and resell used textiles, crowdsource data on wheelchair accessibility in public spaces, and much more,” says Solve Executive Director Alex Amouyel. “World-changing ideas can come from anywhere, and if you have a relevant solution, we want to hear it.”</p> <p>Solver teams participate in a nine-month program that connects them to the resources they need to scale. To date, Solve has facilitated more than 175 partnerships providing resources such as mentorship, technical expertise, and impact planning. In the past three years, Solve has brokered over $14 million in funding commitments to Solver teams and entrepreneurs.</p> <p>Solve’s challenge design process collects insights and ideas from industry leaders, MIT faculty, and local community voices alike. To develop the 2020 Global Challenges, Solve consulted more than 500 subject matter experts and hosted 14 Challenge Design Workshops in eight countries — in places ranging from Silicon Valley to London to Lagos to Ho Chi Minh City. 
Solve’s open innovation platform garnered more than 26,000 online votes on challenge themes. The four 2020 Global Challenges are:</p> <ol> <li> <p>Good Jobs and Inclusive Entrepreneurship: How can marginalized populations access and create good jobs and entrepreneurial opportunities for themselves?</p> </li> <li> <p>Learning for Girls and Women: How can marginalized girls and young women access quality learning opportunities to succeed?</p> </li> <li> <p>Maternal and Newborn Health: How can every pregnant woman, new mother, and newborn access the care they need to survive and thrive?</p> </li> <li> <p>Sustainable Food Systems: How can we produce and consume low-carbon, resilient, and nutritious food?</p> </li> </ol> <p>As a marketplace for social impact innovation, Solve’s mission is to solve world challenges. Solve finds promising tech-based social entrepreneurs around the world, then brings together MIT’s innovation ecosystem and a community of members to fund and support these entrepreneurs to help scale their impact. Organizations interested in joining the Solve community can learn more and <a href="">apply for membership here</a>.</p>
Renewed products consist of upcycled or recycled materials. The Renewal Workshop is an MIT Solver team that works to save textiles from landfill.
Photo: The Renewal Workshop
Tags: MIT Solve, Special events and guest speakers, Global, Technology and society, Innovation and Entrepreneurship (I&E), International development, Artificial intelligence, Learning, Environment, Health, Community, Startups, Crowdsourcing

MIT-powered climate resilience solution among top 100 proposals for MacArthur $100 million grant
High-scoring 100&amp;Change applications featured in Bold Solutions Network.
Tue, 25 Feb 2020 11:30:01 -0500 | Mark Dwortzan | Joint Program on the Science and Policy of Global Change
<p>The John D. and Catherine T. 
MacArthur Foundation announced that a proactive climate resilience system co-developed by MIT and <a href="" target="_blank">BRAC</a>, a leading development organization, was one of the highest-scoring proposals, designated as the Top 100, in its 2020 <a href="">100&amp;Change</a> competition for a single $100 million grant to help solve one of the world's most critical social challenges.</p> <p>The MIT/BRAC system, known as the Climate Resilience Early Warning System Network&nbsp;(<a href="">CREWSNET</a>), aims to empower climate‑threatened populations to make timely, science-driven decisions about their future. Starting with western Bangladesh but scalable to other frontline nations across the globe, CREWSNET&nbsp;will combine leading-edge climate forecasting and socioeconomic analysis with innovative resilience services to enable people to make and implement informed decisions about adaptation and relocation — and thereby minimize loss of life, livelihoods, and property.</p> <p>“Climate change is one of the most urgent threats facing human civilization today, and while the world’s most vulnerable did not create this challenge, they are the first to inherit it,” said John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, who serves as a CREWSNET project leader along with principal investigator Elfatih Eltahir, the Breene M. Kerr Professor of Hydrology and Climate at MIT. “We at MIT are excited and proud to have partnered with BRAC, a proven, global leader in humanitarian assistance and development programming, to create a new, proactive model for climate adaptation and individual empowerment.”</p> <p>“In its earliest days, BRAC worked tirelessly to rebuild communities devastated by climate disasters. Almost 50 years later, we continue to innovate our poverty alleviation and climate change adaptation programming, which reaches tens of millions of people each year. 
We are thrilled to partner with MIT now to incorporate their advanced technology, research, and scientific capabilities to tackle the myriad of challenges created by climate change, first in Bangladesh and then globally,” says Ashley Toombs, director of External Affairs at BRAC USA, the U.S.-based affiliate, whose portfolio includes climate change adaptation.</p> <p>100&amp;Change is a distinctive competition that is open to organizations and collaborations working in any field, anywhere in the world. Proposals must identify a problem and offer a solution that promises significant and durable change. The second round of the competition had a <a href="">promising start</a>: 3,690 competition registrants submitted 755 proposals. Of those, 475 passed an initial administrative review.</p> <p>The Top 100 represent the top 21 percent of competition submissions. The proposals were rigorously vetted, undergoing MacArthur’s initial administrative review, a <a href="">Peer-to-Peer</a> review, an evaluation by an <a href="">external panel of judges</a>, and a technical review by specialists whose expertise was matched to the project.</p> <p>Each proposal was evaluated using <a href="">four criteria</a>: impactful, evidence-based, feasible, and durable. MacArthur’s board of directors will select up to 10 finalists from among these high-scoring proposals this spring<strong>. </strong></p> <p><strong>“</strong>MacArthur seeks to generate increased recognition, exposure, and support for the high-impact ideas designated as the Top 100<strong>,” </strong>says Cecilia Conrad, CEO of Lever for Change and MacArthur managing director at 100&amp;Change. “Based on our experience in the first round of 100&amp;Change, we know the competition will produce multiple compelling and fundable ideas. 
We are committed to matching philanthropists with powerful solutions and problem solvers to accelerate social change.”</p> <p>Since the inaugural competition, other funders and philanthropists have committed an additional $419 million to date to support bold solutions by 100&amp;Change<em> </em>applicants. Building on the success of 100&amp;Change, MacArthur created <a href="">Lever for Change</a> to unlock significant philanthropic capital by helping donors find and fund vetted, high-impact opportunities through the design and management of customized competitions. In addition to 100&amp;Change, Lever for Change is managing the Chicago Prize, the Economic Opportunity Challenge, and the Larsen Lam ICONIQ Impact Award.</p> <p>The <a href="">Bold Solutions Network</a> launched on Feb. 19, featuring CREWSNET as one of the Top 100 from 100&amp;Change. The searchable online collection of submissions contains a project overview, 90-second video, and two-page factsheet for each proposal. Visitors can sort by subject, location, sustainable development goal, or beneficiary population to view proposals based on area of interest.</p> <p>The Bold Solutions Network will showcase the highest-rated proposals that emerge from the competitions Lever for Change manages. Proposals in the Bold Solutions Network undergo extensive evaluation and due diligence to ensure each solution promises real and measurable progress to accelerate social change.</p> <p>The Bold Solutions Network was designed to provide an innovative approach to identifying the most effective, enduring solutions aligned with donors’ philanthropic goals and to help top applicants gain visibility and funding from a wide array of funders. 
Organizations that are part of the network will have continued access to a variety of technical support and learning opportunities focused on strengthening their proposals and increasing the impact of their work.</p>
A proactive climate resilience system co-developed by MIT and BRAC was included among the top 100 entries in the MacArthur Foundation's 100&amp;Change competition.
Image courtesy of the John D. and Catherine T. MacArthur Foundation.
Tags: Joint Program on the Science and Policy of Global Change, Lincoln Laboratory, Center for Global Change Science, EAPS, Climate, Climate change, Environment, Policy, Agriculture, Water, Contests and academic competitions, Civil and environmental engineering, School of Engineering

Instrument may enable mail-in testing to detect heavy metals in water
Whisk-shaped device absorbs trace contaminants, preserves them in a dry state that can be shipped to labs for analysis.
Tue, 25 Feb 2020 11:05:07 -0500 | Jennifer Chu | MIT News Office
<p>Lead, arsenic, and other heavy metals are increasingly present in water systems around the world due to human activities, such as pesticide use and, more recently, the inadequate disposal of electronic waste. Chronic exposure to even trace levels of these contaminants, at concentrations of parts per billion, can cause debilitating health conditions in pregnant women, children, and other vulnerable populations.</p> <p>Monitoring water for heavy metals is a formidable task, however, particularly for resource-constrained regions where workers must collect many liters of water and chemically preserve samples before transporting them to distant laboratories for analysis.</p> <p>To simplify the monitoring process, MIT researchers have developed an approach called <a href="" target="_blank">SEPSTAT</a>, for solid-phase extraction, preservation, storage, transportation, and analysis of trace contaminants. 
The method is based on a small, user-friendly device the team developed, which absorbs trace contaminants in water and preserves them in a dry state so the samples can be easily dropped in the mail and shipped to a laboratory for further analysis.</p> <p><img alt="A whisk-like device lined with small pockets filled with gold polymer beads fits inside a typical sampling bottle and can be twirled to pick up any metal contaminants in water." src="/sites/" style="width: 500px; height: 281px;" /><br /> <span style="font-size:12px;">A whisk-like device lined with small pockets filled with gold polymer beads fits inside a typical sampling bottle and can be twirled to pick up any metal contaminants in water.</span></p> <p>The device resembles a small, flexible propeller, or whisk, which fits inside a typical sampling bottle. When twirled inside the bottle for several minutes, the instrument can absorb most of the trace contaminants in the water sample. A user can either air-dry the device or blot it with a piece of paper, then flatten it and mail it in an envelope to a laboratory, where scientists can dip it in a solution of acid to remove the contaminants and collect them for further analysis in the lab.</p> <p>“We initially designed this for use in India, but it’s taught me a lot about our own water issues and trace contaminants in the United States,” says device designer Emily Hanhauser, a graduate student in MIT’s Department of Mechanical Engineering. “For instance, someone who has heard about the water crisis in Flint, Michigan, who now wants to know what’s in their water, might one day order something like this online, do the test themselves, and send it to a lab.”</p> <p>Hanhauser and her colleagues recently <a href="" target="_blank">published their results</a> in the journal <em>Environmental Science and Technology</em>.
Her MIT co-authors are Chintan Vaishnav of the Tata Center for Technology and Design and the MIT Sloan School of Management; John Hart, associate professor of mechanical engineering; and Rohit Karnik, professor of mechanical engineering and associate department head for education, along with Michael Bono of Boston University.</p> <p><strong>From teabags to whisks</strong></p> <p>The team originally set out to understand the water monitoring infrastructure in India. Millions of water samples are collected by workers at local laboratories all around the country, which are equipped to perform basic water quality analysis. However, to analyze trace contaminants, workers at these local labs need to chemically preserve large numbers of water samples and transport the vessels, often over hundreds of kilometers, to state capitals, where centralized labs have facilities to properly analyze trace contaminants.</p> <p>“If you’re collecting a lot of these samples and trying to bring them to a lab, it’s pretty onerous work, and there is a significant transportation barrier,” Hanhauser says.</p> <p><img alt="After the device is pulled out and dried, it can preserve any metal contaminants that it has picked up, for long periods of time. The device can be flattened and mailed to a lab, where the contaminants can be further analyzed. " src="/sites/" style="width: 500px; height: 281px;" /><br /> <span style="font-size:12px;">After the device is pulled out and dried, it can preserve any metal contaminants that it has picked up, for long periods of time. 
The device can be flattened and mailed to a lab, where the contaminants can be further analyzed.</span></p> <p>In looking to streamline the logistics of water monitoring, she and her colleagues wondered whether they could bypass the need to transport the water, and instead transport just the contaminants, in a dry state.</p> <p>They eventually found inspiration in dried blood spotting, a simple technique that involves pricking a person’s finger and collecting a drop of blood on a card of cellulose. When dried, the chemicals in the blood are stable and preserved, and the cards can be mailed off for further analysis, avoiding the need to preserve and ship large volumes of blood.</p> <p>The team started thinking of a similar collection system for heavy metals, and looked through the literature for materials that could both absorb trace contaminants from water and keep them stable when dry.</p> <p>They eventually settled on ion-exchange resins, a class of materials that come in the form of small polymer beads, several hundred microns wide. These beads contain groups of molecules bound to a hydrogen ion. When dipped in water, the hydrogen comes off and can be exchanged with another ion, such as a heavy metal cation, that takes hydrogen’s place on the bead. In this way, the beads can absorb heavy metals and other trace contaminants from water.</p> <p>The researchers then looked for ways to immerse the beads in water, and first considered a teabag-like design. They filled a mesh-like pocket with beads and dunked it in water they spiked with heavy metals. They found, though, that it took days for the beads to adequately absorb the contaminants if they simply left the teabag in the water.
When they stirred the teabag around, turbulence sped the process somewhat, but it still took far too long for the beads, packed into one large teabag, to absorb the contaminants.</p> <p>Ultimately, Hanhauser found that a handheld stirring design worked best to take up metal contaminants in water within a reasonable amount of time. The device is made from a polymer mesh cut into several propeller-like panels. Within each panel, Hanhauser hand-stitched small pockets, which she filled with polymer beads. She then stitched each panel around a polymer stick to resemble a sort of egg beater or whisk.</p> <p><strong>Testing the waters</strong></p> <p>The researchers fabricated several of the devices, then tested them on samples of natural water collected around Boston, including the Charles and Mystic rivers. They spiked the samples with various heavy metal contaminants, such as lead, copper, nickel, and cadmium, then stuck a device in the bottle of each sample, and twirled it around by hand to catch and absorb the contaminants. They then placed the devices on a counter to dry overnight.</p> <p>To recover the contaminants from the device, they dipped the device in hydrochloric acid. The hydrogen in the solution effectively knocks away any ions attached to the polymer beads, including heavy metals, which can then be collected and analyzed with instruments such as mass spectrometers.</p> <p>The researchers found that by stirring the device in the water sample, the device was able to absorb and preserve about 94 percent of the metal contaminants in each sample. 
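As a rough illustration of the back-calculation a receiving lab might perform, correcting the recovered mass for incomplete uptake: the roughly 94 percent figure comes from the study above, but the function name, masses, and volumes below are invented for illustration.

```python
# Hypothetical back-calculation of a water sample's original metal
# concentration from the mass recovered off the device in the lab.
# The 0.94 default echoes the ~94 percent uptake reported in the
# article; all other numbers here are invented.

def original_concentration_ppb(recovered_mass_ug, sample_volume_l,
                               recovery_fraction=0.94):
    """Estimate the source water's concentration in micrograms per
    liter (parts per billion), correcting for incomplete uptake."""
    if not 0.0 < recovery_fraction <= 1.0:
        raise ValueError("recovery_fraction must be in (0, 1]")
    if sample_volume_l <= 0:
        raise ValueError("sample volume must be positive")
    return recovered_mass_ug / (sample_volume_l * recovery_fraction)

# e.g. 4.7 micrograms of lead recovered from a 0.5-liter sample
# implies the source water held about 10 ppb of lead
estimate = original_concentration_ppb(4.7, 0.5)
```

The same correction would also propagate the 10 to 20 percent accuracy band reported below onto the final estimate.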
In their recent trials, they found they could still detect the contaminants and predict their concentrations in the original water samples, with an accuracy range of 10 to 20 percent, even after storing the device in a dry state for up to two years.</p> <p>Because each device costs less than $2, the researchers believe it could facilitate transport of samples to centralized laboratories, collection and preservation of samples for future analysis, and acquisition of water quality data in a centralized manner, which, in turn, could help to identify sources of contamination, guide policies, and enable improved water quality management.</p> <p>The researchers have now partnered with a company in India, in hopes of commercializing the device. Together, their project was recently chosen as one of 26 proposals out of more than 950 to be funded by the Indian government under its Atal New India Challenge program.</p> <p>This research was funded, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab, the MIT Tata Center, and the National Science Foundation.</p> MIT graduate student Emily Hanhauser demonstrates a new device that may simplify the logistics of water monitoring for trace metal contaminants, particularly in resource-constrained regions.Image: Melanie Gonick/MITEnvironment, Pollution, Water, Health, India, Sensors, Mechanical engineering, Research, Tata Center, School of Engineering, Sloan School of Management, J-WAFS, National Science Foundation (NSF) Mars 2020: The search for ancient life is on Researchers in the Department of Earth, Atmospheric and Planetary Sciences will help direct Mars 2020 rover sample acquisition. Fri, 21 Feb 2020 15:10:01 -0500 Kate S. Petersen | EAPS <p>Planetary scientists believe that Mars was once warmer, had a significant atmosphere, and maintained abundant flowing water that carved out river channels and pooled in lakes. These conditions would, at least theoretically, support life.
But following a July 2020 launch, a 34-million-mile journey, and an elaborately choreographed descent through the scant Martian atmosphere, NASA’s Mars 2020 rover will encounter an entirely different world. Freezing. Dry. And with an atmosphere so thin that even if the temperature warmed enough to melt the polar ice caps, the water would immediately evaporate.</p> <p>What happened to Mars? And did life ever exist on our dusty, red neighbor?</p> <p>To investigate this Martian mystery, the Mars 2020 rover will collect cores of sediments and rocks that will be sealed in tubes and eventually brought to Earth. Once they’ve arrived, the cores can be analyzed with the same instruments and techniques researchers use to understand the deep history of Earth. However, the samples must be chosen strategically because only about three dozen can be brought back — not very many for researchers attempting to characterize the biological and geological history of an entire planet.</p> <p>Associate professor of geobiology Tanja Bosak and professor of planetary sciences Ben Weiss, both in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), have been selected as participating scientists on the mission. They will be among the group of 10 people who will decide which samples to collect. Bosak has also been selected for an additional leadership position as a member of the Project Science Group, which outlines mission strategies and coordinates the different groups of scientists involved with Mars 2020.</p> <p><strong>Looking for Martian fossils</strong></p> <p>Bosak, an expert in fossilization processes, says that looking for signs of ancient life on Mars is especially challenging, because “if there was any life, it would have been microbial. [What] we need to do is to look for something microscopic.” Fossil relics of any kind are rare compared to the original population, and soft-bodied, microbial fossils are even more exceptional.
Given this, “we need to look for life in environments where that life would have been abundant and likely to be preserved,” she says.</p> <p>Planetary scientists suspect past life would have been abundant near water, which informed their choice to send the rover to Jezero crater, believed to be the site of an ancient lake. Satellite images suggest that the lake was fed by a river, which deposited sediments in a delta at its mouth. Cameras and portable analytical instruments on the rover will help the researchers acquire compositional data about these sediments before they decide to take a precious core sample. Bosak plans to use these tools to search for deposits of carbonate, clay minerals, and amorphous silica. These substrates or their analogs are known to preserve microbial structures or fossils on Earth.</p> <p>Once the researchers have located interesting sediments, they can use the rover’s drill to dig in a bit, and a camera to look inside. On Earth, microbial fossilization processes can create visible formations in sediments that can’t be duplicated by abiotic geological processes. If the rover peers into a drill hole and sees this type of formation, it would be very compelling evidence of past life on Mars. However, researchers may have to wait until the samples can be scrutinized with more sophisticated technology back on Earth to&nbsp;see if they’ve found signs of life or chemical precursors of life.</p> <p><strong>Martian magnetism</strong></p> <p>If there is a story of life on Mars, there may also be a story of death — the catastrophic loss of an atmosphere, and potentially with it, a temperate environment and liquid water.</p> <p>“The big question is, why did Mars go from being warm and wet to cold and dry,” says Weiss, a planetary geophysicist. 
“One of the leading ideas is that it lost its atmosphere.” If Mars lost a once-stable atmosphere, there would no longer be a greenhouse effect to keep the planet warm, or the atmospheric pressure necessary to keep liquid water from boiling. But what could have happened to the atmosphere?</p> <p>“The hypothesis is that Mars lost its magnetic field and then the atmosphere was stripped away,” says Weiss. Planetary magnetic fields are generated by the churning of molten metals in the planet’s core. The field repels and redirects charged radiation from the sun that would otherwise destructively react with molecules in the atmosphere. If the Martian atmosphere was destroyed by unmediated solar radiation after losing its magnetic field, evidence of this event may be found inside Martian rocks.</p> <p>When a rock forms on a planet with a magnetic field, the magnetic moments of iron-bearing mineral grains in the rock align themselves with the field. The stronger the planet’s magnetism, the stronger the magnetization recorded in the rock. Without a magnetic field, the grains’ moments orient randomly.</p> <p>By acquiring a series of rock samples ranging in age from very old to very young, and measuring the magnetization recorded in each sample, researchers could potentially track the disappearance of the Martian magnetic field. Weiss says that this timeline could then be compared to the record of climate change on Mars to see if loss of the magnetic field did, in fact, precede cooling and loss of water.</p> <p><strong>Scientific consensus</strong></p> <p>While Bosak and Weiss approach the Martian mystery from different angles, they and the other participating scientists will strive to ensure that each sample they collect will be useful to scientists across disciplines. “These 30 or so samples basically have to be shared [by] all of humanity.
So, our job is not to just represent our own particular interests, but to represent the entire community today and conceive of what future generations might want,” says Weiss. “It's going to be a big, long debate every time we take any sample.”</p> <p>Finding evidence of past life on Mars would be a remarkable discovery, but Bosak points out that there would still be a lot to learn about life even if Mars is completely sterile. “Mars is very similar to us in the sense that it did have liquid water. If we see that everything [on Mars] is sterile, that really does invite a whole bunch of questions about why it’s sterile,” she says. “What was not quite right for life? Or, if you find some evidence for prebiotic chemistry or something, [but not evidence of] cellular life, what made it stop?” Either way, “this is a really super exciting opportunity to get some answers.”</p> <p>JPL built and will manage operations of the Mars 2020 rover for NASA’s Science Mission Directorate. NASA's Launch Services Program, based at the agency's Kennedy Space Center in Florida, is responsible for launch management.</p> This artist's rendition depicts NASA's Mars 2020 rover studying rocks with its robotic arm.Image courtesy of NASA/JPL-Caltech.EAPS, School of Science, Earth and atmospheric sciences, Mars, Geology, NASA, Space, astronomy and planetary science, Environment, Climate MIT continues to advance toward greenhouse gas reduction goals Investments in energy efficiency projects, sustainable design elements essential as campus transforms. Fri, 21 Feb 2020 14:20:01 -0500 Nicole Morell | Office of Sustainability <p>At MIT, making a better world often starts on campus. That’s why, as the Institute works to find solutions to complex global problems, MIT has taken important steps to grow and transform its physical campus: adding new capacity, capabilities, and facilities to better support student life, education, and research.
But growing and transforming the campus relies on resource and energy use — use that can exacerbate the complex global problem of climate change. This raises the question: How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change?</p> <p>It’s a question — and a challenge — that MIT is committed to tackling.</p> <p><strong>Tracking toward 2030 goals</strong></p> <p>Guided by the <a href="" target="_blank">2015 Plan for Action on Climate Change</a>, MIT continues to work toward a goal of at least a 32 percent reduction in campus greenhouse gas emissions by 2030. As reported in the MIT Office of Sustainability’s (MITOS) <a href="!2019%20ghg%20emissions" target="_blank">climate action plan update</a>, campus greenhouse gas (GHG) emissions rose by 2 percent in 2019, in part due to a longer cooling season as well as the new MIT.nano facility coming fully online. Despite this, overall net emissions are 18 percent below the 2014 baseline, and MIT continues to track toward its 2030 goal.</p> <p>Joe Higgins, vice president for campus services and stewardship, is optimistic about MIT’s ability to not only meet, but exceed this current goal. “With this growth [to campus], we are discovering unparalleled opportunities to work toward carbon neutrality by collaborating with key stakeholders across the Institute, tapping into the creativity of our faculty, students, and researchers, and partnering with industry experts. We are committed to making steady progress toward achieving our GHG reduction goal,” he says.</p> <p><strong>New growth to campus</strong></p> <p>This past year marked the first full year of operation for the new MIT.nano facility. This facility includes many energy-intensive labs that necessitate high ventilation rates to meet the requirements of a nanotechnology cleanroom fabrication laboratory.
As a result, the facility’s energy demands and GHG emissions can be much higher than those of a traditional science building. In addition, this facility — among others — uses specialty research gases that can act as potent greenhouse gases. Still, the 214,000-square-foot building has a number of sustainable, high-energy-efficiency design features, including an innovative air filtering process to support cleanroom standards while minimizing energy use. For these sustainable design elements, the facility was recognized with an International Institute for Sustainable Laboratories (I2SL) 2019 <a href="" target="_blank">Go Beyond Award</a>.</p> <p>In 2020, MIT.nano will be joined by new residential and multi-use buildings in both West Campus and Kendall Square, with the Vassar Street Residence and Kendall Square Sites 4 and 5 set to be completed. In keeping with MIT’s target for LEED v4 Gold Certification for new projects, these buildings were designed for high energy efficiency to minimize emissions and include a number of other sustainability measures, from green roofs to high-performance building envelopes.
With new construction on campus, integrated design processes allow for sustainability and energy efficiency strategies to be adopted at the outset.</p> <p><strong>Energy efficiency on an established campus</strong></p> <p>For years, MIT has been keenly focused on increasing the energy efficiency and reducing emissions of its existing buildings, but as the campus grows, reducing emissions of current buildings through deep energy enhancements is an increasingly important part of offsetting emissions from new growth.</p> <p>To best accomplish this, the Department of Facilities — in close collaboration with the Office of Sustainability — has developed and rolled out a governance structure that relies on cross-functional teams to create new standards and policies, identify opportunities, develop projects, and assess progress relevant to building efficiency and emissions reduction. “Engaging across campus and across departments is essential to building out MIT’s full capacity to advance emissions reductions,” explains Director of Sustainability Julie Newman.</p> <p>These cross-functional teams — which include Campus Construction; Campus Services and Maintenance; Environment, Health, and Safety; Facilities Engineering; the Office of Sustainability; and Utilities — have focused on a number of strategies in the past year, including both building-wide and targeted energy strategies that have revealed priority candidates for energy retrofits to drive efficiency and minimize emissions.</p> <p>Carlo Fanone, director of facilities engineering, explains that “the cross-functional teams play an especially critical role at MIT, since we are a district energy campus. 
We supply most of our own energy, we distribute it, and we are the end users, so the teams represent a holistic approach that looks at all three of these elements equally — supply, distribution, and end-use — and considers energy solutions that address any or all of these elements.” Fanone notes that MIT has also identified 25 facilities on campus that have a high energy-use intensity and a high greenhouse gas emissions footprint. These 25 buildings account for up to 50 percent of energy consumption on the MIT campus. “Going forward,” Fanone says, “we are focusing our energy work on these buildings and on other energy enhancements that could have a measurable impact on the progress toward MIT’s 2030 goal.”</p> <p>Armed with these data, the Department of Facilities last year led retrofits for smart lighting and mechanical systems upgrades, as well as smart building management systems, in a number of buildings across campus. These building audits will continue to guide future projects focused on improving and optimizing energy elements such as heat recovery, lighting, and building systems controls.</p> <p>In addition to building-level efficiency improvements, MIT’s <a href="">Central Utilities Plant</a> upgrade is expected to contribute significantly to the reduction of on-campus emissions in upcoming years. The upgraded plant — set to be completed this year — will incorporate more efficient equipment and state-of-the-art controls. 
Between this upgrade, a fuel switch improvement made in 2015, and the building-level energy improvements, regulated pollutant emissions on campus are expected to fall by more than 25 percent and campus greenhouse gas emissions by 10 percent from 2014 levels, helping to offset a projected 10 percent increase in greenhouse gas emissions due to energy demands created by new growth.</p> <p><strong>Climate research and action on campus</strong></p> <p>As MIT explores energy efficiency opportunities, the campus itself plays an important role as an incubator for new ideas.</p> <p>MITOS director Julie Newman and professor of mechanical engineering Timothy Gutowski are once again teaching 11.S938 / 2.S999 (Solving for Carbon Neutrality at MIT) this semester. “The course, along with others that have emerged across campus, provides students the opportunity to devise ideas and solutions for real-world challenges while connecting them back to campus. It also gives the students a sense of ownership on this campus, sharing ideas to chart the course for carbon-neutral MIT,” Newman says.</p> <p>Also on campus, a new energy storage project is being developed to test the feasibility and scalability of using different battery storage technologies to redistribute electricity provided by variable renewable energy. Funded by a Campus Sustainability Incubator Fund grant and led by Jessika Trancik, associate professor in the Institute for Data, Systems, and Society, the project aims to test software approaches to synchronizing energy demand and supply and evaluate the performance of different energy-storage technologies against these use cases. It has the benefit of connecting on-campus climate research with climate action.
“Building this storage testbed, and testing technologies under real-world conditions, can inform new algorithms and battery technologies and act as a multiplier, so that the lessons we learn at MIT can be applied far beyond campus,” says Trancik of the project.</p> <p><strong>Supporting on-campus efforts</strong></p> <p>MIT’s work toward emissions reductions already extends beyond campus as the Institute continues to benefit from its 25-year commitment to purchase electricity generated through its <a href="" target="_self">Summit Farms Power Purchase Agreement</a> (PPA), which enabled the construction of a 650-acre, 60-megawatt solar farm in North Carolina. Through the purchase of 87,300 megawatt-hours of solar power, MIT was able to offset over 30,000 metric tons of greenhouse gas emissions from its on-campus operations in 2019.</p> <p>The Summit Farms PPA model has provided inspiration for similar projects around the country and has also demonstrated what MIT can accomplish through partnership. MIT continues to explore the possibility of collaborating on similar large power-purchase agreements, possibly involving other local institutions and city governments.</p> <p><strong>Looking ahead</strong></p> <p>As the campus continues to grow, Fanone notes that a comprehensive approach will help MIT meet the challenge of expanding while reducing emissions.</p> <p>“District-level energy solutions, additional renewables, coupled with energy enhancements within our buildings, will allow MIT to offset growth and meet our 2030 GHG goals,” says Fanone.
Adds Newman, “It’s an exciting time that MIT is now positioned to put the steps in place to respond to this global crisis at the local level.”</p> How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change?Photo: Maia Weinstock Sustainability, MIT.nano, Facilities, Campus buildings and architecture, Campus development, IDSS, Mechanical engineering, Climate change, Energy, Greenhouse gases, Community Seeding oceans with iron may not impact climate change Study finds Earth’s oceans contain just the right amount of iron; adding more may not improve their ability to absorb carbon dioxide. Mon, 17 Feb 2020 14:59:59 -0500 Jennifer Chu | MIT News Office <p>Historically, the oceans have done much of the planet’s heavy lifting when it comes to sequestering carbon dioxide from the atmosphere. Microscopic organisms known collectively as phytoplankton, which grow throughout the sunlit surface oceans and absorb carbon dioxide through photosynthesis, are key players.</p> <p>To help stem escalating carbon dioxide emissions produced by the burning of fossil fuels, some scientists have proposed seeding the oceans with iron — an essential ingredient that can stimulate phytoplankton growth. Such “iron fertilization” would cultivate vast new fields of phytoplankton, particularly in areas normally bereft of marine life.</p> <p>A new MIT study suggests that iron fertilization may not have a significant impact on phytoplankton growth, at least on a global scale.</p> <p>The researchers studied the interactions between phytoplankton, iron, and other nutrients in the ocean that help phytoplankton grow.
Their simulations suggest that on a global scale, marine life has tuned ocean chemistry through these interactions, evolving to maintain a level of ocean iron that supports a delicate balance of nutrients in various regions of the world.</p> <p>“According to our framework, iron fertilization cannot have a significant overall effect on the amount of carbon in the ocean because the total amount of iron that microbes need is already just right,” says lead author Jonathan Lauderdale, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences.</p> <p>The paper’s co-authors are Rogier Braakman, Gael Forget, Stephanie Dutkiewicz, and Mick Follows at MIT.</p> <p><strong>Ligand soup</strong></p> <p>The iron that phytoplankton depend on to grow comes largely from dust that sweeps over the continents and eventually settles in ocean waters. While huge quantities of iron can be deposited in this way, the majority of this iron quickly sinks, unused, to the seafloor.</p> <p>“The fundamental problem is, marine microbes require iron to grow, but iron doesn’t hang around. Its concentration in the ocean is so minuscule that it’s a treasured resource,” Lauderdale says.</p> <p>Hence, scientists have put forth iron fertilization as a way to introduce more iron into the system. But iron availability to phytoplankton is much higher if it is bound up with certain organic compounds that keep iron in the surface ocean and are themselves produced by phytoplankton.
These compounds, known as ligands, constitute what Lauderdale describes as a “soup of ingredients” that typically come from organic waste products, dead cells, or siderophores — molecules that the microbes have evolved to bind specifically with iron.</p> <p>Not much is known about these iron-trapping ligands at the ecosystem scale, and the team wondered what role the molecules play in regulating the ocean’s capacity to promote the growth of phytoplankton and ultimately absorb carbon dioxide.</p> <p>“People have understood how ligands bind iron, but not what are the emergent properties of such a system at the global scale, and what that means for the biosphere as a whole,” Braakman says. “That’s what we’ve tried to model here.”</p> <p><strong>Iron sweet spot</strong></p> <p>The researchers set out to characterize the interactions between iron, ligands, and macronutrients such as nitrogen and phosphate, and how these interactions affect the global population of phytoplankton and, concurrently, the ocean’s capacity to store carbon dioxide.</p> <p>The team developed a simple three-box model, with each box representing a general ocean environment with a particular balance of iron versus macronutrients. The first box represents remote waters such as the Southern Ocean, which typically have a decent concentration of macronutrients that are upwelled from the deep ocean. They also have a low iron content given their great distance from any continental dust source.</p> <p>The second box represents the North Atlantic and other waters that have an opposite balance: high in iron because of proximity to dusty continents, and low in macronutrients. 
The third box is a stand-in for the deep ocean, which is a rich source of macronutrients, such as phosphates and nitrates.</p> <p>The researchers simulated a general circulation pattern between the three boxes to represent the global currents that connect all the world’s oceans: The circulation starts in the North Atlantic and dives down into the deep ocean, then upwells into the Southern Ocean and returns to the North Atlantic.</p> <p>The team set relative concentrations of iron and macronutrients in each box, then ran the model to see how phytoplankton growth evolved in each box over 10,000 years. They ran 10,000 simulations, each with different ligand properties.</p> <p>Out of their simulations, the researchers identified a crucial positive feedback loop between ligands and iron. Oceans with higher concentrations of ligands also had higher concentrations of iron available for phytoplankton, which could then grow and produce more ligands. When microbes have more than enough iron to feast on, they consume the other nutrients they need, such as nitrogen and phosphate, until those nutrients are completely depleted.</p> <p>The opposite is true for oceans with low ligand concentrations: These have less iron available for phytoplankton growth, and therefore have very little biological activity in general, leading to less macronutrient consumption.</p> <p>The researchers also observed in their simulations a narrow range of ligand concentrations that resulted in a sweet spot, where there was just the right amount of ligand to make just enough iron available for phytoplankton growth, while leaving just enough macronutrients to sustain a whole new cycle of growth across all three ocean boxes.</p> <p>When they compared their simulations to measurements of nutrient, iron, and ligand concentrations taken in the real world, they found their simulated sweet spot range turned out to be the closest match.
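A heavily simplified sketch can illustrate the structure of this kind of three-box model: two sunlit surface boxes whose growth is limited by whichever of iron or macronutrients is scarcer, a deep reservoir, and ligands that retain a fraction of iron at the surface. Every box inventory, rate constant, and parameter value below is hypothetical, chosen only to reproduce the qualitative feedback described here, not the published model.

```python
# Toy three-box sketch of the iron/ligand/macronutrient feedback.
# Box 0: Southern Ocean-like surface box (high macronutrients, little dust).
# Box 1: North Atlantic-like surface box (low macronutrients, lots of dust).
# Box 2: deep-ocean macronutrient reservoir. All values are hypothetical.

def run_box_model(ligand_strength, steps=2000, dt=0.05):
    """Return (macronutrients, iron) per box after integrating the toy model.

    ligand_strength sets how much newly produced organic matter acts as a
    ligand that retains iron in the surface ocean instead of letting it sink.
    """
    macro = [2.0, 0.5, 10.0]   # macronutrient inventory per box
    iron = [0.05, 0.5, 0.2]    # iron inventory per box
    dust = [0.005, 0.1, 0.0]   # external iron supply (dust), surface only
    k_grow, k_sink, mix = 0.5, 0.2, 0.05
    for _ in range(steps):
        for b in (0, 1):  # phytoplankton grow only in the sunlit boxes
            # Liebig-style limitation: growth set by the scarcer resource
            g = k_grow * min(macro[b], iron[b]) * dt
            g = min(g, macro[b], iron[b])  # never consume more than is there
            macro[b] -= g
            # growth consumes a little iron; ligands scaled by
            # ligand_strength retain extra iron that would otherwise be
            # lost; a fraction of iron always sinks out
            iron[b] += (dust[b] * dt + (ligand_strength - 0.1) * g
                        - k_sink * iron[b] * dt)
        # simple overturning loop exchanging macronutrients between boxes
        f = mix * dt
        m0, m1, m2 = macro
        macro[0] += f * (m2 - m0)
        macro[1] += f * (m0 - m1)
        macro[2] += f * (m1 - m2)
    return macro, iron

# Compare weak versus strong ligand retention (hypothetical values):
macro_weak, iron_weak = run_box_model(ligand_strength=0.0)
macro_strong, iron_strong = run_box_model(ligand_strength=2.0)
# Stronger ligands leave more iron in the surface ocean, fueling more
# growth and more macronutrient drawdown, as in the feedback above.
```

Sweeping `ligand_strength` over a range of values in such a toy would be the analog of the team's 10,000 runs with different ligand properties.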
That is, the world’s oceans appear to have just the right amount of ligands, and therefore iron, available to maximize the growth of phytoplankton and optimally consume macronutrients, in a self-reinforcing and self-sustainable balance of resources.</p> <p>If scientists were to widely fertilize the Southern Ocean or any other iron-depleted waters with iron, the effort would temporarily stimulate phytoplankton to grow and take up all the macronutrients available in that region. But eventually there would be no macronutrients left to circulate to other regions like the North Atlantic, which depends on these macronutrients, along with iron from dust deposits, for phytoplankton growth. The net result would be an eventual decrease in phytoplankton in the North Atlantic and no significant increase in carbon dioxide draw-down globally.</p> <p>Lauderdale points out there may also be other unintended effects to fertilizing the Southern Ocean with iron.</p> <p>“We have to consider the whole ocean as this interconnected system,” says Lauderdale, who adds that if phytoplankton in the North Atlantic were to plummet, so too would all the marine life on up the food chain that depends on the microscopic organisms.</p> <p>“Something like 75 percent of production north of the Southern Ocean is fueled by nutrients from the Southern Ocean, and the northern oceans are where most fisheries are and where many ecosystem benefits for people occur,” Lauderdale says. 
“Before we dump loads of iron and draw down nutrients in the Southern Ocean, we should consider unintended consequences downstream that potentially make the environmental situation a lot worse.”</p> <p>This research was supported, in part, by the National Science Foundation, the Gordon and Betty Moore Foundation, and the Simons Foundation.</p> Iron is an essential nutrient for phytoplankton growth, but a new study finds that artificially pumping vast amounts of iron into the oceans won’t bump up the microbes’ populations, or their capacity to sequester carbon dioxide.Civil and environmental engineering, Climate, Climate change, EAPS, Earth and atmospheric sciences, Environment, Microbes, Oceanography and ocean engineering, Research, School of Science, National Science Foundation (NSF) Half of U.S. deaths related to air pollution are linked to out-of-state emissions Study tracks pollution from state to state in the 48 contiguous United States. Wed, 12 Feb 2020 12:59:59 -0500 Jennifer Chu | MIT News Office <p>More than half of all air-quality-related early deaths in the United States are a result of emissions originating outside of the state in which those deaths occur, MIT researchers report today in the journal <em>Nature</em>.</p> <p>The study focuses on the years between 2005 and 2018 and tracks combustion emissions of various polluting compounds from various sectors, looking at every state in the contiguous United States, from season to season and year to year.</p> <p>In general, the researchers find that when air pollution is generated in one state, half of that pollution is lofted into the air and carried by winds across state boundaries, to affect the health quality of out-of-state residents and increase their risk of early death.</p> <p>Electric power generation is the greatest contributor to out-of-state pollution-related deaths, the findings suggest. 
In 2005, for example, deaths caused by sulfur dioxide emitted by power plant smokestacks occurred in another state in more than 75 percent of cases.</p> <p>Encouragingly, the researchers found that since 2005, early deaths associated with air pollution have gone down significantly. They documented a decrease of 30 percent in 2018 compared to 2005, equivalent to about 30,000 avoided early deaths, or people who did not die early as a result of pollution. In addition, the fraction of deaths that occur due to emissions in other states is falling — from 53 percent in 2005 to 41 percent in 2018.</p> <p>Perhaps surprisingly, this reduction in cross-state pollution also appears to be related to electric power generation: In recent years, regulations such as the Environmental Protection Agency’s Clean Air Act and other changes have helped to significantly curb emissions from this sector across the country.</p> <p>The researchers caution, however, that today, emissions from other sectors are increasingly contributing to harmful cross-state pollution.</p> <p>“Regulators in the U.S. have done a pretty good job of hitting the most important thing first, which is power generation, by reducing sulfur dioxide emissions drastically, and there’s been a huge improvement, as we see in the results,” says study leader Steven Barrett, an associate professor of aeronautics and astronautics at MIT. “Now it’s looking like other emissions sectors are becoming important. To make further progress, we should start focusing on road transportation and commercial and residential emissions.”</p> <p>Barrett’s coauthors on the paper are Sebastian Eastham, a research scientist at MIT; Irene Dedoussi, formerly an MIT graduate student and now an assistant professor at Delft University of Technology; and Erwan Monier, formerly an MIT research scientist and now an assistant professor at the University of California at Davis. 
The research was a collaboration between MIT’s Laboratory for Aviation and the Environment and the MIT Joint Program on the Science and Policy of Global Change.</p> <p><strong>Death and the matrix</strong></p> <p>Scientists have long known that pollution observes no boundaries, one of the prime examples being acid rain.</p> <p>“It’s been known in Europe for over 30 years that power stations in England would create acid rain that would affect vegetation in Norway, but there’s not been a systematic way to capture how that translates to human health effects,” Barrett says.</p> <p>In the case of the United States, tracking how pollution from one state affects another state has historically been tricky and computationally difficult, Barrett says. For each of the 48 contiguous states, researchers would have to track emissions to and from the other 47 states.</p> <p>“But now there are modern computational tools that enable you to do these assessments in a much more efficient way,” Barrett says. “That wasn’t really possible before.”</p> <p>He and his colleagues developed such tools, drawing on fundamental work by Daven Henze at the University of Colorado at Boulder, to track how every state in the contiguous U.S. affects pollution and health outcomes in every other state. They looked at multiple species of pollutants, such as sulfur dioxide, ozone, and fine particulates, from various emissions sectors, including electric power generation; road transportation; marine, rail, and aviation; and commercial and residential sources, at intervals of every hour of the year.</p> <p>They first obtained emissions data from each of seven sectors for the years 2005, 2011, and 2018. They then used the GEOS-Chem atmospheric chemistry transport model to track where these emissions ended up, from season to season and year to year, based on wind patterns and a pollutant’s chemical reactions in the atmosphere. 
Finally, they used an epidemiologically derived model to relate a population’s pollutant exposure to its risk of early death.</p> <p>“We have this multidimensional matrix that characterizes the impact of a state’s emissions of a given economic sector of a given pollutant at a given time, on any other state’s health outcomes,” Barrett says. “We can figure out, for example, how much NOx emissions from road transportation in Arizona in July affects human health in Texas, and we can do those calculations instantly.”</p> <p><strong>Importing pollution</strong></p> <p>The researchers also found that emissions traveling out of state could affect the health of residents beyond their immediate neighbors.</p> <p>“It’s not necessarily just the adjacent state, but states over 1,000 miles away that can be affected,” Barrett says. “Different kinds of emissions have a different kind of range.”</p> <p>For example, electric power generation has the greatest range, as power plants can loft pollutants far into the atmosphere, allowing them to travel over long distances. In contrast, the commercial and residential sectors generally emit pollutants that chemically do not last as long in the atmosphere.</p> <p>“The story is different for each pollutant,” Barrett says.</p> <p>In general, the researchers found that out-of-state air pollution was associated with more than half of all pollution-related early deaths in the U.S. from 2005 to 2018.</p> <p>In terms of the impact on individual states, the team found that many of the northern Midwest states such as Wyoming and North Dakota are “net exporters” of pollution-related health impacts, partly because the populations there are relatively low and the emissions these states generate are carried away by winds to other states. Those states that “import” health impacts tend to lie along the East Coast, in the path of the U.S. 
winds that sweep eastward.</p> <p>New York in particular is what the researchers call “the biggest importer of air pollution deaths”; 60 percent of the state’s air pollution-related early deaths are from out-of-state emissions.</p> <p>“There’s a big archive of data we’ve created from this project,” Barrett says. “We think there are a lot of things that policymakers can dig into, to chart a path to saving the most lives.”</p> <p>This research was supported, in part, by the U.S. Environmental Protection Agency, the MIT Martin Family Fellowship for Sustainability, the George and Marie Vergottis Fellowship at MIT, and the VoLo Foundation.</p> New MIT study finds more than half of all air-quality-related early deaths in the United States are a result of cross-state pollution, or emissions originating outside of the state in which those deaths occur.Image: Chelsea Turner, MITEmissions, Environment, Health, Policy, Pollution, Research, School of Engineering, Joint Program on the Science and Policy of Global Change, Aeronautical and astronautical engineering Brainstorming energy-saving hacks on Satori, MIT’s new supercomputer Three-day hackathon explores methods for making artificial intelligence faster and more sustainable. Tue, 11 Feb 2020 11:50:01 -0500 Kim Martineau | MIT Quest for Intelligence <p>Mohammad Haft-Javaherian planned to spend an hour at the&nbsp;<a href="">Green AI Hackathon</a>&nbsp;— just long enough to get acquainted with MIT’s new supercomputer,&nbsp;<a href="">Satori</a>. Three days later, he walked away with $1,000 for his winning strategy to shrink the carbon footprint of artificial intelligence models trained to detect heart disease.&nbsp;</p> <p>“I never thought about the kilowatt-hours I was using,” he says. 
“But this hackathon gave me a chance to look at my carbon footprint and find ways to trade a small amount of model accuracy for big energy savings.”&nbsp;</p> <p>Haft-Javaherian was among six teams to earn prizes at a hackathon co-sponsored by the&nbsp;<a href="">MIT Research Computing Project</a>&nbsp;and&nbsp;<a href="">MIT-IBM Watson AI Lab</a> Jan. 28-30. The event was meant to familiarize students with Satori, the computing cluster IBM&nbsp;<a href="">donated</a> to MIT last year, and to inspire new techniques for building energy-efficient AI models that put less planet-warming carbon dioxide into the air.&nbsp;</p> <p>The event was also a celebration of Satori’s green-computing credentials. With an architecture designed to minimize the transfer of data, among other energy-saving features, Satori recently earned&nbsp;<a href="">fourth place</a>&nbsp;on the Green500 list of supercomputers. Its location gives it additional credibility: It sits on a remediated brownfield site in Holyoke, Massachusetts, now the&nbsp;<a href="">Massachusetts Green High Performance Computing Center</a>, which runs largely on low-carbon hydro, wind and nuclear power.</p> <p>A postdoc at MIT and Harvard Medical School, Haft-Javaherian came to the hackathon to learn more about Satori. He stayed for the challenge of trying to cut the energy intensity of his own work, focused on developing AI methods to screen the coronary arteries for disease. A new imaging method, optical coherence tomography, has given cardiologists a new tool for visualizing defects in the artery walls that can slow the flow of oxygenated blood to the heart. But even the experts can miss subtle patterns that computers excel at detecting.</p> <p>At the hackathon, Haft-Javaherian ran a test on his model and saw that he could cut its energy use eight-fold by reducing the time Satori’s graphics processors sat idle. 
He also experimented with adjusting the model’s number of layers and features, trading varying degrees of accuracy for lower energy use.&nbsp;</p> <p>A second team, Alex Andonian and Camilo Fosco, also won $1,000 by showing they could train a classification model nearly 10 times faster by optimizing their code and losing a small bit of accuracy. Graduate students in the Department of Electrical Engineering and Computer Science (EECS), Andonian and Fosco are currently training a classifier to tell legitimate videos from AI-manipulated fakes, to compete in Facebook’s&nbsp;<a href="">Deepfake Detection Challenge</a>. Facebook launched the contest last fall to crowdsource ideas for stopping the spread of misinformation on its platform ahead of the 2020 presidential election.</p> <p>If a technical solution to deepfakes is found, it will need to run on millions of machines at once, says Andonian. That makes energy efficiency key. “Every optimization we can find to train and run more efficient models will make a huge difference,” he says.</p> <p>To speed up the training process, they tried streamlining their code and lowering the resolution of their 100,000-video training set by eliminating some frames. They didn’t expect a solution in three days, but Satori’s size worked in their favor. “We were able to run 10 to 20 experiments at a time, which let us iterate on potential ideas and get results quickly,” says Andonian.&nbsp;</p> <p>As AI continues to improve at tasks like reading medical scans and interpreting video, models have grown bigger and more calculation-intensive, and thus, energy intensive. By one&nbsp;<a href="">estimate</a>, training a large language-processing model produces nearly as much carbon dioxide as the cradle-to-grave emissions from five American cars. 
The footprint of the typical model is modest by comparison, but as AI applications proliferate, their environmental impact is growing.&nbsp;</p> <p>One way to green AI, and tame the exponential growth in demand for training AI, is to build smaller models. That’s the approach that a third hackathon competitor, EECS graduate student Jonathan Frankle, took. Frankle is looking for signals early in the training process that point to subnetworks within the larger, fully trained network that can do the same job.&nbsp;The idea builds on his award-winning&nbsp;<a href="">Lottery Ticket Hypothesis</a>&nbsp;paper from last year that found a neural network could perform with 90 percent fewer connections if the right subnetwork was found early in training.</p> <p>The hackathon competitors were judged by John Cohn, chief scientist at the MIT-IBM Watson AI Lab; Christopher Hill, director of MIT’s Research Computing Project; and Lauren Milechin, a research software engineer at MIT.&nbsp;</p> <p>The judges recognized four&nbsp;other teams: Department of Earth, Atmospheric and Planetary Sciences (EAPS) graduate students Ali Ramadhan,&nbsp;Suyash Bire, and James Schloss,&nbsp;for adapting the programming language Julia for Satori; MIT Lincoln Laboratory postdoc Andrew Kirby, for adapting code he wrote as a graduate student to Satori using a library designed for easy programming of computing architectures; and Department of Brain and Cognitive Sciences graduate students Jenelle Feather and Kelsey Allen, for applying a technique that drastically simplifies models by cutting their number of parameters.</p> <p>IBM developers were on hand to answer questions and gather feedback.&nbsp;“We pushed the system — in a good way,” says Cohn. 
“In the end, we improved the machine, the documentation, and the tools around it.”&nbsp;</p> <p>Going forward, Satori will be joined in Holyoke by&nbsp;<a href="">TX-Gaia</a>, Lincoln Laboratory’s new supercomputer.&nbsp;Together, they will provide feedback on the energy use of their workloads. “We want to raise awareness and encourage users to find innovative ways to green-up all of their computing,” says Hill.&nbsp;</p> Several dozen students participated in the Green AI Hackathon, co-sponsored by the MIT Research Computing Project and MIT-IBM Watson AI Lab. Photo panel: Samantha SmileyQuest for Intelligence, MIT-IBM Watson AI Lab, Electrical engineering and computer science (EECS), EAPS, Lincoln Laboratory, Brain and cognitive sciences, School of Engineering, School of Science, Algorithms, Artificial intelligence, Computer science and technology, Data, Machine learning, Software, Climate change, Awards, honors and fellowships, Hackathon, Special events and guest speakers Powering the planet Fikile Brushett and his team are designing electrochemical technology to secure the planet’s energy future. Wed, 29 Jan 2020 09:00:00 -0500 Zain Humayun | School of Engineering <p>Before Fikile Brushett wanted to be an engineer, he wanted to be a soccer player. Today, however, Brushett is the Cecil and Ida Green Career Development Associate Professor in the Department of Chemical Engineering. 
Building 66 might not look much like a soccer field, but Brushett says the sport taught him a fundamental lesson that has proved invaluable in his scientific endeavors.<br /> <br /> “The teams that are successful are the teams that work together,” Brushett says.</p> <p>That philosophy inspires the Brushett Research Group, which draws on disciplines as diverse as organic chemistry and economics to create new electrochemical processes and devices.</p> <p>As the world moves toward cleaner, more sustainable sources of energy, one of the major challenges is converting efficiently between electrical and chemical energy. This is the challenge undertaken by Brushett and his colleagues, who are trying to push the frontiers of electrochemical technology.</p> <p>Brushett’s research focuses on ways to improve redox flow batteries, which are potentially low-cost alternatives to conventional batteries and a viable way of storing energy from renewable sources like wind and the sun. His group also explores means to recycle carbon dioxide — a greenhouse gas — into fuels and useful chemicals, and to extract energy from biomass.</p> <p>In his work, Brushett is helping to transform every stage of the energy pipeline: from unlocking the potential of solar and wind energy to replacing combustion engines with fuel cells, and even enabling greener industrial processes.</p> <p>“A lot of times, electrochemical technologies work in some areas, but we'd like them to work much more broadly than we've asked them to do beforehand,” Brushett says. 
“A lot of that is now driving the need for new innovation in the area, and that's where we come in.”</p> Fikile Brushett is the Cecil and Ida Green Career Development Associate Professor in the Department of Chemical Engineering.Photo: Lillie Paquette/School of EngineeringSchool of Engineering, Chemical engineering, Energy, Energy storage, Climate change, Batteries, Profile, Faculty, Sustainability, Chemistry, electronics Testing the waters MIT sophomore Rachel Shen looks for microscopic solutions to big environmental challenges. Tue, 28 Jan 2020 00:00:00 -0500 Lucy Jakub | Department of Biology <p>In 2010, the U.S. Army Corps of Engineers began restoring the Broad Meadows salt marsh in Quincy, Massachusetts. The marsh, which had grown over with invasive reeds and needed to be dredged, abutted the Broad Meadows Middle School, and its three-year transformation fascinated one inquisitive student. “I was always super curious about what sorts of things were going on there,” says Rachel Shen, who was in eighth grade when they finally finished the project. She’d spend hours watching birds in the marsh, and catching minnows by the beach.</p> <p>In her bedroom at home, she kept an eye on four aquariums furnished with anubias, hornwort, guppy grass, amazon swords, and “too many snails.” Now, living in a dorm as a sophomore at MIT, she’s had to scale back to a single one-gallon tank. But as a Course 7 (Biology) major minoring in environmental and sustainability studies, she gets an even closer look at the natural world, seeing what most of us can’t: the impurities in our water, the matrices of plant cells, and the invisible processes that cycle nutrients in the oceans.</p> <p>Shen’s love for nature has always been coupled with scientific inquiry. Growing up, she took part in <a href="">Splash</a> and <a href="">Spark</a> workshops for grade schoolers, taught by MIT students. “From a young age, I was always that kid catching bugs,” she says. 
In her junior year of high school, she landed the perfect summer internship through Boston University’s <a href="">GROW program</a>: studying ant brains at BU’s <a href="">Traniello lab</a>. Within a colony, ants with different morphological traits perform different jobs as workers, guards, and drones. To see how the brains of these castes might be wired differently, Shen dosed the ants with serotonin and dopamine and looked for differences in the ways the neurotransmitters altered the ants’ social behavior.</p> <p>This experience in the Traniello lab later connected Shen to her first campus job working for <a href=""><em>MITx</em> Biology</a>, which develops online courses and educational resources for students with Department of Biology faculty. Darcy Gordon, one of the administrators for GROW and a postdoc at the Traniello Lab, joined <em>MITx</em> Biology as a digital learning fellow just as Shen was beginning her first year. <em>MITx</em> was looking for students to beta-test their <a href="">biochemistry course</a>, and Gordon encouraged Shen to apply. “I’d never taken a biochem course before, but I had enough background to pick it up,” says Shen, who is always willing to try something new. She went through the entire course, giving feedback on lesson clarity and writing practice problems.</p> <p>Using what she learned on the job, she’s now the biochem leader on a student project with the <a href="">It’s On Us Data Sciences</a> club (formerly Project ORCA) to develop a live map of water contamination by rigging autonomous boats with pollution sensors. Environmental restoration has always been important to her, but it was on her trip to the Navajo Nation with her first-year advisory group, <a href="">Terrascope</a>, that Shen saw the effects of water scarcity and contamination firsthand. 
She and her peers devised filtration and collection methods to bring to the community, but she found the most valuable part of the project to be “working with the people, and coming up with solutions that incorporated their local culture and local politics.”</p> <p>Through the Undergraduate Research Opportunities Program (UROP), Shen has put her problem-solving skills to work in the lab. Last summer, she interned at Draper and the Velásquez-García Group in MIT’s Microsystems Technologies Laboratories. Through experiments, she observed how plant cells can be coaxed with hormones to reinforce their cell walls with lignin and cellulose, becoming “woody” — insights that can be used in the development of biomaterials.</p> <p>For her next UROP, she sought out a lab where she could work alongside a larger team, and was drawn to the people in the lab of <a href="" target="_blank">Sallie “Penny” Chisholm</a> in MIT’s departments of Biology and Civil and Environmental Engineering, who study the marine cyanobacterium <em>Prochlorococcus</em>. “I really feel like I could learn a lot from them,” Shen says. “They’re great at explaining things.”</p> <p><em>Prochlorococcus </em>is one of the most abundant photosynthesizers in the ocean. Cyanobacteria are mixotrophs, which means they get their energy from the sun through photosynthesis, but can also take up nutrients like carbon and nitrogen from their environment. One source of carbon and nitrogen is found in chitin, the insoluble biopolymer that crustaceans and other marine organisms use to build their shells and exoskeletons. Billions of tons of chitin are produced in the oceans every year, and nearly all of it is recycled back into carbon, nitrogen, and minerals by marine bacteria, allowing it to be used again.</p> <p>Shen is investigating whether <em>Prochlorococcus</em> also recycles chitin, like its close relative <em>Synechococcus</em> that secretes enzymes which can break down the polymer. 
In the lab’s grow room, she tends to test tubes that glow green with cyanobacteria. She’ll introduce chitin to half of the cultures to see if specific genes in <em>Prochlorococcus</em> are expressed that might be implicated in chitin degradation, and identify those genes with RNA sequencing.</p> <p>Shen says working with <em>Prochlorococcus </em>is exciting because it’s a case study in which the smallest cellular processes of a species can have huge effects in its ecosystem. Cracking the chitin cycle would have implications for humans, too. Biochemists have been trying to turn chitin into a biodegradable alternative to plastic. “One thing I want to get out of my science education is learning the basic science,” she says, “but it’s really important to me that it has direct applications.”</p> <p>Something else Shen has realized at MIT is that, whatever she ends up doing with her degree, she wants her research to involve fieldwork that takes her out into nature — maybe even back to the marsh, to restore shorelines and waterways. As she puts it, “something that’s directly relevant to people.” But she’s keeping her options open. “Currently I'm just trying to explore pretty much everything.”</p> Biology major Rachel Shen sees what most of us can’t: the impurities in our water, the matrices of plant cells, and the invisible processes that cycle nutrients in the oceans.Photo: Lucy JakubBiology, School of Science, MITx, Undergraduate Research Opportunities Program (UROP), Civil and environmental engineering, School of Engineering, Bacteria, Data, Environment, Microbes, Profile, Research For cheaper solar cells, thinner really is better Solar panel costs have dropped lately, but slimming down silicon wafers could lead to even lower costs and faster industry expansion. Sun, 26 Jan 2020 23:59:59 -0500 David L. 
Chandler | MIT News Office <p>Costs of solar panels have plummeted over the last several years, leading to rates of solar installations far greater than most analysts had expected. But with most of the potential areas for cost savings already pushed to the extreme, further cost reductions are becoming more challenging to find.</p> <p>Now, researchers at MIT and at the National Renewable Energy Laboratory (NREL) have outlined a pathway to slashing costs further, this time by slimming down the silicon cells themselves.</p> <p>Thinner silicon cells have been explored before, especially around a dozen years ago when the cost of silicon peaked because of supply shortages. But this approach suffered from some difficulties: The thin silicon wafers were too brittle and fragile, leading to unacceptable levels of losses during the manufacturing process, and they had lower efficiency. The researchers say there are now ways to begin addressing these challenges through the use of better handling equipment and some recent developments in solar cell architecture.</p> <p>The new findings are detailed in a paper in the journal <em>Energy and Environmental Science</em>, co-authored by MIT postdoc Zhe Liu, professor of mechanical engineering Tonio Buonassisi, and five others at MIT and NREL.</p> <p>The researchers describe their approach as “technoeconomic,” stressing that at this point economic considerations are as crucial as the technological ones in achieving further improvements in affordability of solar panels.</p> <p>Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year, the researchers say. 
Today’s silicon photovoltaic cells, the heart of these solar panels, are made from wafers of silicon that are 160 micrometers thick, but with improved handling methods, the researchers propose this could be shaved down to 100 micrometers — and eventually as little as 40 micrometers or less, which would require only one-fourth as much silicon for a given size of panel.</p> <p>That could not only reduce the cost of the individual panels, they say, but even more importantly it could allow for rapid expansion of solar panel manufacturing capacity. That’s because the expansion can be constrained by limits on how fast new plants can be built to produce the silicon crystal ingots that are then sliced like salami to make the wafers. These plants, which are generally separate from the solar cell manufacturing plants themselves, tend to be capital-intensive and time-consuming to build, which could lead to a bottleneck in the rate of expansion of solar panel production. Reducing wafer thickness could potentially alleviate that problem, the researchers say.</p> <p>The study looked at the efficiency levels of four variations of solar cell architecture, including PERC (passivated emitter and rear contact) cells and other advanced high-efficiency technologies, comparing their outputs at different thickness levels. 
The team found there was in fact little decline in performance down to thicknesses as low as 40 micrometers, using today’s improved manufacturing processes.</p> <p>“We see that there’s this area (of the graphs of efficiency versus thickness) where the efficiency is flat,” Liu says, “and so that’s the region where you could potentially save some money.” Because of these advances in cell architecture, he says, “we really started to see that it was time to revisit the cost benefits.”</p> <p>Changing over the huge panel-manufacturing plants to adapt to the thinner wafers will be a time-consuming and expensive process, but the analysis shows the benefits can far outweigh the costs, Liu says. It will take time to develop the necessary equipment and procedures to allow for the thinner material, but with existing technology, he says, “it should be relatively simple to go down to 100 micrometers,” which would already provide some significant savings. Further improvements in technology such as better detection of microcracks before they grow could help reduce thicknesses further.</p> <p>In the future, the thickness could potentially be reduced to as little as 15 micrometers, he says. New technologies that grow thin wafers of silicon crystal directly rather than slicing them from a larger cylinder could help enable such further thinning, he says.</p> <p>Development of thin silicon has received little attention in recent years because the price of silicon has declined from its earlier peak. But, because of cost reductions that have already taken place in solar cell efficiency and other parts of the solar panel manufacturing process and supply chain, the cost of the silicon is once again a factor that can make a difference, he says.</p> <p>“Efficiency can only go up by a few percent. So if you want to get further improvements, thickness is the way to go,” Buonassisi says. 
But the conversion will require large capital investments for full-scale deployment.</p> <p>The purpose of this study, he says, is to provide a roadmap for those who may be planning expansion in solar manufacturing technologies. By making the path “concrete and tangible,” he says, it may help companies incorporate this in their planning. “There is a path,” he says. “It’s not easy, but there is a path. And for the first movers, the advantage is significant.”</p> <p>What may be required, he says, is for the different key players in the industry to get together and lay out a specific set of steps forward and agreed-upon standards, as the integrated circuit industry did early on to enable the explosive growth of that industry. “That would be truly transformative,” he says.</p> <p>Andre Augusto, an associate research scientist at Arizona State University who was not connected with this research, says “refining silicon and wafer manufacturing is the most capital-expense (capex) demanding part of the process of manufacturing solar panels. So in a scenario of fast expansion, the wafer supply can become an issue. Going thin solves this problem in part as you can manufacture more wafers per machine without increasing significantly the capex.” He adds that “thinner wafers may deliver performance advantages in certain climates,” performing better in warmer conditions.</p> <p>Renewable energy analyst Gregory Wilson of Gregory Wilson Consulting, who was not associated with this work, says “The impact of reducing the amount of silicon used in mainstream cells would be very significant, as the paper points out. The most obvious gain is in the total amount of capital required to scale the PV industry to the multi-terawatt scale required by the climate change problem. Another benefit is in the amount of energy required to produce silicon PV panels. 
This is because the polysilicon production and ingot growth processes that are required for the production of high efficiency cells are very energy intensive.”</p> <p>Wilson adds, “Major PV cell and module manufacturers need to hear from credible groups like Prof. Buonassisi’s at MIT, since they will make this shift when they can clearly see the economic benefits.”</p> <p>The team also included Sarah Sofia, Hannu Laine, Sarah Wieghold and Marius Peters at MIT and Michael Woodhouse at NREL. The work was partly supported by the U.S. Department of Energy, the Singapore-MIT Alliance for Research and Technology (SMART),&nbsp;and by a Total Energy Fellowship through the MIT Energy Initiative.</p> Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year.Research, School of Engineering, Energy, Solar, Nanoscience and nanotechnology, Materials Science and Engineering, Mechanical engineering, Renewable energy, Alternative energy, Sustainability, MIT Energy Initiative, Climate change, Department of Energy (DoE), Singapore-MIT Alliance for Research and Technology (SMART) Reducing risk, empowering resilience to disruptive global change Workshop highlights how MIT research can guide adaptation at local, regional, and national scales. Thu, 23 Jan 2020 15:15:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>Five-hundred-year floods. Persistent droughts and heat waves. More devastating wildfires. As these and other planetary perils become more commonplace, they pose serious risks to natural, managed, and built environments around the world.
Assessing the magnitude of these risks over multiple decades and identifying strategies to prepare for them at local, regional, and national scales will be essential to making societies and economies more resilient and sustainable.</p> <p>With that goal in mind, the <a href="">MIT Joint Program on the Science and Policy of Global Change</a> launched in 2019 its Adaptation-at-Scale initiative (<a href="">AS-MIT</a>), which seeks evidence-based solutions to global change-driven risks. Using its Integrated Global System Modeling (<a href="">IGSM</a>) framework, as well as a suite of resource and infrastructure assessment models, AS-MIT targets, diagnoses, and projects changing risks to life-sustaining resources under impending societal and environmental stressors, and evaluates the effectiveness of potential risk-reduction measures.</p> <p>In pursuit of these objectives, MIT Joint Program researchers are collaborating with other adaptation-at-scale thought leaders across MIT. And at a conference on Jan. 10 on the MIT campus, they showcased some of their most promising efforts in this space.
Part of a series of MIT Joint Program workshops aimed at providing decision-makers with actionable information on key global change concerns, the conference covered risks and resilience strategies for food, energy, and water systems; urban-scale solutions; predicting the evolving risk of extreme events; and decision-making and early warning capabilities — and featured a lunch seminar on renewable energy for resilience and adaptation by an expert from the National Renewable Energy Laboratory.</p> <p><strong>Food, energy, and water systems</strong></p> <p><a href="">Greg Sixt</a>, research manager in the Abdul Latif Jameel Water and Food Systems Lab (<a href="">J-WAFS</a>), described the work of J-WAFS’ Alliance for Climate Change and Food Systems Research, <a href="">an emerging alliance</a> of premier research institutions and key stakeholders to collaboratively frame challenges, identify research paths, and fund and pursue convergence research on building more resilience across the food system, from production to supply chains to consumption.</p> <p>MIT Joint Program Deputy Director <a href="">Sergey Paltsev</a>, also a senior research scientist at the MIT Energy Initiative (MITEI), explored climate-related risks to energy systems. He highlighted physical risks, such as potential impacts of permafrost degradation on roads, airports, natural gas pipelines, and other infrastructure in the Arctic, and of an increase in extreme temperature, wind, and icing events on power distribution infrastructure in the U.S. Northeast.</p> <p>“No matter what we do in terms of climate mitigation, the physical risks will remain the same for decades because of inertia in the climate system,” says Paltsev. “Even with very aggressive emissions-reduction policies, decision-makers must take physical risks into consideration.”</p> <p>They must also account for <a href="">transition risks</a> — long-term financial and investment risks to fossil fuel infrastructure posed by climate policies. 
Paltsev showed how <a href="">energy scenarios</a> developed at MIT and elsewhere can enable decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy.</p> <p>MIT Joint Program Deputy Director <a href="">Adam Schlosser</a> discussed MIT Joint Program (JP) efforts to assess risks to, and optimal adaptation strategies for, water systems subject to drought, flooding, and other challenges to water availability and quality posed by a changing environment. Schlosser noted that in some cases, efficiency improvements can go a long way in meeting these challenges, as shown in <a href="">one JP study</a> that found improving municipal and industrial efficiencies was just as effective as climate mitigation in confronting projected water shortages in Asia. Finally, he introduced a new JP <a href="">project</a> funded by the U.S. Department of Energy that will explore how foresight could increase the resilience of U.S. floodplains to future forces, stressors, and disturbances imposed by nature and human activity.</p> <p>“In assessing how we avoid and adapt to risk, we need to think about all plausible futures,” says Schlosser. “Our approach is to take all [of those] futures, put them into our [integrated global] system of human and natural systems, and think about how we use water optimally.”</p> <p><strong>Urban-scale solutions</strong></p> <p><a href="">Brian Goldberg</a>, assistant director of the MIT Office of Sustainability, detailed MIT’s plans to sustain MIT campus infrastructure amid intensifying climate disruptions and impacts over the next 100 years.
Toward that end, the <a href="">MIT Climate Resiliency Committee</a> is working to shore up multiple, interdependent layers of resiliency that include the campus site, infrastructure and utilities, buildings, and community, and to create modeling tools to evaluate flood risk.</p> <p>“We’re using the campus as a testbed to develop solutions, advance research, and ultimately grow a more climate-resilient campus,” says Goldberg. “Perhaps the models we develop and engage with at the campus scale can then influence the city or region scale and then be shared globally.”</p> <p>MIT Joint Program/MITEI Research Scientist <a href="">Mei Yuan</a> described an upcoming study to assess the potential of the building sector to reduce its greenhouse gas emissions through more energy-efficient design and intelligent telecommunications — and thereby lower climate-related risk to urban infrastructure. Yuan aims to achieve this objective by linking the program’s U.S. Regional Energy Policy (<a href="">USREP</a>) model with a detailed building sector model that explicitly represents energy-consuming technologies (e.g., for heating, cooling, lighting, and household appliances).&nbsp;</p> <p>“Incorporating this building sector model within an integrated framework that combines USREP with an hourly electricity dispatch model (EleMod) could enable us to simulate the supply and demand of electricity at finer spatial and temporal resolution,” says Yuan, “and thereby better understand how the power sector will need to adapt to future energy needs.”</p> <p><strong>Renewable energy for resilience and adaptation</strong></p> <p><a href="">Jill Engel-Cox</a>, director of NREL’s Joint Institute for Strategic Energy Analysis, presented several promising adaptation measures for energy resilience that incorporate renewables.
These include placing critical power lines underground; increasing demand-side energy efficiency to decrease energy consumption and power system instability; diversifying generation so electric power distribution can be sustained when one power source is down; deploying distributed generation (e.g., photovoltaics, small wind turbines, energy storage systems) so that if one part of the grid is disconnected, other parts continue to function; and implementing smart grids and micro-grids.</p> <p>“Adaptation and resilience measures tend to be very localized,” says Engel-Cox. “So we need to come up with strategies that will work for particular locations and risks.”</p> <p>These include storm-proofing photovoltaics and wind turbine systems, deploying hydropower with greater flexibility to account for variability in water flow, incorporating renewables in planning for natural gas system outages, and co-locating wind and PV systems on agricultural land.</p> <p><strong>Extreme events</strong></p> <p>MIT Joint Program Principal Research Scientist <a href="">Xiang Gao</a> showed how a <a href="">statistical method</a> that she developed has produced predictions of the risk of <a href="">heavy precipitation</a>, heat waves, and other extreme weather events that are more consistent with observations than those of conventional climate models. Known as the “analog method,” the technique detects extreme events based on large-scale atmospheric patterns associated with such events.</p> <p>“Improved prediction of extreme weather events enabled by the analog method offers a promising pathway to provide meaningful climate mitigation and adaptation actions,” says Gao.</p> <p><a href="">Sai Ravela</a>, a principal research scientist at MIT’s Department of Earth, Atmospheric and Planetary Sciences, showed how artificial intelligence could be exploited to predict extreme events.
Key methods that Ravela and his research group are developing combine climate statistics, atmospheric modeling, and physics to assess the risk of future extreme events. The group’s long-range predictions draw upon deep learning and small-sample statistics using local sensor data and global oscillations. Applying these methods, Ravela and his co-investigators are developing a model to assess the risk of extreme weather events to infrastructure, such as that of wind and flooding damage to a nuclear plant or city.&nbsp;</p> <p><strong>Decision-making and early warning capabilities</strong></p> <p>MIT Joint Program/MITEI Research Scientist <a href="">Jennifer Morris</a> explored uncertainty and decision-making for adaptation to global change-driven challenges ranging from coastal adaptation to grid resilience. Morris described the MIT Joint Program approach as a four-step process: quantify stressors and influences, evaluate vulnerabilities, identify response options and transition pathways, and develop decision-making frameworks. She then used the following Q&amp;A to show how this four-step approach can be applied to the <a href="">case of grid resilience</a>.</p> <p><strong>Q:</strong> Do human-induced changes in damaging weather events present a rising, widespread risk of premature failure in the nation’s power grid — and, if so, what are the cost-effective near-term actions to hedge against that risk?<em> </em></p> <p><strong>A:</strong> First, identify critical junctures within the power grid, starting with large power transformers (LPTs). Next, use an analog approach (described above) to construct a distribution of expected changes in extreme heat wave events that would be damaging to LPTs under different climate scenarios. Next, use energy-economic and electric power models to assess electricity demand and economic costs related to LPT failure.
And finally, make decisions under uncertainty to identify near-term actions to mitigate risks of LPT failure (e.g., upgrading or replacing LPTs).</p> <p><a href="">John Aldridge</a>, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, highlighted the group’s efforts to combine advanced remote sensing and decision support systems to assess the impacts of natural disasters, support hurricane evacuation decision-making, and guide proactive climate adaptation and resilience. Lincoln Laboratory is collaborating with MIT campus partners to develop the Climate Resilience Early Warning System Network (<a href="" target="_blank">CREWSNET</a>), which draws on MIT strengths in cutting-edge climate forecasting, impact models, and applied decision support tools to empower climate resilience and adaptation on a global scale.</p> <p>“From extreme event prediction to scenario-based risk analysis, this workshop showcased the core capabilities of the joint program and its partners across MIT that can&nbsp;advance scalable&nbsp;solutions to adaptation challenges across&nbsp;the globe,” says Adam&nbsp;Schlosser, who coordinated the day’s presentations.&nbsp;“Applying leading-edge modeling tools, our research is well-positioned to provide decision-makers with guidance and strategies to build a more resilient future."</p> An Army Corps of Engineers flood model depicting the Ala Wai watershed after a 100-year rain event. The owner of a local design firm described the Ala Wai Flood Control Project as the largest climate impact project in Hawaii’s modern history.Image: U.S.
Army Corps of Engineers-Honolulu DistrictJoint Program on the Science and Policy of Global Change, Abdul Latif Jameel World Water and Food Security Lab (J-WAFS), MIT Energy Initiative, EAPS, Lincoln Laboratory, Energy, Greenhouse gases, Renewable energy, Climate, Climate change, Environment, Policy, Emissions, Pollution Understanding combustion Assistant Professor Sili Deng is on a quest to understand the chemistry involved in combustion and develop strategies to make it cleaner. Thu, 23 Jan 2020 15:15:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>Much of the conversation around energy sustainability is dominated by clean-energy technologies like wind, solar, and thermal. However, with roughly 80 percent of energy use in the United States coming from fossil fuels, combustion remains the dominant method of energy conversion for power generation, electricity, and transportation.</p> <p>“People think of combustion as a dirty technology, but it’s currently the most feasible way to produce electricity and power,” explains Sili Deng, assistant professor of mechanical engineering and the Brit (1961) &amp; Alex (1949) d’Arbeloff Career Development Professor.</p> <p>Deng is working toward understanding the chemistry and flow that interacts in combustion in an effort to improve technologies for current or near-future energy conversion applications. “My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” she adds.</p> <p>Deng’s interest in combustion stemmed from a conversation she had with a friend before applying to Tsinghua University for undergraduate study. “One day, I was talking about my dream school and major with a friend and she said ‘What if you could increase the efficiency of energy utilization by just 1 percent?’” recalls Deng. “Considering how much energy we use globally each year, you could make a huge difference.”</p> <p>This discussion inspired Deng to study combustion. 
After graduating with a bachelor’s degree in thermal engineering, she received her master’s and PhD from Princeton University. At Princeton, Deng focused on how the coupling effects of chemistry and flow influence combustion and emissions.</p> <p>“The details of combustion are much more complicated than our general understanding of fuel and air combining to form water, carbon dioxide, and heat,” Deng explains. “There are hundreds of chemical species and thousands of reactions involved, depending on the type of fuel, fuel-air mixing, and flow dynamics.”</p> <p>Along with her team at the <a href="" target="_blank">Deng Energy and Nanotechnology Group at MIT</a>, she hopes that understanding chemically reacting flow in the combustion process will result in new strategies to control the process of combustion and reduce or eliminate the soot generated in combustion.&nbsp;</p> <p>“My group utilizes both experimental and computational tools to build a fundamental understanding of the combustion process that can guide the design of combustors for high performance and low emissions,” Deng adds. Her team is also utilizing artificial intelligence algorithms along with physical models to predict — and hopefully control — the combustion process.</p> <p>By understanding and controlling the combustion process, Deng is uncovering more about how soot, combustion’s most notorious by-product, is created.</p> <p>“Once soot leaves the site of combustion, it is difficult to contain. There isn’t much you can do to prevent haze or smog from developing,” she explains.</p> <p>The production of soot starts within the flame itself — even on a small scale, such as burning a candle.
As Deng describes it, a “chemical soup” of hydrocarbons, vapor, melting wax, and oxygen interact to create soot particles visible as the yellow glow of a candle flame.</p> <p>“By understanding exactly how this soot is generated within a flame, we’re hoping to develop methods to reduce or eliminate it before it gets out of the combustion channel,” says Deng.</p> <p>Deng’s research on flames extends beyond the formation of soot. By developing a technology called flame synthesis, she is working on producing nanomaterials that can be used for renewable energy applications.</p> <p>The process of synthesizing nanomaterials via flames shares similarities with the soot formation in flames. Instead of generating the byproducts of incomplete combustion, certain precursors are added to the flame, which result in the production of nanomaterials. One common example of using flame synthesis to create nanomaterials is the production of titanium dioxide, a white pigment often used in paint and sunscreen.&nbsp;</p> <p>“I’m hoping to create a similar type of reaction to develop new materials that can be used for things like renewable energy, water treatment, pollution reduction, and catalysts,” she explains. Her team has been tweaking the various parameters of combustion — from temperature to the type of fuel used — to create nanomaterials that could eventually be used to clean up other, more nefarious byproducts created in combustion.</p> <p>To be successful in her quest to make combustion cleaner, Deng acknowledges that collaboration will be key.
“There’s an opportunity to combine the fundamental research on combustion that my lab is doing with the materials, devices, and products being developed across areas like materials science and automotive engineering,” she says.</p> <p>Since we may be decades away from transitioning to a grid powered by renewable resources like solar, wave, and wind, Deng is helping carve out an important role for fellow combustion scientists.</p> <p>“While clean-energy technologies are continuing to be developed, it’s crucial that we continue to work toward finding ways to improve combustion technologies,” she adds.</p> “My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” says Sili Deng, assistant professor of mechanical engineering at MIT.Photo: Tony PulsoneMechanical engineering, School of Engineering, Energy, Environment, Faculty, Oil and gas, Carbon, Emissions, Profile, Sustainability, Nanoscience and nanotechnology Students propose plans for a carbon-neutral campus Students in class 2.S999 (Solving for Carbon Neutrality at MIT) are charged with developing plans to make MIT’s campus carbon neutral by 2060. Fri, 17 Jan 2020 09:50:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>While so many faculty and researchers at MIT are developing technologies to reduce carbon emissions and increase energy sustainability, one class puts the power in students’ hands.</p> <p>In 2.S999 (Solving for Carbon Neutrality at MIT), teams of students are tasked with developing a plan to achieve carbon neutrality on MIT’s campus by 2060. “It’s a ‘roll up your sleeves and solve a real problem’ kind of class,” says Timothy Gutowski, professor of mechanical engineering and co-instructor for the class.</p> <p>In nearly every class, students hear from guest lecturers who offer their own expert views on energy sustainability and carbon emissions. 
In addition to faculty and staff from across MIT, guest lecturers include local government officials, industry specialists, and economists. Whether it’s the science and ethics behind climate change, the evolution of the electric grid, or the development of MIT’s upgraded Central Utilities Plant, these experts introduce students to considerations on a campus, regional, national, and global level.</p> <p>“It’s essential to expose students to these different perspectives so they understand the complexity and the multidisciplinary nature of this challenge,” says Julie Newman, director of MIT’s Office of Sustainability and co-instructor.</p> <p>In one class, students get the opportunity to embody different perspectives through a debate about the installation of an offshore wind farm near a small coastal town. Each student is given a particular role to play in the debate. Caroline Boone, a junior studying mechanical engineering, played the role of a beachfront property owner who objected to the installation.</p> <p>“It was a really good way of grasping how those negotiations happen in the real world,” recalls Boone. “The fact of the matter is, you’re going to have to work with groups who have their own interests — that requires compromise and negotiation.”</p> <p>Armed with these negotiation skills, along with insights from different experts, students are divided into teams and charged with developing a strategy that outlines year-by-year how MIT can achieve carbon neutrality by 2060. “The final project uses the campus as a test bed for engaging and exposing students to the complexity of solving for these global issues in their own backyard,” Newman adds.</p> <p>Student teams took a number of approaches in their strategies to achieve carbon neutrality.
Tom Hubschman’s team focused on the immediate impact MIT could have through power purchase agreements — also known as PPAs.</p> <p>“Our team quickly realized that, given the harsh New England environment and the limited space on campus, building a giant solar or wind farm in the middle of Cambridge wasn’t a sound strategy,” says Hubschman, a mechanical engineering graduate student. Instead, his team built their strategy around replicating MIT’s current PPA that has resulted in the construction of a 650-acre solar farm in North Carolina.&nbsp;</p> <p>Boone’s team, meanwhile, took a different approach, developing a plan that didn’t include PPAs. “Our team was a bit contrarian in not having any PPAs, but we thought it was important to have that contrasting perspective,” she explains. Boone’s role within her team was to examine building energy use on campus. One takeaway from her research was the need for better controls and sensors to ensure campus buildings are running more efficiently.</p> <p>Regardless of their approach, each team had to deal with a level of uncertainty with regard to the efficiency of New England’s electric grid. “Right now, the electricity produced by MIT’s own power plant emits less carbon than the current grid,” adds Gutowski. “But the question is, as new regulations are put in place and new technologies are developed, when will there be a crossover in the grid emitting less carbon than our own power plant?” Students have to build this uncertainty into the predictive modeling for their proposed solutions.&nbsp;</p> <p>In the two years that the class has been offered, student projects have been helpful in shaping the Office of Sustainability’s own strategy. 
“These projects have reinforced our calculations and confirmed our strategy of using PPAs to contribute to greenhouse gas reduction off-site as we work toward developing on-site solutions,” explains Newman.</p> <p>This spring, Gutowski and Newman will work with a number of universities in South America on launching similar classes for their curricula. They will visit Ecuador, Chile, and Colombia, encouraging university administrators to task their students with solving for carbon neutrality on their own campuses.</p> Julie Newman, director of sustainability at MIT, says the final project for course 2.S999 “uses the campus as a test bed for engaging and exposing students to the complexity of solving [for] global issues in their own backyard.”Photo: Ken RichardsonMechanical engineering, School of Engineering, Classes and programs, Sustainability, Campus buildings and architecture, Climate change, Energy, Greenhouse gases, Students Zeroing in on decarbonization Wielding complex algorithms, nuclear science and engineering doctoral candidate Nestor Sepulveda spins out scenarios for combating climate change. Wed, 15 Jan 2020 00:00:00 -0500 Leda Zimmerman | Department of Nuclear Science and Engineering <p>To avoid the most destructive consequences of climate change, the world’s electric energy systems must stop producing carbon by 2050. It seems like an overwhelming technological, political, and economic challenge — but not to Nestor Sepulveda.</p> <p>“My work has shown me that we&nbsp;do&nbsp;have the means to tackle the problem, and we can start now,” he says. “I am optimistic.”</p> <p>Sepulveda’s research, first as a master’s student and now as a doctoral candidate in the MIT Department of Nuclear Science and Engineering (NSE), involves complex simulations that describe potential pathways to decarbonization.
In work published last year in the journal&nbsp;<em>Joule,&nbsp;</em>Sepulveda and his co-authors made a powerful case for using a mix of renewable and “firm” electricity sources, such as nuclear energy, as the least costly, and most likely, route to a low- or no-carbon grid.</p> <p>These insights, which flow from a unique computational framework blending optimization and data science, operations research, and policy methodologies, have attracted interest from&nbsp;<em>The New York Times&nbsp;</em>and&nbsp;<em>The Economist,&nbsp;</em>as well as from such notable players in the energy arena as Bill Gates. For Sepulveda, the attention could not come at a more vital moment.</p> <p>“Right now, people are at extremes: on the one hand worrying that steps to address climate change might weaken the economy, and on the other advocating a Green New Deal to transform the economy that depends solely on solar, wind, and battery storage,” he says. “I think my data-based work can help bridge the gap and enable people to find a middle point where they can have a conversation.”</p> <p><strong>An optimization tool</strong></p> <p>The computational model Sepulveda is developing to generate this data, the centerpiece of his dissertation research, was sparked by classroom experiences at the start of his NSE master’s degree.</p> <p>“In courses like Nuclear Technology and Society [22.16], which covered the benefits and risks of nuclear energy, I saw that some people believed the solution for climate change was definitely nuclear, while others said it was wind or solar,” he says. “I began wondering how to determine the value of different technologies.”</p> <p>Recognizing that “absolutes exist in people’s minds, but not in reality,” Sepulveda sought to develop a tool that might yield an optimal solution to the decarbonization question. 
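At its core, such a tool answers a least-cost question: which mix of technologies meets demand under an emissions constraint most cheaply? A deliberately tiny sketch of that question follows — the technologies, costs, limits, and emissions cap are all made-up illustrative values, and real models of this kind optimize over far richer formulations:

```python
# Toy least-cost generation-mix problem: choose how much of each technology
# to use so that demand is met, an emissions cap is respected, and total
# cost is minimized. All numbers are hypothetical, for illustration only.
from itertools import product

DEMAND_MWH = 100.0
EMISSIONS_CAP = 12.5  # tons CO2

# (name, cost in $ per MWh, tons CO2 per MWh, max deliverable MWh)
TECHS = [
    ("solar", 40.0, 0.0, 35.0),             # cheap, but resource-limited
    ("firm_low_carbon", 90.0, 0.0, 100.0),  # e.g., nuclear: costly but firm
    ("gas", 60.0, 0.5, 100.0),              # mid-cost, emitting
]

def cheapest_mix(step: float = 5.0):
    """Exhaustively search feasible mixes on a coarse grid of MWh levels."""
    best = None
    levels = [[x * step for x in range(int(t[3] / step) + 1)] for t in TECHS]
    for mix in product(*levels):
        if sum(mix) < DEMAND_MWH:  # must meet demand
            continue
        if sum(m * t[2] for m, t in zip(mix, TECHS)) > EMISSIONS_CAP:
            continue               # must respect the emissions cap
        cost = sum(m * t[1] for m, t in zip(mix, TECHS))
        if best is None or cost < best[0]:
            best = (cost, dict(zip([t[0] for t in TECHS], mix)))
    return best

print(cheapest_mix())
```

Even this toy reproduces the qualitative finding of the *Joule* paper: the cheapest feasible mix exhausts the limited renewable resource, uses the emitting source up to the cap, and fills the remainder with the expensive firm low-carbon source rather than doing without it.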
His inaugural effort in modeling focused on weighing the advantages of utilizing advanced nuclear reactor designs against exclusive use of existing light-water reactor technology in the decarbonization effort.</p> <p>“I showed that in spite of their increased costs, advanced reactors proved more valuable to achieving the low-carbon transition than conventional reactor technology alone,” he says. This research formed the basis of Sepulveda’s master’s thesis in 2016, for a degree spanning NSE and the Technology and Policy Program. It also informed the MIT Energy Initiative’s report,&nbsp;“The Future of Nuclear Energy in a Carbon-Constrained World.”</p> <p><strong>The right stuff</strong></p> <p>Sepulveda comes to the climate challenge armed with a lifelong commitment to service, an appetite for problem-solving, and grit. Born in Santiago, he enlisted in the Chilean navy, completing his high school and college education at the national naval academy.</p> <p>“Chile has natural disasters every year, and the defense forces are the ones that jump in to help people, which I found really attractive,” he says. He opted for the most difficult academic specialty, electrical engineering, over combat and weaponry. Early in his career, the climate change issue struck him, he says, and for his senior project, he designed a ship powered by hydrogen fuel cells.</p> <p>After he graduated, the Chilean navy rewarded his performance with major responsibilities in the fleet, including outfitting a $100 million amphibious ship intended for moving marines and for providing emergency relief services. But Sepulveda was anxious to focus fully on sustainable energy, and petitioned the navy to allow him to pursue a master’s at MIT in 2014.</p> <p>It was while conducting research for this degree that Sepulveda confronted a life-altering health crisis: a heart defect that led to open-heart surgery. “People told me to take time off and wait another year to finish my degree,” he recalls. 
Instead, he decided to press on: “I was deep into ideas about decarbonization, which I found really fulfilling.”</p> <p>After graduating in 2016, he returned to naval life in Chile, but “couldn’t stop thinking about the potential of informing energy policy around the world and making a long-lasting impact,” he says. “Every day, looking in the mirror, I saw the big scar on my chest that reminded me to do something bigger with my life, or at least try.”</p> <p>Convinced that he could play a significant role in addressing the critical carbon problem if he continued his MIT education, Sepulveda successfully petitioned naval superiors to sanction his return to Cambridge, Massachusetts.</p> <p><strong>Simulating the energy transition</strong></p> <p>Since resuming studies here in 2018, Sepulveda has wasted little time. He is focused on refining his modeling tool to play out the potential impacts and costs of increasingly complex energy technology scenarios on achieving deep decarbonization. This has meant rapidly acquiring knowledge in fields such as economics, math, and law.</p> <p>“The navy gave me discipline, and MIT gave me flexibility of mind — how to look at problems from different angles,” he says.</p> <p>With mentors and collaborators such as Associate Provost and Japan Steel Industry Professor Richard Lester and MIT Sloan School of Management professors Juan Pablo Vielma and Christopher Knittel, Sepulveda has been tweaking his models. 
His simulations, which can involve more than 1,000 scenarios, factor in existing and emerging technologies, uncertainties such as the possible emergence of fusion energy, and different regional constraints, to identify optimal investment strategies for low-carbon systems and to determine what pathways generate the most cost-effective solutions.</p> <p>“The idea isn’t to say we need this many solar farms or nuclear plants, but to look at the trends and value the future impact of technologies for climate change, so we can focus money on those with the highest impact, and generate policies that push harder on those,” he says.</p> <p>Sepulveda hopes his models won’t just lead the way to decarbonization, but do so in a way that minimizes social costs. “I come from a developing nation, where there are other problems like health care and education, so my goal is to achieve a pathway that leaves resources to address these other issues.”</p> <p>As he refines his computations with the help of MIT’s massive computing clusters, Sepulveda has been building a life in the United States. He has found a vibrant Chilean community at MIT&nbsp;and discovered local opportunities for venturing out on the water, such as summer sailing on the Charles.</p> <p>After graduation, he plans to leverage his modeling tool for the public benefit, through direct interactions with policy makers (U.S. congressional staffers have already begun to reach out to him), and with businesses looking to bend their strategies toward a zero-carbon future.</p> <p>It is a future that weighs even more heavily on him these days: Sepulveda is expecting his first child. “Right now, we’re buying stuff for the baby, but my mind keeps going into algorithmic mode,” he says. 
“I’m so immersed in decarbonization that I sometimes dream about it.”</p> “In courses like Nuclear Technology and Society, which covered the benefits and risks of nuclear energy, I saw that some people believed the solution for climate change was definitely nuclear, while others said it was wind or solar,” says doctoral student Nestor Sepulveda. “I began wondering how to determine the value of different technologies.”Photo: Gretchen ErtlNuclear science and engineering, MIT Energy Initiative, School of Engineering, Technology and policy, Students, Research, Alternative energy, Energy, Energy storage, Greenhouse gases, Climate change, Global Warming, Sustainability, Emissions, Renewable energy, Economics, Policy, Nuclear power and reactors, Profile, graduate, Graduate, postdoctoral Pathways to a low-carbon future A new study looks at how the global energy mix could change over the next 20 years. Thu, 09 Jan 2020 13:30:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>When it comes to fulfilling ambitious energy and climate commitments, few nations successfully walk their talk. A case in point is the Paris Agreement initiated four years ago. Nearly 200 signatory nations submitted voluntary pledges to cut their contribution to the world’s greenhouse gas emissions by 2030, but <a href="">many are not on track</a> to fulfill these pledges. Moreover, only a small number of countries are now pursuing climate policies consistent with keeping global warming well below 2 degrees Celsius, the long-term target recommended by the Intergovernmental Panel on Climate Change (IPCC). 
&nbsp;&nbsp;&nbsp;</p> <p>This growing discrepancy between current policies and long-term targets — combined with uncertainty about individual nations’ ability to fulfill their commitments due to administrative, technological, and cultural challenges — makes it increasingly difficult for scientists to project the future of the global energy system and its impact on the global climate. Nonetheless, these projections remain essential for decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy.</p> <p>Toward that end, several expert groups continue to produce energy scenarios and analyze their implications for the climate. In a <a href="">study</a> in the journal <em>Economics of Energy &amp; Environmental Policy</em>, <a href="">Sergey Paltsev</a>, deputy director of the <a href="">MIT Joint Program on the Science and Policy of Global Change</a> and a senior research scientist at the <a href="">MIT Energy Initiative</a>, collected projections of the global energy mix over the next two decades from several major energy-scenario producers. Aggregating results from scenarios developed by the MIT Joint Program, International Energy Agency, Shell, BP and ExxonMobil, and contrasting them with scenarios assessed by the IPCC that would be required to follow a pathway that limits global warming to 1.5 C, Paltsev arrived at three notable findings:</p> <p>1. Fossil fuels decline, but still dominate. Assuming current Paris Agreement pledges are maintained beyond 2030, the share of fossil fuels in the global energy mix declines from approximately 80 percent today to 73-76 percent in 2040. In scenarios consistent with the 2 C goal, this share decreases to 56-61 percent in 2040. Meanwhile, the share of wind and solar rises from 2 percent today to 6-13 percent (current pledges) and further to 17-26 percent (2 C scenarios) in 2040.</p> <p>2. Carbon capture waits in the wings. 
The multiple scenarios also show a mixed future for fossil fuels as the globe shifts away from carbon-intensive energy sources. Coal use does not have a sustainable future unless combined with carbon capture and storage (CCS) technology, and most near-term projections show no large-scale deployment of CCS in the next 10-15 years. Natural gas consumption, however, is likely to increase in the next 20 years, but is projected to decline thereafter without CCS. For pathways consistent with the “well below 2 C” goal, CCS scale-up by midcentury is essential for all carbon-emitting technologies.&nbsp;</p> <p>3. Solar and wind thrive, but storage challenges remain. The scenarios show the critical importance of energy-efficiency improvements to the pace of the low-carbon transition but little consensus on the magnitude of such improvements. They do, however, unequivocally point to successful upcoming decades for solar and wind energy. This positive outlook is due to declining costs and escalating research and innovation in addressing intermittency and long-term energy storage challenges.</p> <p>While the scenarios considered in this study project an increased share of renewables in the next 20 years, they do not indicate anything close to a complete decarbonization of the energy system during that time frame. To assess what happens beyond 2040, the study concludes that decision-makers should be drawing upon a range of projections of plausible futures, because the dominant technologies of the near term may not prevail over the long term.</p> <p>“While energy projections are becoming more difficult because of the widening gulf between current policies and stated goals, they remain stakeholders’ sharpest tool in assessing the near- and long-term physical and financial risks associated with climate change and the world’s ongoing transition to a low-carbon energy system,” says Paltsev.
“Combining the results from multiple sources provides additional insight into the evolution of the global energy mix.”</p> The AES Corporation, based in Virginia, installed the world’s largest solar-plus-storage system on the southern end of the Hawaiian island of Kauai. A scaled-down version was first tested at the National Renewable Energy Laboratory. Photo: Dennis Schroeder/NRELJoint Program on the Science and Policy of Global Change, MIT Energy Initiative, Energy, Greenhouse gases, Renewable energy, Climate, Climate change, Environment, Policy, Alternative energy, Emissions, Research, Pollution Preventing energy loss in windows Mechanical engineers are developing technologies that could prevent heat from entering or escaping windows, potentially preventing a massive loss of energy. Mon, 06 Jan 2020 15:30:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>In the quest to make buildings more energy efficient, windows present a particularly difficult problem. According to the U.S. Department of Energy, heat that either escapes or enters windows accounts for roughly 30 percent of the energy used to heat and cool buildings. Researchers are developing a variety of window technologies that could prevent this massive loss of energy.</p> <p>“The choice of windows in a building has a direct influence on energy consumption,” says Nicholas Fang, professor of mechanical engineering. “We need an effective way of blocking solar radiation.”</p> <p>Fang is part of a large collaboration that is working together to develop smart adaptive control and monitoring systems for buildings. 
The research team, which includes researchers from the Hong Kong University of Science and Technology and Leon Glicksman, professor of building technology and mechanical engineering at MIT, has been tasked with helping Hong Kong achieve its ambitious goal to reduce carbon emissions by 40 percent by 2025.</p> <p>“Our idea is to adapt new sensors and smart windows in an effort to help achieve energy efficiency and improve thermal comfort for people inside buildings,” Fang explains.</p> <p>His contribution is the development of a smart material that can be placed on a window as a film that blocks heat from entering. The film remains transparent when the surface temperature is under 32 degrees Celsius, but turns milky when it exceeds 32 C. This change in appearance is due to thermochromic microparticles that change phases in response to heat. The smart window’s milky appearance can block up to 70 percent of solar radiation from passing through the window, translating to a 30 percent reduction in cooling load.&nbsp;</p> <p>In addition to this thermochromic material, Fang’s team is hoping to embed windows with sensors that monitor sunlight, luminance, and temperature. “Overall, we want an integral solution to reduce the load on HVAC systems,” he explains.</p> <p>Like Fang, graduate student Elise Strobach is working on a material that could significantly reduce the amount of heat that either escapes or enters through windows. She has developed a high-clarity silica aerogel that, when placed between two panes of glass, is 50 percent more insulating than traditional windows and lasts up to a decade longer.</p> <p>“Over the course of the past two years, we’ve developed a material that has demonstrated performance and is promising enough to start commercializing,” says Strobach, who is a PhD candidate in MIT’s Device Research Laboratory. 
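As a rough reading of that "50 percent more insulating" figure (a sketch with assumed, generic numbers; the U-value below is typical double glazing, not an AeroShield measurement): steady-state heat loss through a window scales as Q = U·A·ΔT, so a 50 percent higher R-value cuts conductive loss by a third.

```python
# Illustrative only: what "50 percent more insulating" implies for heat loss.
# The U-value is a generic double-pane figure, not an AeroShield spec.
def heat_loss_watts(u_value, area_m2, delta_t_kelvin):
    """Steady-state heat flow through a window: Q = U * A * dT."""
    return u_value * area_m2 * delta_t_kelvin

U_DOUBLE_PANE = 2.7               # W/(m^2*K), typical double glazing (assumed)
U_AEROGEL = U_DOUBLE_PANE / 1.5   # 50% more insulating => 1.5x the R-value

q_standard = heat_loss_watts(U_DOUBLE_PANE, area_m2=1.5, delta_t_kelvin=20)
q_aerogel = heat_loss_watts(U_AEROGEL, area_m2=1.5, delta_t_kelvin=20)
# For the same window and temperature difference, the aerogel pane passes
# two-thirds as much heat.
```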
To help in this commercialization, Strobach has co-founded the startup <a href="">AeroShield Materials</a>.&nbsp;</p> <p>Lighter than a marshmallow, AeroShield’s material comprises 95 percent air. The rest of the material is made up of silica nanoparticles that are just 1-2 nanometers across. This structure blocks all three modes of heat loss: conduction, convection, and radiation. Gas molecules trapped inside the material’s small voids can no longer collide and transfer energy through convection. Meanwhile, the silica nanoparticles absorb radiation and re-emit it back in the direction it came from.</p> <p>“The material’s composition allows for a really intense temperature gradient that keeps the heat where you want it, whether it’s hot or cold outside,” explains Strobach, who, along with AeroShield co-founder Kyle Wilke, was named one of <a href="">Forbes’ 30 Under 30 in Energy</a>. Commercialization of this research is being supported by the MIT Deshpande Center for Technological Innovation.</p> <p>Strobach also sees possibilities for combining AeroShield technologies with other window solutions being developed at MIT, including Fang’s work and research being conducted by Gang Chen, Carl Richard Soderberg Professor of Power Engineering, and research scientist Svetlana Boriskina.</p> <p>“Buildings represent one third of U.S. energy usage, so in many ways windows are low-hanging fruit,” explains Chen.</p> <p>Chen and Boriskina previously worked with Strobach on the first iteration of the AeroShield material for their project developing a solar thermal aerogel receiver. More recently, they have developed polymers that could be used in windows or building facades to trap or reflect heat, regardless of color.&nbsp;</p> <p>These polymers were partially inspired by stained-glass windows. “I have an optical background, so I’m always drawn to the visual aspects of energy applications,” says Boriskina.
“The problem is, when you introduce color it affects whatever energy strategy you are trying to pursue.”</p> <p>Using a mix of polyethylene and a solvent, Chen and Boriskina added various nanoparticles to provide color. Once stretched, the material becomes translucent and its internal structure changes. Previously disorganized carbon chains reform as parallel lines, which are much better at conducting heat.</p> <p>While these polymers need further development for use in transparent windows, they could possibly be used in colorful, translucent windows that reflect or trap heat, ultimately leading to energy savings. “The material isn’t as transparent as glass, but it’s translucent. It could be useful for windows in places you don’t want direct sunlight to enter — like gyms or classrooms,” Boriskina adds.</p> <p>Boriskina is also using these materials for military applications. Through a three-year project funded by the U.S. Army, she is developing lightweight, custom-colored, and unbreakable polymer windows. These windows can provide passive temperature control and camouflage for portable shelters and vehicles.</p> <p>For any of these technologies to have a meaningful impact on energy consumption, researchers must improve scalability and affordability. “Right now, the cost barrier for these technologies is too high — we need to look into more economical and scalable versions,” Fang adds.&nbsp;</p> <p>If researchers are successful in developing manufacturable and affordable solutions, their window technologies could vastly improve building efficiency and lead to a substantial reduction in building energy consumption worldwide.</p> A smart window developed by Professor Nicholas Fang includes thermochromic material that turns frosty when exposed to temperatures of 32 C or higher, such as when a researcher touches the window with her hand.
Photo courtesy of the researchers.Mechanical engineering, School of Engineering, Materials Science and Engineering, Energy, Architecture, Climate change, Glass, Nanoscience and nanotechnology How long will a volcanic island live? Plate tectonics and mantle plumes set the lifespan of volcanic islands like Hawaii and the Galapagos. Wed, 01 Jan 2020 13:59:59 -0500 Jennifer Chu | MIT News Office <p>When a hot plume of rock rises through the Earth’s mantle to puncture the overlying crust, it can create not only a volcanic ocean island, but also a swell in the ocean floor hundreds to thousands of kilometers long. Over time the island is carried away by the underlying tectonic plate, and the plume pops out another island in its place. Over millions of years, this geological hotspot can produce a chain of trailing islands, on which life may flourish temporarily before the islands sink, one by one, back into the sea.&nbsp;</p> <p>The Earth is pocked with dozens of hotspots, including those that produced the island chains of Hawaii and the Galapagos. While the process by which volcanic islands form is similar from chain to chain, the time that any island spends above sea level can vary widely, from a few million years in the case of the Galapagos to over 20 million for the Canary Islands. An island’s age can determine the life and landscapes that evolve there. And yet the mechanisms that set an island’s lifespan are largely unknown.</p> <p>Now scientists at MIT have an idea about the processes that determine a volcanic island’s age. In a paper published today in&nbsp;<em>Science Advances</em>, they report an analysis of 14 major volcanic island chains around the world. 
They found that an island’s age is related to two main geological factors: the speed of the underlying plate and the size of the swell generated by the hotspot plume.</p> <p>For instance, if an island lies on a fast-moving plate, it is likely to have a short lifespan, unless, as is the case with Hawaii, it was also created by a very large plume. The plume that gave rise to the Hawaiian islands is among the largest on Earth, and while the Pacific plate on which Hawaii sits is relatively speedy compared with other oceanic plates, it takes considerable time for the plate to slide over the plume’s expansive swell.&nbsp;</p> <p>The researchers found that this interplay between tectonic speed and plume size explains why the Hawaiian islands persist above sea level for millions of years longer than the oldest Galapagos Islands, which sit on a plate that moves at a similar speed but over a much smaller plume. By comparison, the Canary Islands, among the oldest island chains in the world, sit on the slow-moving Atlantic plate and over a relatively large plume.&nbsp;</p> <p>“These island chains are dynamic, insular laboratories that biologists have long focused on,” says former MIT graduate student Kimberly Huppert, the study’s lead author. “But besides studies on individual chains, there’s not a lot of work that related them to processes of the solid Earth, kilometers below the surface.”</p> <p>“You can imagine all these organisms living on a sort of treadmill made of islands, like stepping stones, and they’re evolving, diverging, migrating to new islands, and the old islands are drowning,” adds Taylor Perron, associate head of MIT’s Department of Earth, Atmospheric and Planetary Sciences.
“What Kim has shown is, there’s a geophysical mechanism that controls how fast this treadmill is moving and how long the island chains go before they drop off the end.”</p> <p>Huppert and Perron co-authored the study with Leigh Royden, professor of earth, atmospheric and planetary sciences at MIT.&nbsp;</p> <p><strong>Sinking a blowtorch</strong></p> <p>The new study is a part of Huppert’s MIT thesis work, in which she looked mainly at the evolution of landscapes on volcanic island chains, the Hawaiian islands in particular. In studying the processes that contribute to island erosion, she dug up a controversy in the literature regarding the processes that cause the seafloor to swell around hotspot islands.&nbsp;</p> <p>“The idea was, if you heat some of the bottom of the plate, you can make it go up really fast by just thermal uplift,&nbsp;&nbsp;basically like a blowtorch under the plate,” Royden says.&nbsp;</p> <p>If this idea is correct, then by the same token, cooling of the heated plate should cause the seafloor to subside and islands to eventually sink back into the ocean. But in studying the ages of drowned islands in hotspot chains around the world, Huppert found that islands drown at a faster rate than any natural cooling mechanism could explain.</p> <p>“So most of this uplift and sinking couldn’t have been from heating and cooling,” Royden says. “It had to be something else.”</p> <p>Huppert’s observation inspired the group to compare major volcanic island chains in hopes of identifying the mechanisms of island uplift and sinking — which are likely the same processes that set an island’s lifespan, or time above sea level.&nbsp;</p> <p><strong>Evolution, on a treadmill</strong></p> <p>In their analysis, the researchers looked at 14 volcanic island chains around the world, including the Hawaiian, Galapagos, and Canary islands. 
For each island chain, they noted the direction in which the underlying tectonic plate was moving and measured the plate’s average speed relative to the hotspot. They then measured, in the direction of each island chain, the distance between the beginning and the end of the swell, or uplift in the crust, created by the underlying plume. For every island chain, they divided the swell distance by plate velocity to arrive at a number representing the average time a volcanic island should spend atop the plume’s swell — which should determine how long an island remains above sea level before sinking into the ocean.</p> <p>When the researchers compared their calculations with the actual ages of each island in each of the 14 chains, including islands that had long since sunk below sea level, they found a strong correlation between the time spent atop the swell and the typical amount of time that islands remain above sea level. A volcanic island’s lifespan, they concluded, depends on a combination of the underlying plate’s speed and the size of the plume, or swell that it creates.&nbsp;</p> <p>Huppert says that the processes that set an island’s age can help scientists better understand biodiversity and how life looks different from one island chain to another.&nbsp;</p> <p>“If an island spends a long time above sea level, that provides a long time for speciation to play out,” Huppert says. 
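The residence-time estimate described above, swell length divided by plate speed, amounts to a one-line calculation. Here is a sketch with hypothetical numbers, chosen only to contrast a fast plate over a small swell with a slow plate over a large one (not measurements from the study):

```python
# Sketch of the residence-time estimate described above. A plate moving at
# 1 cm/yr covers 10 km per million years; all numbers are hypothetical.
def residence_time_myr(swell_length_km, plate_speed_cm_per_yr):
    """Average time (millions of years) an island spends atop the swell."""
    km_per_myr = plate_speed_cm_per_yr * 10.0   # 1 cm/yr = 10 km/Myr
    return swell_length_km / km_per_myr

# Fast plate, modest swell: islands ride off the swell quickly.
fast_chain = residence_time_myr(swell_length_km=600, plate_speed_cm_per_yr=6.0)

# Slow plate, large swell: islands persist far longer above sea level.
slow_chain = residence_time_myr(swell_length_km=1200, plate_speed_cm_per_yr=2.0)
```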
“But if you have an island chain where you have islands that drown at a faster rate, then it will affect the ability of fauna to radiate to neighboring islands, and how these islands are populated.”</p> <p>The researchers posit that, in some sense, we have the interplay of tectonic speed and plume size to thank for our modern understanding of evolution.&nbsp;</p> <p>“You’re looking at a process in the solid Earth which is contributing to the fact that the Galapagos is a very fast moving treadmill, with islands moving off very quickly, with not a long time to erode, and this was the system that led to people discovering evolution,” Royden notes. “So in a sense this process really set the stage for humans to figure out what evolution was about, by doing it in this microcosm. If there hadn’t been this process, and the Galapagos hadn’t been on that short residence time, who knows how long it would have taken for people to figure it out.”</p> <p>This research was supported, in part, by NASA.</p> An aerial view of Las Tintoreras, Isla Isabela in the Galapagos Islands, Ecuador.EAPS, Earth and atmospheric sciences, Environment, Geology, Evolution, Research, School of Science, NASA Bose grants for 2019 reward bold ideas across disciplines Three innovative research projects in literature, plant epigenetics, and chemical engineering will be supported by Professor Amar G. Bose Research Grants. Mon, 23 Dec 2019 14:40:11 -0500 MIT Resource Development <p>Now in their seventh year, the Professor Amar G. Bose Research Grants support visionary projects that represent intellectual curiosity and a pioneering spirit. Three MIT faculty members have each been awarded one of these prestigious awards for 2019 to pursue diverse questions in the humanities, biology, and engineering.</p> <p>At a ceremony hosted by MIT President L. Rafael Reif on Nov. 
25 and attended by past awardees, Provost Martin Schmidt, the Ray and Maria Stata Professor of Electrical Engineering and Computer Science, formally announced this year’s Amar G. Bose Research Fellows: Sandy Alexandre, Mary Gehring, and Kristala L.J. Prather.</p> <p>The fellowships are named&nbsp;for&nbsp;the late Amar G. Bose ’51, SM ’52, ScD ’56, a longtime MIT faculty member and the founder of the Bose Corporation. Speaking at the event, President Reif expressed appreciation for the Bose Fellowships, which enable highly creative and unusual research in areas that can be hard to fund through traditional means. “We are tremendously grateful to the Bose family for providing the support that allows bold and curious thinkers at MIT to dream big, challenge themselves, and explore.”</p> <p>Judith Bose, widow of Amar’s son, Vanu ’87, SM ’94, PhD ’99, congratulated the fellows on behalf of the Bose family. “We talk a lot at this event about the power of a great innovative idea, but I think it was a personal mission of Dr. Bose to nurture the ability, in each individual that he met along the way, to follow through — not just to have the great idea but the agency that comes with being able to pursue your idea, follow it through, and actually see where it leads,” Bose said. “And Vanu was the same way. That care that was epitomized by Dr. Bose not just in the idea itself, but in the personal investment, agency, and nurturing necessary to bring the idea to life — that care is a large part of what makes true change in the world."</p> <p><strong>The relationship between literature and engineering</strong></p> <p>Many technological innovations have resulted from the influence of literature, one of the most notable being the World Wide Web. According to many sources, Sir Tim Berners-Lee, the web’s inventor, found inspiration from a short story by Arthur C. 
Clarke titled “Dial F for Frankenstein.” Science fiction has presaged a number of real-life technological innovations, including&nbsp;the defibrillator, noted in Mary Shelley’s "Frankenstein;" the submarine, described in Jules Verne’s "20,000 Leagues Under the Sea;" and earbuds, described in Ray Bradbury’s "Fahrenheit 451." But the data about literature’s influence on STEM innovations are spotty, and these one-to-one relationships are not always clear-cut.</p> <p>Sandy Alexandre, associate professor of literature, intends to change that by creating a large-scale database of the imaginary inventions found in literature. Alexandre’s project will enact the step-by-step mechanics of STEM innovation via one of its oft-unsung sources: literature. “To deny or sever the ties that bind STEM and literature is to suggest — rather disingenuously — that the ideas for many of the STEM devices that we know and love miraculously just came out of nowhere or from an elsewhere where literature isn’t considered relevant or at all,” she says.</p> <p>During the first phase of her work, Alexandre will collaborate with students to enter into the database the imaginary inventions as they are described verbatim in a selection of books and other texts that fall under the category of speculative fiction—a category that includes but is not limited to the subgenres of fantasy, Afrofuturism, and science fiction. This first phase will, of course, require that students carefully read these texts in general, but also read for these imaginary inventions more specifically. Additionally, students with drawing skills will be tasked with interpreting the descriptions by illustrating them as two-dimensional images.</p> <p>From this vast inventory of innovations, Alexandre, in consultation with students involved in the project, will decide on a short list of inventions that meet five criteria: they must be feasible, ethical, worthwhile, useful, and necessary. 
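The database and five-criteria vetting step described above could be modeled with a structure as simple as the following sketch. The field names and the sample entry are illustrative only, not Alexandre's actual schema:

```python
# Hypothetical sketch of the speculative-fiction invention database and the
# five-criteria vetting step described above; not Alexandre's actual schema.
from dataclasses import dataclass

@dataclass
class Invention:
    name: str
    source_work: str      # the text the invention appears in
    description: str      # the invention as described in the text
    feasible: bool
    ethical: bool
    worthwhile: bool
    useful: bool
    necessary: bool

    def passes_vetting(self):
        # An invention makes the shortlist only if it meets all five criteria.
        return all([self.feasible, self.ethical, self.worthwhile,
                    self.useful, self.necessary])

catalog = [
    # Sample entry (criteria judgments here are invented for illustration).
    Invention("in-ear seashell radios", "Fahrenheit 451 (Ray Bradbury)",
              "tiny radios worn in the ear",
              feasible=True, ethical=True, worthwhile=True,
              useful=True, necessary=False),
]
shortlist = [inv for inv in catalog if inv.passes_vetting()]
```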
This vetting process, which constitutes the second phase of the project, is guided by a very important question: what can creating and thinking with a vast database of speculative fiction’s imaginary inventions teach us about what kinds of ideas we should (and shouldn’t) attempt to make into a reality? For the third and final phase, Alexandre will convene a team to build a real-life prototype of one of the imaginary inventions. She envisions this prototype being placed on exhibit at the MIT Museum.</p> <p>The Bose research grant, Alexandre says, will allow her to take this project from a thought experiment to a lab experiment. “This project aims to ensure that literature no longer plays an overlooked role in STEM innovations. Therefore, the STEM innovation, which will be the culminating prototype of this research project, will cite a work of literature as the main source of information used in its invention.”</p> <p><strong>Nature’s role in chemical production</strong></p> <p>Kristala L.J. Prather ’94, the Arthur D. Little Professor of Chemical Engineering, has been focused on using biological systems for chemical production during the 15 years she’s been at the Institute. Biology as a medium for chemical synthesis has been successfully exploited to commercially produce molecules for uses that range from food to pharmaceuticals — ethanol is a good example. However, there is a range of other molecules with which scientists have been trying to work, but they have faced challenges around an insufficient amount of material being produced and a lack of defined steps needed to make a specific compound.</p> <p>Prather’s research is rooted in the fact that there are a number of naturally (and unnaturally) occurring chemical compounds in the environment, and cells have evolved to be able to consume them.
These cells have evolved or developed a protein that will sense a compound’s presence — a biosensor — and in response will make other proteins that help the cells utilize that compound for their benefit.</p> <p>“We know biology can do this,” Prather says, “so if we can put together a sufficiently diverse set of microorganisms, can we just let nature make these regulatory molecules for anything that we want to be able to sense or detect?” Her hypothesis is that if her team exposes cells to a new compound for a long enough period of time, the cells will evolve the ability to either utilize that carbon source or develop an ability to respond to it. If Prather and her team can then identify the protein that’s now recognizing what that new compound is, they can isolate it and use it to improve the production of that compound in other systems. “The idea is to let nature evolve specificity for particular molecules that we’re interested in,” she adds.</p> <p>Prather’s lab has been working with biosensors for some time, but her team has been limited to sensors that are already well characterized and that were readily available. She’s interested in how her team can gain access to the much wider range of sensors she knows nature has available, through the incremental exposure of new compounds to a more comprehensive set of microorganisms.</p> <p>“To accelerate the transformation of the chemical industry, we must find a way to create better biological catalysts and to create new tools when the existing ones are insufficient,” Prather says.
“I am grateful to the Bose Fellowship Committee for allowing me to explore this novel idea.”</p> <p>Prather’s findings as a result of this project hold the possibility of broad impacts in the field of metabolic engineering, including the development of microbial systems that can be engineered to enhance degradation of both toxic and nontoxic waste.</p> <p><strong>Adopting orphan crops to adapt to climate change</strong></p> <p>In the context of increased environmental pressure and competing land uses, meeting global food security needs is a pressing challenge. Although yield gains in staple grains such as rice, wheat, and corn have been high over the last 50 years, these have been accompanied by a homogenization of the global food supply; only 50 crops provide 90% of global food needs.</p> <p>However, there are at least 3,000 plants that can be grown and consumed by humans, and many of these species thrive in marginal soils, at high temperatures, and with little rainfall. These “orphan” crops are important food sources for farmers in less developed countries but have been the subject of little research.</p> <p>Mary Gehring, associate professor of biology at MIT, seeks to bring orphan crops into the molecular age through epigenetic engineering. She is working to promote hybridization, increase genetic diversity, and reveal desired traits for two orphan seed crops: an oilseed crop, <em>Camelina sativa </em>(false flax), and a high-protein legume, <em>Cajanus cajan </em>(pigeon pea).</p> <p><em>C. sativa, </em>which produces seeds with potential for uses in food and biofuel applications, can grow on land with low rainfall, requires minimal fertilizer inputs, and is resistant to several common plant pathogens. Until the mid-20th century, <em>C. sativa </em>was widely grown in Europe but was supplanted by canola, with a resulting loss of genetic diversity. Gehring proposes to recover this genetic diversity by creating and characterizing hybrids between <em>C. 
sativa </em>and wild relatives that have increased genetic diversity.</p> <p>“To find the best cultivars of orphan crops that will withstand ever increasing environmental insults requires a deeper understanding of the diversity present within these species. We need to expand the plants we rely on for our food supply if we want to continue to thrive in the future,” says Gehring. “Studying orphan crops represents a significant step in that direction. The Bose grant will allow my lab to focus on this historically neglected but vitally important field.”</p> Left to right: MIT Provost Martin Schmidt and President L. Rafael Reif stand with 2019 Bose Fellows Kristala Prather, Mary Gehring, and Sandy Alexandre, along with Judy Bose and Ursula Bose.Photo: Rose LincolnAwards, honors and fellowships, Grants, Faculty, Literature, Technology and society, Chemical engineering, Drug development, Chemistry, Biology, Microbes, Agriculture, Climate change, School of Science, School of Engineering, School of Humanities Arts and Social Sciences, Alumni/ae The race to develop renewable energy technologies Mechanical engineers rush to develop energy conversion and storage technologies from renewable sources such as wind, wave, solar, and thermal. Wed, 18 Dec 2019 11:45:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>In the early 20th century, just as electric grids were starting to transform daily life, an unlikely advocate for renewable energy voiced his concerns about burning fossil fuels. Thomas Edison expressed dismay over using combustion instead of renewable resources in a 1910 interview for Elbert Hubbard’s anthology, “Little Journeys to the Homes of the Great.”</p> <p>“This scheme of combustion to get power makes me sick to think of — it is so wasteful,” Edison said. “You see, we should utilize natural forces and thus get all of our power. Sunshine is a form of energy, and the winds and the tides are manifestations of energy. Do we use them? Oh, no! 
We burn up wood and coal, as renters burn up the front fence for fuel.”</p> <p>Over a century later, roughly 80 percent of global energy consumption still comes from burning fossil fuels. As the impact of climate change on the environment becomes increasingly drastic, there is a mounting sense of urgency for researchers and engineers to develop scalable renewable energy solutions.</p> <p>“Even 100 years ago, Edison understood that we cannot replace combustion with a single alternative,” adds Reshma Rao PhD '19, a postdoc in MIT’s Electrochemical Energy Lab who included Edison’s quote in her doctoral thesis. “We must look to different solutions that might vary temporally and geographically depending on resource availability.”</p> <p>Rao is one of many researchers across MIT’s Department of Mechanical Engineering who have entered the race to develop energy conversion and storage technologies from renewable sources such as wind, wave, solar, and thermal.</p> <p><strong>Harnessing energy from waves</strong></p> <p>When it comes to renewable energy, waves have other resources beat in two respects. First, unlike solar, waves offer a consistent energy source regardless of time of day. Second, waves provide much greater energy density than wind due to water’s heavier mass.</p> <p>Despite these advantages, wave-energy harvesting is still in its infancy. Unlike wind and solar, there is no consensus in the field of wave hydrodynamics on how to efficiently capture and convert wave energy. Dick K.P. Yue, Philip J. Solondz Professor of Engineering, is hoping to change that.</p> <p>“My group has been looking at new paradigms,” explains Yue. “Rather than tinkering with small improvements, we want to develop a new way of thinking about the wave-energy problem.”</p> <p>One aspect of that paradigm is determining the optimal geometry of wave-energy converters (WECs). 
Graduate student Emma Edwards has been developing a systematic methodology to determine what shape WECs should take.</p> <p>“If we can optimize the shape of WECs for maximizing extractable power, wave energy could move significantly closer to becoming an economically viable source of renewable energy,” says Edwards.</p> <p>Another aspect of the wave-energy paradigm Yue’s team is working on is finding the optimal configuration for WECs in the water. Grgur Tokić PhD '16, an MIT alum and current postdoc working in Yue’s group, is building a case for optimal configurations of WECs in large arrays, rather than as stand-alone devices.</p> <p>Before being placed in the water, WECs are tuned for their particular environment. This tuning involves considerations like predicted wave frequency and prevailing wind direction. According to Tokić and Yue, if WECs are configured in an array, this tuning could occur in real time, maximizing energy-harvesting potential.</p> <p>In an array, “sentry” WECs could gather measurements about waves such as amplitude, frequency, and direction. Using wave reconstruction and forecasting, these WECs could then communicate information about conditions to other WECs in the array wirelessly, enabling them to tune minute-by-minute in response to current wave conditions.</p> <p>“If an array of WECs can tune fast enough so they are optimally configured for their current environment, now we are talking serious business,” explains Yue.
“Moving toward arrays opens up the possibilities of significant advances and gains many-times-over non-interacting, isolated devices.”</p> <p>By examining the optimal size and configuration of WECs using theoretical and computational methods, Yue’s group hopes to develop potentially game-changing frameworks for harnessing the power of waves.</p> <p><strong>Accelerating the discovery of photovoltaics </strong></p> <p>The amount of solar energy that reaches the Earth’s surface offers a tantalizing prospect in the quest for renewable energy. Every hour, an estimated 430 quintillion joules of energy is delivered to Earth from the sun. That’s the equivalent of one year’s worth of global energy consumption by humans.</p> <p>Tonio Buonassisi, professor of mechanical engineering, has dedicated his entire career to developing technologies that harness this energy and convert it into usable electricity. But time, he says, is of the essence. “When you consider what we are up against in terms of climate change, it becomes increasingly clear we are running out of time,” he says.</p> <p>For solar energy to have a meaningful impact, according to Buonassisi, researchers need to develop solar cell materials that are efficient, scalable, cost-effective, and reliable. These four variables pose a challenge for engineers — rather than develop a material that satisfies just one of these factors, they need to create one that ticks off all four boxes and can be moved to market as quickly as possible. “If it takes us 75 years to get a solar cell that does all of these things to market, it’s not going to help us solve this problem. We need to get it to market in the next five years,” Buonassisi adds.</p> <p>To accelerate the discovery and testing of new materials, Buonassisi’s team has developed a process that uses a combination of machine learning and high-throughput experimentation — a type of experimentation that enables a large quantity of materials to be screened at the same time. 
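The guided search Sun describes below (measure a few candidates, predict the rest with a cheap surrogate model, and let an acquisition rule choose the next experiment) can be sketched in a few lines of Python. This is a toy illustration, not the group's actual code: the one-dimensional composition grid, the stand-in `measure` function, and the nearest-neighbor surrogate are all assumptions.

```python
# Toy sketch of ML-guided materials screening: measure a few candidates,
# predict the rest with a crude surrogate (value of the nearest measured
# neighbor), and add an exploration bonus so unmeasured regions get tried.
# The objective and all names here are illustrative assumptions.

def measure(x):
    """Stand-in for a slow lab measurement; stability peaks near x = 0.62."""
    return -(x - 0.62) ** 2

def acquisition(c, results, beta=0.5):
    """Surrogate prediction at c plus a bonus for distance from any measurement."""
    nearest = min(results, key=lambda t: abs(c - t))
    return results[nearest] + beta * abs(c - nearest)

def run_screen(n_iter=10):
    candidates = [i / 100 for i in range(101)]        # composition grid
    results = {0.0: measure(0.0), 1.0: measure(1.0)}  # two seed experiments
    for _ in range(n_iter):
        untried = [c for c in candidates if c not in results]
        nxt = max(untried, key=lambda c: acquisition(c, results))
        results[nxt] = measure(nxt)                   # run the "experiment"
    best = max(results, key=results.get)
    return best, results
```

Each pass through the loop spends one experiment where the surrogate is most promising or least certain, instead of sweeping the whole grid; replacing exhaustive measurement with targeted measurement is where this kind of pipeline gets its speedup.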
The result is a 10-fold increase in the speed of discovery and analysis for new solar cell materials.</p> <p>“Machine learning is our navigational tool,” explains Buonassisi. “It can de-bottleneck the cycle of learning so we can grind through material candidates and find one that satisfies all four variables.”</p> <p>Shijing Sun, a research scientist in Buonassisi’s group, used a combination of machine learning and high-throughput experiments to quickly assess and test perovskite solar cells.</p> <p>“We use machine learning to accelerate the materials discovery, and developed an algorithm that directs us to the next sampling point and guides our next experiment,” Sun says. Previously, it would take three to five hours to classify a set of solar cell materials. The machine learning algorithm can classify materials in just five minutes.</p> <p>Using this method, Sun and Buonassisi made and tested 96 compositions. Of those, two perovskite materials hold promise and will be tested further.</p> <p>By using machine learning as a tool for inverse design, the research team hopes to assess thousands of compounds that could lead to the development of a material that enables the large-scale adoption of solar energy conversion. “If in the next five years we can develop that material using the set of productivity tools we’ve developed, it can help us secure the best possible future that we can,” adds Buonassisi.</p> <p><strong>New materials to trap heat</strong></p> <p>While Buonassisi’s team is focused on developing solutions that directly convert solar energy into electricity, researchers including Gang Chen, Carl Richard Soderberg Professor of Power Engineering, are working on technologies that convert sunlight into heat. That thermal energy is then used to generate electricity.</p> <p>“For the past 20 years, I’ve been working on materials that convert heat into electricity,” says Chen.
While much of this materials research is on the nanoscale, Chen and his team at the NanoEngineering Group are no strangers to large-scale experimental systems. They previously built a to-scale receiver system that used concentrating solar thermal power (CSP).</p> <p>In CSP, sunlight is used to heat up a thermal fluid, such as oil or molten salt. That fluid is then either used to generate electricity by running an engine, such as a steam turbine, or stored for later use.</p> <p>Over the course of a four-year project funded by the U.S. Department of Energy, Chen’s team built a CSP receiver at MIT’s Bates Research and Engineering Center in Middleton, Massachusetts. They developed the Solar Thermal Aerogel Receiver — nicknamed STAR.</p> <p>The system relied on mirrors known as Fresnel reflectors to direct sunlight to pipes containing thermal fluid. Typically, for fluid to effectively trap the heat generated by this reflected sunlight, it would need to be encased in a high-cost vacuum tube. In STAR, however, Chen’s team utilized a transparent aerogel that can trap heat at incredibly high temperatures — removing the need for expensive vacuum enclosures. While letting in over 95 percent of the incoming sunlight, the aerogel retains its insulating properties, preventing heat from escaping the receiver.</p> <p>In addition to being more efficient than traditional vacuum receivers, the aerogel receivers enabled new configurations for the CSP solar reflectors. 
The reflecting mirrors were flatter and more compact than conventionally used parabolic reflectors, resulting in a savings of material.</p> <p>“Cost is everything with energy applications, so the fact STAR was cheaper than most thermal energy receivers, in addition to being more efficient, was important,” adds Svetlana Boriskina, a research scientist working on Chen’s team.</p> <p>Since the project concluded in 2018, Chen's team has continued to explore solar thermal applications for the aerogel material used in STAR. He recently used the aerogel in a device that contained a heat-absorbing material. When placed on a roof on MIT’s campus, the heat-absorbing material, which was covered by a layer of the aerogel, reached an amazingly high temperature of 220 degrees Celsius. The outside air temperature, for comparison, was a chilly 0 degrees Celsius. Unlike STAR, this new system doesn’t require Fresnel reflectors to direct sunlight to the thermal material.</p> <p>“Our latest work using the aerogel enables sunlight concentration without focusing optics to harness thermal energy,” explains Chen. “If you aren’t using focusing optics, you can develop a system that is easier to use and cheaper than traditional receivers.”</p> <p>The aerogel device could potentially be further developed into a system that powers heating and cooling systems in homes.</p> <p><strong>Solving the storage problem</strong></p> <p>While CSP receivers like STAR offer some energy storage capabilities, there is a push to develop more robust energy storage systems for renewable technologies. Storing energy for later use when resources aren’t supplying a consistent stream of energy — for example, when the sun is covered by clouds, or there is little-to-no wind — will be crucial for the adoption of renewable energy on the grid. To solve this problem, researchers are developing new storage technologies.</p> <p>Asegun Henry, Robert N.
Noyce Career Development Professor, who like Chen has developed CSP technologies, has created a new storage system that has been dubbed “sun in a box.” Using two tanks, excess energy can be stored in white-hot molten silicon. When this excess energy is needed, mounted photovoltaic cells can be actuated into place to convert the white-hot light from the silicon back into electricity.</p> <p>“It’s a true battery that can work with any type of energy conversion,” adds Henry.</p> <p>Betar Gallant, ABS Career Development Professor, meanwhile, is exploring ways to improve the energy density of today’s electrochemical batteries by designing new storage materials that are more cost-effective and versatile for storing cleanly generated energy. Rather than develop these materials using metals that are extracted through energy-intensive mining, she aims to build batteries using more earth-abundant materials.</p> <p>“Ideally, we want to create a battery that can match the irregular supply of solar or wind energy that peak at different times without degrading, as today’s batteries do,” explains Gallant.</p> <p>In addition to working on lithium-ion batteries, like Gallant, Yang Shao-Horn, W.M. Keck Professor of Energy, and postdoc Reshma Rao are developing technologies that can directly convert renewable energy to fuels.</p> <p>“If we want to store energy at scale going beyond lithium ion batteries, we need to use resources that are abundant,” Rao explains. In their electrochemical technology, Rao and Shao-Horn utilize one of the most abundant resources — liquid water.</p> <p>Using an active catalyst and electrodes, water is split into hydrogen and oxygen in a series of chemical reactions. The hydrogen becomes an energy carrier and can be stored for later use in a fuel cell. To convert the energy stored in the hydrogen back into electricity, the reactions are reversed.
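The water-splitting chemistry described here follows the standard textbook half-reactions, written below for acidic conditions (the specific catalysts and electrolytes used by the Shao-Horn group are not detailed in this article):

```latex
\begin{align*}
\text{anode (oxygen evolution):} \quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{cathode (hydrogen evolution):} \quad & 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2} \\
\text{overall:} \quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```

Run forward, these reactions store electrical energy in hydrogen; run in reverse in a fuel cell, they release it as electricity.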
The only by-product of this reaction is water.&nbsp;&nbsp;</p> <p>“If we can get and store hydrogen sustainably, we can basically electrify our economy using renewables like wind, wave, or solar,” says Rao.</p> <p>Rao has broken down every fundamental reaction that takes place within this process. In addition to focusing on the electrode-electrolyte interface involved, she is developing next-generation catalysts to drive these reactions.&nbsp;&nbsp;</p> <p>“This work is at the frontier of the fundamental understanding of active sites catalyzing water splitting for hydrogen-based fuels from solar and wind to decarbonize transport and industry,” adds Shao-Horn.</p> <p><strong>Securing a sustainable future </strong></p> <p>While shifting from a grid powered primarily by fossil fuels to a grid powered by renewable energy seems like a herculean task, there have been promising developments in the past decade. A report released prior to the UN Global Climate Action Summit in September showed that, thanks to $2.6 trillion of investment, renewable energy conversion has quadrupled since 2010.</p> <p>In a statement after the release of the report, Inger Andersen, executive director of the UN Environment Program, stressed the correlation between investing in renewable energy and securing a sustainable future for humankind. “It is clear that we need to rapidly step up the pace of the global switch to renewables if we are to meet international climate and development goals,” Andersen said.</p> <p>No single conversion or storage technology will be responsible for the shift from fossil fuels to renewable energy. It will require a tapestry of complementary solutions from researchers both here at MIT and across the globe.</p> Postdoc Reshma Rao stands next to a pulsed laser deposition system, which is used to deposit well-defined thin films of catalyst materials. 
Photo: Tony PulsoneMechanical engineering, School of Engineering, Renewable energy, Alternative energy, Climate change, Energy, Energy storage, Oceanography and ocean engineering, Photovoltaics, Sustainability, Wind, Solar Screen could offer better safety tests for new chemicals Using specialized liver cells, a new test can quickly detect potentially cancer-causing DNA damage. Tue, 17 Dec 2019 00:00:00 -0500 Anne Trafton | MIT News Office <p>It’s estimated that there are approximately 80,000 industrial chemicals currently in use, in products such as clothing, cleaning solutions, carpets, and furniture. For the vast majority of these chemicals, scientists have little or no information about their potential to cause cancer.</p> <p>The detection of DNA damage in cells can predict whether cancer will develop, but tests for this kind of damage have limited sensitivity. A team of MIT biological engineers has now come up with a new screening method that they believe could make such testing much faster, easier, and more accurate.</p> <p>The National Toxicology Program, a government research agency that identifies potentially hazardous substances, is now working on adopting the MIT test to evaluate new compounds.</p> <p>“My hope is that they use it to identify potential carcinogens and we get them out of our environment, and prevent them from being produced in massive quantities,” says Bevin Engelward, a professor of biological engineering at MIT and the senior author of the study. “It can take decades between the time you’re exposed to a carcinogen and the time you get cancer, so we really need predictive tests. 
We need to prevent cancer in the first place.”</p> <p>Engelward’s lab is now working on further validating the test, which makes use of human liver-like cells that metabolize chemicals very similarly to real human liver cells and produce a distinctive signal when DNA damage occurs.</p> <p>Le Ngo, a former MIT graduate student and postdoc, is the lead author of the paper, which appears today in the journal <em>Nucleic Acids Research</em>. Other MIT authors of the paper include postdoc Norah Owiti, graduate student Yang Su, former graduate student Jing Ge, Singapore-MIT Alliance for Research and Technology graduate student Aoli Xiong, professor of electrical engineering and computer science Jongyoon Han, and professor emerita of biological engineering Leona Samson.</p> <p>Carol Swartz, John Winters, and Leslie Recio of Integrated Laboratory Systems are also authors of <a href="" target="_blank">the paper</a>.</p> <p><strong>Detecting DNA damage</strong></p> <p>Currently, tests for the cancer-causing potential of chemicals involve exposing mice to the chemical and then waiting to see whether they develop cancer, which takes about two years.</p> <p>Engelward has spent much of her career developing ways to detect DNA damage in cells, which can eventually lead to cancer. One of these devices, the <a href="">CometChip</a>, reveals DNA damage by placing the DNA in an array of microwells on a slab of polymer gel and then exposing it to an electric field. DNA strands that have been broken travel farther, producing a comet-shaped tail.</p> <p>While the CometChip is good at detecting breaks in DNA, as well as DNA damage that is readily converted into breaks, it can’t pick up another type of damage known as a bulky lesion. These lesions form when chemicals stick to a strand of DNA and distort the double helix structure, interfering with gene expression and cell division. 
Chemicals that cause this kind of damage include aflatoxin, which is produced by fungi and can contaminate peanuts and other crops, and benzo[a]pyrene, which can form when food is cooked at high temperatures.</p> <p>Engelward and her students decided to try to adapt the CometChip so that it could pick up this type of DNA damage. To do that, they took advantage of cells’ DNA repair pathways to generate strand breaks. Typically, when a cell discovers a bulky lesion, it will try to repair it by cutting out the lesion and then replacing it with a new piece of DNA.</p> <p>“If there’s something glommed onto the DNA, you have to rip out that stretch of DNA and then replace it with fresh DNA. In that ripping process, you’re creating a strand break,” Engelward says.</p> <p>To capture those broken strands, the researchers treated cells with two compounds that prevent them from synthesizing new DNA. This halts the repair process and generates unrepaired single-stranded DNA that the Comet test can detect.</p> <p>The researchers also wanted to make sure that their test, which is called HepaCometChip, would detect chemicals that only become hazardous after being modified in the liver through a process called bioactivation.</p> <p>“A lot of chemicals actually are inert until they get metabolized by the liver,” Ngo says. “In the liver you have a lot of metabolizing enzymes, which modify the chemicals so that they become more easily excreted by the body. But this process sometimes produces intermediates that can turn out to be more toxic than the original chemical.”</p> <p>To detect those chemicals, the researchers had to perform their test in liver cells. Human liver cells are notoriously difficult to grow outside the body, but the MIT team was able to incorporate a type of liver-like cell called HepaRG, developed by a company in France, into the new test. 
These cells produce many of the same metabolic enzymes found in normal human liver cells, and like human liver cells, they can generate potentially harmful intermediates that create bulky lesions.</p> <p><strong>Enhanced sensitivity</strong></p> <p>To test their new system, the researchers first exposed the liver-like cells to UV light, which is known to produce bulky lesions. After verifying that they could detect such lesions, they tested the system with nine chemicals, seven of which are known to lead to single-stranded DNA breaks or bulky lesions, and found that the test could accurately detect all of them.</p> <p>“Our new method enhances the sensitivity, because it should be able to detect any damage a normal Comet test would detect, and also adds on the layer of the bulky lesions,” Ngo says.</p> <p>The whole process takes between two days and a week, offering a significantly faster turnaround than studies in mice.</p> <p>The researchers are now working on further validating the test by comparing its performance with historical data from mouse carcinogenicity studies, with funding from the National Institutes of Health.</p> <p>They are also working with Integrated Laboratory Systems, a company that performs toxicology testing, to potentially commercialize the technology. Engelward says the HepaCometChip could be useful not only for manufacturers of new chemical products, but also for drug companies, which are required to test new drugs for cancer-causing potential. 
The new test could offer a much easier and faster way to perform those screens.</p> <p>“Once it’s validated, we hope it will become a recommended test by the FDA,” she says.</p> <p>The research was funded by the National Institute of Environmental Health Sciences, including the NIEHS Superfund Basic Research Program, and the MIT Center for Environmental Health Sciences.</p> MIT chemists have devised a way to observe the transition state of the chemical reaction that occurs when vinyl cyanide is broken apart by an ultraviolet laser.Image: Christine Daniloff, MITResearch, Cancer, Biological engineering, School of Engineering, National Institutes of Health (NIH), Drug development, DNA, Health sciences and technology, Environment Making buildings from industrial waste Following a successful project creating bricks from pulp plant waste in northern India, Elsa Olivetti is looking for ways to repurpose slag produced by the metals industry. Mon, 16 Dec 2019 13:15:01 -0500 Rachel Fritts | Environmental Solutions Initiative <p>Elsa Olivetti’s interest in materials science began when she was an engineering science major at the University of Virginia. Initially unable to settle on any one form of engineering, she took an introduction to materials science class on a whim. She loved the way materials science let her examine everyday material, like a block of wood or piece of cloth, on a molecular level. “Being able to think across those scales is something that I found really cool,” Olivetti says.</p> <p>Now, Olivetti is an associate professor in the MIT Department of Materials Science and Engineering and the principal investigator of her own lab. Her interest has turned to the social and environmental impacts of the materials we use in our daily lives. 
Specifically, the Olivetti lab looks at the huge quantities of industrial waste materials generated in the manufacturing industry, in the hopes of finding useful ways to reconstitute and reuse this waste for building.</p> <p>Some types of waste have already become standard tools in the building industry: fly ash from burning coal, for instance, is increasingly used in concrete as a substitute for freshly produced cement. Most types of industrial waste, however, are simply discarded as useless byproducts. Olivetti hopes to change that. By applying her understanding of materials on a molecular level, she can propose new ways these byproducts might be integrated into usable building materials to make the industry more efficient.</p> <p>Several years ago, Olivetti was able to put that idea to the test by participating in a Tata Center project launched in a city called Muzaffarnagar in northern India. The area is highly industrial, containing pulp and paper mills and steel and brick manufacturers. “But the challenge there is they don’t have a lot of resources to put into environmental abatement,” Olivetti says. “They’re just dumping.”</p> <p>So, the Tata Center team went looking for byproducts that could potentially be put to another use. They noticed that the pulp plants were powered by sugar cane and rice husks, which were burned to generate energy. The byproduct of these burnt plant materials was something called “biomass ash,” which has “pretty high, fairly reactive silica content.” This means that it can bind with other materials to produce a strong, cement-like structure.</p> <p>They were able to demonstrate that this ash, which had previously been dumped as waste, could actually be turned into cheap building material, providing an economic and environmental benefit to the local community. The end result, produced in 2015, was dubbed the Eco-BLAC brick. 
In 2017, Olivetti received an Environmental Solutions Initiative (ESI) seed grant to continue this work back at MIT.</p> <p>“What we used the ESI money for is to move outside of biomass ash and into other materials,” Olivetti says. She summarizes the work as “beyond India, beyond ash.” She’s most interested in the kinds of materials where “there’s still enough quantity to make it useful, but they aren’t already well-utilized,” a rubric that has brought her focus to metal waste products, especially the “slag” left over during copper production.</p> <p>Olivetti is particularly fond of the ESI project because “it pulls together a bunch of different dimensions of what I like to think about.” When trying to understand which metal waste materials might be put to the best use, she has to ask a few key questions. First, is the waste material reactive, like the biomass ash material was? Can it bind with other materials to add strength and integrity? How reactive is it? What will it react with, and under what conditions? Or, is the material non-reactive? Non-reactive materials don’t necessarily add value, but can be used to add volume, just as sand is mixed with cement to produce concrete.</p> <p>Once she figures out what role the material might play, she has to understand its durability in the environment where it will be used. Biomass ash, for instance, has a lot of carbon, and one implication of this is that it takes in water. This might not be a problem in India, where it is warm year-round, but it can harm the structural integrity of a material that will be used somewhere like Boston, Massachusetts, where winter temperatures drop below freezing.</p> <p>To test all these things, she needs to do something a little bit counterintuitive. “One of the things we’ve started to do, which has been kind of fun, is synthesize waste,” she says. 
“Which feels silly when I say it like that.”</p> <p>A recurring problem with researching waste is that it involves substances people have typically ignored. It’s rare for any industry to keep careful track of what its waste is made up of, or how much of it is produced. When making copper, for instance, the end product is always copper. But there might be several different kinds of unintended waste products that are produced along the way, which all get mixed together. Olivetti describes the end result as “Jell-O with a bunch of fruit in it.”</p> <p>By artificially manufacturing the waste, Olivetti can better understand how much of each waste product is produced, and how to best separate it into usable materials. The question of quantity is another one that becomes trickier to answer when dealing with waste material. While a steel factory, for example, has an incentive to measure its steel production, it has little incentive to keep detailed records of how much material it’s wasting.</p> <p>“I think overall what this field needs is better cataloging of what wastes are going to be where, and trying to project that a little bit,” Olivetti says. “If it’s a raw material for making something, you need to know that supply’s going to be steady.” A key component of any business is having a stable supply, so in order for all this waste material to be used more widely, there need to be better records of what kinds of waste are being produced where, and in what quantity. Now, Olivetti is working on a project using AI to automatically extract information about how various materials are made, to try to better understand the supply chain and where the most promising byproducts are being created.</p> <p>She’s also hoping to better understand the environmental impact of using waste materials, to ensure that there will be no harmful effects of repurposing these substances. 
If even one of the Olivetti lab’s discoveries is widely adopted, her research will have contributed to a materials supply chain that is much more efficient, cost-effective, and environmentally sustainable than ever before.</p> Associate Professor Elsa Olivetti studies the huge quantities of industrial waste materials generated in the manufacturing industry, in hopes of finding useful ways to reconstitute and reuse this waste for building.Photo: MIT Environmental Solutions InitiativeMaterials Science and Engineering, Tata Center, School of Engineering, Environment, Supply chains, Sustainability, Recycling, Industry, DMSE, Profile, Faculty, India The uncertain role of natural gas in the transition to clean energy MIT study finds that challenges in measuring and mitigating leakage of methane, a powerful greenhouse gas, prove pivotal. Mon, 16 Dec 2019 10:43:54 -0500 David L. Chandler | MIT News Office <p>A new MIT study examines the opposing roles of natural gas in the battle against climate change — as a bridge toward a lower-emissions future, but also a contributor to greenhouse gas emissions.</p> <p>Natural gas, which is mostly methane, is viewed as a significant “bridge fuel” to help the world move away from the greenhouse gas emissions of fossil fuels, since burning natural gas for electricity produces about half as much carbon dioxide as burning coal. But methane is itself a potent greenhouse gas, and it currently leaks from production wells, storage tanks, pipelines, and urban distribution pipes for natural gas. Increasing its usage, as a strategy for decarbonizing the electricity supply, will also increase the potential for such “fugitive” methane emissions, although there is great uncertainty about how much to expect. 
Recent studies have documented the difficulty in even measuring today’s emissions levels.</p> <p>This uncertainty adds to the difficulty of assessing natural gas’ role as a bridge to a net-zero-carbon energy system, and in knowing when to transition away from it. But strategic choices must be made now about whether to invest in natural gas infrastructure. This inspired MIT researchers to quantify timelines for cleaning up natural gas infrastructure in the United States or accelerating a shift away from it, while recognizing the uncertainty about fugitive methane emissions.</p> <p>The study shows that in order for natural gas to be a major component of the nation’s effort to meet greenhouse gas reduction targets over the coming decade, present methods of controlling methane leakage would have to improve by anywhere from 30 to 90 percent. Given current difficulties in monitoring methane, achieving those levels of reduction may be a challenge. Methane is a valuable commodity, and therefore companies producing, storing, and distributing it already have some incentive to minimize its losses. Even so, intentional venting and flaring of natural gas (which releases carbon dioxide) continue.</p> <p>The study also finds that policies favoring a direct move to carbon-free power sources, such as wind, solar, and nuclear, could meet the emissions targets without requiring such improvements in leakage mitigation, even though natural gas use would still be a significant part of the energy mix.</p> <p>The researchers compared several different scenarios for curbing methane from the electric generation system in order to meet a target for 2030 of a 32 percent cut in carbon dioxide-equivalent emissions relative to 2005 levels, which is consistent with past U.S. commitments to mitigate climate change.
The findings appear today in the journal <em>Environmental Research Letters</em>, in a paper by MIT postdoc Magdalena Klemun and Associate Professor Jessika Trancik.</p> <p>Methane is a much stronger greenhouse gas than carbon dioxide, though how much stronger depends on the timeframe considered. Methane traps far more heat, but it doesn’t last as long once it’s in the atmosphere — decades rather than centuries. When averaged over a 100-year timeline, which is the comparison most widely used, methane is approximately 25 times more powerful than carbon dioxide. But averaged over a 20-year period, it is 86 times stronger.</p> <p>The actual leakage rates associated with the use of methane are widely distributed, highly variable, and very hard to pin down. Using figures from a variety of sources, the researchers found the overall range to be somewhere between 1.5 percent and 4.9 percent of the amount of gas produced and distributed. Some of this happens right at the wells, some occurs during processing and from storage tanks, and some is from the distribution system. Thus, a variety of different kinds of monitoring systems and mitigation measures may be needed to address the different conditions.</p> <p>“Fugitive emissions can be escaping all the way from where natural gas is being extracted and produced, all the way along to the end user,” Trancik says. “It’s difficult and expensive to monitor it along the way.”</p> <p>That in itself poses a challenge. “An important thing to keep in mind when thinking about greenhouse gases,” she says, “is that the difficulty in tracking and measuring methane is itself a risk.” If researchers are unsure how much there is and where it is, it’s hard for policymakers to formulate effective strategies to mitigate it.
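</p> <p>The arithmetic behind these comparisons is easy to make concrete. The sketch below is illustrative only — the factors of 25 and 86 are the global warming potentials described above, while the size of the leak is a made-up example:</p>

```python
# Convert a quantity of leaked methane into CO2-equivalent emissions
# using global warming potential (GWP) factors. The factor depends on
# the averaging window: ~25x CO2 over 100 years, ~86x over 20 years.
GWP_100 = 25   # 100-year window (the most widely used comparison)
GWP_20 = 86    # 20-year window, emphasizing near-term warming

def co2_equivalent(methane_tons: float, gwp: float) -> float:
    """CO2-equivalent tons for a given mass of leaked methane."""
    return methane_tons * gwp

# Hypothetical example: 1,000 tons of leaked methane.
leak = 1000.0
print(co2_equivalent(leak, GWP_100))  # 25000.0 tons CO2-equivalent
print(co2_equivalent(leak, GWP_20))   # 86000.0 tons CO2-equivalent
```

<p>The same leak counts for more than three times as much warming under 20-year accounting, which is one reason the choice of timeframe shifts how stringent leakage reductions need to be.</p> <p>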
This study’s approach is to embrace the uncertainty instead of being hamstrung by it, Trancik says: The uncertainty itself should inform current strategies, by motivating investments in leak detection to reduce uncertainty, or a faster transition away from natural gas.</p> <p>“Emissions rates for the same type of equipment, in the same year, can vary significantly,” adds Klemun. “It can vary depending on which time of day you measure it, or which time of year. There are a lot of factors.”</p> <p>Much attention has focused on so-called “super-emitters,” but even these can be difficult to track down. “In many data sets, a small fraction of point sources contributes disproportionately to overall emissions,” Klemun says. “If it were easy to predict where these occur, and if we better understood why, detection and repair programs could become more targeted.” But achieving this will require additional data with high spatial resolution, covering wide areas and many segments of the supply chain, she says.</p> <p>The researchers looked at the whole range of uncertainties, from how much methane is escaping to how to characterize its climate impacts, under a variety of different scenarios. One approach places strong emphasis on replacing coal-fired plants with natural gas, for example; others increase investment in zero-carbon sources while still maintaining a role for natural gas.</p> <p>In the first approach, methane emissions from the U.S. power sector would need to be reduced by 30 to 90 percent from today’s levels by 2030, along with a 20 percent reduction in carbon dioxide. Alternatively, that target could be met through even greater carbon dioxide reductions, such as through faster expansion of low-carbon electricity, without requiring any reductions in natural gas leakage rates.
The higher end of the published ranges reflects greater emphasis on methane’s short-term warming contribution.</p> <p>One question raised by the study is how much to invest in developing technologies and infrastructure for safely expanding natural gas use, given the difficulties in measuring and mitigating methane emissions, and given that virtually all scenarios for meeting greenhouse gas reduction targets call for phasing out, by mid-century, natural gas use that doesn’t include carbon capture and storage. “A certain amount of investment probably makes sense to improve and make use of current infrastructure, but if you’re interested in really deep reduction targets, our results make it harder to make a case for that expansion right now,” Trancik says.</p> <p>The detailed analysis in this study should provide guidance for local and regional regulators as well as policymakers all the way to federal agencies, the researchers say. The insights also apply to other economies relying on natural gas. The best choices and exact timelines are likely to vary depending on local circumstances, but the study frames the issue by examining a variety of possibilities that include the extremes in both directions — that is, toward investing mostly in improving the natural gas infrastructure while expanding its use, or accelerating a move away from it.</p> <p>The research was supported by the MIT Environmental Solutions Initiative.
The researchers also received support from MIT’s Policy Lab at the Center for International Studies.</p> Methane is a potent greenhouse gas, and it currently leaks from production wells, storage tanks, pipelines, and urban distribution pipes for natural gas.IDSS, Research, Solar, Energy, Renewable energy, Alternative energy, Climate change, Technology and society, Oil and gas, Economics, Policy, MIT Energy Initiative, Emissions, Sustainability, ESI, Greenhouse gases Taking the carbon out of construction with engineered wood Substituting lumber for materials such as cement and steel could cut building emissions and costs. Wed, 11 Dec 2019 12:55:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>To meet the long-term goals of the Paris Agreement on climate change — keeping global warming well below 2 degrees Celsius and ideally capping it at 1.5 C — humanity will ultimately need to achieve net-zero emissions of greenhouse gases (GHGs) into the atmosphere. To date, emissions reduction efforts have largely focused on decarbonizing the two economic sectors responsible for the most emissions, electric power and transportation. Other approaches aim to remove carbon from the atmosphere and store it through carbon capture technology, biofuel cultivation, and massive tree planting.</p> <p>As it turns out, planting trees is not the only way forestry can help in climate mitigation; how we use wood harvested from trees may also make a difference. Recent studies have shown that engineered wood products — composed of wood and various types of adhesive to enhance physical strength — involve far fewer carbon dioxide emissions than mineral-based building materials, and at lower cost.
Now <a href="" target="_blank">new research</a> in the journal <em>Energy Economics</em> explores the potential environmental and economic impact in the United States of substituting lumber for energy-intensive building materials such as cement and steel, which account for <a href="" target="_blank">nearly 10 percent</a> of human-made GHG emissions and are among the hardest to reduce.</p> <p>“To our knowledge, this study is the first economy-wide analysis to evaluate the economic and emissions impacts of substituting lumber products for more CO<sub>2</sub>-intensive materials in the construction sector,” says the study’s lead author <a href="">Niven Winchester</a>, a research scientist at the MIT Joint Program on the Science and Policy of Global Change and Motu Economic and Public Policy Research. “There is no silver bullet to reduce GHGs, so exploiting a suite of emission-abatement options is required to mitigate climate change.”</p> <p>Comparing the economic and emissions impacts of replacing CO<sub>2</sub>-intensive building materials (e.g., steel and concrete) with lumber products in the United States under an economy-wide cap-and-trade policy consistent with the nation’s Paris Agreement GHG emissions-reduction pledge, the study found that the CO<sub>2</sub> intensity (tons of CO<sub>2</sub> emissions per dollar of output) of lumber production is about 20 percent less than that of fabricated metal products, under 50 percent that of iron and steel, and under 25 percent that of cement. 
In addition, shifting construction toward lumber products lowers the GDP cost of meeting the emissions cap by approximately $500 million and reduces the carbon price.</p> <p>The authors caution that these results only take into account emissions resulting from the use of fossil fuels in harvesting, transporting, fabricating, and milling lumber products, and neglect potential increases in atmospheric CO<sub>2</sub> associated with tree harvesting or beneficial long-term carbon sequestration provided by wood-based building materials.</p> <p>“The source of lumber, and the conditions under which it is grown and harvested, and the fate of wood products deserve further attention to develop a full accounting of the carbon implications of expanded use of wood in building construction,” they write. “Setting aside those issues, lumber products appear to be advantageous compared with many other building materials, and offer one potential option for reducing emissions from sectors like cement, iron and steel, and fabricated metal products — by reducing the demand for these products themselves.”</p> <p>Funded, in part, by Weyerhaeuser and the Softwood Lumber Board, the study develops and utilizes a customized economy-wide model that includes a detailed representation of energy production and use and represents production of construction, forestry, lumber, and mineral-based construction materials.</p> A 70-unit British Columbia lakeside resort hotel was built with local engineered wood products, including cross-laminated timber. 
New research explores the potential environmental and economic impact in the United States of substituting lumber for energy-intensive building products such as cement and steel.Photo: Province of British Columbia/FlickrResearch, Climate change, Greenhouse gases, Emissions, Climate, Environment, Energy, Economics, Policy, Carbon dioxide, Building, Sustainability, Materials Science and Engineering, Cement, Joint Program on the Science and Policy of Global Change Getting the carbon out of the electricity sector MIT symposium looks at the role of advances in storage, solar, nuclear, EVs and more in cutting greenhouse gas emissions. Mon, 09 Dec 2019 09:55:45 -0500 David L. Chandler | MIT News Office <p>The generation of electricity is a huge contributor to the world’s emissions of climate-altering greenhouse gases, producing some 25 percent globally. That’s because more than two-thirds of the world’s electricity is still being produced by burning fossil fuels. But progress in a variety of areas could allow for drastic reductions in those emissions, as several specialists in engineering and economics outlined last week at the third of six climate change symposia being held this academic year at MIT.</p> <p>Titled “Decarbonizing the Electricity Sector,” the symposium centered on four areas: improvements in solar energy and storage systems, advances in nuclear power and fusion, electric vehicles, and expanding access to electricity in the developing world while curbing emissions.</p> <p>“Globally, we are in the midst of a major decarbonization strategy to create clean electricity,” said Paul Joskow, a professor of economics at MIT’s Sloan School of Management and co-moderator of the symposium. 
But, he said, it will also be essential to cut emissions from the other major sectors, especially in transportation and in building operations.</p> <p>Jessika Trancik, an associate professor of energy studies at MIT’s Institute for Data, Systems, and Society and the event’s other moderator, said that “solar represents one of the biggest successes,” given that solar module prices have dropped by 90 percent since 2000. But there is still great potential for significant further progress in the next few years.</p> <p>Moungi Bawendi, the Lester Wolfe Professor of Chemistry, described some promising research on solar technology, including the use of perovskite-based solar cells with potential for much greater output for a given weight. This technology may open up possibilities for solar panels that could be integrated into building exteriors, including transparent ones incorporated in windows.</p> <p>The material is soluble and could be produced in a roll-to-roll process like printing a newspaper, potentially making it inexpensive and easy to deploy, Bawendi said. Today, “it’s within striking distance of silicon” in its efficiency. Because it is a hundred times more absorbent of solar energy than silicon, “it can be made a hundred times thinner and still collect the same amount of light,” he said. But there are still challenges related to scaling up its production and making it more durable when exposed to water. “It’s an engineering problem that can be solved,” he said.</p> <p>As for storage, which is crucial as solar and wind power become larger components of the world’s generating capacity, there is great progress in that field as well. Currently, over 90 percent of storage capacity in the electric grid is in the form of lithium-ion batteries, said Yet-Ming Chiang, the Kyocera Professor of Materials Science. 
But more cost-effective alternatives are under development, which could enable rapid expansion of renewables.</p> <p>For example, he described efforts to develop batteries based on much cheaper and more abundant materials than lithium, including sulfur and zinc. Prices for some kinds of batteries based on such materials could potentially drop to as little as $1 per kilowatt hour, compared to about $160 for today’s lithium-ion batteries, he suggested.</p> <p>Other kinds of batteries, emphasizing storage capacity for a given weight, are also being developed, which might help expand battery power into areas such as aviation, where it has not played a role so far, he said. Still others might be used for backup storage; these may be used infrequently but would remain stable for long periods.</p> <p>Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering, described a recent <a href="">report</a> that he led, on the future of nuclear technology, which found several areas of new kinds of nuclear plant designs that hold promise for future installations. But he said at this point such potential is mostly in other countries, as there is little interest among domestic utility companies today. New designs, including ones that are modular and standardized to reduce construction costs, could help to revitalize that industry.</p> <p>Meanwhile, promising work on fusion power, which if perfected could provide virtually limitless emissions-free power, is progressing well on several fronts, said Earl Marmar, a senior research scientist in MIT’s physics department. One key to that has been the development of improved superconductors, enabling a drastic reduction in the size of a fusion plant needed to produce a given amount of power. That technology is at the heart of an ongoing <a href="">joint project</a> between MIT and Commonwealth Fusion Systems.</p> <p>Another factor that could help in the transition away from fossil fuels is the increasing use of electric cars. 
Trancik said that today, the use of an electric car reduces emissions per mile travelled by about 30 percent on average, but that depends crucially on the mix of generating sources used in the grid at the location and time when the car is recharged. Cars charged entirely by solar power would eliminate their emissions altogether.</p> <p>David Keith, an assistant professor of systems dynamics at MIT Sloan, said “my question is how quickly can electric vehicles diffuse into the fleet?” He pointed out that there are some 250 million cars in this country, and their average lifetime is 15 years, so the turnover is a slow process. Currently, even though virtually all automakers offer some kind of electric model, their sales still represent a very small fraction of the total.</p> <p>Christopher Knittel, the George P. Shultz Professor at MIT Sloan, said there has been great progress in lowering the costs of the kind of lightweight batteries needed for electric vehicles, and that as those prices continue to fall, that could unleash rapid growth in the penetration of electric vehicles into the market. They will soon reach the point where battery prices will no longer cause electric vehicles to be costlier than their gasoline counterparts, and that could be a turning point, he said.</p> <p>But as the use of electricity grows around the world, any progress in reducing emissions in the industrialized world could be offset if new generating capacity in the developing world follows the same fossil-based trajectory other nations have. 
That can sometimes be the most accessible option, however, so finding ways to hold emissions down while advancing the availability of reliable power can be a challenge.</p> <p>Kate Steel, co-founder of Nithio, described how her company approaches that issue by providing simple, low-cost, solar-powered installations that can provide some basic services, such as lighting and cellphone charging, to people in regions not yet served by reliable electric grids or any service at all.</p> <p>Rob Stoner, deputy director for science and technology at the MIT Energy Initiative, said that there are presently about 800 million people worldwide without access to electricity. While there is a goal of providing universal access by 2050, that will be very challenging to achieve, he said.</p> Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT, described recent progress on new kinds of solar cell materials.Image: Jake BelcherMIT Energy Initiative, ESI, Climate, Climate change, Special events and guest speakers, Sustainability, Global Warming, Batteries, Renewable energy, Energy storage, Automobiles, Policy, Environment, Emissions Understanding the impact of deep-sea mining Mining materials from the sea floor could help secure a low-carbon future, but researchers are racing to understand the environmental effects. Thu, 05 Dec 2019 23:59:59 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>Resting atop Thomas Peacock’s desk is an ordinary-looking brown rock. Roughly the size of a potato, it has been at the center of decades of debate. Known as a polymetallic nodule, it spent 10 million years sitting on the deep seabed, 15,000 feet below sea level. 
The nodule contains nickel, cobalt, copper, and manganese — four minerals that are essential in energy storage.</p> <p>“As society moves toward driving more electric vehicles and utilizing renewable energy, there will be an increased demand for these minerals, to manufacture the batteries necessary to decarbonize the economy,” says Peacock, a professor of mechanical engineering and the director of MIT’s Environmental Dynamics Lab (END Lab). He is part of an international team of researchers that has been trying to gain a better understanding of the environmental impact of collecting polymetallic nodules, a process known as deep-sea mining.</p> <p>The minerals found in the nodules, particularly cobalt and nickel, are key components of lithium-ion batteries. Currently, lithium-ion batteries offer the best energy density of any commercially available battery. This high energy density makes them ideal for use in everything from cellphones to electric vehicles, which require large amounts of energy within a compact space.</p> <p>“Those two elements are expected to see a tremendous growth in demand due to energy storage,” says Richard Roth, director of MIT’s Materials Systems Laboratory.</p> <p>While researchers are exploring alternative battery technologies such as sodium-ion batteries and flow batteries that utilize electrochemical cells, these technologies are far from commercialization.</p> <p>“Few people expect any of these lithium-ion alternatives to be available in the next decade,” explains Roth.
“Waiting for unknown future battery chemistries and technologies could significantly delay widespread adoption of electric vehicles.”</p> <p>Vast amounts of specialty nickel will also be needed to build larger-scale batteries that will be required as societies look to shift from an electric grid powered by fossil fuels to one powered by renewable resources like solar, wind, wave, and thermal.</p> <p>“The collection of nodules from the seabed is being considered as a new means for getting these materials, but before doing so it is imperative to fully understand the environmental impact of mining resources from the deep ocean and compare it to the environmental impact of mining resources on land,” explains Peacock.</p> <p>After receiving seed funding from MIT’s Environmental Solutions Initiative (ESI), Peacock was able to apply his expertise in fluid dynamics to study how deep-sea mining could affect surrounding ecosystems.</p> <p><strong>Meeting the demand for energy storage</strong></p> <p>Currently, nickel and cobalt are extracted through land-based mining operations. Much of this mining occurs in the Democratic Republic of the Congo, which produces 60 percent of the world’s cobalt. These land-based mines often impact surrounding environments through the destruction of habitats, erosion, and soil and water contamination.
There are also concerns that land-based mining, especially in politically unstable countries, might not be able to supply enough of these materials as the demand for batteries rises.</p> <p>The swath of ocean located between Hawaii and the West Coast of the United States — also known as the Clarion Clipperton Fracture Zone — is estimated to possess six times more cobalt and three times more nickel than all known land-based stores, as well as vast deposits of manganese and a substantial amount of copper.</p> <p>While the seabed is abundant with these materials, little is known about the short- and long-term environmental effects of mining 15,000 feet below sea level. Peacock and his collaborator Professor Matthew Alford from the Scripps Institution of Oceanography and the University of California at San Diego are leading the quest to understand how the sediment plumes generated by the collection of nodules from the seabed will be carried by water currents.</p> <p>“The key question is, if we decide to make a plume at site A, how far does it spread before eventually raining down on the sea floor?” explains Alford. “That ability to map the geography of the impact of sea floor mining is a crucial unknown right now.”</p> <p>The research Peacock and Alford are conducting will help inform stakeholders about the potential environmental effects of deep-sea mining. One pressing matter is that draft exploitation regulations for deep-sea mining in areas beyond national jurisdiction are currently being negotiated by the International Seabed Authority (ISA), an independent organization established by the United Nations that regulates all mining activities on the sea floor.
Peacock and Alford’s research will help guide the development of environmental standards and guidelines to be issued under those regulations.</p> <p>“We have a unique opportunity to help regulators and other concerned parties to assess draft regulations using our data and modeling, before operations start and we regret the impact of our activity,” says Carlos Munoz Royo, a PhD student in MIT’s END Lab.</p> <p><strong>Tracking plumes in the water</strong></p> <p>In deep-sea mining, a collector vehicle would be deployed from a ship. The collector vehicle then travels 15,000 feet down to the seabed, where it vacuums up the top four inches of the seabed. This process creates a plume known as a collector plume.</p> <p>“As the collector moves across the seabed floor, it stirs up sediment and creates a sediment cloud, or plume, that’s carried away and distributed by ocean currents,” explains Peacock.</p> <p>The collector vehicle picks up the nodules, which are pumped through a pipe back to the ship. On the ship, usable nodules are separated from unwanted sediment. That sediment is piped back into the ocean, creating a second plume, known as a discharge plume.</p> <p>Peacock collaborated with Pierre Lermusiaux, professor of mechanical engineering and of ocean science and engineering, and Glenn Flierl, professor of Earth, atmospheric, and planetary sciences, to create mathematical models that predict how these two plumes travel through the water.</p> <p>To test these models, Peacock set out to track actual plumes created by mining the floor of the Pacific Ocean. With funding from MIT ESI, he embarked on the first-ever field study of such plumes. 
He was joined by Alford and Eric Adams, senior research engineer at MIT, as well as other researchers and engineers from MIT, Scripps, and the United States Geological Survey.</p> <p>With funding from the UC Ship Funds Program, the team conducted experiments in consultation with the ISA during a weeklong expedition in the Pacific Ocean aboard the U.S. Navy R/V Sally Ride in March 2018. The researchers mixed sediment with a tracer dye that they were able to track using sensors on the ship developed by Alford’s <a href="">Multiscale Ocean Dynamics group</a>. In doing so, they created a map of the plumes’ journeys.</p> <p>The field experiments demonstrated that the models Peacock and Lermusiaux developed can be used to predict how plumes will travel through the water — and could help give a clearer picture of how surrounding biology might be affected.</p> <p><strong>Impact on deep-sea organisms</strong></p> <p>Life on the ocean floor moves at a glacial pace. Sediment accumulates at a rate of 1 millimeter every millennium. With such a slow rate of growth, areas disturbed by deep-sea mining would be unlikely to recover on a reasonable timescale.</p> <p>“The concern is that if there is a biological community specific to the area, it might be irretrievably impacted by mining,” explains Peacock.</p> <p>According to Cindy Van Dover, professor of biological oceanography at Duke University, in addition to organisms that live in or around the nodules, other organisms elsewhere in the water column could be affected as the plumes travel.</p> <p>“There could be clogging of filter feeding structures of, for example, gelatinous organisms in the water column, and burial of organisms on the sediment,” she explains.
“There could also be some metals that get into the water column, so there are concerns about toxicology.”</p> <p>Peacock’s research on plumes could help biologists like Van Dover assess collateral damage from deep-sea mining operations in surrounding ecosystems.</p> <p><strong>Drafting regulations for mining the sea</strong></p> <p>Through connections with MIT’s <a href="">Policy Lab</a>, the Institute is one of only two research universities with observer status at the ISA.</p> <p>“The plume research is very important, and MIT is helping with the experimentation and developing plume models, which is vital to inform the current work of the International Seabed Authority and its stakeholder base,” explains Chris Brown, a consultant at the ISA. Brown was one of dozens of experts who convened on MIT’s campus last fall at a workshop discussing the risks of deep-sea mining.</p> <p>To date, the field research Peacock and Alford conducted is the only ocean dataset on midwater plumes that exists to help guide decision-making. The next step in understanding how plumes move through the water will be to track plumes generated by a prototype collector vehicle. Peacock and his team in the END Lab are preparing to participate in a major field study using a prototype vehicle in 2020.</p> <p>Peacock and Lermusiaux hope to develop models that give increasingly accurate predictions about how deep-sea mining plumes will travel through the ocean. 
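</p> <p>At its simplest, the question Alford poses — make a plume at site A and ask how far it spreads — is the kind of transport problem an advection-diffusion model describes. The snippet below is not the researchers’ model; it is a minimal one-dimensional sketch, and the current speed, diffusivity, and release mass are entirely hypothetical:</p>

```python
import math

# Minimal 1-D advection-diffusion sketch of a sediment plume released
# at x = 0 and carried by a steady current. All parameters are made up
# for illustration only.
U = 0.05   # current speed, m/s (hypothetical)
K = 10.0   # horizontal eddy diffusivity, m^2/s (hypothetical)
M = 1.0    # released mass per unit area (arbitrary units)

def concentration(x: float, t: float) -> float:
    """Gaussian solution of the 1-D advection-diffusion equation:
    a spreading bell curve whose center drifts downstream at speed U."""
    return M / math.sqrt(4 * math.pi * K * t) * math.exp(
        -(x - U * t) ** 2 / (4 * K * t))

# After one day, the plume's center has drifted U*t = 4,320 m downstream.
t = 86400.0
peak_x = U * t
print(f"plume center after 1 day: {peak_x / 1000:.1f} km")
```

<p>In this toy setting the plume’s center simply drifts with the current while spreading diffusively; the real models must contend with three-dimensional, time-varying ocean currents, which is why field data like the tracer-dye maps matter.</p> <p>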
They will continue to interact with academic colleagues, international agencies, NGOs, and contractors to develop a clearer picture of deep-sea mining’s environmental impact.</p> <p>“It’s important to have input from all stakeholders early in the conversation to help make informed decisions, so we can fully understand the environmental impact of mining resources from the ocean and compare it to the environmental impact of mining resources on land,” says Peacock.</p> Professor Thomas Peacock (left) with graduate students Rohit Balasaheb Supekar (center) and Carlos Munoz Royo (right) aboard the RV Sally Ride.Image: John FreidahMechanical engineering, School of Engineering, Oceanography and ocean engineering, Environment, Sustainability, Batteries, Renewable energy, Research, Energy storage How biomarkers can record and reconstruct climate trends Scientists reveal the genes and proteins controlling the chemical structures underpinning paleoclimate proxies. Wed, 04 Dec 2019 17:00:01 -0500 Fatima Husain | EAPS <p>Nestled within sediments that accumulate in marine environments, fossil molecules sneakily record how climates and environments change over time. These fossils, vestiges of microbial membranes, preserve different chemical structures that reflect the changing world around them at the time the organisms lived. For almost two decades, scientists have used one class of these molecular fossils, known as glycerol dibiphytanyl glycerol tetraether lipids or GDGTs, to reconstruct climate trends experienced over both regional and local marine environments by examining the number of 5- and 6-carbon-membered rings that formed within the fossil, which are sensitive to the ambient temperatures the microbe experienced. The greater the number of rings in each molecule of fossil lipid, the higher the estimated sea surface temperature. 
</p> <p>For almost two decades, scientists estimated ancient sea surface temperatures in this manner by applying a temperature proxy known as TEX86, which enables researchers to relate the relative abundances of fossils and their structures to estimated temperature values to see how climate has changed in the oceans over the past tens of millions of years.</p> <p>Yet a mystery remained: No one fully understood the mechanisms by which the complex membrane-spanning GDGTs encoded information about temperature through their rings, or which organisms actually contributed to the sedimentary GDGT signals. “Identifying the precise sources of sedimentary lipids has been an enduring problem for geochemists because, without that knowledge, there will always be doubts surrounding their interpretation. The advent of the ‘genomic era’ in molecular biology, however, has opened all sorts of new ways to solve problems like this,” says Roger Summons, the Schlumberger Professor of Geobiology at MIT.</p> <p>This new approach has now been applied to paleoclimate research, thanks to scientists associated with the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS).</p> <p>Former EAPS postdoc Paula Welander, now an associate professor of Earth Systems Science at Stanford University, recently led an effort to understand just how GDGTs are built, as well as how that information relates to the GDGTs produced in the oceans today and, potentially, in the distant past.
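</p> <p>The ring-counting idea behind TEX86 can be made concrete. The index is a ratio of GDGT abundances; the ratio below follows the standard published definition, but the linear calibration constants and the sample abundances are approximate, illustrative values rather than numbers from this study:</p>

```python
def tex86(gdgt1: float, gdgt2: float, gdgt3: float, cren_prime: float) -> float:
    """TEX86 index: the relative abundance of the more-ringed GDGTs
    (GDGT-2, GDGT-3, and the crenarchaeol isomer) in a sample."""
    return (gdgt2 + gdgt3 + cren_prime) / (gdgt1 + gdgt2 + gdgt3 + cren_prime)

def sst_estimate(index: float) -> float:
    """Sea surface temperature (deg C) from one published linear
    calibration, SST ~= 56.2 * TEX86 - 10.8 (approximate values)."""
    return 56.2 * index - 10.8

# Illustrative (made-up) relative abundances from a sediment sample:
idx = tex86(gdgt1=0.40, gdgt2=0.25, gdgt3=0.10, cren_prime=0.05)
print(f"TEX86 = {idx:.2f}, estimated SST = {sst_estimate(idx):.1f} deg C")
```

<p>Because the GDGTs in the numerator carry more rings than GDGT-1, a higher index means a greater share of ringed lipids and thus a warmer estimated sea surface, matching the ring-to-temperature relationship described above.</p> <p>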
In a <a href="">study</a> published last month in <em>PNAS</em>, Welander — along with first-author Zhirui Zeng of Stanford University and colleagues from Stanford, MIT, and the University of Oklahoma — employed a combined organic geochemical, bioinformatic, and microbiological approach to fill in the details on GDGT biosynthesis.</p> <p>To start, the researchers identified a related type of archaeon, called <em>Sulfolobus acidocaldarius</em>, that produced GDGTs with rings, much like the GDGTs produced by marine organisms. While <em>S. acidocaldarius</em> does not grow in marine environments, a genetically tractable archaeal system is already in place for this model organism — that is, scientists can genetically manipulate it by inserting or deleting genes and seeing how those changes affect its physiology and its membrane lipids. <em>S. acidocaldarius</em> is also well-characterized and grows quickly, enabling researchers to study their manipulations within days, rather than weeks or months.</p> <p>Within <em>S. acidocaldarius</em>, the researchers found three genes that might code for the enzymes that build rings into the GDGTs, and deleted them one by one. These mutants showed them that only two of the deletions affected the number of rings present in the GDGTs. When they performed the two deletions together, the GDGTs produced no longer contained any rings. To further confirm the roles the two genes play in ring-building, the researchers expressed the genes in another organism that doesn’t normally produce GDGTs with rings, <em>Methanosarcina acetivorans</em>. Once the genes were expressed, <em>M. acetivorans</em> began to produce GDGTs containing rings.</p> <p>To study the GDGTs produced, Welander, a microbiologist, turned to Summons and former EAPS postdoc Xiaolei Liu, now assistant professor of organic geochemistry at the University of Oklahoma.
Liu, a world-leading expert on identifying GDGTs by mass spectrometry, was able to confirm not only that two genes were needed to make the cyclized GDGTs, but also that they operate sequentially. One gene adds rings near the center of the molecule, and the second subsequently adds more rings to the outer edges.</p> <p>Summons adds: “This was an exciting collaboration to participate in because earlier work conducted in our laboratory suggested that there may be multiple clades of archaea contributing to the TEX86 signal in the ocean. The new research shows that this does not seem to be the case and that it is just one clade, the marine Thaumarchaeota, that appears responsible, thereby improving the focus for future research directions.”</p> <p>The study was funded by the Simons Foundation Collaboration on the Origins of Life, the National Science Foundation, and the U.S. Department of Energy.</p> The composition and location of rock strata help scientists date when biomarkers were formed and deposited. Offshore in Zumaia, Basque Country (Spain), variations in the thickness and composition of sedimentary rocks show periodic changes in the Earth's orbit and tilt, affecting how much sunlight reaches Earth’s surface. This is near the Cretaceous-Paleogene boundary, associated with a mass extinction event.Photo: Fatima HusainEAPS, School of Science, Research, Climate, Ocean science, DNA, Biology, Geology, Climate change, Microbes, Evolution, Genetics, Bacteria, Earth and atmospheric sciences, National Science Foundation (NSF), Department of Energy (DoE) Investigating the rise of oxygenic photosynthesis EAPS scientists find an alternative explanation for mineral evidence thought to signal the presence of oxygen prior to the Great Oxidation Event. Wed, 04 Dec 2019 13:00:00 -0500 Kate S.
Petersen | EAPS <p>About 2.4 billion years ago, at the end of the Archean Eon, a planet-wide increase in oxygen levels called the Great Oxidation Event (GOE) created the familiar atmosphere we all breathe today. Researchers focused on life's origins widely agree that this transition event was caused by the global proliferation of photosynthetic microbes capable of splitting water to make molecular oxygen (O<sub>2</sub>). However, according to Tanja Bosak, associate professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS), researchers don’t know how long before the GOE these organisms evolved.</p> <p>Bosak’s <a href="" target="_blank">new research</a>, published today in <em>Nature</em>, suggests it might now be even harder to pin down the emergence of oxygen-producing microbes in the geologic record.</p> <p><strong>A signal in the rocks</strong></p> <p>The first microbes to make oxygen did not leave a diary behind, so scientists must search for subtle clues of their emergence that could have survived the intervening few billion years. Complicating things further, while evidence of the GOE is found all over the Earth, these early colonies of oxygen-producing organisms would likely have first existed in small ponds or bodies of water. Any record of them would be geographically isolated.</p> <p>Some scientists consider localized evidence of the mineral manganese oxide in ancient sediments to be an indicator (or proxy) for the existence of oxygen-producing organisms. This is because manganese oxidation was only thought to be possible in the presence of significant amounts of O<sub>2</sub>, more than normally existed in the atmosphere pre-GOE. 
Thus, finding evidence of manganese oxide in sediments predating the GOE would suggest oxygen-producing organisms had evolved by that time and were active in the area.</p> <p>But it turns out there’s more than one way to oxidize manganese.</p> <p><strong>Anaerobic microbes change the game</strong></p> <p>As described in the new paper, Bosak and her former postdoc, Mirna Daye, discovered that colonies of modern microbes can perform this process in anaerobic environments typical of the late Archean Eon. Unlike the organisms that caused the GOE, Daye and Bosak’s microbes use sulfide, instead of water, to perform photosynthesis, so they do not create molecular oxygen as a byproduct. Most scientists think that this type of anaerobic photosynthesis emerged as a precursor system to the more familiar oxygenic photosynthesis that ushered in the GOE, and Daye and Bosak’s microbes contain genetic machinery similar to what is thought to have existed before the evolution of bacteria capable of making oxygen.</p> <p>The Bosak group’s demonstration of manganese oxidation in an anaerobic environment means that evidence of ancient manganese oxide may not be a reliable proxy for the local evolution of oxygen-producing life. 
It could just be a signal for the presence of other organisms already thought to be widespread at that time.</p> <p>Bosak’s co-authors include associate professor of geobiology Gregory Fournier, along with former postdocs Mirna Daye and Mihkel Pajusalu of MIT’s EAPS department; Vanja Klepac-Ceraj, Sophie Rowland, and Anna Farrell-Sherman of Wellesley College; Nicolas Beukes of the University of Johannesburg; and Nobumichi Tamura of Lawrence Berkeley National Laboratory.</p> <p><strong>Questioning ancient manganese</strong></p> <p>“Discovering new mechanisms by which manganese oxide might be created in the Archean environments, before the rise of oxygen, is tremendously interesting because many of the proxies that we have [used] for the presence of oxygen [and therefore, microbes capable of producing it] in the environment in the first half of Earth’s history are … actually proxies for the presence of manganese oxide,” says Ariel Anbar, professor at the Arizona State University School of Earth and Space Exploration, who was not involved in the research. “That forces us to think more deeply about the proxies that we're using and whether they really are indicative of O<sub>2</sub> or not.”</p> <p>The study of the ancient Earth has always been challenging, as evidence gets recycled by geological processes and otherwise lost to the wear and tear of time. Researchers have only fragmented and inferred data that they can use to develop theories.</p> <p>“What we are finding is not necessarily saying that these people who are interpreting these blips of oxygen before the GOE [are] wrong. It just gives me huge pause,” says Bosak. “The fact that we threw in some microbes and found these processes that were just never considered tells us that we really don't understand a lot about how life and the environment coevolved.”</p> Biomineralization of dolomite and manganese oxide minerals on the cell surfaces of Chlorobium sp.
Image: Mirna DayeEAPS, School of Science, Microbes, Environment, Climate, Bacteria, Evolution, Biology, Geology, Research, Genetics Continuing a legacy of Antarctic exploration The Summons Lab compares lipids from Antarctic microbial communities to century-old samples. Fri, 22 Nov 2019 11:55:01 -0500 Fatima Husain | EAPS <p>When Robert F. Scott’s <em>Discovery</em> expedition began exploring the Antarctic continent in 1901, they set out to geographically and scientifically characterize the regions touched by the Ross Sea. As the group of naval officers and scientists set foot upon the Ross Ice Shelf, they mapped their travels and completed surveys, collecting biological specimens for further study.</p> <p>Two polar explorers and physicians on the expedition, Reginald Koettlitz and Edward Wilson, noticed microbial mats composed of cyanobacteria growing along the edges of shallow freshwater ponds on the McMurdo Ice Shelf in and around Ross Island. In the name of natural science, they sampled them, and the preserved mats spent nearly the next century in the collections of London’s Natural History Museum. Now a new comparison of contemporary lipids with those old samples is shedding light on the evolution of complex life, including the life that existed during the planet's "Snowball Earth" phase.</p> <p>In 2017, Anne Jungblut, a life sciences researcher at the museum, examined Koettlitz’s and Wilson’s mats to study whether Antarctica’s cyanobacterial diversity had changed since the <em>Discovery</em> expedition by comparing them to modern mats from the same region.
Her results showed that, for the most part, the microbial community remained stable, with slow genetic turnover — a testament to the cyanobacteria’s resilience on the icy continent.</p> <p>Roger Summons, the Schlumberger Professor of Geobiology in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), traveled to Antarctica in 2018 with colleagues Ian Hawes of the University of Waikato and Marc Schallenberg of the University of Otago in New Zealand to take a first-hand look at the types of environments in which these microbial mats thrive. The trio ventured to Bratina Island, which is surrounded by the Ross Ice Shelf. There, the meltwater ponds form in the midst of “dirty ice,” debris-covered slopes of ice and volcanic rock.</p> <p>“The ponds have liquid water, though there are a few ponds that have thin layers of ice over them — and it’s full-on sunlight,” Summons says. “What a phenomenal challenge it would have been for the early explorers to carry equipment across this place because of its precipices, holes, miserable weather, and wind.”</p> <p>The unique topography of the glacial environment results from a vertical conveyor-belt mechanism that moves sediment from the sea bed up to the surface of the McMurdo Ice Shelf. While wind causes the ice’s surface to ablate — to evaporate or melt — seawater freezes beneath the ice shelf, sometimes trapping sea sediments and organisms in the ice. As more ice ablates at the surface over time, material from the sea beneath is transported upwards over long time scales, accumulating at the surface. In Antarctica, Summons saw ancient sponges and bryozoans — aquatic invertebrates that once grew in the water beneath the ice — scattered among the sediments.
And, like Koettlitz and Wilson, Summons and his colleagues sampled the microbial mats thriving in the ephemeral meltwater ponds.</p> <p>Thomas Evans, a postdoc in the EAPS Summons lab, has been studying these microbial communities because of their potential as models for the evolution of complex life on Earth during the Cryogenian Period, an enigmatic geologic time-slice that took place 720-635 million years ago. “These oases of life in high latitude ecosystems are of interest because they might serve as analogs to those that existed when the Earth experienced two long-lasting glaciations of global extent,” Evans says.</p> <p>These glaciations play a central role in the version of the Snowball Earth hypothesis described by Paul Hoffman, professor emeritus at Harvard University. The hypothesis delineates scenarios in which the Earth becomes entirely or almost entirely covered by ice, putting the brakes on biological productivity. But those icy events didn’t quite halt the existence or radiation of life.</p> <p>“I’ve always been interested in the evolution of animals after the Cryogenian,” Summons says. “Why do we see Ediacara fauna so quickly after such a dramatic epoch in Earth’s history?” The Ediacaran marks the rise of multicellularity with tissue specialization, although little fossil evidence exists concerning the precise nature of the Ediacaran biota. To study what conditions during the Cryogenian may have contributed to the resiliency of life during glacial periods, Summons and Evans both examine lipids, molecules that play roles in energy storage, biological signaling, and in fortifying cell membranes.</p> <p>Evans specifically focused on intact polar lipids — known as IPLs — biomarkers diagnostic for living cells. 
“IPLs represent an important barrier by maintaining the flux and gradients of ions and nutrients between the inner cell and the environment,” Evans says.</p> <p>“The analysis of IPLs provides the perfect tool to investigate how microbes can thrive under extreme climatic conditions, and how they adjust to the radical summer-winter environmental changes,” Evans says. Even further, the IPLs can help pinpoint important chemotaxonomic information about the cyanobacterial communities in the mats — which helps researchers like Jungblut determine the effects of climate change in the region over time.</p> <p>To study the IPLs, Evans analyzed the compounds on an instrument that employs high-pressure liquid chromatography, coupled with a mass spectrometer. The instrument, which takes the space of a large closet, separates molecules based on their polarity and molecular formulae. From there, Evans deduces the lipid structures and abundances, and connects them with the environmental parameters of the particular microbial mats to determine what contributes most to the lipid variability within the different mat communities.</p> <p>“Based on our data, environmental conditions, such as the availability of nutrients and variations in temperature, seem to be the main driver of lipid membrane setup,” Evans says. “These microbes have a very special lipid signature that allows them to adapt to the extreme climatic conditions in Antarctica’s harsh environment.” In a continuation of this work, Summons and Evans are investigating other compound classes, such as the sterols that modulate the membrane behavior of the microscopic eukaryotes that occupy particular niches within an otherwise bacterially-dominated landscape.</p> <p>“In the process of answering the most obvious questions, others always crop up,” Summons says.
“No matter what we learn, there are always curiosities that beg to be investigated.”</p> Marc Schallenberg samples from a pond.Photo: Roger SummonsEAPS, School of Science, Biology, Climate, Climate change, Evolution, Research, Animals, Earth and atmospheric sciences, Geology, Ecology, Environment, Microbes, Bacteria, Genetics Forum addresses future of civil and environmental engineering education Academic leaders cite urgent need to expand, enhance curriculum to address societal challenges. Fri, 22 Nov 2019 11:50:55 -0500 Department of Civil and Environmental Engineering <p>Battling climate change and adapting communities to be ready for its effects on the world. Ensuring food and water security for an exploding population. Navigating ever-more congested urban landscapes.</p> <p>These global concerns and others have been outlined by the National Academies and other institutions as imminent threats. One discipline in particular — civil and environmental engineering — has the history and capability to address these challenges on a large scale.</p> <p>MIT’s Department of Civil and Environmental Engineering (CEE) took one of the first steps to address the question of how best to prepare a new generation of civil and environmental engineers by organizing a recent one-day workshop, entitled “CEE Education Frontiers Forum,” with invited leaders and educators from 10 leading U.S. institutions, including Stanford University, the University of California at Berkeley, Georgia Tech, and the University of Texas at Austin.</p> <p>“The discipline of CEE is really at the cusp of a lot of things,” says Saurabh Amin, associate professor and undergraduate officer of CEE at MIT and one of the organizers of the event.
“This exceptional group of universities is already addressing today’s challenges, but the field is changing so quickly that our educational efforts need to stay ahead of what we see CEE in need of.” &nbsp;</p> <p>During her plenary talk, Anette “Peko” Hosoi, associate dean of engineering and professor of mechanical engineering at MIT, said that it’s not uncommon for engineering curricula to change over time. Each time the curriculum was revised, it was informed by the needs and constraints of that specific period of time, which drove the educational goals forward. Hosoi shared an historical survey of MIT engineering education which showed an overhauled curriculum approximately every 25 years to adjust to the needs of the day. From learning to design iron bridges in 1875 to electrifying the countryside in 1925, “the curriculum was turning over at a fairly rapid timescale,” she said.</p> <p>“A lot of students come in wanting to change the world,” says Markus J. Buehler, department head of MIT CEE, the Jerry McAfee (1940) Professor in Engineering, and a forum co-organizer. “They want to address climate change, they want to address transportation, they want to address pollution. CEE offers a clear pathway through today’s curriculum to make contributions in these fields, and the workshop fostered discussion into how we can further strengthen our program and provide our students even better skills for their careers after graduation.”</p> <p>The workshop consisted of two plenary talks, four panel discussions, and a lunch session, all focused on how to make sure CEE students understand and are well-prepared for their post-graduate opportunities, now and into the future. Two subjects addressed throughout the day were the value of an interdisciplinary education and an increased need for excellent interpersonal skills to prepare students for the real world.</p> <p>As the need for a sustainable future becomes urgent, the required skillset of CEE graduates has also broadened. 
These skills include foundational knowledge in emerging fields such as computing and machine learning, as well as social responsibility and ethics, and leadership. For example, a well-trained civil or environmental engineer should be able to help design new solutions to make a city capable of withstanding rising sea levels associated with a changed climate, or create sustainable food, water, and energy supply chains. In an increasingly digitized world, speakers pointed out that CEE students should be able to incorporate key concepts such as data analytics and applications of artificial intelligence into their solutions.</p> <p>“Civil and environmental engineers are defined by our applications … not our tools,” said Mark Stacey, department chair of CEE at UC Berkeley, during the panel on CEE Domains and Interdisciplinary Frontiers. “We bring tools from wherever they emerge.”</p> <p>MIT’s CEE education is built around three central tenets: rigorous core knowledge of the science behind the discipline, fieldwork that allows students to gain insights into real-world problems, and labs designed to have students synthesize the knowledge and skills they have developed over their other coursework.</p> <p>“Our curriculum is agile and designed to be adaptive to the needs of students and help them address these grand challenges,” says Amin. “The hope is that as new problems and areas of study arise, our students will be able to tackle whatever area they are interested in. We help students to tailor their coursework based on their individual goals and aspirations. This workshop identified some of the hurdles that may be coming and will help us in preparing for them.”</p> <p>Nearly all in attendance agreed that the first year of a CEE curriculum was critical to demonstrate to students the possibilities of a future in the profession.
One suggestion involved adding experience-based lab work during the first semester to engage students with the field from the get-go.</p> <p>A similar educational reform is already underway at MIT through the Designing the First Year Experience initiative, headed by Vice Chancellor for Undergraduate and Graduate Education Ian Waitz. Waitz acknowledged the difficulty in changing a curriculum that has been in place for decades, but recognized the importance of addressing the educational and social needs of a changing student body.</p> <p>“It’s not rocket science,” said Waitz during his talk. “It’s harder than that — it’s people science.”</p> <p>For example, starting this year MIT CEE began offering new discovery subjects focusing on sustainable cities and climate change for first-year students. The goal is to bring these students into the discussions of grand challenges early on and equip them to make informed choices during their stay at MIT, and beyond.</p> <p>A number of speakers also mentioned throughout the day that civil and environmental engineers are frequently at the center of civic problem-solving. They must be able to engage with the public, government officials, and engineers and scientists of other backgrounds. Any new curriculum should foster the ability to connect with people of different backgrounds to strengthen leadership skills. Panels on post-grad research opportunities by representatives from the National Science Foundation reinforced this point.</p> <p>Participants agreed the workshop was successful in moving the CEE education conversation forward.</p> <p>“What we tried to do was … answer questions about what the CEE degree of the future would be,” says Desiree Plata, assistant professor in CEE at MIT and an event co-organizer. “[We] saw a lot of different opinions about that today, so that's great for idea generation.
I think there's still a lot of work that needs to be done in terms of how to do it.”</p> <p>The organizers plan to release a document summarizing the key points from the workshop to act as a jumping-off point for the next round of talks, and will seek further input from students and alumni.</p> <p>“The next conversation should not start from scratch,” says Amin. “People will have their own hurdles at their own universities, but we believe that now is the right time to lead this change.”</p> Forum participants engage in a discussion to advance CEE curriculum fundamentals. (Left to right:) Donald Webster of Georgia Tech, David Rosowsky of the University of Vermont, Julie Zimmerman of Yale University, and Robert Gilbert of the University of Texas at Austin.Photo: Maria IacoboCivil and environmental engineering, School of Engineering, Leadership, Education, teaching, academics, Environment, Special events and guest speakers Microbial cooperation at the micron scale impacts biodegradation MIT researchers demonstrate how often-ignored microbial interactions have a significant impact on the biodegradation of complex materials. Fri, 15 Nov 2019 14:15:45 -0500 Maria Iacobo | Department of Civil and Environmental Engineering <p>The carbon cycle, in which CO<sub>2</sub> is incorporated into living organisms and later released back into the atmosphere through respiration, relies on the ability of bacteria and fungi to degrade complex organic materials such as polysaccharides. These materials represent large reservoirs of carbon and energy on the planet. By degrading them, microbes enable the recycling of this energy and carbon into the ecosystem.
However, much like some human-designed synthetic materials, some polysaccharides can be highly recalcitrant to degradation.</p> <p>Using a combination of computational models and experiments, MIT scientists have shown that, in order to degrade recalcitrant polysaccharides, bacteria “team up” by forming micrometer-scale cell clusters where cells facilitate each other’s growth. This study demonstrates how cooperation among microbes, often ignored in biogeochemical models, can have a significant impact on ecosystem-level processes. The <a href="" target="_blank">work</a>, published in the <em>Proceedings of the National Academy of Sciences</em> Oct. 30, shows that the emergence of these cooperative clusters is a stochastic process that depends on cells encountering each other and aggregating on the surface of polysaccharide particles.</p> <p>“One of the implications of cooperation is that degradation rate can be determined by the time it takes for cells to find each other, and this can be very long if cell densities are low,” says Otto X. Cordero, associate professor of civil and environmental engineering at MIT.</p> <p>The research team showed that for some organisms, the critical cell densities required for degradation can be larger than their natural abundance in the environment, suggesting that the degradation of complex organic matter can be bacteria-limited in some cases.</p> <p>“The fundamental reason why cooperation emerges in these microorganisms is because the large molecules that make up complex materials need to be dissolved by secreted enzymes, outside cells,” says Cordero.</p> <p>The researchers showed that in an environment like the ocean, 99 percent of all carbon released outside cells is lost by diffusion.
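A classic diffusion result helps make this loss concrete: a molecule released at distance r from the center of a perfectly absorbing sphere of radius a is eventually captured with probability a/r, and is otherwise lost to diffusion. The sketch below applies this textbook scaling with illustrative cell and cluster sizes; it is a toy calculation showing why a larger cluster recaptures more of its secreted products, not the simulation used in the study.

```python
# Toy illustration of diffusive capture: for a perfectly absorbing sphere
# of radius a, a molecule released at radial distance r >= a is captured
# with probability a / r (a classic 3D diffusion result); otherwise it
# wanders off and is lost. Sizes below are assumptions for illustration.

def capture_probability(a_um: float, r_um: float) -> float:
    """Probability that a molecule released at distance r_um from the
    center of an absorbing sphere of radius a_um is ever captured."""
    if r_um < a_um:
        raise ValueError("release point must lie outside the sphere")
    return a_um / r_um

# A molecule released about 1 micrometer outside the absorbing surface:
single_cell = capture_probability(a_um=0.5, r_um=1.5)   # lone ~1-um cell
cluster = capture_probability(a_um=10.0, r_um=11.0)     # ~20-um cell cluster

print(f"single cell captures ~{single_cell:.0%} of released molecules")
print(f"cluster captures ~{cluster:.0%} of released molecules")
```

Under these assumed geometries the lone cell recovers roughly a third of what is released just outside its surface, while the cluster recovers most of it, which is the intuition behind clustering as a cooperative strategy.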
The formation of cell clusters of sizes 10-20 micrometers is a cooperative behavior that increases the uptake of dissolved carbon, enabling bacteria to initiate growth and degradation.</p> <p>Cordero’s lab uses a combination of genomics, experiments, and modeling to understand the community ecology of microorganisms, as well as its functional and evolutionary consequences. Postdocs Ali Ebrahimi and Julia Schwartzman are also authors on the paper. Ebrahimi specializes in the modeling of biogeochemical processes at the microbe level. Schwartzman specializes in the ecophysiology of microbes. Both are part of the Simons Collaboration PriME (Principles of Microbial Ecosystems), which is co-directed by Cordero and funds labs at the University of California at San Diego, the University of Southern California, Caltech, the University of Georgia, ETH Zurich, and MIT.</p> MIT research finds the emergence of cooperative cell clusters depends on cells encountering each other and aggregating on the surface of polysaccharide particles.Civil and environmental engineering, School of Engineering, Ecology, Climate change, Research, Microbes, Carbon, Biology New tools could improve the way cement seals oil wells Techniques for observing concrete as it sets could facilitate the development of new cements. Tue, 12 Nov 2019 11:19:40 -0500 David L. Chandler | MIT News Office <p>A key part of drilling and tapping new oil wells is the use of specialized cements to line the borehole and prevent collapse and leakage of the hole.
To keep these cements from hardening too quickly before they penetrate to the deepest levels of the well, they are mixed with chemicals called retarders that slow down the setting process.</p> <p>It’s been hard to study the way these retarders work, however, because the process happens at extreme pressures and temperatures that are hard to reproduce at the surface.</p> <p>Now, researchers at MIT and elsewhere have developed new techniques for observing the setting process in microscopic detail, an advance that they say could lead to the development of new formulations specifically designed for the conditions of a given well location. This could go a long way toward addressing the problems of methane leakage and well collapse that can occur with today’s formulations.</p> <p>Their findings appear in the journal <em>Cement and Concrete Research</em>, in a paper by MIT Professor Oral Buyukozturk, MIT research scientist Kunal Kupwade-Patil, and eight others at the Aramco Research Center in Texas and at Oak Ridge National Laboratory (ORNL) in Tennessee.</p> <p>“There are hundreds of different mixtures” of cement currently in use, says Buyukozturk, who is the George Macomber Professor of Civil and Environmental Engineering at MIT. The new methods developed by this team for observing how these different formulations behave during the setting process “open a new environment for research and innovation” in developing these specialized cements, he says.</p> <p>The cement used to seal the lining of oil wells often has to set hundreds or even thousands of meters below the surface, under extreme conditions and in the presence of various corrosive chemicals.
Studies of retarders have typically been done by removing samples of the cured cement from a well for testing in the lab, but such tests do not reveal the details of the sequence of chemical changes taking place during the curing process.</p> <p>The new method uses a unique detector setup at Oak Ridge National Laboratory called the Nanoscale Ordered Materials Diffractometer, or NOMAD, which is used to carry out a process called Neutron Pair Distribution Function analysis, or PDF. This technique can examine, in situ, the distribution of pairs of atoms in the material under conditions that mimic those encountered in a real oil well at depth.</p> <p>“NOMAD is perfectly suited to study complex structural problems such as understanding hydration in concrete, because of its high flux and the sensitivity of neutrons to light elements such as hydrogen,” says Thomas Proffen of ORNL, a co-author of the paper.</p> <p>The experiments revealed that the primary mechanism at work in widely used retarder materials is the depletion of calcium ions, a key component in the hardening process, within the setting cement. With fewer calcium ions present, the solidifying process is dramatically slowed down. This knowledge should help experimenters to identify different chemical additives that can produce this same effect.</p> <p>When oil wells are drilled, the next step is to insert a steel casing to protect the integrity of the borehole, preventing loose material from collapsing into the well and causing blockages. These casings also prevent the oil and gas, which is under high pressure, from escaping into the surrounding rock and soil and migrating to the surface, where leakage of methane can play a significant role in contributing to climate change. But there is always a space, which ranges up to a few inches, between the casing and the borehole.
This space must be fully filled with cement slurry to prevent leakage and protect the steel lining from exposure to water and corrosive chemicals that could cause it to fail.</p> <p>Methane is a much stronger greenhouse gas than carbon dioxide, so limiting its escape is a crucial step toward limiting the contribution of oil and gas wells to global warming.</p> <p>“The methane, water, and all sorts of different chemicals down there [in the well] create a corrosion problem,” Buyukozturk says. “Also, the well bore circumferential area is next to parts of the Earth’s crust that have instabilities, so material could tumble into the hole and damage the casing.” The way to prevent these instabilities is to pump cement through the casing into the area between the well bore and the casing, which provides “zonal isolation.” The cement then provides a hydraulic seal to keep any water and other fluids away from the casing.</p> <p>But the high temperatures and pressures found at depth present an environment that is “the worst thing you can do to a material,” he says, so it is crucial to understand just how the material and its chemical properties are affected by these harsh surroundings as they do their job of sealing the well.</p> <p>This new method of studying the setting process provides a way “to precisely understand this process, so we can engineer the next generation of retardants,” says Kupwade-Patil, lead author of this paper. “These retardants are very important,” not only for protecting the environment but also for preventing serious economic losses from a damaged or leaking well. “Loss of the seal is serious, so you can’t afford to make a mistake” in the cement sealing process, he says.</p> <p>“After obtaining my PhD, about 30 years ago, my first job was to improve the quality of oil-well cementing,” says Paulo Monteiro, the Roy W. 
Carlson Distinguished Professor of Civil and Environmental Engineering at the University of California at Berkeley, who was not involved in this work. “At that time there were limited sophisticated characterization techniques, so it is a real pleasure to see X-ray and neutron total scattering methods being applied to study the hydration of oil-well cements in the presence of chemical admixtures.” He adds that these new methods have “the potential to guide the development of tailor-made admixtures that can significantly improve the performance of oil-well cementing.”</p> <p>The research team included Peter J. Boul, Diana Rasner and Carl Thaemlitz from Aramco Service Company and Michelle Everett, Thomas Proffen, Katharine Page, Dong Ma and Daniel Olds from Oak Ridge National Laboratory in Tennessee. The work was supported by Aramco Service Company, of Houston, and the U.S. Department of Energy.&nbsp;</p> Oil and natural gas wells require concrete to seal the area between the well casing and the surrounding borehole, but because of the high temperatures and pressures at depth, it has been hard to study how these specialized cements harden. Now, a new method developed at MIT can help to fill in that missing knowledge.Civil and environmental engineering, Environment, Materials Science and Engineering, Research, School of Engineering, Sustainability, Oil and gas, Emissions MIT report provides guidance on climate-related financial disclosures Recommendations could help companies deliver more useful disclosures to investors on risks they face due to climate change. Wed, 06 Nov 2019 11:29:49 -0500 David L. 
Chandler | MIT News Office <p>An MIT white paper released today outlines a series of recommendations on how companies, particularly those in the oil and gas industry, can use scenario analysis to effectively disclose risks and opportunities they face as a result of global climate change.</p> <p>The <a href="">report</a>, “Climate-Related Financial Disclosures: The Use of Scenarios,” was organized by the Office of the Vice President for Research and drafted by a team of MIT faculty and staff members. It builds on insights gained from a workshop held at MIT last year, which included representatives from oil and gas companies, credit rating agencies, investment firms, and nongovernmental organizations, along with academics and other entities engaged in the production of global climate scenarios.</p> <p>“This report, and the workshop it grew out of, are part of MIT’s ongoing efforts under our <a href="">Plan for Action on Climate Change</a>,” says Vice President for Research Maria T. Zuber. “A key element of the plan is a strategy of engaging with a wide variety of sectors to accelerate the world’s transition away from carbon-emitting energy sources.”</p> <p>Financial disclosures that include an examination of risk factors that could impact a company’s operations, facilities, and financial performance are an essential tool to provide guidance to potential investors and lenders, credit rating agencies, and insurers. For these disclosures to be useful, however, they must be prepared using comparable methods and consistent approaches.</p> <p>In 2017, the Task Force on Climate-related Financial Disclosures (TCFD), established by the G20 Financial Stability Board, provided a guiding framework and set of recommendations to promote that kind of consistency. However, the use of scenario analysis to describe the resilience of a company’s strategy, as recommended by the TCFD, still represents a significant challenge for companies.
MIT, with its extensive experience in analysis of climate futures, saw this as an opportunity to shed some light on the task.</p> <p>“The point was to engage with industry to help all of the different stakeholders get on the same page about the scenarios they use,” says Erik Landry SM ’18, research associate in the Office of the Vice President for Research and lead author of the report. “Once a common understanding is reached, then at least we are all working on the same problem.” Landry is also a recent graduate of MIT’s Technology and Policy Program.</p> <p>The report aims to advance the state of scenario-based disclosures of climate-related risks and opportunities by promoting a better understanding of the underlying scenarios. It is meant to help oil and gas companies produce more useful scenario-based disclosures, help the financial community better evaluate such disclosures, and enable a dialogue that would help scenario producers make their scenarios more relevant to company-level climate-related risk assessment.</p> <p>Henry Jacoby, the William F. Pounds Professor of Management, Emeritus, in the MIT Sloan School of Management and a member of the MIT working group, says, “Most climate scenarios were developed to study the implications of specific policies or technological developments, not to assess near-term financial risks in a particular industry,” so the report tries to outline ways such scenarios can be applied usefully to this new task. “We’re trying to tweak the tools to fit the purposes we’re trying to use them for,” he adds. In this case, a major aim is to help financial decision-makers make more informed decisions about where best to allocate resources, potentially in ways aligned with a low-carbon transition.</p> <p>Many different groups produce such scenarios, the report points out. One widely used set of scenarios is that issued annually by the International Energy Agency. 
But several other organizations, including the Integrated Assessment Modeling Consortium, the International Renewable Energy Agency, and the Organization for Economic Cooperation and Development, also produce scenarios, each one taking a somewhat different approach. The producers of these climate scenarios “all differ in their modeling methodologies, and one or another may be more relevant to some sectors,” Landry says. “It is important for financial decision-makers to be aware of the underlying assumptions being made about the future.”</p> <p>Arguably, the strength of using scenarios lies in the range of possible futures they can explore, from “business as usual” scenarios to those in which global temperature rise is limited to 2 degrees Celsius, and beyond. Scenarios involve various assumptions about technology and policy. Some assumptions that go into these scenarios are relatively easy to quantify, such as whether or not a carbon price is implemented, and if so, how high it is and how it increases over time. Other factors have greater inherent uncertainties, such as the expected rate of improvement in energy production and storage technologies, the development and scalability of carbon capture and sequestration, or social factors such as how quickly people change their energy-related choices.</p> <p>One recommendation the report makes is for oil and gas companies to compare their own scenarios to “reference scenarios,” or credible scenarios that are commonly used and understood by many stakeholders. While companies may want to include their own specific scenarios based on the unique characteristics of their own facilities and supply chains, making clear exactly how their scenarios differ from a reference case enables investors to assess each company by its own merits, while also retaining a level of comparability between companies.
“Insofar as companies disclose clearly and transparently what scenarios they use and what assumptions go into them, they can let investors do their jobs and see how they compare,” Landry says.</p> <p>The report also calls for companies to be complete in their descriptions of how their strategies are resilient in the face of a changing climate and a low-carbon transition. This includes addressing both where the company’s vulnerabilities lie and their degree of preparedness. For audiences evaluating such descriptions, it’s important to “be wary of general claims of resilience that are not visibly grounded in clear, consistent, and transparent use of scenarios,” Landry says.</p> <p>While this report focuses on climate-related disclosures by the oil and gas industry, the authors believe that the principles it outlines should also be applicable to many other industries, such as manufacturing, commercial transportation, or agriculture. With more useful disclosures, financial decision-makers can make choices that not only promote their own interests, but also encourage the advancement of more sustainable business models.</p> <p>The report was produced by MIT’s Working Group on Climate-Related Scenarios, which in addition to Landry and Jacoby included Louis Carranza and Sergey Paltsev of the MIT Energy Initiative, James Gomes of the Office of the Vice President for Research, and Donald Lessard and Bethany Patten of the MIT Sloan School of Management.</p> A new report from MIT outlines how companies in the fossil fuel business can make better use of scenarios to show their vulnerabilities, as well as their opportunities, in a world facing major climate change.Research, MIT Energy Initiative, Sloan School of Management, Oil and gas, Climate change, Environment, Industry, Policy, Climate, Greenhouse gases, Carbon dioxide, Emissions, Sustainability, Institute for Data, Systems, and Society 3 Questions: When the student becomes the teacher Grad student Brandon Leshchinskiy
created EarthDNA Ambassadors, an outreach program “for the Earth, for future generations.” Tue, 05 Nov 2019 12:15:01 -0500 Sara Cody | Aeronautics and Astronautics <p><em>As a master’s student in the <a href="">Department of Aeronautics and Astronautics</a> and the <a href="">Technology and Policy Program</a> at MIT, Brandon Leshchinskiy’s ultimate goal is to “build AI tools to adapt to climate change and the educational tools to stop it.” As part of his graduate thesis, in collaboration with <a href="">MIT Portugal</a> and EarthDNA, both led by <a href="">Dava Newman</a>, the Apollo Professor of Aeronautics and Astronautics, Leshchinskiy created <a href="">EarthDNA’s Ambassadors</a>, an outreach program “for the Earth, for future generations.”</em></p> <p><em>The program aims to empower high school students to speak loudly and often about climate change, by leveraging the energy of college students and recent graduates who are passionate about infusing these conversations into their local communities. EarthDNA Ambassadors provides resources, including a Climate 101 presentation, email templates, surveys, and other materials to support these outreach efforts in local communities. Leshchinskiy spoke about the program in a recent interview.</em></p> <p><strong>Q: </strong>Why are you targeting college students and recent grads to participate in educating local high schoolers?</p> <p><strong>A:</strong> As an undergraduate, I participated in a lot of STEM outreach activities, and so I know firsthand that college students have a lot of energy to give back, and there are a lot of institutional resources available for these efforts. College students have this intrinsic capacity and desire for this type of work, so we feel that college students and recent graduates would be great emissaries in our effort.</p> <p>Climate change is an issue that has become more and more of a cultural priority, especially among the younger generations.
Recent UN/IPCC reports show we have roughly 10 years before climate impacts could start to spiral out of control, and I think this younger generation is much more attuned to this because we have grown up experiencing the realities of climate change. Because of this, I think young people feel disenfranchised by the status quo and are therefore much more motivated towards activism.</p> <p>To that end, I think there is a much greater sense of trust between peers due to our similar shared experiences. We all understand how high the stakes are here, and I think college students or recent graduates are better able to appeal to high school students in a way that’s meaningful.</p> <p><strong>Q: </strong>What does the process look like to get involved with EarthDNA Ambassadors and what sort of activities and other resources does the program entail?<strong> </strong></p> <p><strong>A:</strong> Our first goal is to foster a sense of community among people who are passionate about climate change, so first and foremost we encourage interested parties to join our Slack community to start the conversation. We share resources that follow the three key steps of our program: Reach out, where volunteers connect with local high schools where they want to present; Present, where they prepare and present the information in their classroom of choice; and Follow up, where volunteers follow up with teachers a day after, a week after, and a month after their presentation, collecting survey data to help us measure the impact of our program.</p> <p>On our website, we offer training resources and other material for our volunteers, such as email templates for contacting teachers, presentation tips and guidelines, recordings of sample presentations, and step-by-step instructions about our “Climate 101” presentation and interactive activity.</p> <p>The goal of our educational program is to present a cohesive narrative that tells the full story of climate change. 
Solving this problem will require an interdisciplinary effort, and right now I think there is this huge misconception that climate education only belongs in science class. Don’t get me wrong — climate-change education absolutely belongs in a science classroom. But history teachers can provide valuable perspective on past interactions between humans and our home planet; visual arts teachers can foster a community dialogue that captures our intimate relationship with Earth’s climate; and since a key component of climate monitoring involves working with data, climate-change education belongs in computer science and math classrooms as well. In the social sciences, climate change is a big economics problem. Economic models assume continuous growth — but they are competing against physics, which sets a clear limit due to finite resources. Physics will win. Still, if we are going to solve climate change, we have to tell the whole story by reconciling all of these perspectives.</p> <p><strong>Q: </strong>What do you hope to accomplish with this program?</p> <p><strong>A:</strong> Our goal is to broaden access to climate literacy. People tend to filter information through their values, ideologies, and experiences, but in order to make the systemic changes required, we’ll need some level of government intervention, which not all citizens are comfortable with. In general, parents do trust their kids, so if we can get adolescents to talk about the impacts of climate change on their lives, we can at least help start the conversation where there isn’t necessarily one happening right now. One of the questions we ask in the follow-up survey is “How many times do you speak about climate change per week?”
Eventually we hope to see that we help foster thousands of conversations about climate change that may not have happened otherwise.</p> Brandon Leshchinskiy Photo: Sara Cody/Department of Aeronautics and AstronauticsMIT Portugal, Technology and policy, Climate change, Students, Graduate, postdoctoral, Sustainability, STEM education, Science communications, Aeronautical and astronautical engineering, School of Engineering, 3 Questions, K-12 education, Artificial intelligence, Volunteering, outreach, public service, Policy Autonomous system improves environmental sampling at sea Robotic boats could more rapidly locate the most valuable sampling spots in uncharted waters. Mon, 04 Nov 2019 14:54:51 -0500 Rob Matheson | MIT News Office <p>An autonomous robotic system invented by researchers at MIT and the Woods Hole Oceanographic Institution (WHOI) efficiently sniffs out the most scientifically interesting — but hard-to-find —&nbsp;sampling spots in vast, unexplored waters.</p> <p>Environmental scientists are often interested in gathering samples at the most interesting locations, or “maxima,” in an environment. One example could be a source of leaking chemicals, where the concentration is the highest and mostly unspoiled by external factors. But a maximum can be any quantifiable value that researchers want to measure, such as water depth or parts of coral reef most exposed to air.</p> <p>Efforts to deploy maximum-seeking robots suffer from efficiency and accuracy issues. Commonly, robots will move back and forth like lawnmowers to cover an area, which is time-consuming and collects many uninteresting samples. Some robots sense and follow high-concentration trails to their leak source. But they can be misled. For example, chemicals can get trapped and accumulate in crevices far from a source. 
Robots may identify those high-concentration spots as the source yet be nowhere close.</p> <p>In a paper being presented at the International Conference on Intelligent Robots and Systems (IROS), the researchers describe “PLUMES,” a system that enables autonomous mobile robots to zero in on a maximum far faster and more efficiently. PLUMES leverages probabilistic techniques to predict which paths are likely to lead to the maximum, while navigating obstacles, shifting currents, and other variables. As it collects samples, it weighs what it’s learned to determine whether to continue down a promising path or search the unknown — which may harbor more valuable samples.</p> <p>Importantly, PLUMES reaches its destination without ever getting trapped in those tricky high-concentration spots. “That’s important, because it’s easy to think you’ve found gold, but really you’ve found fool’s gold,” says co-first author Victoria Preston, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and in the MIT-WHOI Joint Program.</p> <p>The researchers built a PLUMES-powered robotic boat that successfully detected the most exposed coral head in the Bellairs Fringing Reef in Barbados —&nbsp;meaning, it was located in the shallowest spot —&nbsp;which is useful for studying how sun exposure impacts coral organisms. In 100 simulated trials in diverse underwater environments, a virtual PLUMES robot also consistently collected seven to eight times more samples of maxima than traditional coverage methods in allotted time frames.</p> <p>“PLUMES does the minimal amount of exploration necessary to find the maximum and then concentrates quickly on collecting valuable samples there,” says co-first author Genevieve Flaspohler, a PhD student in CSAIL and the MIT-WHOI Joint Program.</p> <p>Joining Preston and Flaspohler on the paper are Anna P.M.
Michel and Yogesh Girdhar, both scientists in the Department of Applied Ocean Physics and Engineering at the WHOI; and Nicholas Roy, a professor in CSAIL and in the Department of Aeronautics and Astronautics.</p> <p><strong>Navigating an exploit-explore tradeoff</strong></p> <p>A key insight behind PLUMES was to use probabilistic techniques to navigate the notoriously complex tradeoff between exploiting what’s learned about the environment and exploring unknown areas that may be more valuable.</p> <p>“The major challenge in maximum-seeking is allowing the robot to balance exploiting information from places it already knows to have high concentrations and exploring places it doesn’t know much about,” Flaspohler says. “If the robot explores too much, it won’t collect enough valuable samples at the maximum. If it doesn’t explore enough, it may miss the maximum entirely.”</p> <p>Dropped into a new environment, a PLUMES-powered robot uses a probabilistic statistical model called a Gaussian process to make predictions about environmental variables, such as chemical concentrations, and estimate sensing uncertainties. PLUMES then generates a distribution of possible paths the robot can take, and uses the estimated values and uncertainties to rank each path by how well it allows the robot to explore and exploit.</p> <p>At first, PLUMES will choose paths that randomly explore the environment. Each sample, however, provides new information about the targeted values in the surrounding environment — such as spots with highest concentrations of chemicals or shallowest depths. The Gaussian process model exploits that data to narrow down possible paths the robot can follow from its given position to sample from locations with even higher value.
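The exploit-explore ranking described here can be sketched in a few lines. The following is a minimal illustration, not the PLUMES implementation: a zero-mean Gaussian process with a squared-exponential kernel scores candidate sampling spots by predicted value plus an uncertainty bonus, where the kernel, the unit prior variance, and the `beta` weight are all assumptions made for the example:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential kernel between point sets a (n, d) and b (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Posterior mean and variance of a zero-mean GP with unit prior variance.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.maximum(var, 0.0)

def rank_candidates(x_train, y_train, candidates, beta=2.0):
    # Upper-confidence-bound score: predicted value (exploit)
    # plus beta times predictive uncertainty (explore). Best first.
    mean, var = gp_posterior(x_train, y_train, candidates)
    return candidates[np.argsort(-(mean + beta * np.sqrt(var)))]
```

With `beta = 0` the top-ranked spot is simply the one with the highest predicted value; raising `beta` shifts the ranking toward poorly sampled, high-uncertainty locations, mirroring the balance Flaspohler describes.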
PLUMES uses a novel objective function —&nbsp;commonly used in machine learning to maximize a reward — to decide whether the robot should exploit past knowledge or explore new areas.</p> <p><strong>“Hallucinating” paths</strong></p> <p>The decision of where to collect the next sample relies on the system’s ability to “hallucinate” all possible future actions from its current location. To do so, it leverages a modified version of Monte Carlo Tree Search (MCTS), a path-planning technique popularized for powering artificial-intelligence systems that master complex games, such as Go and chess.</p> <p>MCTS uses a decision tree — a map of connected nodes and lines — to simulate a path, or sequence of moves, needed to reach a final winning action. But in games, the space for possible paths is finite. In unknown environments, with real-time changing dynamics, the space is effectively infinite, making planning extremely difficult. The researchers designed “continuous-observation MCTS,” which leverages the Gaussian process and the novel objective function to search over this unwieldy space of possible real paths.</p> <p>The root of this MCTS decision tree starts with a “belief” node, which is the next immediate step the robot can take. This node contains the entire history of the robot’s actions and observations up until that point. Then, the system expands the tree from the root into new lines and nodes, looking over several steps of future actions that lead to explored and unexplored areas.</p> <p>Then, the system simulates what would happen if it took a sample from each of those newly generated nodes, based on some patterns it has learned from previous observations. Depending on the value of the final simulated node, the entire path receives a reward score, with higher values equaling more promising actions. Reward scores from all paths are rolled back to the root node. The robot selects the highest-scoring path, takes a step, and collects a real sample.
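The select-expand-simulate-backpropagate loop just described can be illustrated with a generic, discrete, fixed-horizon UCT search. This toy sketch omits the belief nodes and continuous observations of the actual continuous-observation MCTS; the 1-D environment, reward function, and constants are assumptions made for illustration:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node, c=1.4):
    # Balance high average reward against rarely visited children.
    return max(node.children, key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, actions, step, reward, iters=800, depth=4, seed=1):
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iters):
        node, d = root, 0
        # Selection: descend through fully expanded nodes.
        while node.children and len(node.children) == len(actions) and d < depth:
            node, d = uct_select(node), d + 1
        # Expansion: add one action not yet tried at this node.
        if d < depth:
            tried = {ch.action for ch in node.children}
            a = rng.choice([x for x in actions if x not in tried])
            child = Node(step(node.state, a), node, a)
            node.children.append(child)
            node, d = child, d + 1
        # Simulation ("hallucination"): random rollout to the horizon.
        s = node.state
        for _ in range(depth - d):
            s = step(s, rng.choice(actions))
        r = reward(s)
        # Backpropagation: credit the reward along the visited path.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # Return the most-visited first action.
    return max(root.children, key=lambda ch: ch.visits).action
```

For a robot at `s = 0` choosing moves of `-1` or `+1` with reward peaking at `s = 4`, the search should return `+1`, the first step of the highest-reward sequence.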
Then, it uses the real data to update its Gaussian process model and repeats the “hallucination” process.</p> <p>“As long as the system continues to hallucinate that there may be a higher value in unseen parts of the world, it must keep exploring,” Flaspohler says. “When it finally converges on a spot it estimates to be the maximum, because it can’t hallucinate a higher value along the path, it then stops exploring.”</p> <p>Now, the researchers are collaborating with scientists at WHOI to use PLUMES-powered robots to localize chemical plumes at volcanic sites and study methane releases in melting coastal estuaries in the Arctic. Scientists are interested in the source of chemical gases released into the atmosphere, but these test sites can span hundreds of square miles.</p> <p>“They can [use PLUMES to] spend less time exploring that huge area and really concentrate on collecting scientifically valuable samples,” Preston says.</p> Even in unexplored waters, an MIT-developed robotic system can efficiently sniff out valuable, hard-to-find spots to collect samples from. When implemented in autonomous boats deployed off the coast of Barbados (pictured), the system quickly found the most exposed coral head — meaning it was located in the shallowest spot — which is useful for studying how sun exposure impacts coral organisms.Image courtesy of the researchersResearch, Computer science and technology, Algorithms, Computer Science and Artificial Intelligence Laboratory (CSAIL), Autonomous vehicles, Machine learning, Artificial intelligence, Environment, Robots, Robotics, Oceanography and ocean engineering, Aeronautical and astronautical engineering, School of Engineering Symposium explores challenges of adapting to climate change “Uncertainty is a reason to act, not to wait,” panelists agree. Thu, 31 Oct 2019 12:20:51 -0400 David L.
Chandler | MIT News Office <p>In the second of six symposia on climate change to be held this academic year, seven experts from around the country tackled the topic of “challenges of climate policy.” The Oct. 29 event included three panel discussions held at MIT’s Wong Auditorium.</p> <p>Moderated by Richard Schmalensee, the Howard W. Johnson Professor of Management and professor of economics emeritus at MIT’s Sloan School of Management, the panelists discussed the social impacts caused by climate change; the kinds of adaptations that might help people cope with these impacts and limit their economic and physical harm; and possible solutions to the political, economic, and social factors affecting the world’s responses to this pressing issue.</p> <p>Global climate change will have “huge impacts that will affect every sector” of society, and “its costs will be extremely high,” said Susanne Moser, a specialist in adaptation to climate change and director of Susanne Moser Research and Consulting. Although there are still uncertainties about the rate and extent of climate change, she said, “uncertainty is a reason to act, not to wait.”</p> <p>Compared to the responses that experts say are required to forestall the worst effects of climate change, efforts around the world still fall far short, Moser said: “Most responses are just reactive. There’s no unifying vision, there’s no agreement on social equity priorities, and there is a surprising lack of urgency.”</p> <p>Even most universities, she noted, do not yet have clear and easy ways to find information on their efforts toward adaptation to climate change, or programs for students to specialize in that field. “You can barely find it on their websites,” she said.</p> <p>Some people fear that an emphasis on adaptation could make people complacent because they see less need to reduce greenhouse gases if plans are underway to adapt to a changed climate. But Moser disputed that claim.
“We’ve studied that” and found the reverse to be true, she said. When people see just how difficult and expensive the processes of adaptation are, compared to measures to reduce emissions, “they realize reduction [of emissions] is a bargain,” and their motivation to deal with that issue actually increases.</p> <p>Andrew Steer, president and CEO of the World Resources Institute, urged listeners: “Let’s get serious about climate change adaptation, as if our lives depended on it. Which it may.” He said people need to start looking seriously at ways to respond to five key areas of global change: higher temperatures, rising seas, stronger storms, shifting rainfall patterns, and acidification of the oceans.</p> <p>The impacts are likely to be extreme, he said. Just adapting to the changes directly affecting coastal cities could cost upward of a trillion dollars a year, he said. And yet, when governments and agencies allocate resources to dealing with climate change, so far only about 10 percent of that money goes toward adaptation, versus 90 percent toward mitigation, or efforts to slow or reverse the release of climate-altering emissions. Both are crucial, he said, but adaptation should not be ignored since even with aggressive mitigation policies, a significant amount of climate change is already unavoidable.</p> <p>“Adaptation is a moral imperative,” he said, and also “an ecological imperative, and a massive economic imperative.”</p> <p>Adaptation need not be as expensive as people think, Steer added. Many of the measures that are needed to adapt to a warming world also have other benefits, he pointed out. As an example, drip irrigation was invented as a way to deal with drought conditions, but it is also an inherently more efficient system, greatly reducing the amount of water needed for crops and the need for power to operate pumps. That greater efficiency for farmers can lower their costs, and thus make food less expensive.
“Done right, adaptation can have all kinds of dynamic benefits,” he said.</p> <p>Much more research is needed to quantify the expected effects of a warming planet, said Max Auffhammer, a professor of international sustainable development at the University of California at Berkeley. To study and quantify the economic harm done by 1 ton of carbon dioxide (roughly the amount emitted by driving a car from Cambridge to Berkeley, he said) is a very difficult task. The best existing estimates were made back in the 1990s, and much has been learned since then. Models need to encompass global coverage, establish causal connections, and anticipate significant technological changes. Imagine, he said, trying to predict in the late 1800s the energy that would be used for cooling houses today.</p> <p>Whereas some might say “we got this” in terms of the scientific answers about the effects of climate change, he said, “We don’t got this. There’s a lot of work to be done.”</p> <p>Kathleen Hicks, director of the International Security Program at the Center for Strategic and International Studies, said that the U.S. military forces, unlike many politicians, understand the problem of climate change and take it seriously. Partly that’s because it’s in their nature to always be assessing potential risks and planning how to respond to them, and they are highly trained in how to do so. In addition, they are already feeling the effects directly, with even inland bases such as one in Nebraska affected by severe flooding, likely exacerbated by climate change.</p> <p>“Climate is a national security concern that is not debated in the security community,” she said.</p> <p>But public opinion has also come a long way over the last several years, said Steven Ansolabehere, a professor of government at Harvard University. 
“The American public accepts that climate change is coming and is a concern,” he said, but “a majority also feel it’s distant,” with consequences beyond their lifetimes, whereas scientists studying the problem say its damaging effects are already being seen clearly in many parts of the world today. This discrepancy “is the heart of the problem, and it has implications for any policy we take,” he said.</p> <p>But Ansolabehere said that there are already interesting differences in the responses of younger people compared to their elders. The difference in the degree of urgency seen in the issue of climate change between younger (“millennial”) Republicans and “boomer” generation Republicans is just as big as the difference between Democrats and Republicans overall, he said. And, he said, linking policies to tackle climate change to other benefits such as clean air and clean water — for example through the closing of coal-fired power plants — is a more effective strategy for gaining support than just emphasizing the climate benefits.</p> <p>Henry Jacoby, the William F. Pounds Professor of Management Emeritus at MIT’s Sloan School of Management, said that the issue of climate change reflects the well-known “commons problem,” where a few bad actors can undermine a large group’s mutual dependence on common resources. He compared it to a shared refrigerator in a dorm, where there is little control over someone making off with someone else’s stored drink. Similarly, nations will almost always end up acting in their own self-interest rather than for a more abstract common good.</p> <p>The way nations deal with that is through international agreements and treaties, such as the Paris Agreement on climate change. But that agreement is entirely voluntary, consisting of individual national pledges without any mechanism for enforcement.
Just as with the dorm fridge, there’s no police officer to call about an infraction.</p> <p>By 2030, projections show that about three-quarters of all greenhouse emissions will be coming from developing countries — the places that can least afford to spend money to address the problem. “There’s going to have to be some financial transfer” from the wealthier countries to help those developing countries reduce their emissions, Jacoby said.</p> <p>Leah Stokes PhD ’15, an assistant professor of political science at the University of California at Santa Barbara, said that three decades of climate denial efforts by major fossil fuel companies have been “extremely influential” and will require significant effort to reverse. But she also noted several reasons to expect that these attitudes are changing.</p> <p>For one, the raging wildfires in California and other places provide a vivid reminder that a significant increase in such fires is one of the expected effects of a warmer planet with more frequent and deeper droughts. In addition, the UN’s Intergovernmental Panel on Climate Change’s most recent report set a target of 2030 by which the world must significantly reduce emissions. That short timeline means that “it’s suddenly not about the distant future,” but a time when most people still expect to be alive, she said, making the problem seem much more urgent. And increasing public actions, such as the recent Climate Strike initiated by Swedish teenager Greta Thunberg, have also raised public awareness of the issue’s seriousness.</p> <p>Stokes pointed to significant areas of progress, such as the rapid growth of solar and wind power and electric vehicles, and state and local regulations that have continued to push for progress even as federal regulations have been cut back. But to continue this progress will require much more. “We must have solutions at the scale of the crisis,” she said.
One approach that could help is to emphasize the potential for new, well-paying jobs in the renewable energy field. “It can’t just be about sticks,” she said, adding that there need to be tangible carrots as well.</p> Susanne Moser, director of Susanne Moser Research and Consulting, addresses MIT’s second Symposium on Climate Change. In the background are Andrew Steer, president and CEO of the World Resources Institute, and Richard Schmalensee, the Howard W. Johnson Professor of Management and Professor of Economics Emeritus at the MIT Sloan School of Management, who moderated the panel discussions.Image: Bryce VickmarkSpecial events and guest speakers, ESI, MIT Energy Initiative, Climate, Climate change, Global Warming, Policy, Administration, Sustainability, Faculty, Environment J-WAFS zeroes in on food security as agricultural impacts of the climate crisis become more apparent The Abdul Latif Jameel Water and Food Systems Lab presents a new report on climate, agriculture, water, and food security — with plans for more research. Wed, 30 Oct 2019 15:50:01 -0400 Andi Sutton | Abdul Latif Jameel Water and Food Systems Lab <p>Early this August, the UN Intergovernmental Panel on Climate Change issued yet another in a series of grave and disquieting reports outlining the extreme challenges placed on the Earth’s systems by the climate crisis. Most IPCC reports and accompanying media coverage tend to emphasize greenhouse gas (GHG) emissions from energy and transportation sectors, along with the weather and sea-level impacts of climate change and their direct impact on vulnerable human populations. 
However, this particular report, the "<a href="">Special Report on Climate Change and Land</a>," presents a sobering set of data and analyses addressing the substantial contributions of agriculture to climate change and the ways the climate crisis is projected to jeopardize global food security if urgent action is not taken at the individual, institutional, industry, and governmental levels.</p> <p>There is ever-increasing public awareness of climate change’s effects on the frequency and intensity of extreme weather, threats to coastal cities, and the rapid decline in the biodiversity of the Earth’s ecosystems. However, the impact of climate change on land and food production — and the impact of our food systems on climate change — is just beginning to enter the wider public discourse. Food systems are responsible for up to 30 percent of global GHG emissions, with agricultural activities accounting for up to 86 percent of total food-system emissions. And agriculture is a sector that is put at significant risk by the direct and indirect effects of the Earth’s rising temperatures. In order to adapt to future climate uncertainty and to minimize agricultural greenhouse gas emissions, strategies addressing the sustainability and adaptive capacity of food systems must be developed and rapidly implemented.</p> <p>With so much at stake, targeted research that reaches beyond disciplinary and institutional boundaries is needed. Since its 2014 launch at MIT, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has promoted research and innovation across diverse disciplines that will help ensure the resilience of the world’s water and food systems even as they are increasingly pressured by the effects of climate change. Its newly released report, "<a href="" target="_blank">Climate Change, Agriculture, Water, and Food Security: What We Know and Don’t Know</a>," is part of this effort. 
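As a quick check on the scale of the problem, the two percentages reported above can be multiplied together; this is only a rough illustration using the article's upper-bound figures, not a calculation from the J-WAFS report itself:

```python
# Upper-bound figures cited in the article (illustrative arithmetic only).
food_system_share = 0.30   # food systems: up to 30 percent of global GHG emissions
ag_share_of_food = 0.86    # agriculture: up to 86 percent of food-system emissions

# Implied upper bound on agriculture's share of total global emissions:
ag_global_share = food_system_share * ag_share_of_food
print(f"{ag_global_share:.0%}")  # 26%
```

That is, agricultural activities alone could account for roughly a quarter of global greenhouse gas emissions under these upper bounds.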
The report collects the central findings of an expert workshop conducted by J-WAFS in May 2018. The workshop gathered 46 experts in agriculture, climate, engineering, and the physical and natural sciences from around the world — several of whom were also involved in writing the August 2019 IPCC report — to discuss current understanding of the complex relationship between climate change and agriculture. This report, based on the workshop deliberations, initiates a longer study that will directly engage stakeholders to address how research can be best targeted to the needs of policymakers, funders, and other decision-makers and stakeholders.</p> <p>Central to the conclusions of the 2018 workshop was widespread agreement among participants on the need for convergence research that addresses the climate crisis in food systems. Convergence research is built around deep integration across disciplines in order to address complex problems focusing on societal need. By deploying transdisciplinary teams with expertise in plant, soil, and climate science, agricultural technologies, agribusiness, economics, behavior change and communication, marketing, nutrition, and public policy, convergence research promotes innovative approaches to formulating and evaluating adaptation and mitigation strategies for future food security.</p> <p>A study that J-WAFS is now launching will take this approach. As part of the new study, J-WAFS is partnering with three internationally renowned institutions with complementary expertise in agriculture and food systems. Titled “Climate Change and Global Food Systems: Supporting Adaptation and Mitigation Strategies with Research,” the collaborative project will leverage the myriad disciplines and specialties of a cross-institutional group of researchers, along with stakeholders and decision-makers, in order to develop a prioritized, actionable, solutions-oriented research agenda. 
The project’s goal is to determine which research questions must be answered, and which innovations must be prioritized, in order to ensure that global food security can be met even while the climate crisis wreaks havoc on global food systems. The project will help develop stronger connections and collaborative partnerships across diverse research communities (in particular, MIT and the partner universities) and with the stakeholders and decision-makers who fund research, develop policy, and implement programs to support agriculture and food security.</p> <p>The three universities joining MIT in this effort are: Wageningen University in the Netherlands — an institution at the forefront of agriculture and food systems research; Tufts University — an international leader in interdisciplinary food and nutrition research, especially through its Friedman School of Nutrition Science and Policy; and the University of California at Davis, whose College of Agricultural and Environmental Sciences ranks No. 1 in the United States for agriculture, plant sciences, animal science, forestry, and agricultural economics. Says Ermias Kebreab, associate dean for global engagement in the College of Agricultural and Environmental Sciences at UC Davis, “the project will address several grand challenges that align very well with the mission and goals of the UC Davis College of Agricultural and Environmental Sciences. Collaborating with MIT and other project partners presents exciting opportunities to extend the reach and impact of the UC Davis research.”</p> <p>Given the potentially dire impacts of the climate crisis on our global food systems, opportunities for transformative change must be found. But there currently exist significant knowledge gaps concerning the best practices, technologies, policies, and development approaches for achieving food security with win-win solutions at the nexus of climate change and food systems. 
J-WAFS’ workshop report emphasized that more research is required to better characterize specific challenges and to develop, evaluate, and implement effective strategies. Specific areas where research presents significant opportunities include understanding and improving soil quality and fertility; the development of technologies such as advanced biotechnology, carbon sequestration, and geospatial tools; fundamental research questions about crop response to environmental stresses, such as high temperatures and drought; improvements to crop and climate models; approaches to managing risk in the face of uncertainty; and the development of strategies to effect behavioral change, particularly around food choices.</p> <p>It may yet be possible to sustainably produce enough nutritious food to feed the world while at the same time reversing the current trends in its production that damage the environment. As stated by John H. Lienhard V, J-WAFS director and MIT professor, “the next green revolution will be delivered using new farming practices, emerging scientific discoveries, technological breakthroughs, and insights from the social sciences, all combined to provide effective policies, equitable social programs, and much-needed changes in consumer behavior.”</p> If the world is to be free of hunger and malnutrition in accordance with the 2030 UN Sustainable Development Goals, actions to strengthen the resilience and adaptive capacity of food systems must be rapidly implemented in order to adapt to climate change. Research launched by J-WAFS seeks to map out the most strategic ways that research can be used to ensure a global transition toward food-system sustainability.Climate change, Agriculture, Food, Sustainability, J-WAFS, Research, Water Collision course: A geological mystery in the Himalayas MIT geologists use paleomagnetism to determine the chain of events that resulted in the Himalayan mountains, with the support of MISTI-India. 
Mon, 28 Oct 2019 11:40:01 -0400 Fernanda Ferreira | School of Science <p>According to Craig Martin, deciphering Earth’s geologic past is like being an ant climbing over a car crash. “You’ve got to work out how the car crash happened, how fast the cars were going, at what angle they impacted,” explains Martin, a graduate student at MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “You’re just a tiny ant wandering over this massive chaos,” he adds.</p> <p>The crash site Martin is investigating is the Himalayas, a 1,400-mile mountain range that rose when the Indian and Eurasian tectonic plates scrunched together. “The mainstream idea is: There was Eurasia; there was India; and they collided 50 million years ago,” says Oliver Jagoutz, an associate professor in EAPS and Martin’s advisor. “We think it was much more complicated than that, because it’s always more complicated.”</p> <p><strong>Detective work at 11,000 feet</strong></p> <p>Eighty million years ago, India and Eurasia were 4,000 miles apart, separated by an ancient body of water that geologists call the Neotethys Ocean, but Jagoutz believes there was more than just seawater between the two. He’s not alone. Many geologists agree on the existence of an arc of volcanic islands that formed on the boundary of a smaller tectonic plate, similar to the Mariana Islands in the Pacific Ocean. However, there is debate on whether these islands first collided with the Eurasian plate to the north or the Indian plate to the south. Jagoutz’s hypothesis is the latter. “If I’m right, the arc sits near the equator. If the others are right, the fragments should be 20 degrees north,” he explains. “That’s how simple it is.” But it can mean a world of difference in terms of explaining the paleoclimate — not just in the Himalayas, but globally as well.</p> <p>To test this hypothesis, Jagoutz and Martin turned to paleomagnetism. 
Some rock minerals, such as magnetite, contain iron and act as tiny bar magnets, orienting their magnetization along Earth’s magnetic field. At the equator, magnetite in newly formed rocks will be magnetized parallel to the ground, but the farther north or south it is, the more inclined the magnetization will be. “We can measure, essentially, the latitude that a rock was formed at,” explains Martin.</p> <p>If you were to take a slice of the Kohistan-Ladakh region of the Himalayas in northern India, you would see a succession of rock layers representing the India plate and the Eurasia plate, with the volcanic island arc sandwiched in between. “That’s why Ladakh is a really cool place to go to, because you can walk through this whole collision,” says Martin.</p> <p>In summer 2018, Martin and Jade Fischer, a junior double-majoring in EAPS and physics, spent six weeks in Ladakh collecting samples from the volcanic rocks. Back at MIT, Martin measured the paleomagnetic signature of these rocks, and his results placed the Kohistan-Ladakh arc right at the equator, in agreement with Jagoutz’s theory.</p> <p><strong>A magnetic collaboration</strong></p> <p>Megan Guenther, a junior in EAPS, first heard about the opportunity to do field work in Ladakh when Martin gave a presentation about his research in her structural geology class last fall. “At the end, he told us he was probably going again and to let him know if we were interested,” Guenther explains. “I emailed him an hour later.”</p> <p>Guenther had been looking for a chance to gain more field experience. She works on the compositions of lunar glasses with Tim Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, where the research takes place entirely in the lab. 
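The latitude measurement Martin describes follows the standard geocentric axial dipole relation of paleomagnetism, tan(I) = 2 tan(latitude), where I is the magnetic inclination. A minimal sketch of that relation (an illustration of the textbook formula, not the team's actual analysis code):

```python
import math

def paleolatitude(inclination_deg):
    """Estimate the latitude at which a rock formed from the inclination
    of its remanent magnetization, using the geocentric axial dipole
    relation tan(I) = 2 * tan(latitude)."""
    inc = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inc) / 2))

# Magnetization parallel to the ground (inclination 0) implies formation
# at the equator; steeper inclinations imply higher paleolatitudes.
print(paleolatitude(0.0))              # 0.0
print(round(paleolatitude(49.1), 1))   # roughly 30.0
```

A rock whose magnetization lies parallel to the ground thus pins its formation site near the equator, which is the signal that placed the Kohistan-Ladakh arc there.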
“You can’t really do field work on the moon,” she jokes.</p> <p>This past summer, Guenther and Martin spent six weeks in Ladakh collecting rock samples from the Eurasian plate to confirm that it had not also been farther south, mapping the region, and doing structural analyses. Both Guenther and Martin were supported by MIT International Science and Technology Initiatives (MISTI) and the MISTI Global Seed Fund.</p> <p>MISTI and Jagoutz go back a long time, with MISTI funding class excursions, department field trips, and a number of Jagoutz’s students. “MISTI-India has been good to us,” he says. “They financed the workshop where we came up with the whole concept of this work.” And, says Jagoutz, the students really love the experience. “They get influenced by it, and a lot of people chose their career paths after it,” says Jagoutz. “Ultimately, that’s what MISTI is all about: an experience that tells students they want to get into science.”</p> <p>For Guenther, the trip was an essential part of her education as a geologist. “I feel much more confident as a field geologist, which is exactly what I wanted,” she says. It also impressed on her the titanic scale of geology. “The scale of everything is so crazy,” says Guenther. “You’re already at 11,000 feet, minimum, the whole time, and then these huge mountains tower above that.”</p> <p>By piecing together the story of the collision that resulted in the Himalayas, Jagoutz and his team also shed light on its global implications. Large-scale collisions, Jagoutz explains, don’t just have local effects, and in the case of the Himalayas they can also explain some of Earth’s past glaciation events. “That’s the good thing about geology: the dimensions,” says Jagoutz. “You look at a magnetite crystal in a rock, and it tells you how global cooling works.”</p> As part of MISTI-India, Megan Guenther, a junior in EAPS, records field notes about the landscape of the Kohistan-Ladakh region of the Himalayas in northern India. 
Photo: Craig MartinEAPS, MISTI, Environment, Geology, Planetary science, Plate tectonics, Climate, Research, Students, School of Science, School of Humanities Arts and Social Sciences MIT engineers develop a new way to remove carbon dioxide from air The process could work on the gas at any concentration, from power plant emissions to open air. Thu, 24 Oct 2019 23:59:59 -0400 David Chandler | MIT News Office <p>A new way of removing carbon dioxide from a stream of air could provide a significant tool in the battle against climate change. The new system can work on the gas at virtually any concentration level, even down to the roughly 400 parts per million currently found in the atmosphere.</p> <p>Most methods of removing carbon dioxide from a stream of gas require higher concentrations, such as those found in the flue emissions from fossil fuel-based power plants. A few variations have been developed that can work with the low concentrations found in air, but the new method is significantly less energy-intensive and less expensive, the researchers say.</p> <p>The technique, based on passing air through a stack of charged electrochemical plates, is described in a new paper in the journal <em>Energy and Environmental Science</em>, by MIT postdoc Sahag Voskian, who developed the work during his PhD, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering.</p> <div class="cms-placeholder-content-video"></div> <p>The device is essentially a large, specialized battery that absorbs carbon dioxide from the air (or other gas stream) passing over its electrodes as it is being charged up, and then releases the gas as it is being discharged. 
In operation, the device would simply alternate between charging and discharging, with fresh air or feed gas being blown through the system during the charging cycle, and then the pure, concentrated carbon dioxide being blown out during the discharging.</p> <p>As the battery charges, an electrochemical reaction takes place at the surface of each of a stack of electrodes. These are coated with a compound called polyanthraquinone, which is composited with carbon nanotubes. The electrodes have a natural affinity for carbon dioxide and readily react with its molecules in the airstream or feed gas, even when it is present at very low concentrations. The reverse reaction takes place when the battery is discharged — during which the device can provide part of the power needed for the whole system — and in the process ejects a stream of pure carbon dioxide. The whole system operates at room temperature and normal air pressure.</p> <p>“The greatest advantage of this technology over most other carbon capture or carbon absorbing technologies is the binary nature of the adsorbent’s affinity to carbon dioxide,” explains Voskian. In other words, the electrode material, by its nature, “has either a high affinity or no affinity whatsoever,” depending on the battery’s state of charging or discharging. Other reactions used for carbon capture require intermediate chemical processing steps or the input of significant energy such as heat, or pressure differences.</p> <p>“This binary affinity allows capture of carbon dioxide from any concentration, including 400 parts per million, and allows its release into any carrier stream, including 100 percent CO<sub>2</sub>,” Voskian says. That is, as any gas flows through the stack of these flat electrochemical cells, during the release step the captured carbon dioxide will be carried along with it. 
For example, if the desired end-product is pure carbon dioxide to be used in the carbonation of beverages, then a stream of the pure gas can be blown through the plates. The captured gas is then released from the plates and joins the stream.</p> <p>In some soft-drink bottling plants, fossil fuel is burned to generate the carbon dioxide needed to give the drinks their fizz. Similarly, some farmers burn natural gas to produce carbon dioxide to feed their plants in greenhouses. The new system could eliminate that need for fossil fuels in these applications, and in the process actually be taking the greenhouse gas right out of the air, Voskian says. Alternatively, the pure carbon dioxide stream could be compressed and injected underground for long-term disposal, or even made into fuel through a series of chemical and electrochemical processes.</p> <p>The process this system uses for capturing and releasing carbon dioxide “is revolutionary,” he says. “All of this is at ambient conditions — there’s no need for thermal, pressure, or chemical input. It’s just these very thin sheets, with both surfaces active, that can be stacked in a box and connected to a source of electricity.”</p> <p>“In my laboratories, we have been striving to develop new technologies to tackle a range of environmental issues that avoid the need for thermal energy sources, changes in system pressure, or addition of chemicals to complete the separation and release cycles,” Hatton says. 
“This carbon dioxide capture technology is a clear demonstration of the power of electrochemical approaches that require only small swings in voltage to drive the separations.”​</p> <p>In a working plant — for example, in a power plant where exhaust gas is being produced continuously — two sets of such stacks of the electrochemical cells could be set up side by side to operate in parallel, with flue gas being directed first at one set for carbon capture, then diverted to the second set while the first set goes into its discharge cycle. By alternating back and forth, the system could always be both capturing and discharging the gas. In the lab, the team has proven the system can withstand at least 7,000 charging-discharging cycles, with a 30 percent loss in efficiency over that time. The researchers estimate that they can readily improve that to 20,000 to 50,000 cycles.</p> <p>The electrodes themselves can be manufactured by standard chemical processing methods. While today this is done in a laboratory setting, the process can be adapted so that ultimately the electrodes could be made in large quantities through a roll-to-roll manufacturing process similar to a newspaper printing press, Voskian says. “We have developed very cost-effective techniques,” he says, estimating that they could be produced for something like tens of dollars per square meter of electrode.</p> <p>Compared to other existing carbon capture technologies, this system is quite energy efficient, consistently using about one gigajoule of energy per ton of carbon dioxide captured. Other existing methods have energy consumption that varies between 1 and 10 gigajoules per ton, depending on the inlet carbon dioxide concentration, Voskian says.</p> <p>The researchers have set up a company called Verdox to commercialize the process, and hope to develop a pilot-scale plant within the next few years, he says. 
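The reported figures lend themselves to a back-of-envelope sketch; the electricity price below is an assumption chosen for illustration, not a number from the article:

```python
# Back-of-envelope numbers based on figures reported in the article.
energy_per_ton_gj = 1.0       # reported: about 1 GJ per ton of CO2 captured
kwh_per_gj = 1000.0 / 3.6     # unit conversion: 1 GJ is roughly 277.8 kWh
price_per_kwh = 0.10          # assumed electricity price, $0.10/kWh (illustrative)

cost_per_ton = energy_per_ton_gj * kwh_per_gj * price_per_kwh
print(f"~${cost_per_ton:.0f} of electricity per ton of CO2 captured")  # ~$28

# Average per-cycle capacity fade implied by a 30 percent loss over 7,000 cycles,
# assuming (for illustration) a uniform geometric decay:
fade_per_cycle = 1 - (1 - 0.30) ** (1 / 7000)
print(f"~{fade_per_cycle * 100:.4f}% loss per cycle")  # ~0.0051%
```

Under these assumptions the electricity cost of capture is on the order of tens of dollars per ton, and the per-cycle degradation is tiny, which is consistent with the team's expectation that cycle life can be extended substantially.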
And the system is very easy to scale up, he says: “If you want more capacity, you just need to make more electrodes.”</p> <p>This work was supported by an MIT Energy Initiative Seed Fund grant and by Eni S.p.A.</p> In this diagram of the new system, air entering from top right passes to one of two chambers (the gray rectangular structures) containing battery electrodes that attract the carbon dioxide. Then the airflow is switched to the other chamber, while the accumulated carbon dioxide in the first chamber is flushed into a separate storage tank (at right). These alternating flows allow for continuous operation of the two-step process.Image courtesy of the researchersResearch, Chemical engineering, School of Engineering, Emissions, Carbon nanotubes, Nanoscience and nanotechnology, Climate change, Carbon dioxide, Sustainability, Carbon, Greenhouse gases Enhanced nuclear energy online class aims to inform and inspire Revamped version of MITx MOOC includes new modules on nuclear security, nuclear proliferation, and quantum engineering. Thu, 24 Oct 2019 14:30:01 -0400 Leda Zimmerman | Department of Nuclear Science and Engineering <p>More than 3,000 users hailing from 137 countries signed up for the MIT Department of Nuclear Science and Engineering's first massive open online course (MOOC), Nuclear Energy: Science, Systems and Society, which debuted last year on <em>MITx</em>. Now, after a roaring success, the course will be <a href="" target="_blank">offered again</a> in spring 2020, with key upgrades.</p> <p>“We had hoped there was an appetite in the general public for information about nuclear energy and technology,” says Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering and one of the course instructors. “We were fully confirmed by this first offering.”</p> <p>Unfolding over nine weeks, the MOOC provides a primer on nuclear energy and radiation and the wide-ranging applications of nuclear technology in medicine, security, energy, and research. 
It aims not just to educate, but to capture the interest of a distance-learning audience not necessarily well acquainted with physics and mathematics.</p> <p>“The MOOC builds on a tradition in our department of a first-year seminar that exposes students to a broad overview of the field,” says another instructor, Anne White, professor and head, Department of Nuclear Science and Engineering. “We set ourselves the challenge of translating the experience of being MIT first-years, who jump into something they know nothing about, and come out with excitement for the foundations of the field and its frontiers.”</p> <p>Before setting out to tackle this problem, the creative team — which also includes Michael Short, the Class of ’42 Career Development Assistant Professor of Nuclear Science and Engineering, and John Parsons, senior lecturer in the Finance Group at MIT Sloan School of Management — carefully reviewed existing online nuclear science offerings.</p> <p>“When we looked at MOOCs out in the world, a lot of them are wonderful, but highly technical,” says White. “We had a different vision of what MIT could accomplish, and that was reaching a big audience of virtual first-years.”</p> <p>For last year’s launch, the MOOC was structured around three modules. The first, taught by Short, introduced nuclear science at the atomic level. “We focused on the basics — the nucleus and particles, and the technologies that naturally emerge out of the study of the discipline,” says Buongiorno. 
This included a close look at ionizing radiation and how to measure it, with an invitation for online users to build a simple Geiger counter to measure radiation in their own backyards.</p> <p>The second module, led by Buongiorno and Parsons, delved into how nuclear reactors function, what makes nuclear energy attractive, issues of safety and waste, and questions of nuclear power plant economics and policy.</p> <p>The third module, taught by White, discussed magnetic fusion energy research, with a look at pioneering work at MIT and elsewhere dealing with high-magnetic-field fusion. “We lay the foundation first for fission power, and see a lot of enthusiasm about decarbonizing the grid in the short term,” says White. “We then present fusion power and MIT’s SPARC experiment, which really captures students’ imagination with its potential as a future energy source.”</p> <p>Translating key elements of nuclear science and technology syllabi from the MIT classroom setting to prerecorded video segments, slides, and online assessments for the MOOC proved a significant effort for instructors.</p> <p>“Much of the material was drawn from classes we collectively taught, and it took nearly a year to develop this curriculum and make sure it was the right content, at the right level,” says Buongiorno. “It was a huge challenge to make this intelligible and attractive to a much broader audience than usual, people without a science background, or who might not be on the same page around energy.” It was, he adds, “more difficult than a typical class I teach.”</p> <p>The MOOC included opportunities for students to interact with each other and the instructors at key junctures, through the means of online write-in forums. Buongiorno and his colleagues had hoped to duplicate online the vibrant interactions of residential classrooms, and even offer office hours, but it proved infeasible. 
“Because of the geographic distribution of participants, it made no sense; half of the students would be excluded because the event would be taking place in the middle of the night.”</p> <p>The team, not content to rest on its laurels, is adding elements for the MOOC’s second run: R. Scott Kemp, the MIT Class of ’43 Associate Professor of Nuclear Science and Engineering, will teach a new module on nuclear security and nuclear proliferation, and Paola Cappellaro, the Esther and Harold E. Edgerton Associate Professor of Nuclear Science and Engineering, will offer a module on quantum engineering.</p> <p>In addition to this expansion, White envisions an eventual residential version of the course, where first-years could take the MOOC online and attend seminars on campus to receive MIT credit. “Our goal as a department is not just educating majors in nuclear science and engineering, but creating classes appealing to students outside the major,” she says. “It’s in the pipeline.”</p> <p>Given rising concern about climate change, and the emergence of new technologies in fission and fusion, the timing of this MOOC seems propitious to its founding team.</p> <p>“We’d like to have an impact with the course on the greater debate about the use of nuclear energy as part of the solution for climate change,” says Buongiorno. 
“The public in this debate needs science-based input and facts about different technologies, which is one of our major objectives.” Adds White, “We believe the course will appeal to folks working in government, policy, industry, as well as to those who are simply curious about what’s happening at the frontiers of our field.”</p> “We’d like to have an impact with the course on the greater debate about the use of nuclear energy as part of the solution for climate change,” says Professor Jacopo Buongiorno.Nuclear science and engineering, School of Engineering, Sloan School of Management, Classes and programs, Education, teaching, academics, Design, Energy, Environment, Nuclear power and reactors, EdX, Physics, Fusion, Massive open online courses (MOOCs), Climate change, MITx Antarctic ice cliffs may not contribute to sea-level rise as much as predicted Study finds even the tallest ice cliffs should support their own weight rather than collapsing catastrophically. Mon, 21 Oct 2019 00:00:00 -0400 Jennifer Chu | MIT News Office <p>Antarctica’s ice sheet spans close to twice the area of the contiguous United States, and its land boundary is buttressed by massive, floating ice shelves extending hundreds of miles out over the frigid waters of the Southern Ocean. When these ice shelves collapse into the ocean, they expose towering cliffs of ice along Antarctica’s edge.</p> <p>Scientists have assumed that ice cliffs taller than 90 meters (about the height of the Statue of Liberty) would rapidly collapse under their own weight, contributing to more than 6 feet of sea-level rise by the end of the century — enough to completely flood Boston and other coastal cities. 
But now MIT researchers have found that this particular prediction may be overestimated.</p> <p>In a paper published today in <em>Geophysical Research Letters</em>, the team reports that in order for a 90-meter ice cliff to collapse entirely, the ice shelves supporting the cliff would have to break apart extremely quickly, within a matter of hours — a rate of ice loss that has not been observed in the modern record.</p> <p>“Ice shelves are about a kilometer thick, and some are the size of Texas,” says MIT graduate student Fiona Clerc. “To get into catastrophic failures of really tall ice cliffs, you would have to remove these ice shelves within hours, which seems unlikely no matter what the climate-change scenario.”</p> <p>If a supporting ice shelf were to melt away over a longer period of days or weeks, rather than hours, the researchers found that the remaining ice cliff wouldn’t suddenly crack and collapse under its own weight, but instead would slowly flow out, like a mountain of cold honey that’s been released from a dam.</p> <p>“The current worst-case scenario of sea-level rise from Antarctica is based on the idea that cliffs higher than 90 meters would fail catastrophically,” says Brent Minchew, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “We’re saying that scenario, based on cliff failure, is probably not going to play out. That’s something of a silver lining. That said, we have to be careful about breathing a sigh of relief. There are plenty of other ways to get rapid sea-level rise.”</p> <p>Clerc is the lead author of the new paper, along with Minchew and Mark Behn of Boston College.</p> <p><strong>Silly putty-like behavior</strong></p> <p>In a warming climate, as Antarctica’s ice shelves collapse into the ocean, they expose towering cliffs of grounded ice, or ice over land. 
Without the buttressing support of ice shelves, scientists have assumed that the continent’s very tall ice cliffs would collapse, calving into the ocean, to expose even taller cliffs further inland, which would themselves fail and collapse, initiating a runaway ice-sheet retreat.</p> <p>Today, there are no ice cliffs on Earth that are taller than 90 meters, and scientists assumed this is because cliffs any taller than that would be unable to support their own weight.</p> <p>Clerc, Minchew, and Behn took on this assumption, wondering whether and under what conditions ice cliffs 90 meters and taller would physically collapse. To answer this, they developed a simple simulation of a rectangular block of ice to represent an idealized ice sheet (ice over land) supported initially by an equally tall ice shelf (ice over water). They ran the simulation forward by shrinking the ice shelf at different rates and seeing how the exposed ice cliff responds over time.</p> <p>In their simulation, they set the mechanical properties, or behavior of ice, according to Maxwell’s model for viscoelasticity, which describes the way a material can transition from an elastic, rubbery response to a viscous, honey-like behavior depending on whether it is quickly or slowly loaded. A classic example of viscoelasticity is silly putty: If you leave a ball of silly putty on a table, it slowly slumps into a puddle, like a viscous liquid; if you quickly pull it apart, it tears like an elastic solid.</p> <p>As it turns out, ice is also a viscoelastic material, and the researchers incorporated Maxwell viscoelasticity into their simulation. They varied the rate at which the buttressing ice shelf was removed, and predicted whether the ice cliff would fracture and collapse like an elastic material or flow like a viscous liquid.</p> <p>They modeled the effects of various starting heights, or thicknesses of ice, from 0 to 1,000 meters, along with various timescales of ice shelf collapse.
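The fast-versus-slow loading distinction in the Maxwell model can be illustrated with a back-of-the-envelope calculation comparing the ice’s relaxation time to the timescale of shelf removal. This is a minimal sketch using textbook-range values for ice’s Young’s modulus and effective viscosity, not the parameters from the MIT study:

```python
# Back-of-the-envelope Maxwell-model sketch of the elastic-vs-viscous
# transition described above. Constants are illustrative textbook-range
# values for glacial ice, not the study's parameters.

E = 9.0e9      # Young's modulus of ice, Pa (illustrative)
eta = 1.0e14   # effective viscosity of ice, Pa*s (illustrative)

tau = eta / E  # Maxwell relaxation time, in seconds

def response(removal_time_s):
    """Classify the cliff's response by comparing the shelf-removal
    timescale to the relaxation time (a Deborah number): removal much
    faster than tau gives an elastic response (brittle fracture);
    much slower, a viscous one (slow flow)."""
    deborah = tau / removal_time_s
    return "elastic (fracture)" if deborah > 1.0 else "viscous (flow)"

HOUR = 3600.0
WEEK = 7 * 24 * HOUR
print(f"relaxation time: about {tau / HOUR:.1f} hours")
print("shelf removed in 1 hour: ", response(HOUR))
print("shelf removed in 2 weeks:", response(2 * WEEK))
```

With these illustrative constants the relaxation time comes out on the order of hours, which is roughly in line with the finding that only an hours-long shelf removal produces brittle failure, while removal over weeks leaves time for viscous flow.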
In the end, they found that when a 90-meter cliff is exposed, it will quickly collapse in brittle chunks only if the supporting ice shelf has been removed quickly, over a period of hours. In fact, they found that this behavior holds true for cliffs as tall as 500 meters. If ice shelves are removed over longer periods of days or weeks, ice cliffs as tall as 500 meters will not collapse under their own weight, but instead will slowly slough away, like cold honey.</p> <p><strong>A realistic picture</strong></p> <p>The results suggest that the Earth’s tallest ice cliffs are unlikely to collapse catastrophically and trigger a runaway ice sheet retreat. That’s because the fastest rate at which ice shelves are disappearing, at least as documented in the modern record, is on the order of weeks, not hours, as scientists observed in 2002, when they captured satellite imagery of the collapse of the Larsen B ice shelf — a chunk of ice as large as Rhode Island that broke away from Antarctica, shattering into thousands of icebergs over the span of two weeks.</p> <p>“When Larsen B collapsed, that was quite an extreme event that occurred over two weeks, and that is a tiny ice shelf compared to the ones that we would be particularly worried about,” Clerc says. “So our work shows that cliff failure is probably not the mechanism by which we would get a lot of sea level rise in the near future.”</p> <p>This research is supported, in part, by the National Science Foundation.</p> The Getz Ice Shelf in West Antarctica.Image: NASA/Jeremy HarbeckClimate, Climate change, EAPS, Earth and atmospheric sciences, Environment, Fluid dynamics, Global Warming, Research, School of Science, National Science Foundation (NSF) MADMEC teams address plastic waste problem with materials science Finalists presented an alternative to nondegradable plastics, and an additive to help plastics decompose. 
Thu, 17 Oct 2019 14:20:57 -0400 Zach Winn | MIT News Office <p>A team with a sustainable alternative to nondegradable plastic earned first place in this year’s MADMEC competition on Oct. 15.</p> <p>The ecoTrio team, made up of three MIT PhD students, took home the $10,000 grand prize in the annual materials science program for its biodegradable blends that imitate various plastics. The second-place prize was awarded to PETTIGREW, which integrated live bacteria into plastic production to improve plastic degradability. RadioStar, which created a low-cost sensor for farmers, came in third.</p> <p>“There seem to be natural themes from year to year,” Michael Tarkanian, a senior lecturer in the Department of Materials Science and Engineering (DMSE) who runs MADMEC, told <em>MIT News</em>. “There were a bunch of plastic postconsumer recycling projects this year, two of which made it to the finals. I think plastics are getting a lot of press lately, with trash piles building up in the oceans and sea animals being injured. Maybe that influenced the students.”</p> <p>The oral and poster presentations were the culmination of team projects that began last spring and included a series of design challenges throughout the summer. Each team received guidance, access to equipment, and up to $1,000 in funding to build and test their prototypes.</p> <p>The teams were judged based on what they accomplished during their journey from idea to prototype. For ecoTrio, that meant creating a material that fit its cost, mechanical, and sustainability goals.</p> <p>“These ideas start from scratch, and the goal is to test their feasibility and develop hardware, so by the time the program is over, students know whether or not they will work,” Tarkanian said.</p> <p><strong>An alternative to nondegradable plastic</strong></p> <p>At the core of ecoTrio’s product are three materials, two of which the company considers proprietary.
The first is a polymer that is easily biodegradable but difficult to process and too expensive to compete with plastics on its own. The second material is a biodegradable plastic polymer that’s cheap and makes the blend easier to process at scale using industrial equipment. The third component consists of fine-grained wood particles that the team uses to further lower the cost of the mix and tune the final product for different uses.</p> <p>“Our goal was to create an alternative plastic material that comes exclusively from renewable resources, has the same properties as existing plastics used today, and at the end of its life, biodegrades regardless of where it ends up,” ecoTrio team member Ty Christoff-Tempesta said.</p> <p>Members of the team, which also includes Margaret Lee and Sara Sheffels, say their blend has a cost and melting point similar to those of traditional plastics, while its strength and flexibility can be adjusted based on the percentage of added wood particles.</p> <p>To demonstrate the range of plastics their product could replace, the team showed off samples including a hard spoon as well as flexible, bag-like materials.</p> <p>“Today, we all recognize single-use plastics as an environmental crisis, but as consumers we come into contact with them all the time, whether it’s packaging for food, cosmetic products, or household products,” Christoff-Tempesta said. “The reason we see them all the time is because they’re so cheap and convenient.”</p> <p>During ecoTrio’s presentation, team members also noted that there is increasing pressure on companies from consumers and the government to use more sustainable packaging.</p> <p><strong>Other promising projects rewarded</strong></p> <p>The second-place team, PETTIGREW, took a different approach to the plastic waste problem. Various methods have been used to quicken the decomposition of plastics after they’re used and collected.
Unfortunately, the vast majority of plastics aren’t collected for recycling at all.</p> <p>“Some of these plastics take 1,000 years to degrade on their own, which can have consequences including plastic island formation in the ocean,” said PETTIGREW team member Leonardo Zornberg, a PhD candidate.</p> <p>With these problems in mind, PETTIGREW decided to incorporate decomposition-causing bacteria into plastics as they’re being produced. When the bacteria they selected, <em>Bacillus subtilis</em>, is combined with a sugar filler, it can survive the high temperatures used to shape many plastics.</p> <p>The team also found the addition of the bacteria had only a minimal effect on the strength and flexibility of the plastics in some cases.</p> <p>Zornberg acknowledged the potential for pushback from people hesitant to use plastics with living bacteria inside of them, but he noted the bacterial strain his team selected is frequently used to make probiotics for humans, livestock, and agricultural supplements.</p> <p>Going forward, the team believes genetically engineering the bacteria could further enhance its degradation capabilities, and could even give it other abilities like self-cleaning and antimicrobial defenses.</p> <p>“One of the reasons we chose <em>Bacillus</em> is it’s a model organism,” Zornberg told <em>MIT News</em>. “It’s very well-understood how to genetically engineer and modify its strains, and it’s used in industrial-scale enzyme production, so both of these things suggest it would be suitable if we wanted to modify the bacteria for future applications.”</p> <p>RadioStar broke from this year’s plastic trend by creating a low-cost sensor for small-scale farmers. The sensor makes use of retroreflectors, which are cube-shaped structures that send directional light back toward its source efficiently from a variety of angles.</p> <p>The team’s product consists of small retroreflectors made of gels that can be dispersed across farmland. 
The biodegradable gels can be made to change colors and optical properties in response to different chemical stimuli. Those changes can then be observed using a directional light emitter and detector, which could be a special flashlight held by a farmer or a drone equipped with a camera.</p> <p>RadioStar’s prototype was made to change color in response to varying pH levels, but the team believes its sensors could be tuned to monitor a variety of soil conditions.</p> <p>“This is just a proof of concept for how this can be used to test pH, but we can extrapolate this to test a bunch of different parameters,” said RadioStar team member Sara Wilson, an undergraduate in DMSE. “For example, nitrogen, water content, and phosphorus are very important for different types of crop growth.”</p> <p><strong>Learning by doing</strong></p> <p>Overall, Tarkanian thinks this year’s program was a success not just because of the potential of the projects, but also because of the amount of learning-by-doing that led to the final presentations.</p> <p>“The high-level goals [of MADMEC] are to give students the chance to make something tangible and to take the classroom knowledge they’ve been acquiring and put it into practice,” Tarkanian said.</p> <p>Zornberg thinks the MADMEC program, which focuses on earlier-stage venture creation compared to other entrepreneurial programs on campus, helps materials science students think through the process of successful innovation.</p> <p>“Having the opportunity to explore prototyping separated from the business plan is really a good way to engage engineering students in thinking about product design,” Zornberg said.</p> <p>MADMEC is hosted by DMSE and sponsored by Saint Gobain and the Dow Chemical Company.</p> Members of the winning team, ecoTrio, from this year’s MADMEC competition. From left to right are Margaret Lee, Sara Sheffels, and Ty Christoff-Tempesta.
Image: James HunterInnovation and Entrepreneurship (I&E), DMSE, School of Engineering, Students, graduate, postdoctoral, Undergraduate, Contests and academic competitions, Sustainability, Recycling, Environment, Pollution, Agriculture MIT alumna addresses the world’s mounting plastic waste problem Renewlogy’s system is converting plastic waste from cities and rivers into fuel. Wed, 09 Oct 2019 23:59:59 -0400 Zach Winn | MIT News Office <p>It’s been nearly 10 years since Priyanka Bakaya MBA ’11 founded Renewlogy to develop a system that converts plastic waste into fuel. Today, that system is being used to profitably turn even nonrecyclable plastic into high-value fuels like diesel, as well as&nbsp;the precursors to new plastics.</p> <p>Since its inception, Bakaya has guided Renewlogy through multiple business and product transformations to maximize its impact. During the company’s evolution from a garage-based startup to a global driver of sustainability, it has licensed its technology to&nbsp;waste management companies in the U.S. and Canada, created community-driven supply chains for processing nonrecycled plastic, and started a nonprofit, Renew Oceans, to reduce the flow of plastic into the world’s oceans.</p> <p>The latter project has brought Bakaya and her team to one of the most polluted rivers in the world, the Ganges. With an effort based in Varanasi, a city of much religious, political, and cultural significance in India, Renew Oceans hopes to transform the river basin by incentivizing residents to dispose of omnipresent plastic waste in its “reverse vending machines,” which provide coupons in exchange for certain plastics.</p> <p>Each of Renewlogy’s initiatives has brought challenges Bakaya never could have imagined during her early days tinkering with the system. 
But she’s approached those hurdles with a creative determination, driven by her belief in the transformative power of the company.</p> <p>“It’s important to focus on big problems you’re really passionate about,” Bakaya says. “The only reason we’ve stuck with it over the years is because it’s extremely meaningful, and I couldn’t imagine working this hard and long on something if it wasn’t deeply meaningful.”</p> <p><strong>A system for sustainability</strong></p> <p>Bakaya began working on a plastic-conversion system with Renewlogy co-founder and Chief Technology Officer Benjamin Coates after coming to MIT’s Sloan School of Management in 2009. While pursuing his PhD at the University of Utah, Coates had been developing continuously operating systems to create fuels from things like wood waste and algae conversion.</p> <p>One of Renewlogy’s key innovations is using a continuous system on plastics, which saves energy by eliminating the need to reheat the system to the high temperatures necessary for conversion.</p> <p>Today, plastics entering Renewlogy’s system are first shredded, then put through a chemical reformer, where a catalyst degrades their long carbon chains.</p> <p>Roughly 15 to 20 percent of those chains are converted into hydrocarbon gas that Renewlogy recycles to heat the system. Five percent turns into char, and the remaining 75 percent is converted into high-value fuels. 
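The conversion split described above amounts to a simple mass balance. The sketch below uses the article’s percentages; the kilograms-per-barrel figure is an assumption (a 159-liter barrel of diesel-range fuel at about 0.85 kg per liter), not a Renewlogy number:

```python
# Mass balance for the conversion split described above: roughly
# 15-20 percent hydrocarbon gas, 5 percent char, and the rest liquid
# fuel. KG_PER_BARREL is an assumed value, not a Renewlogy figure.

def conversion_split(plastic_kg, gas_frac=0.20, char_frac=0.05):
    """Split a mass of feedstock plastic into gas, char, and fuel."""
    fuel_frac = 1.0 - gas_frac - char_frac
    return {
        "gas_kg": plastic_kg * gas_frac,   # recycled to heat the system
        "char_kg": plastic_kg * char_frac,
        "fuel_kg": plastic_kg * fuel_frac,
    }

KG_PER_BARREL = 159 * 0.85  # assumed diesel-range fuel density

out = conversion_split(10_000)  # 10 metric tons of plastic
barrels = out["fuel_kg"] / KG_PER_BARREL
print(out)
print(f"about {barrels:.0f} barrels of fuel per 10 tons of plastic")
```

At the lower end of the quoted gas range (15 percent), the fuel fraction rises to 80 percent and the estimate lands near the roughly 60 barrels per 10 tons the article cites.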
Bakaya says the system can create about 60 barrels of fuel for every 10 tons of plastic it processes, and it has a 75 percent lower carbon footprint compared with traditional methods for extracting and distilling diesel fuel.</p> <p>In 2014, the company began running a large-scale plant in Salt Lake City, where it continues to iterate its processes and hold demonstrations.</p> <p>Since then, Renewlogy has set up another commercial-scale facility in Nova Scotia, Canada, where the waste management company Sustane uses it to process about 10 tons of plastic a day, representing 5 percent of the total amount of solid waste the company collects. Renewlogy is also planning a similar-sized facility in Phoenix, Arizona, that will break ground next year. That project focuses on processing specific types of plastics (identified by international <a href="" target="_blank">resin codes</a> 3 through 7) that are less easily recycled.</p> <p>In addition to its licensing strategy, the company is spearheading grassroots efforts to gather and process plastic that’s not normally collected for recycling, as part of the Hefty Energy Bag Program.</p> <p>Through the program, residents in cities including Boise, Idaho, Omaha, Nebraska, and Lincoln, Nebraska, can put plastics numbered 4 through 6 into their regular recycling bins using special orange bags. The bags are separated at the recycling facility and sent to Renewlogy’s Salt Lake City plant for processing.</p> <p>The projects have positioned Renewlogy to continue scaling and have earned Bakaya entrepreneurial honors from the likes of <em>Forbes</em>, <em>Fortune</em>, and the World Economic Forum.
But a growing crisis in the world’s oceans has drawn her halfway across the world, to the site of the company’s most ambitious project yet.</p> <p><strong>Renewing the planet’s oceans</strong></p> <p>Of the millions of tons of plastic waste flowing through rivers into the world’s oceans each year, roughly 90 percent <a href="" target="_blank">comes from just 10 rivers</a>. The worsening environmental conditions of these rivers represent a growing global crisis that state governments have put billions of dollars toward, often with discouraging results.</p> <p>Bakaya believes she can help.</p> <p>“Most of these plastics tend to be what are referred to as soft plastics, which are typically much more challenging to recycle, but are a good feedstock for Renewlogy’s process,” she says.</p> <p>Bakaya started Renew Oceans as a separate, nonprofit arm of Renewlogy last year. Since then, Renew Oceans has designed fence-like structures to collect river waste that can then be brought to its scaled-down machines for processing. These machines can process between 0.1 and 1 ton of plastic a day.</p> <p>Renew Oceans has already built its first machine, and Bakaya says deciding where to put it was easy.</p> <p>From its origins in the Himalayas, the Ganges River flows over 1,500 miles through India and Bangladesh, serving as a means of transportation, irrigation, and energy, and as a sacred monument to millions of people who refer to it as Mother Ganges.</p> <p>Renewlogy’s first machine is currently undergoing local commissioning in the Indian city of Varanasi. Bakaya says the project is designed to scale.</p> <p>“The aim is to take this to other major polluted rivers where we can have maximum impact,” Bakaya says.
“We’ve started with the Ganges, but we want to go to other regions, especially around Asia, and find circular economies that can support this in the long term so locals can derive value from these plastics.”</p> <p>Scaling down their system was another unforeseen project for Bakaya and Coates, who remember scaling up prototypes during the early days of the company. Throughout the years, Renewlogy has also adjusted its chemical processes in response to changing markets, having begun by producing crude oil, then moving to diesel as oil prices plummeted, and now exploring ways to create high-value petrochemicals like naphtha, which can be used to make new plastics.</p> <p>Indeed, the company’s approach has featured almost as many twists and turns as the Ganges itself. Bakaya says she wouldn’t have it any other way.</p> <p>“I’d really encourage entrepreneurs to not just go down that easy road but to really challenge themselves and try to solve big problems — especially students from MIT. The world is kind of depending on MIT students to push us forward and challenge the realm of possibility. We all should feel that sense of responsibility to solve bigger problems.”</p> Renewlogy co-founder and CEO Priyanka Bakaya inside one of the company's commercial plants, which are capable of processing ten tons of plastic each day to create about 60 barrels of fuel.Image courtesy of RenewlogyInnovation and Entrepreneurship (I&E), Startups, Chemistry, Sloan School of Management, Environment, Pollution, Oceanography and ocean engineering, Social entrepreneurship, Alumni/ae, Recycling, Sustainability Funding for sustainable concrete cemented for five more years The MIT Concrete Sustainability Hub will continue to study the environmental impacts of concrete and the hazard resilience of the built environment. 
Fri, 04 Oct 2019 13:45:01 -0400 Andrew Logan | Concrete Sustainability Hub <p>The <a href="">MIT Concrete Sustainability Hub</a> (CSHub), an interdisciplinary team of researchers dedicated to concrete and infrastructure science, engineering, and economics, has renewed its relationship with its industry partners for another five years.</p> <p>Founded in 2009, CSHub has spent a decade over two five-year phases collaborating with the <a href="">Portland Cement Association</a> (PCA) and the <a href="">Ready Mixed Concrete Research &amp; Education Foundation</a> (RMC) to achieve durable and sustainable buildings and infrastructure in ever-more-demanding environments. Over its next five-year phase, CSHub will receive $10 million of additional funding from its partners to continue its research efforts.</p> <p>“Taking CSHub’s work to the next level will not only help us achieve our goal of making concrete more sustainable, but will also continue to strengthen our communities by providing designers, owners, and policymakers with the best information and tools available to make the best choices for their construction projects,” says Julia Garbini, the executive director of RMC.</p> <p>According to Michael Ireland, PCA president and CEO, CSHub’s past research has also allowed the industry to investigate the unique properties of concrete and cement. “For 10 years and counting, the MIT CSHub has helped the cement and concrete industry to identify and study the myriad benefits of its products,” he says.</p> <p>Concrete, the world’s most-used building material, is made by mixing cement with abundant aggregate materials like sand and gravel. The result is an extremely strong and stiff material that can be produced nearly anywhere from readily available ingredients using relatively inexperienced labor. 
Concrete also offers numerous properties such as durability, formability, and thermal mass that can reduce energy consumption.</p> <p>“On a per-unit-weight basis, concrete is a low environmental impact material,” says Jeremy Gregory, CSHub’s executive director. “It’s essential to our built environment due to its durability, strength, and affordability. As a consequence, it’s the most-used building material in the world and hence, there is a significant opportunity to look at how we balance both its role in sustainable development and lower its environmental impact.”</p> <p>To do this, CSHub has taken a bottom-up approach, studying concrete from its nanoscale to its application in pavements and buildings, all the way to its role in urban environments and broader economic systems.</p> <p>“Classical concrete science and structural engineering often use top-down approaches,” says CSHub Faculty Director and MIT Professor Franz-Josef Ulm. “You identify weaknesses at a large scale, go to a smaller scale, make a change, and then observe the response. It is different when you go from the bottom-up — you have all of the possibilities in front of you.”</p> <p>Over the past decade, CSHub researchers have used this bottom-up approach to develop tools that measure the costs, environmental impacts, and hazard resilience of infrastructure and construction projects.</p> <p>In 2018, they developed the <a href="">Break-Even Mitigation Percentage</a> dashboard to provide developers with data on the costs of hazard mitigation. The dashboard shows the return on investment for hazard-resistant construction. 
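The dashboard’s underlying idea, weighing the extra upfront cost of hazard-resistant construction against the hazard losses it avoids each year, can be sketched as a simple break-even calculation. All figures below are hypothetical examples, not CSHub data:

```python
# Hypothetical break-even calculation in the spirit of the dashboard
# described above. All inputs are invented examples, not CSHub data.

def break_even_year(extra_mitigation_cost, avoided_annual_loss,
                    horizon_years=50):
    """Return the first year in which cumulative avoided hazard losses
    exceed the extra upfront cost of mitigation, or None if mitigation
    does not pay back within the horizon."""
    cumulative = 0.0
    for year in range(1, horizon_years + 1):
        cumulative += avoided_annual_loss
        if cumulative >= extra_mitigation_cost:
            return year
    return None

# e.g. $40k of extra hazard-resistant construction cost against an
# expected $20k per year of avoided storm losses
print(break_even_year(40_000, 20_000))  # -> 2
```

A real analysis would discount future losses and weight them by hazard probabilities for a given location; this sketch shows only the break-even logic.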
In some communities, researchers found that the return can come in as little as two years.</p> <p>Their investigation into the life cycle of buildings has also led to the creation of the <a href="">Building Attribute to Impact Algorithm (BAIA)</a>, which informs designers of which aspects of a building will have the strongest impact on its life cycle cost and environmental impact.</p> <p>Researchers have applied these same life cycle perspectives to pavements. <a href="">A case study</a> conducted with North Carolina’s Department of Transportation highlighted actions that could reduce spending on pavements by tens of millions of dollars while meeting or exceeding performance and emissions targets.</p> <p>Recent CSHub materials science research has also informed the discovery of novel solutions to longstanding durability issues in concrete. In particular, researchers identified new explanations for two major causes of damage in concrete — freeze-thaw cycles and alkali-silica reaction.</p> <p>“Whenever you touch old problems, there are perceptions that they are very difficult to change,” says Ulm. “However, here we applied a bottom-up approach to an old problem and found solutions that have not been looked at before.”</p> <p>In the next phase of collaboration, CSHub will expand its scope to investigate concrete’s role in solving economic, environmental, and social challenges.</p> <p>“We have done a lot of work in the past two phases on the technical aspects of concrete,” says Gregory. “What we are trying to do in this next phase is to conduct research that will engage the broader public by leveraging crowdsourced data, artificial intelligence, and the latest tools of data science.”</p> <p>One Phase III project is already in development. Using their past work on pavements, CSHub researchers have created <a href="">Carbin</a>, an app that uses a smartphone to record pavement quality from within a moving vehicle.
Through crowdsourcing, the app has recorded data on over 130,000 miles of roads across the world. The data will eventually support decisions on infrastructure maintenance at a far lower cost than that of traditional technologies, like laser scanning.</p> <p>“With the CSHub now entering its third phase, we are excited about the opportunities this close industry-academia collaboration brings to MIT, the concrete industry, and society at large,” says Markus Buehler, Jerry McAfee Professor in Engineering and MIT Department of Civil and Environmental Engineering head. “Applying cutting-edge fundamental research to problems in industry has the potential for large-scale impact.”</p> Concrete, the world’s most-used building material, is made by mixing cement with abundant aggregate materials like sand and gravel. The result is an extremely strong and stiff material.Concrete Sustainability Hub, Civil and environmental engineering, School of Engineering, Sustainability, Climate change, Greenhouse gases, Industry, Funding Deploying drones to prepare for climate change PhD student Norhan Bayomi uses drones to investigate how building construction impacts communities’ resilience to rising temperatures. Fri, 04 Oct 2019 00:00:00 -0400 Daysia Tolentino | MIT News correspondent <p>While doing field research for her graduate thesis in her hometown of Cairo, Norhan Magdy Bayomi observed firsthand the impact of climate change on her local community.</p> <p>The residents of the low-income neighborhood she was studying were living in small, poorly insulated apartments that were ill-equipped for dealing with the region’s rising temperatures. 
Sharing cramped quarters — with families in studios less than 500 square feet — and generally lacking air conditioning or even fans, many people avoided staying in their homes altogether on the hottest days.</p> <p>It was a powerful illustration of one of the most terrible aspects of climate change: Those who are facing its most extreme impacts also tend to have the fewest resources for adapting.</p> <p>This understanding has guided Bayomi’s research as a PhD student in the Department of Architecture’s Building Technology Program. Currently in her third year of the program, she has mainly looked at countries in the developing world, studying how low-income communities there adapt to changing heat patterns and <a href="" target="_blank">documenting</a> global heatwaves and populations’ adaptive capacity to heat. A key focus of her research is how building construction and neighborhoods’ design affect residents’ vulnerability to hotter temperatures.</p> <p>She uses drones with infrared cameras to document the surface temperatures of urban buildings, including structures with a variety of designs and building materials, and outdoor conditions in the urban canyons between buildings.</p> <p>“When you look at technologies like drones, they are not really designed or commonly used to tackle problems like this. We’re trying to incorporate this kind of technology to understand what kind of adaptation strategies are suitable for addressing climate change, especially for underserved populations,” she says.</p> <p><strong>Eyes in the sky</strong></p> <p>Bayomi is currently developing a computational tool to model heat risk in urban areas that incorporates building performance, available urban resources for adaptation, and population adaptive capacity into its data.</p> <p>“Most of the tools that are available right now are mostly using statistical data about the population, the income, and the temperature. 
I’m trying to incorporate how the building affects indoor conditions, what resources are available to urban residents, and how they adapt to heat exposure — for instance, if they have a cooling space they could go to, or if there is a problem with the power supplies and they don’t have access to ceiling fans,” she says. “I’m trying to add these details to the equation to see how they would affect risk in the future.”</p> <p>She recently began <a href="">looking at similar changes</a> in communities in the Bronx, New York, in order to see how building construction, population adaptation, and the effects of climate change differ based on region. Bayomi says that her advisor, Professor John Fernández, motivated her to think about how she could apply different technologies into her field of research.</p> <p>Bayomi’s interest in drones and urban development isn’t limited to thermal mapping. As a participant in the School of Architecture and Planning’s DesignX entrepreneurship program, she and her team founded Airworks, a company that uses aerial data collected by the drones to provide developers with automated site plans and building models. Bayomi worked on thermal imaging for the company, and she hopes to continue this work after she finishes her studies.</p> <p>Bayomi is also working with Fernández’s Urban Metabolism Group on an aerial thermography project in collaboration with Tarek Rakha PhD ’15, an assistant professor at Georgia Tech. The project is developing a cyber-physical platform to calibrate building energy models, using drones equipped with infrared sensors that autonomously detect heat transfer anomalies and envelope material conditions. 
Bayomi’s group is currently working on a drone that will be able to capture these data and process them in real-time.</p> <p><strong>Second home</strong></p> <p>Bayomi says the personal connections that she has developed at MIT, both within her program and across the Institute, have profoundly shaped her graduate experience.</p> <p>“MIT is a place where I felt home and welcome. Even as an Arabic Muslim woman, I always felt home,” she says. “My relationship with my advisor was one of the main unique things that kept me centered and focused, as I was blessed with an advisor who understands and respects my ideas and gives me freedom to explore new areas.”</p> <p>She also appreciates the Building Technology program’s “unique family vibe,” with its multiple academic and nonacademic events including lunch seminars and social events.</p> <p>When she’s not working on climate technologies, Bayomi enjoys playing and producing music. She has played the guitar for 20 years now and was part of a band during her undergraduate years. Music plays an important role in Bayomi’s life and is a crucial creative outlet for her. She currently produces rock-influenced trance music, a genre characterized by melodic, electronic sounds. She released her first single under the moniker Nourey last year and is working on an upcoming track. She likes incorporating guitar into her songs, an element not typically heard in trance tunes.</p> <p>“I’m trying to do something using guitars with ambient influences in trance music, which is not very common,” she says.</p> <p>Bayomi has been a member of the MIT Egyptian Students Association since she arrived at MIT in 2015, and now serves as vice president.
The club works to connect Egyptian students at MIT with students in Egypt, encouraging prospective students to apply and providing guidance based on the members’ own experiences.</p> <p>“We currently have an amazing mix of students in engineering, Sloan [School of Management], Media Lab, and architecture, including graduate and undergraduate members. Also, with this club we try to create a little piece of home here at MIT for those who feel homesick and disconnected due to cultural challenges,” she says.</p> <p>In 2017 she participated in MIT’s Vacation Week for Massachusetts Public Schools at the MIT Museum, and in 2018 she competed in the Climate Changed ideas competition, where her team’s <a href="" target="_blank">entry</a> was selected as one of the top three finalists.</p> <p>“I am keen to participate whenever possible in these kinds of activities, which enhance my academic experience here,” she says. “MIT is a rich place for such events.”</p> Norhan Bayomi. Image: Jake Belcher. Graduate, postdoctoral, Students, Profile, Architecture, School of Architecture and Planning, Innovation and Entrepreneurship (I&E), Drones, Climate change, Africa, Middle East, Music Experts urge “full speed ahead” on climate action Panelists at MIT climate change symposium describe the state of knowledge in climate science and stress the urgent need for action. Thu, 03 Oct 2019 17:10:12 -0400 David L. Chandler | MIT News Office <p>In the first of <a href="">six symposia</a> planned at MIT this academic year on the subject of climate change, panels of specialists on the science of global climate described the state of knowledge on the subject today. 
They also discussed the areas where more research is needed to pin down exactly how severely and quickly climate change’s effects may occur, and what kinds of actions are urgently needed to address the enormous disruptions climate change will bring.</p> <p>Keynote speaker Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, gave an overview of the state of climate science today, explaining that the vastness of the timescales involved “is one of the things that makes this problem so fascinating.” However, she added, it also presents a real challenge in communicating the urgency of the issue, because carbon dioxide emissions being produced now can persist in the air for centuries, with their effects building over time.</p> <p>Even if the world were to stabilize greenhouse gas emissions at today’s level, the temperature would continue to rise, and sea level would continue to rise even more, she said. Anywhere from 50 to 100 percent of the expected temperature increase from a given amount of carbon dioxide “is in the pipeline,” she said, because it takes time for the changed atmosphere and oceans to reach a new state of equilibrium: “The temperature stabilizes after a few hundred years, but the sea level just keeps going and going.”</p> <p>She said “it’s sobering to take a look at the 25 warmest years that have been recorded, and realize that if you’re 32, you’ve been alive for all of them. We, this generation of people, are living on the warmest planet that has ever been measured in the environmental record.” And that increase is something we’re stuck with, she said. “Even if we go cold turkey” and eliminate all greenhouse gas emissions, “temperatures go almost constant for 1,000 years. 
The cumulative carbon dioxide that’s been emitted is what controls it.”</p> <p>The symposium, which drew a capacity crowd to MIT’s Kresge Auditorium, was chaired by Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science, and featured two panels of leading climate scientists who described the state of present knowledge about the effects and extent of climate change, remaining uncertainties and how to address them, and how the physical effects of warming may vary under different policy approaches.</p> <p>MIT President L. Rafael Reif, in <a href="">introducing</a> the first of the six planned symposia, said, “I believe that, as a society, we must find ways to invest aggressively in advancing climate science and in making climate mitigation and adaptation technologies dramatically less expensive: inexpensive enough to win widespread political support, to be affordable for every society, and to deploy on a planetary scale.”</p> <p>Reif added that one way to foster that would be through a tax on carbon, which “will keep pushing prices [of renewables] down and make noncarbon alternatives more attractive. That is clearly true. Less clear, however, is whether the carbon-cost hammer is enough to drive the nail of global societal change.” Continued progress with noncarbon or low-carbon alternatives is also essential, he said.</p> <p>While the picture of human-induced global climate change is well-established overall, in one of the panel discussions Ray Pierrehumbert, a professor of physics at Oxford University, described some of the remaining sources of uncertainty. The greatest source of uncertainty, he said, lies in some of the complex feedback effects that may occur, especially involving clouds.</p> <p>Clouds reflect sunlight and therefore provide some cooling, but also are insulating and so help keep the surface warm. 
Their dynamics are highly complex, “involving interactions between things at the scale of millimeters up to thousands of kilometers.” As a result, “one reason we don’t know how bad it’s going to get is because of clouds,” Pierrehumbert said.</p> <p>But that uncertainty is no cause for complacency. “It’s extremely unlikely that there is some mystical effect that would make things better” than present projections, he said. Rather, “it’s quite possible things would be worse.”</p> <p>Tapio Schneider, a professor of environmental science at Caltech, added that the uncertainties about clouds include how they are affected by air pollution, which provides nucleation centers for water droplets. These interactions are complicated to model, but “it seems that some of these aerosol effects are stronger than expected.” That may mean that overall warming could be greater than expected, he said.</p> <p>Paul O’Gorman, an MIT professor of atmospheric science, said that it’s important to look at how the effects of a warming atmosphere will vary depending on local conditions. “Some countries will see larger monsoons,” he said, for example in India, where rainfall could actually double in some regions because of changes in atmospheric circulation patterns. “There are a lot of outstanding questions” in the details of these changes, and the answers could be crucial for regional planning.</p> <p>Pierrehumbert added that while nations have made commitments to try to limit global warming to no more than 2 degrees Celsius, that is a somewhat arbitrary cap. “Even if we don’t think we can halt warming at two degrees, we need to go full speed ahead” on curbing emissions. 
“Things will be horrible at two degrees, but much more horrible at four degrees.”</p> <p>Maria Zuber, MIT’s vice president for research, chaired the second panel discussion and said this series of symposia is intended as a way “to both educate and engage the MIT community” in the issue of climate change and “how we dial it up” in efforts to combat the problem.</p> <p>Sherri Goodman, a senior fellow at the Wilson Center, described the impact of climate change on military facilities and overall military readiness. “It’s a threat multiplier,” she said. “It will amplify and aggravate in different ways our national security challenges.”</p> <p>For example, the opening of the Arctic Ocean because of melting sea ice is creating a whole new area of conflicting interests, where both Russia and China have been making moves to control the region’s potential resources, from shipping lanes to petroleum reserves.</p> <p>Philip Duffy, president of the Woods Hole Research Center, described his work in providing corporations with detailed information about the specific local impacts they can expect at their facilities as a result of climate change. Climate change may be a multiplier of risks in that context as well, he added, citing regional conflicts and outmigration resulting from droughts and other effects.</p> <p>John Reilly, co-director of the Joint Program on the Science and Policy of Global Change, also stressed that regardless of any remaining uncertainties in the details of climate change’s effects, “it doesn’t mean we should wait until the science is resolved. Actually, we need the opposite effect.” If there is a whole range of possible outcomes, it’s important to take very seriously “the really extreme and catastrophic effects.” Among the range of possible outcomes indicated by climate models, without concerted action, climate change “could make huge parts of the planet uninhabitable. 
Even if that probability is very small, that can dominate the entire cost-benefit calculation,” he said.</p> Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry at MIT, delivered the symposium's keynote address. Image: Jake Belcher. MIT Energy Initiative, Climate, Climate change, Special events and guest speakers, Global Warming, Policy, Faculty, President L. Rafael Reif, Administration, ESI, Sustainability President Reif speaks at MIT Climate Symposium Wed, 02 Oct 2019 13:27:14 -0400 MIT News Office <p><em>President L. Rafael Reif delivered the following introductory remarks at today’s “Progress in Climate Science” symposium.</em></p> <p>Good afternoon! I am delighted to be here with all of you.</p> <p>At MIT, persuading people to leave their labs and their classrooms to attend a daytime event is notoriously difficult. Attracting a crowd to fill Kresge Auditorium can feel almost impossible! So, the full house we have this afternoon is deeply significant. It is a sobering mark of the urgency and importance of the subject matter and an inspiring sign of the breadth, depth, and passionate commitment of MIT’s climate action community.</p> <p>Also: A warm hello to everyone joining us via livestream! It is wonderful and fitting that the knowledge and ideas from this session are being shared around the world.</p> <p>This is the first in a series of six symposia. For the tremendous effort it took over many months to create this outstanding series, I want to express my thanks and admiration to the Climate Action Symposia Organizing Committee – and especially to its chair, Professor Paul Joskow. 
The challenges of dealing with climate change will take all of our collective talents and the best work of countless MIT minds and hands, so I hope we can maintain this terrific level of interest and attendance for all six in the series!</p> <p>These six symposia will help us take stock of all that the people of MIT have accomplished through MIT’s Climate Action Plan, and they will inform and inspire our plans going forward. In this work, I am grateful to Vice President for Research Maria Zuber for her leadership in creating the plan four years ago, in tracking our progress ever since, and in raising our sights for the future.</p> <p>I would also like to express my profound admiration for today’s keynote speaker, Professor Susan Solomon. Susan has an incomparable record of producing superb science on subjects from the depletion of the ozone layer to global warming: superb science that formed the springboard for policies that have literally changed the world. We were fortunate in 2012 when she joined our faculty. We are certainly fortunate to have her with us today. And, we could not ask for a more powerful and inspiring voice in our drive to increase fundamental knowledge and to accelerate progress towards a sustainable human society.</p> <p>Before we begin, I would like to acknowledge that this is a serious moment in the life of the MIT community. It is a moment for engaging intensely with each other on many urgent questions, including how we should raise funds for the work of the Institute and what principles should guide us. It is a time for serious debate – and for serious listening.</p> <p>In this era of growing fortunes and shrinking federal funds, it is clear that as a community, we need to consider many questions. We need to understand the changing nature of the donor population. 
We need to decide how to weigh the political, cultural and economic impacts of donors’ behavior – and much more.</p> <p>Questions like these are certainly relevant to how we fund MIT’s work on energy and the environment, the work of the people in this room.</p> <p>As members of MIT’s climate action community, we need to have serious conversations with one another about the best way to move forward. Our capacity for respectful argument has always been a signature strength of MIT. So I hope you will begin those conversations with each other in the days ahead, especially with the people who disagree with you.</p> <p>Considering those who will speak today, and looking out at all of you, I am conscious that this is a room full of climate experts – and that, as a former electrical engineer, I am not one of them. So I will offer just a few comments based on my observations and conversations here, in Washington, and in philanthropic circles, as I have been striving to build support for the work of climate science and solutions at MIT.</p> <p>Last June, our MIT Commencement speaker was the prominent philanthropist and former mayor of New York City Michael Bloomberg. 
On the day of his remarks, he announced a remarkable personal commitment: $500 million to launch a new national climate initiative, which he calls “Beyond Carbon.”</p> <p>He described an ambitious agenda of political action: taking necessary steps to close coal plants, to block the creation of new gas plants, to support the leadership of state and local politicians, to create incentives like a carbon tax and, in his words, to take the climate challenge “directly to the people.”</p> <p>As he explained to our MIT audience, in his view (and I quote), “At least for the foreseeable future, winning the battle against climate change will depend less on scientific advancement and more on political activism.” And just ten days ago, former US Vice President and climate action pioneer Al Gore published a piece in the New York Times. He made a similar argument about the need for political action, because, in his words, “We have the technology we need.”</p> <p>I am a big admirer of both Mayor Bloomberg and Vice President Gore. I am profoundly grateful for their early leadership and relentless activism. I appreciate their faith in the kind of technologies that have already been developed – some of them invented and advanced on this campus. I agree that unless and until society demands a change in policy, priorities, and behaviors, technology alone can’t save us. 
And I share their view that it is absolutely vital to build popular political support for climate action.</p> <p>But with the greatest respect, I would like to propose an additional perspective, because I am convinced that we also need to do a great many other important things, at the same time.</p> <ul> <li>We need to dramatically improve our ability to predict the localized impacts of climate change and to design solutions that allow coastal cities and other vulnerable areas to adapt to and survive them.</li> <li>We need to make solar cells and wind turbines more efficient and to produce them with less reliance on rare or costly materials.</li> <li>And, we need even better grid-scale storage options to handle intermittency.</li> <li>We need to find ways to expand the nation’s transmission infrastructure to support the efficient deployment of solar and wind.</li> <li>We need to make car batteries and carbon-free hydrogen cheaper and more efficient.</li> <li>We need better mass transit – and not just here in Boston!</li> <li>We need smaller, safer, more modern and less costly nuclear plants to supplement intermittent renewable sources.</li> <li>And, we need to address not only electricity and transportation, but also agriculture, manufacturing, buildings, and much more.</li> </ul> <p>In short, I am convinced that building broad and deep popular support for climate action would be much, much easier, and much more likely to succeed, if we could offer to society much more fine-grained scientific models and much less costly technological solutions. To break the impasse, I believe that, as a society, we must find ways to invest aggressively in advancing climate science and in making climate mitigation and adaptation technologies dramatically less expensive: inexpensive enough to win widespread political support, to be affordable for every society, and to deploy on a planetary scale.</p> <p>Many climate activists argue that the best path lies in political will. 
They note, for example, that the cost of renewables has been dropping for years and that once we put a tax on carbon, market incentives will keep pushing prices down and make non-carbon alternatives more attractive. That is clearly true. Less clear, however, is whether the carbon-cost hammer is enough to drive the nail of global societal change.&nbsp;</p> <p>In my view, it is crucial to understand that while passing a carbon tax would surely spur the development of cheaper low- and zero-carbon energy, developing cheaper low- and zero-carbon energy sources would make it much easier to pass a carbon tax! So, we need to do both, as fast as we can!</p> <p>Ordinarily, funding at the necessary scale would come with government leadership. Certainly, when we developed our Climate Action Plan in 2015, we expected to encounter reliable, long-term federal support.&nbsp;</p> <p>In the current political environment, I believe the answer, until government leadership becomes available, is private philanthropy – a conclusion that brings us back to the questions for our community that I highlighted at the start. I believe that those of us committed to this cause need to come together to seek out new ways to support the advanced science and technology that will enable political action to succeed on the path to a sustainable future for us all.</p> <p>I look forward to joining you in this urgent work. Thank you.</p> Community, Faculty, Staff, Students, Climate change, Alternative energy, Energy, Greenhouse gases, President L. Rafael Reif MIT Solve selects 2019 cohort of tech entrepreneurs At Solve Challenge Finals in New York, judges selected 32 innovators, and Solve announces $1.5 million in prize funding. Tue, 01 Oct 2019 17:00:01 -0400 Claire Crowther | MIT Solve <p>On Sept. 
22, 61 entrepreneurs traveled from 22 countries around the world to attend <a href="" target="_blank">Solve Challenge Finals</a> in New York and pitch their solutions to Solve’s 2019 Global Challenges: Circular Economy, Community-Driven Innovation, Early Childhood Development, and Healthy Cities.&nbsp;</p> <p>These innovators pitched everything from a compact waste-evaporating toilet to an online marketplace for businesses to buy and sell unused textiles. After a busy day packed with pitches and hours of deliberation, judges selected eight from each challenge to form the 2019 Solver Class, including:</p> <ul> <li><a href="">Circular Economy Solver teams</a>;</li> <li><a href="">Community-Driven Innovation Solver teams</a>;</li> <li><a href="">Early Childhood Development Solver teams</a>; and</li> <li><a href="">Healthy Cities Solver teams</a>.</li> </ul> <p>Solve also announced <a href="">$1.5 million in prize funding</a> for these Solver teams. A selection of highlights follows, and an <a href="" target="_blank">archived livestream</a> is available.</p> <p>In the opening plenary session, “Bridging the SDG Innovation Gap,” XPRIZE CEO Anousheh Ansari and Conservation International CEO M. Sanjayan spoke about sourcing, supporting, and scaling innovation to achieve the sustainable development goals (SDGs).&nbsp;</p> <p>Ansari explained that some solutions can be more relevant in certain geographies and contexts. Sanjayan agreed, saying, “While we have ever-more information and access to amazing individuals and a diversity of ideas, there is still a strong bias toward a single solution.”&nbsp;</p> <p>He described a meeting he once facilitated with a group of young people from the United States and top leaders dealing with elephant ivory poaching in Africa. “We were meeting with people who had spent their entire lives protecting elephants,” he said. “This young group was telling those folks how they should do things. It was astonishing to watch. 
Not that their ideas were bad, but at least have the humility to say, there’s context here.” Without that context, he added, these solutions are unlikely to work.</p> <p>Both Ansari and Sanjayan agreed that to achieve the SDGs by 2030, we’ll need context-focused tech breakthroughs, and both behavioral and policy changes.</p> <p>To kick off the closing plenary, “Inclusive Innovation and Entrepreneurship,” artist Zaria Forman wowed the audience with stunning photographs of her pastel drawings. By capturing glaciers and other natural wonders in the wake of climate change, she seeks to “convey the beauty of these places instead of the devastation.” Forman prefers to focus on positive change. And with all the negative news around climate change, she “celebrates what is still here.”&nbsp;</p> <p>This optimistic presentation provided an excellent introduction to a conversation around corporate social and environmental responsibility. Vijay Vaitheeswaran '90 of <em>The Economist</em> and Jesper Brodin, president and CEO of Ingka Group (IKEA), discussed IKEA’s mission to “create a better daily life for the people.”&nbsp;</p> <p>IKEA is at the forefront of innovation for sustainability, and much of the conversation focused on the company’s commitment to climate action. Brodin explained that IKEA products now require sustainable design principles, ensuring they can be broken down into raw materials.&nbsp;</p> <p>Bringing the conversation back to technology, Emi Mahmoud, United Nations High Commissioner for Refugees goodwill ambassador and award-winning slam poet, performed a powerful poem about access to technology. She emphasized that access to technology is no longer a privilege — it is a right.&nbsp;</p> <p>“Technology can restore the dignity of people. 
It changes our approach to aid and change-making so that it’s more about upward mobility, giving people something that they can run with — not just depend on.”</p> <p>The final discussion of the closing plenary featured Fred Swaniker, founder of the African Leadership Group, and Monique Idlett, founder and managing partner of Reign Ventures, and centered on building a more inclusive innovation ecosystem.</p> <p>Swaniker, whose programs develop emerging leaders in Africa, reflected on his time studying at Stanford University. He wondered, “Was there anything special about the air or water in Silicon Valley? Why is it that all this innovation comes out of there?”</p> <p>“The only difference is that they give a 16-year-old kid with an idea a chance,” Swaniker says. “The same brilliant kids with game-changing ideas are in Africa. The only difference is that no one is giving them a chance.” This, he says, is the goal of the African Leadership Academy.</p> <p>At Reign Ventures, Idlett takes this chance on promising startups. She aims to build a portfolio that “reflects the world,” ensuring that it has gender, racial, and industry diversity. When it comes to scaling these startups, Idlett says the art of collaboration is undervalued.&nbsp;</p> <p>“We don’t have to do this alone,” she explains. “As a founder, CEO, or investor, it’s really important that you find a community that can support you and that you can build together with.”</p> <p>Swaniker says the African Leadership Academy offers this support to its emerging leaders. Its learning model is very hands-on and emphasizes “learning by doing.” The academy then connects talent to opportunity — the networks, partnerships, and investment they need. “That’s the ecosystem,” he says. 
“Select the top talent, develop it, and then connect it.”</p> <p>The 2019 Solver Class will spend the next nine months working closely with Solve to scale their solutions through partnerships built with the Solve community.</p> <div></div> Solve Challenge Finals took place in New York City Image: Matt Mateiescu/MIT SolveSolve, Environment, Health, Learning, Community, Global, International development, Special events and guest speakers, Startups