MIT News - Energy 3 Questions: Emre Gençer on the evolving role of hydrogen in the energy system With support from renewable energy sources, the MIT research scientist says, we can consider hydrogen fuel as a tool for decarbonization. Thu, 05 Mar 2020 15:05:01 -0500 Nafisa Syed | MIT Energy Initiative <p><em>As the world increasingly recognizes the need to develop more sustainable and renewable energy sources, low-carbon hydrogen has reemerged as an energy carrier with the potential to play a key role in sectors from transportation to power. </em></p> <p><em>At MITEI’s&nbsp;2019 Spring Symposium, MIT Energy Initiative Research Scientist Emre Gençer gave a presentation titled “</em><a href="" target="_blank"><em>Hydrogen towards Deep Decarbonization</em></a><em>,” in which he elaborated on how hydrogen can be used across all energy sectors. Other themes </em><a href="" target="_blank"><em>discussed</em></a><em> by experts at the symposium included industry’s role in promoting hydrogen, public safety concerns surrounding the hydrogen infrastructure, and the policy landscape required to scale hydrogen around the world. </em></p> <p><em>Here, Gençer shares his thoughts on the history of hydrogen and how it could be incorporated into our energy system as a tool for deep decarbonization to address climate change. </em></p> <p><strong>Q: </strong>How has public perception of hydrogen changed over time?<strong> </strong></p> <p><strong>A: </strong>Hydrogen has been in the public imagination since the 1870s.
Jules Verne wrote that “water will be the coal of the future” in his novel “The Mysterious Island.” The idea has persisted for more than a century, though the nature of the interest in hydrogen has changed over time.</p> <p>Initial conversations about hydrogen focused on using it to supplement depleting fuel sources on Earth, but the role of hydrogen is evolving. Now we know that there is enough fuel on Earth, especially with the support of renewable energy sources, and that we can consider hydrogen as a tool for decarbonization.</p> <p>The first “hydrogen economy” concept was introduced in the 1970s. The term “hydrogen economy” refers to using hydrogen as an energy carrier, mostly for the transportation sector. In this context, hydrogen can be compared to electricity. Electricity requires a primary energy source and transmission lines to transmit electrons. In the case of hydrogen, energy sources and transmission infrastructure are likewise required to deliver hydrogen molecules.</p> <p>In 2004, there was a major initiative in the U.S. to involve hydrogen in all energy sectors to ensure access to reliable and safe energy sources. That year, the National Research Council and National Academy of Engineering released a report titled “<a href="">The Hydrogen Economy: Opportunities, Costs, Barriers, and R&amp;D Needs</a>.” This report described how hydrogen could be used to increase energy security and reduce environmental impacts. Because its combustion yields only water vapor, hydrogen does not produce carbon dioxide (CO<sub>2</sub>) emissions. As a result, it can eliminate CO<sub>2</sub> emissions in many end-use applications.</p> <p>Today, hydrogen is primarily used in industry to remove contaminants from diesel fuel and to produce ammonia. Hydrogen is also used in consumer vehicles with hydrogen fuel cells, and countries such as Japan are exploring its use in <a href="">public transportation</a>.
In the future, there is ample room for hydrogen in the energy space. Some of the work I completed for my PhD in 2015 involved researching efficient hydrogen production via solar thermal and other renewable sources. This application of renewable energy is now coming back to the fore as we think about “deep decarbonization.”</p> <p><strong>Q: </strong>How can hydrogen be incorporated into our energy system?<strong> </strong></p> <p><strong>A: </strong>When we consider deep decarbonization, or economy-wide decarbonization, there are some sectors that are hard to decarbonize with electricity alone. They include heavy industries that require high temperatures, heavy-duty transportation, and long-term energy storage. We are now thinking about the role hydrogen can play in decarbonizing these sectors.</p> <p>Hydrogen has a number of properties that make it safer to handle and use than the conventional fuels used in our energy system today. Hydrogen is nontoxic and much lighter than air. In the case of a leak, its lightness allows for relatively rapid dispersal. All fuels have some degree of danger associated with them, but we can design fuel systems with engineering controls and establish standards to ensure their safe handling and use. As the number of successful hydrogen projects grows, the public will become increasingly confident that hydrogen can be as safe as the fuels we use today.</p> <p>To expand hydrogen’s uses, we first need to explore ways of integrating it into as many energy sectors as possible. This presents a challenge because the entry points can vary for different regions. For example, in colder regions like the northeastern U.S., hydrogen can help provide heating. In California, it can be used for energy storage and light-duty transportation. 
And in the southern U.S., hydrogen can be used in industry as a feedstock or energy source.</p> <p>Once the most strategic entry points for hydrogen are identified for each region, the supporting infrastructure can be built and used for additional purposes. For example, if the northeastern U.S. implements hydrogen as its primary source of residential heating, other uses for hydrogen will follow, such as for transportation or energy storage. At that point, we hope that the market will shift so that it is profitable to use hydrogen across all energy sectors.</p> <p><strong>Q: </strong>What challenges need to be overcome so that hydrogen can be used to support decarbonization, and what are some solutions to these challenges?</p> <p><strong>A: </strong>The first challenge involves addressing the large capital investment that needs to be made, especially in infrastructure. Once industry and policymakers are convinced that hydrogen will be a critical component for decarbonization, investing in that infrastructure is the next step. Currently, we have many hydrogen plants — we know how to produce hydrogen. But in order to move toward a semi-hydrogen economy, we need to identify the sectors or end users that really require or could benefit from using hydrogen. The way I see it, we need two energy vectors for decarbonization. One is electricity; we are sure about that. But it's not enough. The second vector can be, and should be, hydrogen.</p> <p>Another key issue is the nature of hydrogen production itself. Though hydrogen does not generate any emissions directly when used, hydrogen production can have a huge environmental impact. Today, close to 95 percent of its production is from fossil resources. As a result, the CO<sub>2</sub> emissions from hydrogen production are quite high.</p> <p>There are two ways to move toward cleaner hydrogen production. One is applying carbon capture and storage to the fossil fuel-based hydrogen production processes. 
In this case, usually a CO<sub>2</sub> emissions reduction of around 90 percent is feasible.</p> <p>The second way to produce cleaner hydrogen is to use electricity to split water via electrolysis. Here, the source of electricity is very important: it needs to produce very low levels of CO<sub>2</sub> emissions, if not zero. Otherwise, there will not be any environmental benefit. If we start with clean, low-carbon electricity sources such as renewables, our CO<sub>2</sub> emissions will be quite low.</p> Emre Gençer discusses hydrogen at the MIT Energy Initiative’s 2019 Spring Symposium. Photo: Kelley Travers New approach to sustainable building takes shape in Boston A five-story mixed-use structure in Roxbury represents a new kind of net-zero-energy building, made from wood. Wed, 04 Mar 2020 23:59:59 -0500 David L. Chandler | MIT News Office <p>A new building about to take shape in Boston’s Roxbury area could, its designers hope, herald a new way of building residential structures in cities.</p> <p>Designed by architects from MIT and the design and construction firm Placetailor, the five-story building’s structure will be made from cross-laminated timber (CLT), which eliminates most of the greenhouse-gas emissions associated with standard building materials. It will be assembled on site mostly from factory-built subunits, and it will be so energy-efficient that its net carbon emissions will be essentially zero.</p> <p>Most attempts to quantify a building’s greenhouse gas contributions focus on the building’s operations, especially its heating and cooling systems.
But the materials used in a building’s construction, especially steel and concrete, are also major sources of carbon emissions and need to be included in any realistic comparison of different types of construction.</p> <p>Wood construction has tended to be limited to single-family houses or smaller apartment buildings with just a few units, narrowing the impact that it can have in urban areas. But recent developments — involving the production of large-scale wood components, known as mass timber; the use of techniques such as cross-laminated timber; and changes in U.S. building codes — now make it possible to extend wood’s reach into much larger buildings, potentially up to 18 stories high.</p> <p>Several recent buildings in Europe have been pushing these limits, and now a few larger wooden buildings are beginning to take shape in the U.S. as well. The new project in Boston will be one of the largest such residential buildings in the U.S. to date, as well as one of the most innovative, thanks to its construction methods.</p> <p>Described as a Passive House Demonstration Project, the Boston building will consist of 14 residential units of various sizes, along with a ground-floor co-working space for the community. The building was designed by Generate Architecture and Technologies, a startup company out of MIT and Harvard University, headed by John Klein, in partnership with Placetailor, a design, development, and construction company that has specialized in building net-zero-energy and carbon-neutral buildings for more than a decade in the Boston area.</p> <p>Klein, who has been a principal investigator in MIT’s Department of Architecture and now serves as CEO of Generate, says that large buildings made from mass timber and assembled using the kit-of-parts approach he and his colleagues have been developing have a number of potential advantages over conventionally built structures of similar dimensions. 
For starters, even when factoring in the energy used in felling, transporting, assembling, and finishing the structural lumber pieces, the total carbon emissions produced would be less than half that of a comparable building made with conventional steel or concrete. Klein, along with collaborators from engineering firm BuroHappold Engineering and ecological market development firm Olifant, will be presenting a detailed analysis of these lifecycle emissions comparisons later this year at the annual Passive and Low Energy Architecture (<a href="">PLEA</a>) conference in A Coruña, Spain, whose theme this year is “planning post-carbon cities.”</p> <p>For that study, Klein and his co-authors modeled nine different versions of an eight-story mass-timber building, along with one steel and one concrete version of the building, all with the same overall scale and specifications. Their analysis showed that materials for the steel-based building produced the most greenhouse emissions; the concrete version produced 8 percent less than that; and one version of the mass-timber building produced 53 percent less.</p> <p>The first question people tend to ask about the idea of building tall structures out of wood is: What about fire? But Klein says this question has been thoroughly studied, and tests have shown that, in fact, a mass-timber building retains its structural strength longer than a comparable steel-framed building. That’s because the large timber elements, typically a foot thick or more, are made by gluing together several layers of conventional dimensioned lumber. These will char on the outside when exposed to fire, but the charred layer actually provides good insulation and protects the wood for an extended period. 
Steel buildings, by contrast, can collapse suddenly once the heat of a fire softens the steel, which happens well below its melting point.</p> <p>The kit-based approach that Generate and Placetailor have developed, which the team calls Model-C, means that in designing a new building, it’s possible to use a series of preconfigured modules, assembled in different ways, to create a wide variety of structures of different sizes and for different uses, much like assembling a toy structure out of LEGO blocks. These subunits can be built in factories in a standardized process and then trucked to the site and bolted together. This process can reduce the impact of weather by keeping much of the fabrication process indoors in a controlled environment, while minimizing the construction time on site and thus reducing the construction’s impact on the neighborhood.</p> <p><img alt="" src="/sites/" style="width: 500px; height: 333px;" /></p> <p><em style="font-size: 10px;">Animation depicts the process of assembling the mass-timber building from a set of factory-built components. Courtesy of&nbsp;Generate Architecture and Technologies</em></p> <p>“It’s a way to rapidly deploy these kinds of projects through a standardized system,” Klein says. “It’s a way to build rapidly in cities, using an aesthetic that embraces offsite industrial construction.”</p> <p>Because the thick wood structural elements are naturally very good insulators, the Roxbury building’s energy needs for heating and cooling are reduced compared to conventional construction, Klein says. The thick elements also provide very good acoustic insulation for occupants. In addition, the building is designed to have solar panels on its roof, which will help to offset the building’s energy use.</p> <p>The team won a wood innovation grant in 2018 from the U.S. Forest Service to develop a mass-timber-based system for midscale housing developments.
The new Boston building will be the first demonstration project for the system they developed.</p> <p>“It’s really a system, not a one-off prototype,” Klein says. With on-site assembly of factory-built modules, which include fully assembled bathrooms with the plumbing in place, he says, the basic structure of the building can be completed in only about one week per floor.</p> <p>“We’re all aware of the need for an immediate transition to a zero-carbon economy, and the building sector is a prime target,” says Andres Bernal SM ’13, Placetailor’s director of architecture. “As a company that has delivered only zero-carbon buildings for over a decade, we’re very excited to be working with CLT/mass timber as an option for scaling up our approach and sharing the kit-of-parts and lessons learned with the rest of the Boston community.”</p> <p>With U.S. building codes now allowing for mass timber buildings of up to 18 stories, Klein hopes that this building will mark the beginning of a new boom in wood-based or hybrid construction, which he says could help to provide a market for large-scale sustainable forestry, as well as for sustainable, net-zero energy housing.</p> <p>“We see it as very competitive with concrete and steel for buildings of between eight and 12 stories,” he says. Such buildings, he adds, are likely to have great appeal, especially to younger generations, because “sustainability is very important to them. This provides solutions for developers that have a real market differentiation.”</p> <p>He adds that Boston has set a goal of building thousands of new units of housing, and also a goal of making the city carbon-neutral.
“Here’s a solution that does both,” he says.</p> <p>The project team included&nbsp;Evan Smith and Colin Booth at Placetailor Development; in addition to Klein, Zlatan Sehovic, Chris Weaver, John Fechtel, Jaehun Woo, and Clarence Yi-Hsien Lee at Generate Design; Andres Bernal, Michelangelo LaTona, Travis Anderson, and Elizabeth Hauver at Placetailor Design; Laura Jolly and Evan Smith at Placetailor Construction; Paul Richardson and Wolf Mangelsdorf at Burohappold; Sonia Barrantes and Jacob Staub at Ripcord Engineering; and Brian Kuhn and Caitlin Gamache at Code Red.</p> Architect’s rendering shows the new mass-timber residential building that will soon begin construction in Boston’s Roxbury neighborhood. Images: Generate Architecture and Technologies Making a remarkable material even better Aerogels for solar devices and windows are more transparent than glass. Tue, 25 Feb 2020 13:10:01 -0500 Nancy W. Stauffer | MIT Energy Initiative <p>In recent decades, the search for high-performance thermal insulation for buildings has prompted manufacturers to turn to aerogels. Invented in the 1930s, these remarkable materials are translucent, ultraporous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow, making them ideal for keeping heat inside on a cold winter day and outside when summer temperatures soar.</p> <p>Five years ago, researchers led by&nbsp;<a href="">Evelyn Wang</a>, a professor and head of the Department of Mechanical Engineering, and Gang Chen, the Carl Richard Soderberg Professor in Power Engineering, set out to add one more property to that list.
They aimed to make a silica aerogel that was truly transparent.</p> <p>“We started out trying to realize an optically transparent, thermally insulating aerogel for solar thermal systems,” says Wang. Incorporated into a solar thermal collector, a slab of aerogel would allow sunshine to come in unimpeded but prevent heat from coming back out — a key problem in today’s systems. And if the transparent aerogel were sufficiently clear, it could be incorporated into windows, where it would act as a good heat barrier but still allow occupants to see out.</p> <p>When the researchers started their work, even the best aerogels weren’t up to those tasks. “People had known for decades that aerogels are a good thermal insulator, but they hadn’t been able to make them very optically transparent,” says Lin Zhao PhD ’19 of mechanical engineering. “So in our work, we’ve been trying to understand exactly why they’re not very transparent, and then how we can improve their transparency.”</p> <p><strong>Aerogels: opportunities and challenges</strong></p> <p>The remarkable properties of a silica aerogel are the result of its nanoscale structure. To visualize that structure, think of holding a pile of small, clear particles in your hand. Imagine that the particles touch one another and slightly stick together, leaving gaps between them that are filled with air. Similarly, in a silica aerogel, clear, loosely connected, nanoscale silica particles form a three-dimensional solid network within an overall structure that is mostly air. Because of all that air, a silica aerogel has an extremely low density — in fact, one of the lowest densities of any known bulk material — yet it’s solid and structurally strong, though brittle.</p> <p>If a silica aerogel is made of transparent particles and air, why isn’t it transparent? Because the light that enters doesn’t all pass straight through. It is diverted whenever it encounters an interface between a solid particle and the air surrounding it. 
Figure 1 in the slideshow above illustrates the process. When light enters the aerogel, some is absorbed inside it. Some — called direct transmittance — travels straight through. And some is redirected along the way by those interfaces. It can be scattered many times and in any direction, ultimately exiting the aerogel at an angle. If it exits from the surface through which it entered, it is called diffuse reflectance; if it exits from the other side, it is called diffuse transmittance.</p> <p>To make an aerogel for a solar thermal system, the researchers needed to maximize the total transmittance: the direct plus the diffuse components. And to make an aerogel for a window, they needed to maximize the total transmittance and simultaneously minimize the fraction of the total that is diffuse light. “Minimizing the diffuse light is critical because it’ll make the window look cloudy,” says Zhao. “Our eyes are very sensitive to any imperfection in a transparent material.”</p> <p><strong>Developing a model</strong></p> <p>The sizes of the nanoparticles and the pores between them have a direct impact on the fate of light passing through an aerogel. But figuring out that interaction by trial and error would require synthesizing and characterizing too many samples to be practical. “People haven’t been able to systematically understand the relationship between the structure and the performance,” says Zhao. “So we needed to develop a model that would connect the two.”</p> <p>To begin, Zhao turned to the radiative transport equation, which describes mathematically how the propagation of light (radiation) through a medium is affected by absorption and scattering. It is generally used for calculating the transfer of light through the atmospheres of Earth and other planets. 
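The fate of light described above, partly absorbed, partly transmitted directly, and partly scattered into diffuse transmittance or reflectance, amounts to a simple conservation budget. The sketch below is a toy single-pass version of that bookkeeping; the function name, parameters, and numbers are hypothetical, not part of the researchers' model:

```python
import math

def toy_light_budget(tau, omega, forward_frac):
    """Single-pass budget for light hitting a scattering, absorbing slab.

    tau: optical thickness (extinction); omega: single-scattering albedo,
    the fraction of extinguished light scattered rather than absorbed;
    forward_frac: fraction of scattered light that exits the far side.
    """
    direct_t = math.exp(-tau)          # Beer-Lambert: passes straight through
    extinguished = 1.0 - direct_t      # light that interacted with the slab
    absorbed = extinguished * (1.0 - omega)
    scattered = extinguished * omega
    diffuse_t = scattered * forward_frac           # exits the far side at an angle
    diffuse_r = scattered * (1.0 - forward_frac)   # exits back out the entry face
    return direct_t, diffuse_t, diffuse_r, absorbed

direct_t, diffuse_t, diffuse_r, absorbed = toy_light_budget(0.3, 0.9, 0.6)
# Every photon is accounted for: the four channels sum to the incident light.
assert abs(direct_t + diffuse_t + diffuse_r + absorbed - 1.0) < 1e-12
```

For a solar thermal device the goal is to maximize `direct_t + diffuse_t`; for a window, to maximize that sum while keeping `diffuse_t` small. The actual model replaces this single pass with the full radiative transport equation, solved per wavelength.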
As far as Wang knows, it has not been fully explored for the aerogel problem.</p> <p>Both scattering and absorption can reduce the amount of light transmitted through an aerogel, and light can be scattered multiple times. To account for those effects, the model decouples the two phenomena and quantifies them separately — and for each wavelength of light.</p> <p>Based on the sizes of the silica particles and the density of the sample (an indicator of total pore volume), the model calculates light intensity within an aerogel layer by determining its absorption and scattering behavior using predictions from electromagnetic theory. Using those results, it calculates how much of the incoming light passes directly through the sample and how much of it is scattered along the way and comes out diffuse.</p> <p>The next task was to validate the model by comparing its theoretical predictions with experimental results.</p> <p><strong>Synthesizing aerogels</strong></p> <p>Working in parallel, graduate student Elise Strobach of mechanical engineering had been learning how best to synthesize aerogel samples — both to guide development of the model and ultimately to validate it. In the process, she produced new insights on how to synthesize an aerogel with a specific desired structure.</p> <p>Her procedure starts with a common silicon compound called silane, which reacts with water to form an aerogel. During that reaction, tiny nucleation sites form, where particles begin to grow. How fast the particles build up determines the end structure. To control the reaction, she adds a catalyst, ammonia. By carefully selecting the ammonia-to-silane ratio, she gets the silica particles to grow quickly at first and then abruptly stop growing when the precursor materials are gone — a means of producing particles that are small and uniform.
She also adds a solvent, methanol, to dilute the mixture and control the density of the nucleation sites, and thus the size of the pores between the particles.</p> <p>The reaction between the silane and water forms a gel containing a solid nanostructure with interior pores filled with the solvent. To dry the wet gel, Strobach needs to get the solvent out of the pores and replace it with air — without crushing the delicate structure. She puts the gel into the pressure chamber of a critical point dryer and floods liquid CO<sub>2</sub>&nbsp;into the chamber. The liquid CO<sub>2</sub>&nbsp;flushes out the solvent and takes its place inside the pores. She then slowly raises the temperature and pressure inside the chamber until the liquid CO<sub>2</sub>&nbsp;transforms to its supercritical state, in which the liquid and gas phases can no longer be differentiated. Slowly venting the chamber releases the CO<sub>2</sub>&nbsp;and leaves the aerogel behind, now filled with air. She then subjects the sample to 24 hours of annealing — a standard heat-treatment process — which slightly reduces scatter without sacrificing the strong thermal insulating behavior. Even with the 24 hours of annealing, her novel procedure shortens the required aerogel synthesis time from several weeks to less than four days.</p> <p><strong>Validating and using the model</strong></p> <p>To validate the model, Strobach fabricated samples with carefully controlled thicknesses, densities, and pore and particle sizes — as determined by small-angle X-ray scattering — and used a standard spectrophotometer to measure the total and diffuse transmittance.</p> <p>The data confirmed that, based on measured physical properties of an aerogel sample, the model could calculate total transmittance of light as well as a measure of clarity called haze, defined as the fraction of total transmittance that is made up of diffuse light.</p> <p>The exercise confirmed simplifying assumptions made by Zhao in developing the model.
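Haze, as defined here, is simply the diffuse share of the total transmitted light; a minimal sketch with hypothetical spectrophotometer readings (not the team's measurements):

```python
def haze(total_transmittance, diffuse_transmittance):
    """Fraction of the total transmitted light that exits diffusely."""
    return diffuse_transmittance / total_transmittance

# Hypothetical readings: 95 percent of incident light transmitted overall,
# of which 2 percentage points exit diffusely (scattered).
sample_haze = haze(0.95, 0.02)
assert abs(sample_haze - 0.021) < 1e-3  # about 2.1 percent haze
```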
Also, it showed that the radiative properties are independent of sample geometry, so his model can simulate light transport in aerogels of any shape. And it can be applied not just to aerogels, but to any porous material.</p> <p>Wang notes what she considers the most important insight from the modeling and experimental results: “Overall, we determined that the key to getting high transparency and minimal haze — without reducing thermal insulating capability — is to have particles and pores that are really small and uniform in size,” she says.</p> <p>One analysis demonstrates the change in behavior that can come with a small change in particle size. Many applications call for using a thicker piece of transparent aerogel to better block heat transfer. But increasing thickness may decrease transparency. With their samples, as long as particle size is small, increasing thickness to achieve greater thermal insulation will not significantly decrease total transmittance or increase haze.</p> <p><strong>Comparing aerogels from MIT and elsewhere</strong></p> <p>How much difference does their approach make? “Our aerogels are more transparent than glass because they don’t reflect — they don’t have that glare spot where the glass catches the light and reflects to you,” says Strobach.</p> <p>To Zhao, a main contribution of their work is the development of general guidelines for material design, as demonstrated by Figure 4 in the slideshow above. Aided by such a “design map,” users can tailor an aerogel for a particular application.
Based on the contour plots, they can determine the combinations of controllable aerogel properties — namely, density and particle size — needed to achieve a targeted haze and transmittance outcome for many applications.</p> <p><strong>Aerogels in solar thermal collectors</strong></p> <p>The researchers have already demonstrated the value of their new aerogels for solar thermal energy conversion systems, which convert sunlight into thermal energy by absorbing radiation and transforming it into heat. Current solar thermal systems can produce thermal energy at so-called intermediate temperatures — between 120 and 220 degrees Celsius — which can be used for water and space heating, steam generation, industrial processes, and more. Indeed, in 2016, U.S. consumption of thermal energy exceeded the total electricity generation from all renewable sources.</p> <p>However, state-of-the-art solar thermal systems rely on expensive optical systems to concentrate the incoming sunlight, specially designed surfaces to absorb radiation and retain heat, and costly and difficult-to-maintain vacuum enclosures to keep that heat from escaping. To date, the costs of those components have limited market adoption.</p> <p>Zhao and his colleagues thought that using a transparent aerogel layer might solve those problems. Placed above the absorber, it could let through incident solar radiation and then prevent the heat from escaping. So it would essentially replicate the natural greenhouse effect that’s causing global warming — but to an extreme degree, on a small scale, and with a positive outcome.</p> <p>To try it out, the researchers designed an aerogel-based solar thermal receiver. 
The device consists of a nearly “blackbody” absorber (a thin copper sheet coated with black paint that absorbs all radiant energy that falls on it), and above it a stack of optimized, low-scattering silica aerogel blocks, which efficiently transmit sunlight and simultaneously suppress conduction, convection, and radiation heat losses. The nanostructure of the aerogel is tailored to maximize its optical transparency while maintaining its ultralow thermal conductivity. With the aerogel present, there is no need for expensive optics, surfaces, or vacuum enclosures.</p> <p>After extensive laboratory tests of the device, the researchers decided to test it “in the field” — in this case, on the roof of an MIT building. On a sunny day in winter, they set up their device, orienting the receiver toward the south and tilting it 60 degrees from horizontal to maximize solar exposure. They then monitored its performance between 11 a.m. and 1 p.m. Despite the cold ambient temperature (less than 1 C) and the presence of clouds in the afternoon, the temperature of the absorber started increasing right away and eventually stabilized above 220 C.</p> <p>To Zhao, the performance already demonstrated by the artificial greenhouse effect opens up what he calls “an exciting pathway to the promotion of solar thermal energy utilization.” Already, he and his colleagues have demonstrated that the device can generate steam at temperatures above 120 C. In collaboration with researchers at the Indian Institute of Technology Bombay, they are now exploring possible process steam applications in India and performing field tests of a low-cost, completely passive solar autoclave for sterilizing medical equipment in rural communities.</p> <p><strong>Windows and more</strong></p> <p>Strobach has been pursuing another promising application for the transparent aerogel — in windows.
“In trying to make more transparent aerogels, we hit a regime in our fabrication process where we could make things smaller, but it didn’t result in a significant change in the transparency,” she says. “But it did make a significant change in the clarity,” a key feature for a window.</p> <p>The availability of an affordable, thermally insulating window would have several impacts, says Strobach. Every winter, windows in the United States lose enough energy to power over 50 million homes. That wasted energy costs the economy more than $32 billion a year and generates about 350 million tons of CO<sub>2</sub> — more than is emitted by 76 million cars. Consumers can choose high-efficiency triple-pane windows, but they’re so expensive that they’re not widely used.</p> <p>Analyses by Strobach and her colleagues showed that replacing the air gap in a conventional double-pane window with an aerogel pane could be the answer. The result could be a double-pane window that is 40 percent more insulating than traditional ones and 85 percent as insulating as today’s triple-pane windows — at less than half the price. Better still, the technology could be adopted quickly. The aerogel pane is designed to fit within the current two-pane manufacturing process that’s ubiquitous across the industry, so it could be manufactured at low cost on existing production lines with only minor changes.</p> <p>Guided by Zhao’s model, the researchers are continuing to improve the performance of their aerogels, with a special focus on increasing clarity while maintaining transparency and thermal insulation. In addition, they are considering other traditional low-cost systems that would — like the solar thermal and window technologies — benefit from sliding in an optimized aerogel to create a high-performance heat barrier that lets in abundant sunlight.</p> <p>This research was supported by the Full-Spectrum Optimized Conversion and Utilization of Sunlight program of the U.S. 
Department of Energy’s Advanced Research Projects Agency–Energy; the Solid-State Solar Thermal Energy Conversion Center, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences; and the MIT Tata Center for Technology and Design. Elise Strobach received funding from the National Science Foundation Graduate Research Fellowship Program. Lin Zhao PhD ’19 is now an optics design engineer at 3M in St. Paul, Minnesota.&nbsp;</p> <p><em>This article appears in the&nbsp;<a href="" target="_blank">Autumn 2019</a>&nbsp;issue of&nbsp;</em><a href="" target="_blank">Energy Futures</a><em>, the magazine of the MIT Energy Initiative.&nbsp;</em></p> MIT Professor Evelyn Wang (right), graduate student Elise Strobach (left), and their colleagues have been performing theoretical and experimental studies of low-cost silica aerogels optimized to serve as a transparent heat barrier in specific devices.Photo: Stuart DarschMIT Energy Initiative, Mechanical engineering, Energy, School of Engineering, Tata Center, Research, Nanoscience and nanotechnology, Materials Science and Engineering A material’s insulating properties can be tuned at will Most materials have a fixed ability to conduct heat, but applying voltage to this thin film changes its thermal properties drastically. Mon, 24 Feb 2020 14:50:23 -0500 David Chandler | MIT News Office <p>Materials whose electronic and magnetic properties can be significantly changed by applying electrical inputs form the backbone of all modern electronics. But achieving the same kind of tunable control over the thermal conductivity of any material has been an elusive quest.</p> <p>Now, a team of researchers at MIT has made a major leap forward. They have designed a long-sought device, which they refer to as an “electrical heat valve,” that can vary the thermal conductivity on demand.
They demonstrated that the material’s ability to conduct heat can be “tuned” by a factor of 10 at room temperature.</p> <p>This technique could potentially open the door to new technologies for controllable insulation in smart windows, smart walls, smart clothing, or even new ways of harvesting the energy of waste heat.&nbsp;</p> <p>The findings are reported today in the journal <em>Nature Materials</em>, in a paper by MIT professors Bilge Yildiz and Gang Chen, recent graduates Qiyang Lu PhD ’18 and Samuel Huberman PhD ’18, and six others at MIT and at Brookhaven National Laboratory.</p> <p>Thermal conductivity describes how well heat can transfer through a material. For example, it’s the reason you can easily pick up a hot frying pan with a wooden handle, because of wood’s low thermal conductivity, but you might get burned picking up a similar frying pan with a metal handle, which has high thermal conductivity.</p> <p>The researchers used a material called strontium cobalt oxide (SCO), which can be made in the form of thin films. Adding oxygen to SCO in a crystalline form called brownmillerite increased its thermal conductivity; adding hydrogen decreased it.</p> <p>The process of adding or removing oxygen and hydrogen can be controlled simply by varying a voltage applied to the material. In essence, the process is electrochemically driven. Overall, at room temperature, the researchers found this process provided a tenfold variation in the material’s heat conduction. Such an order-of-magnitude range of electrically controllable variation has never been seen in any material before, the researchers say.</p> <p>In most known materials, thermal conductivity is invariable — wood never conducts heat well, and metals never conduct heat poorly. As such, when the researchers found that adding certain atoms into the molecular structure of a material could actually increase its thermal conductivity, it was an unexpected result.
If anything, adding the extra atoms — or, more specifically, ions, atoms stripped of some electrons, or with excess electrons, to give them a net charge — should make conductivity worse (which, it turned out, was the case when adding hydrogen, but not oxygen).</p> <p>“It was a surprise to me when I saw the result,” Chen says. But after further studies of the system, he says, “now we have a better understanding” of why this unexpected phenomenon happens.</p> <p>It turns out that inserting oxygen ions into the structure of the brownmillerite SCO transforms it into what’s known as a perovskite structure — one that has an even more highly ordered structure than the original. “It goes from a low-symmetry structure to a high-symmetry one. It also reduces the amount of so-called oxygen vacancy defect sites. These together lead to its higher heat conduction,” Yildiz says.</p> <p>Heat is conducted readily through such highly ordered structures, while it tends to be scattered and dissipated by highly irregular atomic structures. Introducing hydrogen ions, by contrast, causes a more disordered structure.</p> <p>“We can introduce more order, which increases thermal conductivity, or we can introduce more disorder, which gives rise to lower conductivity. We could figure this out by performing computational modeling, in addition to our experiments,” Yildiz explains.</p> <p>While the thermal conductivity can be varied by about a factor of 10 at room temperature, at lower temperatures the variation is even greater, she adds.</p> <p>The new method makes it possible to continuously vary that degree of order, in both directions, simply by varying a voltage applied to the thin-film material. The material is either immersed in an ionic liquid (essentially a liquid salt) or in contact with a solid electrolyte that supplies either negative oxygen ions or positive hydrogen ions (protons) into the material when the voltage is turned on.
In the liquid electrolyte case, the source of oxygen and hydrogen is the electrolysis of water absorbed from the surrounding air.</p> <p>“What we have shown here is really a demonstration of the concept,” Yildiz explains. The fact that they require the use of a liquid electrolyte medium for the full range of hydrogenation and oxygenation makes this version of the system “not easily applicable to an all-solid-state device,” which would be the ultimate goal, she says. Further research will be needed to produce a more practical version. “We know there are solid-state electrolyte materials” that could theoretically be substituted for the liquids, she says. The team is continuing to explore these possibilities, and has demonstrated working devices with solid electrolytes as well.</p> <p>Chen says “there are many applications where you want to regulate heat flow.” For example, for energy storage in the form of heat, such as from a solar-thermal installation, it would be useful to have a container that could be highly insulating to retain the heat until it’s needed, but which then could be switched to be highly conductive when it comes time to retrieve that heat. “The holy grail would be something we could use for energy storage,” he says. “That’s the dream, but we’re not there yet.”</p> <p>But this finding is so new that there may also be a variety of other potential uses. This approach, Yildiz says, “could open up new applications we didn’t think of before.” And while the work was initially confined to the SCO material, “the concept is applicable to other materials, because we know we can oxygenate or hydrogenate a range of materials electrically, electrochemically,” she says.
In addition, although this research focused on changing the thermal properties, the same process actually has other effects as well, Chen says: “It not only changes thermal conductivity, but it also changes optical properties.”</p> <p>“This is a truly innovative and novel way for using ion insertion and extraction in solids to tune or switch thermal conductivity,” says Juergen Fleig, a professor of chemical technology and analytics at the University of Vienna, Austria, who was not involved in this work. “The measured effects (caused by two phase transitions) are not only quite large but also bi-directional, which is exciting. I’m also impressed that the processes work so well at room temperature, since such oxide materials are usually operated at much higher temperatures.”</p> <p>Yongjie Hu, an associate professor of mechanical and aerospace engineering at the University of California at Los Angeles, who also was not involved in this work, says “Active control over thermal transport is fundamentally challenging. This is a very exciting study and represents an important step to achieve the goal. It is the first report that has looked in detail at the structures and thermal properties of tri-state phases, and may open up new venues for thermal management and energy applications.”</p> <p>The research team also included Hantao Zhang, Qichen Song, Jayue Wang and Gulin Vardar at MIT, and Adrian Hunt and Iradwikanari Waluyo at Brookhaven National Laboratory in Upton, New York. The work was supported by the National Science Foundation and the U.S. 
Department of Energy.</p> Researchers found that strontium cobalt oxide (SCO) naturally occurs in an atomic configuration called brownmillerite (center), but when oxygen ions are added to it (right), it becomes more orderly and more heat conductive, and when hydrogen ions are added (left) it becomes less orderly and less heat conductive.Image: courtesy of the researchersMechanical engineering, Nuclear science and engineering, DMSE, Materials Science and Engineering, Research, Energy, Renewable energy, Nanoscience and nanotechnology, School of Engineering MIT continues to advance toward greenhouse gas reduction goals Investments in energy efficiency projects, sustainable design elements essential as campus transforms. Fri, 21 Feb 2020 14:20:01 -0500 Nicole Morell | Office of Sustainability <p>At MIT, making a better world often starts on campus. That’s why, as the Institute works to find solutions to complex global problems, MIT has taken important steps to grow and transform its physical campus: adding new capacity, capabilities, and facilities to better support student life, education, and research. But growing and transforming the campus relies on resource and energy use — use that can exacerbate the complex global problem of climate change. This raises the question: How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change?</p> <p>It’s a question — and a challenge — that MIT is committed to tackling.</p> <p><strong>Tracking toward 2030 goals</strong></p> <p>Guided by the <a href="" target="_blank">2015 Plan for Action on Climate Change</a>, MIT continues to work toward a goal of a minimum of 32 percent reduction in campus greenhouse gas emissions by 2030. 
As reported in the MIT Office of Sustainability’s (MITOS) <a href="!2019%20ghg%20emissions" target="_blank">climate action plan update</a>, campus greenhouse gas (GHG) emissions rose by 2 percent in 2019, in part due to a longer cooling season as well as the new MIT.nano facility coming fully online. Despite this, overall net emissions are 18 percent below the 2014 baseline, and MIT continues to track toward its 2030 goal.</p> <p>Joe Higgins, vice president for campus services and stewardship, is optimistic about MIT’s ability to not only meet but exceed this current goal. “With this growth [to campus], we are discovering unparalleled opportunities to work toward carbon neutrality by collaborating with key stakeholders across the Institute, tapping into the creativity of our faculty, students, and researchers, and partnering with industry experts. We are committed to making steady progress toward achieving our GHG reduction goal,” he says.</p> <p><strong>New growth to campus </strong></p> <p>This past year marked the first full year of operation for the new MIT.nano facility. This facility includes many energy-intensive labs that necessitate high ventilation rates to meet the requirements of a nanotechnology cleanroom fabrication laboratory. As a result, the facility’s energy demands and GHG emissions can be much higher than those of a traditional science building. In addition, this facility — among others — uses specialty research gases that can act as potent greenhouse gases. Still, the 214,000-square-foot building has a number of sustainable, high-energy-efficiency design features, including an innovative air filtering process to support cleanroom standards while minimizing energy use.
For these sustainable design elements, the facility was recognized with an International Institute for Sustainable Laboratories (I2SL) 2019 <a href="" target="_blank">Go Beyond Award</a>.</p> <p>In 2020, MIT.nano will be joined by new residential and multi-use buildings in both West Campus and Kendall Square, with the Vassar Street Residence and Kendall Square Sites 4 and 5 set to be completed. In keeping with MIT’s target for LEED v4 Gold Certification for new projects, these buildings were designed for high energy efficiency to minimize emissions and include a number of other sustainability measures, from green roofs to high-performance building envelopes. With new construction on campus, integrated design processes allow for sustainability and energy efficiency strategies to be adopted at the outset.</p> <p><strong>Energy efficiency on an established campus</strong></p> <p>For years, MIT has been keenly focused on increasing the energy efficiency and reducing emissions of its existing buildings, but as the campus grows, reducing emissions of current buildings through deep energy enhancements is an increasingly important part of offsetting emissions from new growth.</p> <p>To best accomplish this, the Department of Facilities — in close collaboration with the Office of Sustainability — has developed and rolled out a governance structure that relies on cross-functional teams to create new standards and policies, identify opportunities, develop projects, and assess progress relevant to building efficiency and emissions reduction. 
“Engaging across campus and across departments is essential to building out MIT’s full capacity to advance emissions reductions,” explains Director of Sustainability Julie Newman.</p> <p>These cross-functional teams — which include Campus Construction; Campus Services and Maintenance; Environment, Health, and Safety; Facilities Engineering; the Office of Sustainability; and Utilities — have focused on a number of strategies in the past year, including both building-wide and targeted energy strategies that have revealed priority candidates for energy retrofits to drive efficiency and minimize emissions.</p> <p>Carlo Fanone, director of facilities engineering, explains that “the cross-functional teams play an especially critical role at MIT, since we are a district energy campus. We supply most of our own energy, we distribute it, and we are the end users, so the teams represent a holistic approach that looks at all three of these elements equally — supply, distribution, and end-use — and considers energy solutions that address any or all of these elements.” Fanone notes that MIT has also identified 25 facilities on campus that have a high energy-use intensity and a high greenhouse gas emissions footprint. These 25 buildings account for up to 50 percent of energy consumption on the MIT campus. “Going forward,” Fanone says, “we are focusing our energy work on these buildings and on other energy enhancements that could have a measurable impact on the progress toward MIT’s 2030 goal.”</p> <p>Armed with these data, the Department of Facilities last year led retrofits for smart lighting and mechanical systems upgrades, as well as smart building management systems, in a number of buildings across campus. 
These building audits will continue to guide future projects focused on improving and optimizing energy elements such as heat recovery, lighting, and building systems controls.</p> <p>In addition to building-level efficiency improvements, MIT’s <a href="">Central Utilities Plant</a> upgrade is expected to contribute significantly to the reduction of on-campus emissions in upcoming years. The upgraded plant — set to be completed this year — will incorporate more efficient equipment and state-of-the-art controls. Between this upgrade, a fuel switch improvement made in 2015, and the building-level energy improvements, regulated pollutant emissions on campus are expected to fall by more than 25 percent and campus greenhouse gas emissions by 10 percent from 2014 levels, helping to offset a projected 10 percent increase in greenhouse gas emissions due to energy demands created by new growth.</p> <p><strong>Climate research and action on campus</strong></p> <p>As MIT explores energy efficiency opportunities, the campus itself plays an important role as an incubator for new ideas.</p> <p>MITOS director Julie Newman and professor of mechanical engineering Timothy Gutowski are once again teaching 11.S938 / 2.S999 (Solving for Carbon Neutrality at MIT) this semester. “The course, along with others that have emerged across campus, provides students the opportunity to devise ideas and solutions for real-world challenges while connecting them back to campus. It also gives the students a sense of ownership on this campus, sharing ideas to chart the course for carbon-neutral MIT,” Newman says.</p> <p>Also on campus, a new energy storage project is being developed to test the feasibility and scalability of using different battery storage technologies to redistribute electricity provided by variable renewable energy.
Funded by a Campus Sustainability Incubator Fund grant and led by Jessika Trancik, associate professor in the Institute for Data, Systems, and Society, the project aims to test software approaches to synchronizing energy demand and supply and evaluate the performance of different energy-storage technologies against these use cases. It has the benefit of connecting on-campus climate research with climate action. “Building this storage testbed, and testing technologies under real-world conditions, can inform new algorithms and battery technologies and act as a multiplier, so that the lessons we learn at MIT can be applied far beyond campus,” says Trancik of the project.</p> <p><strong>Supporting on-campus efforts</strong></p> <p>MIT’s work toward emissions reductions already extends beyond campus as the Institute continues to benefit from its 25-year commitment to purchase electricity generated through its <a href="" target="_self">Summit Farms Power Purchase Agreement</a> (PPA), which enabled the construction of a 650-acre, 60-megawatt solar farm in North Carolina. Through the purchase of 87,300 megawatt-hours of solar power, MIT was able to offset over 30,000 metric tons of greenhouse gas emissions from its on-campus operations in 2019.</p> <p>The Summit Farms PPA model has provided inspiration for similar projects around the country and has also demonstrated what MIT can accomplish through partnership.
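A quick back-of-the-envelope check on the Summit Farms figures quoted above: 87,300 megawatt-hours offsetting roughly 30,000 metric tons of CO<sub>2</sub> implies an avoided-emissions factor of about 0.34 tons per megawatt-hour. The division below is simple arithmetic on the published numbers, not part of MIT’s own reporting methodology.

```python
# Figures quoted above for MIT's 2019 Summit Farms power purchase agreement.
mwh_purchased = 87_300    # solar electricity bought under the PPA, in MWh
tons_co2_offset = 30_000  # greenhouse gas emissions offset, in metric tons

# Implied avoided-emissions factor, in metric tons of CO2 per MWh.
factor = tons_co2_offset / mwh_purchased
print(round(factor, 3))  # ~0.344 t CO2 per MWh
```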
MIT continues to explore the possibility of collaborating on similar large power-purchase agreements, possibly involving other local institutions and city governments.</p> <p><strong>Looking ahead</strong></p> <p>As the campus continues to work toward reducing emissions, Fanone notes that a comprehensive approach will help MIT address the challenge of growing a campus while reducing emissions.</p> <p>“District-level energy solutions, additional renewables, coupled with energy enhancements within our buildings, will allow MIT to offset growth and meet our 2030 GHG goals,” says Fanone. Adds Newman, “It’s an exciting time that MIT is now positioned to put the steps in place to respond to this global crisis at the local level.”</p> How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change?Photo: Maia Weinstock Sustainability, MIT.nano, Facilities, Campus buildings and architecture, Campus development, IDSS, Mechanical engineering, Climate change, Energy, Greenhouse gases, Community Maintaining the equipment that powers our world By organizing performance data and predicting problems, Tagup helps energy companies keep their equipment running. Wed, 12 Feb 2020 09:39:37 -0500 Zach Winn | MIT News Office <p>Most people only think about the systems that power their cities when something goes wrong. Unfortunately, many people in the San Francisco Bay Area had a lot to think about recently when their utility company began scheduled power outages in an attempt to prevent wildfires. The decision came after devastating fires last year were found to be the result of faulty equipment, including transformers.</p> <p>Transformers are the links between power plants, power transmission lines, and distribution networks. If something goes wrong with a transformer, entire power plants can go dark. 
To fix the problem, operators work around the clock to assess various components of the plant, consider disparate data sources, and decide what needs to be repaired or replaced.</p> <p>Power equipment maintenance and failure is such a far-reaching problem it’s difficult to attach a dollar sign to. Beyond the lost revenue of the plant, there are businesses that can’t operate, people stuck in elevators and subways, and schools that can’t open.</p> <p>Now the startup Tagup is working to modernize the maintenance of transformers and other industrial equipment. The company’s platform lets operators view all of their data streams in one place and use machine learning to estimate if and when components will fail.</p> <p>Founded by CEO Jon Garrity ’11 and CTO Will Vega-Brown ’11, SM ’13 —&nbsp;who recently completed his PhD program in MIT’s Department of Mechanical Engineering and will be graduating this month — Tagup is currently being used by energy companies to monitor approximately 60,000 pieces of equipment around North America and Europe. That includes transformers, offshore wind turbines, and reverse osmosis systems for water filtration, among other things.</p> <p>“Our mission is to use AI to make the machines that power the world safer, more reliable, and more efficient,” Garrity says.</p> <p><strong>A light bulb goes on</strong></p> <p>Vega-Brown and Garrity crossed paths in a number of ways at MIT over the years. As undergraduates, they took a few of the same courses, with Vega-Brown double majoring in mechanical engineering and physics and Garrity double majoring in economics and physics. 
They were also fraternity brothers and teammates on the football team.</p> <p>Garrity was&nbsp;first exposed&nbsp;to entrepreneurship as an undergraduate in MIT’s Energy Ventures class and in the Martin Trust Center for Entrepreneurship.&nbsp;Later, when Garrity returned to campus while attending Harvard Business School and Vega-Brown was pursuing his doctorate, they were again classmates in MIT’s New Enterprises course.</p> <p>Still, the founders didn’t think about starting a company until 2015, after Garrity had worked at GE Energy and Vega-Brown was well into his PhD work at MIT’s Computer Science and Artificial Intelligence Laboratory.</p> <p>At GE, Garrity discovered an intriguing business model through which critical assets like jet engines were leased by customers — in this case airlines — rather than purchased, and manufacturers held responsibility for remotely monitoring and maintaining them. The arrangement allowed GE and others to leverage their engineering expertise while the customers focused on their own industries.</p> <p>“When I worked at GE, I always wondered: Why isn’t this service available for any equipment type? The answer is economics,” Garrity says. “It is expensive to set up a remote monitoring center, to instrument the equipment in the field, to staff the 50 or more engineering subject matter experts, and to provide the support required to end customers. The cost of equipment failure, both in terms of business interruption and equipment breakdown, must be enormous to justify the high average fixed cost.”</p> <p>“We realized two things,” Garrity continues. “With the increasing availability of sensors and cloud infrastructure, we can dramatically reduce the cost [of monitoring critical assets] from the infrastructure and communications side.
And, with new machine-learning methods, we can increase the productivity of engineers who review equipment data manually.”</p> <p>That realization led to Tagup, though it would take time to prove the founders’ technology. “The problem with using AI for industrial applications is the lack of high-quality data,” Vega-Brown explains. “Many of our customers have giant datasets, but the information density in industrial data is often quite low. That means we need to be very careful in how we hunt for signal and validate our models, so that we can reliably make accurate forecasts and predictions.”</p> <p>The founders leveraged their MIT ties to get the company off the ground. They received guidance from MIT’s Venture Mentoring Service, and Tagup was in the first cohort of startups accepted into the MIT Industrial Liaison Program’s (ILP) STEX 25 accelerator, which connects high potential startups with members of industry. Tagup has since secured several customers through ILP, and those early partnerships helped the company train and validate some of its machine-learning models.</p> <p><strong>Making power more reliable</strong></p> <p>Tagup’s platform combines all of a customer’s equipment data into one sortable master list that displays the likelihood of each asset causing a disruption. Users can click on specific assets to see charts of historic data and trends that feed into Tagup’s models.</p> <p>The company doesn’t deploy any sensors of its own. Instead, it combines customers’ real-time sensor measurements with other data sources like maintenance records and machine parameters to improve its proprietary machine-learning models.</p> <p>The founders also began with a focused approach to building their system. Transformers were one of the first types of equipment they worked with, and they’ve expanded to other groups of assets gradually.</p> <p>Tagup’s first deployment was in August of 2016 with a power plant that faces the Charles River close to MIT’s campus. 
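The risk-ranked “master list” described above can be illustrated with a deliberately simple scheme: score each asset by how far its latest sensor reading deviates from its own history, then sort. This is a generic sketch for intuition only, not Tagup’s proprietary machine-learning models; the asset names and readings are made up.

```python
import statistics

def risk_rank(readings_by_asset):
    """Rank assets by a crude anomaly score: how many standard deviations
    the latest reading sits from that asset's historical mean.
    (A generic illustration -- not any vendor's actual model.)"""
    scores = {}
    for asset, readings in readings_by_asset.items():
        history, latest = readings[:-1], readings[-1]
        mu = statistics.mean(history)
        sigma = statistics.stdev(history)
        scores[asset] = abs(latest - mu) / sigma
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical transformer temperature logs (last value is the newest reading).
fleet = {
    "transformer-A": [70, 71, 69, 70, 71, 95],  # sudden spike -- likely trouble
    "transformer-B": [68, 70, 69, 71, 70, 70],  # steady operation
}
print(risk_rank(fleet))  # riskiest asset listed first
```

A production system would fold in maintenance records and machine parameters as well, as the article notes, but the ranking idea is the same.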
Just a few months after it was installed, Garrity was at a meeting overseas when he got a call from the plant manager about a transformer that had just gone offline unexpectedly. From his phone, Garrity was able to inspect real-time data from the transformer&nbsp;and give the manager the information he needed to restart the system. Garrity says it saved the plant about 26 hours of downtime and $150,000 in revenue.</p> <p>“These are really catastrophic events in terms of business outcomes,” Garrity says, noting transformer failures are estimated to cost $23 billion annually.</p> <p>Since then they’ve secured partnerships with several large utility companies, including National Grid and Consolidated Edison Company of New York.</p> <p>Down the line, Garrity and Vega-Brown are excited about using machine learning to control the operation of equipment. For example, a machine could manage itself in the same way an autonomous car can sense an obstacle and steer around it.</p> <p>Those capabilities have major implications for the systems that ensure the lights go on when we flip switches at night.</p> <p>“Where it gets really exciting is moving toward optimization,” Garrity says. Vega-Brown agrees, adding, “Enormous amounts of power and water are wasted because there aren't enough experts to tune the controllers on every industrial machine in the world. If we can use AI to capture some of the expert knowledge in an algorithm, we can cut inefficiency and improve safety at scale.”</p> Tagup's industrial equipment monitoring platform is currently being used by energy companies to monitor approximately 60,000 pieces of equipment around North America and Europe. 
That includes transformers, offshore wind turbines, and reverse osmosis systems for water filtration.Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Computer Science and Artificial Intelligence Laboratory (CSAIL), Mechanical engineering, School of Engineering, Machine learning, Energy, Artificial intelligence Researchers develop a roadmap for growth of new solar cells Starting with higher-value niche markets and then expanding could help perovskite-based solar panels become competitive with silicon. Thu, 06 Feb 2020 10:57:11 -0500 David L. Chandler | MIT News Office <p>Materials called perovskites show strong potential for a new generation of solar cells, but they’ve had trouble gaining traction in a market dominated by silicon-based solar cells. Now, a study by researchers at MIT and elsewhere outlines a roadmap for how this promising technology could move from the laboratory to a significant place in the global solar market.</p> <p>The “technoeconomic” analysis shows that by starting with higher-value niche markets and gradually expanding, solar panel manufacturers could avoid the very steep initial capital costs that would be required to make perovskite-based panels directly competitive with silicon for large utility-scale installations at the outset. 
Rather than a prohibitively expensive initial investment of hundreds of millions or even billions of dollars to build a plant for utility-scale production, the team found that starting with more specialized applications would require a more realistic initial capital investment on the order of $40 million.</p> <p>The results are described in a paper in the journal <em>Joule</em> by MIT postdoc Ian Mathews, research scientist Marius Peters, professor of mechanical engineering Tonio Buonassisi, and five others at MIT, Wellesley College, and Swift Solar Inc.</p> <p>Solar cells based on perovskites — a broad category of compounds characterized by a certain arrangement of their molecular structure — could provide dramatic improvements in solar installations. Their constituent materials are inexpensive, and they could be manufactured in a roll-to-roll process, like printing a newspaper, onto lightweight, flexible backing material. This could greatly reduce costs associated with transportation and installation, although they still require further work to improve their durability. Other promising new solar cell materials are also under development in labs around the world, but none has yet made inroads in the marketplace.</p> <p>“There have been a lot of new solar cell materials and companies launched over the years,” says Mathews, “and yet, despite that, silicon remains the dominant material in the industry and has been for decades.”</p> <p>Why is that the case? “People have always said that one of the things that holds new technologies back is that the expense of constructing large factories to actually produce these systems at scale is just too much,” he says.
“It’s difficult for a startup to cross what’s called ‘the valley of death,’ to raise the tens of millions of dollars required to get to the scale where this technology might be profitable in the wider solar energy industry.”</p> <p>But there are a variety of more specialized solar cell applications where the special qualities of perovskite-based solar cells, such as their light weight, flexibility, and potential for transparency, would provide a significant advantage, Mathews says. By focusing on these markets initially, a startup solar company could build up to scale gradually, leveraging the profits from the premium products to expand its production capabilities over time.</p> <p>Describing the literature on perovskite-based solar cells being developed in various labs, he says, “They’re claiming very low costs. But they’re claiming it once your factory reaches a certain scale. And I thought, we’ve seen this before — people claim a new photovoltaic material is going to be cheaper than all the rest and better than all the rest. That’s great, except we need to have a plan as to how we actually get the material and the technology to scale.”</p> <p>As a starting point, he says, “We took the approach that I haven’t really seen anyone else take: Let’s actually model the cost to manufacture these modules as a function of scale. So if you just have 10 people in a small factory, how much do you need to sell your solar panels at in order to be profitable? And once you reach scale, how cheap will your product become?”</p> <p>The analysis confirmed that trying to leap directly into the marketplace for rooftop solar or utility-scale solar installations would require very large upfront capital investment, he says. But “we looked at the prices people might get in the internet of things, or the market in building-integrated photovoltaics. People usually pay a higher price in these markets because they’re more of a specialized product. 
They’ll pay a little more if your product is flexible or if the module fits into a building envelope.” Other potential niche markets include self-powered microelectronics devices.</p> <p>Such applications would make the entry into the market feasible without needing massive capital investments. “If you do that, the amount you need to invest in your company is much, much less, on the order of a few million dollars instead of tens or hundreds of millions of dollars, and that allows you to more quickly develop a profitable company,” he says.</p> <p>“It’s a way for them to prove their technology, both technically and by actually building and selling a product and making sure it survives in the field,” Mathews says, “and also, just to prove that you can manufacture at a certain price point.”</p> <p>Already, there are a handful of startup companies working to try to bring perovskite solar cells to market, he points out, although none of them yet has an actual product for sale. The companies have taken different approaches, and some seem to be embarking on the kind of step-by-step growth approach outlined by this research, he says. “Probably the company that’s raised the most money is a company called Oxford PV, and they’re looking at tandem cells,” which incorporate both silicon and perovskite cells to improve overall efficiency. Another company is one started by Joel Jean PhD ’17 (who is also a co-author of this paper) and others, called Swift Solar, which is working on flexible perovskites. 
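</p>

<p>The scale-dependent break-even arithmetic described above can be written down directly. The sketch below is a toy model in the spirit of the study's approach; every number in it is a hypothetical illustration, not a figure from the paper:</p>

```python
def min_viable_price(capex, lifetime_years, modules_per_year, variable_cost):
    """Lowest selling price per module that covers amortized capital
    spending plus per-module production cost (no discounting; illustrative)."""
    amortized_capex = capex / (lifetime_years * modules_per_year)
    return amortized_capex + variable_cost

# Hypothetical niche-market line: $4M capex, 50,000 modules/year, $30/module
niche = min_viable_price(4e6, 10, 50_000, 30.0)       # $38 per module
# Hypothetical utility-scale plant: $400M capex, 5M modules/year, $10/module
utility = min_viable_price(4e8, 10, 5_000_000, 10.0)  # $18 per module
```

<p>The niche line must charge more per module, but its customers will pay the premium, and the capital at risk is two orders of magnitude smaller. That is the entry strategy the analysis quantifies.</p>

<p>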
And there’s a company called Saule Technologies, working on printable perovskites.</p> <p>Mathews says the kind of technoeconomic analysis the team used in its study could be applied to a wide variety of other new energy-related technologies, including rechargeable batteries and other storage systems, or other types of new solar cell materials.</p> <p>“There are many scientific papers and academic studies that look at how much it will cost to manufacture a technology once it’s at scale,” he says. “But very few people actually look at how much does it cost at very small scale, and what are the factors affecting economies of scale? And I think that can be done for many technologies, and it would help us accelerate how we get innovations from lab to market.”</p> <p>The research team also included MIT alumni Sarah Sofia PhD ’19 and Sin Cheng Siah PhD ’15, Wellesley College student Erica Ma, and former MIT postdoc Hannu Laine. The work was supported by the European Union’s Horizon 2020 research and innovation program, the Martin Family Society of Fellows for Sustainability, the U.S. Department of Energy, Shell, through the MIT Energy Initiative, and the Singapore-MIT Alliance for Research and Technology.</p> Perovskites, a family of materials defined by a particular kind of molecular structure as illustrated here, have great potential for new kinds of solar cells.
A new study from MIT shows how these materials could gain a foothold in the solar marketplace. Image: Christine Daniloff, MIT. Research, School of Engineering, Energy, Solar, Nanoscience and nanotechnology, Materials Science and Engineering, Mechanical engineering, National Science Foundation (NSF), Renewable energy, Alternative energy, Sustainability, Artificial intelligence, Machine learning, MIT Energy Initiative, Singapore-MIT Alliance for Research and Technology (SMART) Decarbonizing the making of consumer products Researchers are devising new methods of synthesizing chemicals used in goods from clothing, detergents, and antifreeze to pharmaceuticals and plastics. Wed, 05 Feb 2020 13:40:01 -0500 Nancy W. Stauffer | MIT Energy Initiative <p>Most efforts to reduce energy consumption and carbon emissions have focused on the transportation and residential sectors. Little attention has been paid to industrial manufacturing, even though it consumes more energy than either of those sectors and emits high levels of CO<sub>2</sub>&nbsp;in the process.</p> <p>To help address that situation, Assistant Professor&nbsp;<a href="">Karthish Manthiram</a>, postdoc Kyoungsuk Jin, graduate students Joseph H. Maalouf and Minju Chung, and their colleagues, all of the MIT Department of Chemical Engineering, have been devising new methods of synthesizing epoxides, a group of chemicals used in the manufacture of consumer goods ranging from polyester clothing, detergents, and antifreeze to pharmaceuticals and plastics.</p> <p>“We don’t think about the embedded energy and carbon dioxide footprint of a plastic bottle we’re using or the clothing we’re putting on,” says Manthiram.
“But epoxides are everywhere!”</p> <p>As solar and wind and storage technologies mature, it’s time to address what Manthiram calls the “hidden energy and carbon footprints of materials made from epoxides.” And the key, he argues, may be to perform epoxide synthesis using electricity from renewable sources along with specially designed catalysts and an unlikely starting material: water.</p> <p><strong>The challenge</strong></p> <p>Epoxides can be made from a variety of carbon-containing compounds known generically as olefins. But regardless of the olefin used, the conversion process generally produces high levels of CO<sub>2</sub>&nbsp;or has other serious drawbacks.</p> <p>To illustrate the problem, Manthiram describes processes now used to manufacture ethylene oxide, an epoxide used in making detergents, thickeners, solvents, plastics, and other consumer goods. Demand for ethylene oxide is so high that it has the fifth-largest CO<sub>2</sub>&nbsp;footprint of any chemical made today.</p> <p>The top panel of Figure 1 in the slideshow above illustrates one common synthesis process. The recipe is simple: Combine ethylene molecules and oxygen molecules, subject the mixture to high temperatures and pressures, and separate out the ethylene oxide that forms.</p> <p>However, those ethylene oxide molecules are accompanied by molecules of CO<sub>2</sub> — a problem, given the volume of ethylene oxide produced nationwide. In addition, the high temperatures and pressures required are generally produced by burning fossil fuels. And the conditions are so extreme that the reaction must take place in a massive pressure vessel. The capital investment required is high, so epoxides are generally produced in a central location and then transported long distances to the point of consumption.</p> <p>Another widely synthesized epoxide is propylene oxide, which is used in making a variety of products, including perfumes, plasticizers, detergents, and polyurethanes. 
In this case, the olefin — propylene — is combined with&nbsp;tert-butyl hydroperoxide, as illustrated in the bottom panel of Figure 1. An oxygen atom moves from the&nbsp;tert-butyl hydroperoxide molecule to the propylene to form the desired propylene oxide. The reaction conditions are somewhat less harsh than in ethylene oxide synthesis, but a side product must be dealt with. And while no CO<sub>2</sub>&nbsp;is created, the&nbsp;tert-butyl hydroperoxide is highly reactive, flammable, and toxic, so it must be handled with extreme care.</p> <p>In short, current methods of epoxide synthesis produce CO<sub>2</sub>, involve dangerous chemicals, require huge pressure vessels, or call for fossil fuel combustion. Manthiram and his team believed there must be a better way.</p> <p><strong>A new approach</strong></p> <p>The goal in epoxide synthesis is straightforward: Simply transfer an oxygen atom from a source molecule onto an olefin molecule. Manthiram and his lab came up with an idea: Could water be used as a sustainable and benign source of the needed oxygen atoms? The concept was counterintuitive. “Organic chemists would say that it shouldn’t be possible because water and olefins don’t react with one another,” he says. “But what if we use electricity to liberate the oxygen atoms in water? Electrochemistry causes interesting things to happen — and it’s at the heart of what our group does.”</p> <p>Using electricity to split water into oxygen and hydrogen is a standard practice called electrolysis. Usually, the goal of water electrolysis is to produce hydrogen gas for certain industrial applications or for use as a fuel. The oxygen is simply vented to the atmosphere.</p> <p>To Manthiram, that practice seemed wasteful. Why not do something useful with the oxygen? Making an epoxide seemed the perfect opportunity — and the benefits could be significant. Generating two valuable products instead of one would bring down the high cost of water electrolysis. 
Indeed, it might become a cheaper, carbon-free alternative to today’s usual practice of producing hydrogen from natural gas. The electricity needed for the process could be generated from renewable sources such as solar and wind. There wouldn’t be any hazardous reactants or undesirable byproducts involved. And there would be no need for massive, costly, and accident-prone pressure vessels. As a result, epoxides could be made at small-scale, modular facilities close to the place they’re going to be used — no need to transport, distribute, or store the chemicals produced.</p> <p><strong>Will the reaction work?</strong></p> <p>However, there was a chance that the proposed process might not work. During electrolysis, the oxygen atoms quickly pair up to form oxygen gas. The proposed process — illustrated in Figure 2 in the slideshow above — would require that some of the oxygen atoms move onto the olefin before they combine with one another.</p> <p>To investigate the feasibility of the process, Manthiram’s group performed a fundamental analysis to find out whether the reaction is thermodynamically favorable. Does the energy of the overall system shift to a lower state by making the move? In other words, is the product more stable than the reactants were?</p> <p>They started with a thermodynamic analysis of the proposed reaction at various combinations of temperature and pressure — the standard variables used in hydrocarbon processing. As an example, they again used ethylene oxide. The results, shown in Figure 3 in the slideshow above, were not encouraging. As the uniform blue in the left-hand figure shows, even at elevated temperatures and pressures, the conversion of ethylene and water to ethylene oxide plus hydrogen doesn’t happen — just as a chemist’s intuition would predict.</p> <p>But their proposal was to use voltage rather than pressure to drive the chemical reaction.
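</p>

<p>The voltage scale involved follows from the relation between a reaction's Gibbs free energy and the minimum cell voltage, E = ΔG/(nF). The quick estimate below uses approximate textbook standard Gibbs energies of formation supplied here for illustration; they are not values from the study:</p>

```python
F = 96485.0  # Faraday constant, C per mole of electrons

# Approximate standard Gibbs energies of formation, kJ/mol (textbook values)
dGf = {"C2H4": 68.4, "H2O": -237.1, "C2H4O": -13.1, "H2": 0.0}

# Overall reaction: C2H4 + H2O -> C2H4O + H2
dG_rxn = (dGf["C2H4O"] + dGf["H2"]) - (dGf["C2H4"] + dGf["H2O"])  # kJ/mol

n = 2  # electrons transferred per epoxide formed
E_min = dG_rxn * 1000 / (n * F)  # minimum cell voltage in volts, about 0.81 V
```

<p>The positive ΔG confirms that the reaction will not run on its own, and dividing by nF converts that energy barrier into the modest voltage needed to drive it electrochemically.</p>

<p>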
As the right-hand figure in Figure 3 shows, with that change, the outcome of the analysis looked more promising. Conversion of ethylene to ethylene oxide occurs at around 0.8 volts. So the process is viable at voltages below that of an everyday AA battery and at essentially room temperature.</p> <p>While a thermodynamic analysis can show that a reaction is possible, it doesn’t reveal how quickly it will occur, and reactions must be fast to be cost-effective. So the researchers needed to design a catalyst — a material that would speed up the reaction without getting consumed.</p> <p>Designing catalysts for specific electrochemical reactions is a focus of Manthiram’s group. For this reaction, they decided to start with manganese oxide, a material known to catalyze the water-splitting reaction. And to increase the catalyst’s effectiveness, they fabricated it into nanoparticles — a particle size that would maximize the surface area on which reactions can take place.</p> <p>Figure 4 in the slideshow above shows the special electrochemical cell they designed. Like all such cells, it has two electrodes — in this case, an anode where oxygen is transferred to make an olefin into an epoxide, and a cathode where hydrogen gas forms. The anode is made of carbon paper decorated with the nanoparticles of manganese oxide (shown in yellow). The cathode is made of platinum. Between the anode and the cathode is an electrolyte that ferries electrically charged ions between them. In this case, the electrolyte is a mixture of a solvent, water (the oxygen source), and the olefin.</p> <p>The magnified views in Figure 4 show what happens at the two electrodes. The right-hand view shows the olefin and water (H<sub>2</sub>O) molecules arriving at the anode surface. 
Encouraged by the catalyst, the water molecules break apart, sending two electrons (negatively charged particles, e<sup>–</sup>) into the anode and releasing two protons (positively charged hydrogen ions, H<sup>+</sup>) into the electrolyte. The leftover oxygen atom (O) joins the olefin molecule on the surface of the electrode, forming the desired epoxide molecule.</p> <p>The two liberated electrons travel through the anode and around the external circuit (shown in red), where they pass through a power source — ideally fueled by a renewable source such as wind or solar — and gain extra energy. When the two energized electrons reach the cathode, they join the two protons arriving in the electrolyte and — as shown in the left-hand magnified view — form hydrogen gas (H<sub>2</sub>), which exits the top of the cell.</p> <p><strong>Experimental results</strong></p> <p>Experiments with that setup have been encouraging. Thus far, the work has involved an olefin called cyclooctene, a well-known molecule that’s been widely used by people studying oxidation reactions. “Ethylene and the like are structurally more important and need to be solved, but we’re developing a foundation on a well-known molecule just to get us started,” says Manthiram.</p> <p>Results have already allayed a major concern. In one test, the researchers applied 3.8 volts across their mixture at room temperature, and, after four hours, about half of the cyclooctene had converted into its epoxide counterpart, cyclooctene oxide. “So that result confirms that we can split water to make hydrogen and oxygen and then intercept the oxygen atoms so they move onto the olefin and convert it into an epoxide,” says Manthiram.</p> <p>But how efficiently does the conversion happen? If this reaction is perfectly efficient, one oxygen atom will move onto an olefin for every two electrons that go into the anode. Thus, one epoxide molecule will form for each hydrogen molecule that forms.
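</p>

<p>That two-electrons-per-epoxide stoichiometry gives a direct way to turn measured quantities into an efficiency, often called the Faradaic efficiency. A minimal sketch, with hypothetical charge and product amounts chosen only to illustrate the arithmetic:</p>

```python
F = 96485.0  # Faraday constant, C per mole of electrons

def faradaic_efficiency(mol_product, charge_C, electrons_per_product=2):
    """Fraction of the charge passed that ended up in the desired product."""
    return mol_product * electrons_per_product * F / charge_C

# Hypothetical run: 1.5 mmol of epoxide recovered after 965 C of charge
print(round(faradaic_efficiency(1.5e-3, 965.0), 2))  # prints 0.3
```

<p>An efficiency of 1.0 would mean every electron pair delivered an oxygen atom to an olefin; the shortfall is charge consumed by side reactions such as oxygen evolution.</p>

<p>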
Using special equipment, the researchers counted the number of epoxide molecules formed for each pair of electrons passing through the external circuit to form hydrogen.</p> <p>That analysis showed that their conversion efficiency was 30 percent of the maximum theoretical efficiency. “That’s because the electrons are also doing other reactions — maybe making oxygen, for instance, or oxidizing some of the solvent,” says Manthiram. “But for us, 30 percent is a remarkable number for a new reaction that was previously unknown. For that to be the first step, we’re very happy about it.”</p> <p>Manthiram recognizes that the efficiency might need to be twice as high, or even higher, for the process to be commercially viable. “Techno-economics will ultimately guide where that number needs to be,” he says. “But I would say that the heart of our discoveries so far is the realization that there is a catalyst that can make this happen. That’s what has opened up everything that we’ve explored since the initial discovery.”</p> <p><strong>Encouraging results and future challenges</strong></p> <p>Manthiram is cautious not to overstate the potential implications of the work. “We know what the outcome is,” he says. “We put olefin in, and we get epoxide out.” But to optimize the conversion efficiency they need to know at a molecular level all the steps involved in that conversion. For example, does the electron transfer first by itself, or does it move with a proton at the same time? How does the catalyst bind the oxygen atom? And how does the oxygen atom transfer to the olefin on the surface of the catalyst?</p> <p>According to Manthiram, he and his group have hypothesized a reaction sequence, and several analytical techniques have provided a “handful of observables” that support it. But he admits that there is much more theoretical and experimental work to do to develop and validate a detailed mechanism that they can use to guide the optimization process. 
And then there are practical considerations, such as how to extract the epoxides from the electrochemical cell and how to scale up production.</p> <p>Manthiram believes that this work on epoxides is just “the tip of the iceberg” for his group. There are many other chemicals they might be able to make using voltage and specially designed catalysts. And while some attempts may not work, with each one they’ll learn more about how voltages and electrons and surfaces influence the outcome.</p> <p>He and his team predict that the face of the chemical industry will change dramatically in the years to come. The need to reduce CO<sub>2</sub>&nbsp;emissions and energy use is already pushing research on chemical manufacturing toward using electricity from renewable sources. And that electricity will increasingly be made at distributed sites. “If we have solar panels and wind turbines everywhere, why not do chemical synthesis close to where the power is generated, and make commercial products close to the communities that need them?” says Manthiram. The result will be a distributed, electrified, and decarbonized chemical industry — and a dramatic reduction in both energy use and CO<sub>2</sub>&nbsp;emissions.</p> <p>This research was supported by MIT’s Department of Chemical Engineering and by National Science Foundation Graduate Research Fellowships.</p> <p><em>This article appears in the&nbsp;<a href="" target="_blank">Autumn 2019&nbsp;</a>issue of&nbsp;</em><a href="" target="_blank">Energy Futures</a>, <em>the magazine of the MIT Energy Initiative.&nbsp;</em></p> Assistant Professor Karthish Manthiram (center), postdoc Kyoungsuk Jin (right), graduate student Joseph Maalouf (left), and their colleagues are working to help decarbonize the chemical industry by finding ways to drive critical chemical reactions using electricity from renewable sources. 
Photo: Stuart Darsch. MIT Energy Initiative, Chemical engineering, Research, Energy, Emissions, Manufacturing, School of Engineering, Industry, Carbon New electrode design may lead to more powerful batteries An MIT team has devised a lithium metal anode that could improve the longevity and energy density of future batteries. Mon, 03 Feb 2020 10:59:59 -0500 David L. Chandler | MIT News Office <p>New research by engineers at MIT and elsewhere could lead to batteries that can pack more power per pound and last longer, based on the long-sought goal of using pure lithium metal as one of the battery’s two electrodes, the anode.</p> <p>The new electrode concept comes from the laboratory of Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. It is described today in the journal <em>Nature</em>, in a paper co-authored by Yuming Chen and Ziqiang Wang at MIT, along with 11 others at MIT and in Hong Kong, Florida, and Texas.</p> <p>The design is part of a concept for developing safe all-solid-state batteries, dispensing with the liquid or polymer gel usually used as the electrolyte material between the battery’s two electrodes. An electrolyte allows lithium ions to travel back and forth during the charging and discharging cycles of the battery, and an all-solid version could be safer than liquid electrolytes, which have high volatility and have been the source of explosions in lithium batteries.</p> <p>“There has been a lot of work on solid-state batteries, with lithium metal electrodes and solid electrolytes,” Li says, but these efforts have faced a number of issues.</p> <p>One of the biggest problems is that when the battery is charged up, atoms accumulate inside the lithium metal, causing it to expand. The metal then shrinks again during discharge, as the battery is used.
These repeated changes in the metal’s dimensions, somewhat like the process of inhaling and exhaling, make it difficult for the solids to maintain constant contact, and tend to cause the solid electrolyte to fracture or detach.</p> <p>Another problem is that none of the proposed solid electrolytes are truly chemically stable while in contact with the highly reactive lithium metal, and they tend to degrade over time.</p> <p>Most attempts to overcome these problems have focused on designing solid electrolyte materials that are absolutely stable against lithium metal, which turns out to be difficult.&nbsp; Instead, Li and his team adopted an unusual design that utilizes two additional classes of solids, “mixed ionic-electronic conductors” (MIEC) and “electron and Li-ion insulators” (ELI), which are absolutely chemically stable in contact with lithium metal.</p> <p>The researchers developed a three-dimensional nanoarchitecture in the form of a honeycomb-like array of hexagonal MIEC tubes, partially infused with the solid lithium metal to form one electrode of the battery, but with extra space left inside each tube. When the lithium expands in the charging process, it flows into the empty space in the interior of the tubes, moving like a liquid even though it retains its solid crystalline structure. This flow, entirely confined inside the honeycomb structure, relieves the pressure from the expansion caused by charging, but without changing the electrode’s outer dimensions or the boundary between the electrode and electrolyte. The other material, the ELI, serves as a crucial mechanical binder between the MIEC walls and the solid electrolyte layer.</p> <p>“We designed this structure that gives us three-dimensional electrodes, like a honeycomb,” Li says. 
The void spaces in each tube of the structure allow the lithium to “creep backward” into the tubes, “and that way, it doesn’t build up stress to crack the solid electrolyte.” The expanding and contracting lithium inside these tubes moves in and out, sort of like a car engine’s pistons inside their cylinders. Because these structures are built at nanoscale dimensions (the tubes are about 100 to 300 nanometers in diameter, and tens of microns in height), the result is like “an engine with 10 billion pistons, with lithium metal as the working fluid,” Li says.</p> <p>Because the walls of these honeycomb-like structures are made of chemically stable MIEC, the lithium never loses electrical contact with the material, Li says. Thus, the whole solid battery can remain mechanically and chemically stable as it goes through its cycles of use. The team has proved the concept experimentally, putting a test device through 100 cycles of charging and discharging without producing any fracturing of the solids.</p> <p><img alt="" src="/sites/" style="width: 500px; height: 440px;" /></p> <p><em><span style="font-size:10px;">Reversible Li metal plating and stripping in a carbon tubule with&nbsp;an inner diameter of 100nm. Courtesy of the researchers.</span></em></p> <p>Li says that though many other groups are working on what they call solid batteries, most of those systems actually work better with some liquid electrolyte mixed with the solid electrolyte material. “But in our case,” he says, “it’s truly all solid. There is no liquid or gel in it of any kind.”&nbsp;&nbsp;</p> <p>The new system could lead to safe anodes that weigh only a quarter as much as their conventional counterparts in lithium-ion batteries, for the same amount of storage capacity. If combined with new concepts for lightweight versions of the other electrode, the cathode, this work could lead to substantial reductions in the overall weight of lithium-ion batteries. 
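</p>

<p>The weight savings can be sanity-checked against standard specific capacities, which are handbook values rather than numbers from the paper: lithium metal stores roughly 3,860 mAh per gram, versus roughly 372 mAh per gram for the graphite anodes it would replace.</p>

```python
def anode_mass_g(capacity_mAh, specific_capacity_mAh_per_g):
    """Active-material mass needed to deliver a given cell capacity."""
    return capacity_mAh / specific_capacity_mAh_per_g

cell_mAh = 3000  # a hypothetical 3 Ah cell
graphite_g = anode_mass_g(cell_mAh, 372)   # about 8.1 g of graphite
lithium_g = anode_mass_g(cell_mAh, 3860)   # about 0.8 g of lithium metal
```

<p>In this ideal limit the saving is roughly tenfold; the scaffold structure and the practical excess of lithium bring the realistic figure down to the roughly fourfold reduction cited here.</p>

<p>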
For example, the team hopes it could lead to cellphones that could be charged just once every three days, without making the phones any heavier or bulkier.</p> <p>One new concept for a lighter cathode was described by another team led by Li, in a paper that appeared last month in the journal <em>Nature Energy</em>, co-authored by MIT postdoc Zhi Zhu and graduate student Daiwei Yu. The material would reduce the use of nickel and cobalt, which are expensive and toxic and used in present-day cathodes. The new cathode does not rely only on the capacity contribution from these transition-metals in battery cycling. Instead, it would rely more on the redox capacity of oxygen, which is much lighter and more abundant. But in this process the oxygen ions become more mobile, which can cause them to escape from the cathode particles. The researchers used a high-temperature surface treatment with molten salt to produce a protective surface layer on particles of manganese- and lithium-rich metal-oxide, so the amount of oxygen loss is drastically reduced.</p> <p>Even though the surface layer is very thin, just 5 to 20 nanometers thick on a 400 nanometer-wide particle, it provides good protection for the underlying material. “It’s almost like immunization,” Li says, against the destructive effects of oxygen loss in batteries used at room temperature. The present versions provide at least a 50 percent improvement in the amount of energy that can be stored for a given weight, with much better cycling stability.</p> <p>The team has only built small lab-scale devices so far, but “I expect this can be scaled up very quickly,” Li says. 
The materials needed, mostly manganese, are significantly cheaper than the nickel or cobalt used by other systems, so these cathodes could cost as little as a fifth as much as the conventional versions.</p> <p>The research teams included researchers from MIT, Hong Kong Polytechnic University, the University of Central Florida, the University of Texas at Austin, and Brookhaven National Laboratory in Upton, New York. The work was supported by the National Science Foundation.</p> New research by engineers at MIT and elsewhere could lead to batteries that can pack more power per pound and last longer. Credit: MIT News. Research, Research Laboratory of Electronics, Nuclear science and engineering, School of Engineering, Materials Science and Engineering, DMSE, Batteries, Energy storage, Energy, National Science Foundation (NSF), Nanoscience and nanotechnology Powering the planet Fikile Brushett and his team are designing electrochemical technology to secure the planet’s energy future. Wed, 29 Jan 2020 09:00:00 -0500 Zain Humayun | School of Engineering <p>Before Fikile Brushett wanted to be an engineer, he wanted to be a soccer player. Today, however, Brushett is the Cecil and Ida Green Career Development Associate Professor in the Department of Chemical Engineering. Building 66 might not look much like a soccer field, but Brushett says the sport taught him a fundamental lesson that has proved invaluable in his scientific endeavors.<br /> <br /> “The teams that are successful are the teams that work together,” Brushett says.</p> <p>That philosophy inspires the Brushett Research Group, which draws on disciplines as diverse as organic chemistry and economics to create new electrochemical processes and devices.</p> <div class="cms-placeholder-content-video"></div> <p>As the world moves toward cleaner and more sustainable sources of energy, one of the major challenges is converting efficiently between electrical and chemical energy.
This is the challenge undertaken by Brushett and his colleagues, who are trying to push the frontiers of electrochemical technology.</p> <p>Brushett’s research focuses on ways to improve redox flow batteries, which are potentially low-cost alternatives to conventional batteries and a viable way of storing energy from renewable sources like wind and the sun. His group also explores means to recycle carbon dioxide — a greenhouse gas — into fuels and useful chemicals, and to extract energy from biomass.</p> <p>In his work, Brushett is helping to transform every stage of the energy pipeline: from unlocking the potential of solar and wind energy to replacing combustion engines with fuel cells, and even enabling greener industrial processes.</p> <p>“A lot of times, electrochemical technologies work in some areas, but we'd like them to work much more broadly than we've asked them to do beforehand,” Brushett says. “A lot of that is now driving the need for new innovation in the area, and that's where we come in.”</p> Fikile Brushett is the Cecil and Ida Green Career Development Associate Professor in the Department of Chemical Engineering. Photo: Lillie Paquette/School of Engineering. School of Engineering, Chemical engineering, Energy, Energy storage, Climate change, Batteries, Profile, Faculty, Sustainability, Chemistry, electronics For cheaper solar cells, thinner really is better Solar panel costs have dropped lately, but slimming down silicon wafers could lead to even lower costs and faster industry expansion. Sun, 26 Jan 2020 23:59:59 -0500 David L. Chandler | MIT News Office <p>Costs of solar panels have plummeted over the last several years, leading to rates of solar installations far greater than most analysts had expected.
But with most of the potential areas for cost savings already pushed to the extreme, further cost reductions are becoming more challenging to find.</p> <p>Now, researchers at MIT and at the National Renewable Energy Laboratory (NREL) have outlined a pathway to slashing costs further, this time by slimming down the silicon cells themselves.</p> <p>Thinner silicon cells have been explored before, especially around a dozen years ago when the cost of silicon peaked because of supply shortages. But this approach suffered from some difficulties: The thin silicon wafers were too brittle and fragile, leading to unacceptable levels of losses during the manufacturing process, and they had lower efficiency. The researchers say there are now ways to begin addressing these challenges through the use of better handling equipment and some recent developments in solar cell architecture.</p> <p>The new findings are detailed in a paper in the journal <em>Energy and Environmental Science</em>, co-authored by MIT postdoc Zhe Liu, professor of mechanical engineering Tonio Buonassisi, and five others at MIT and NREL.</p> <p>The researchers describe their approach as “technoeconomic,” stressing that at this point economic considerations are as crucial as the technological ones in achieving further improvements in affordability of solar panels.</p> <p>Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year, the researchers say. 
Today’s silicon photovoltaic cells, the heart of these solar panels, are made from wafers of silicon that are 160 micrometers thick, but with improved handling methods, the researchers propose this could be shaved down to 100 micrometers — and eventually as little as 40 micrometers or less, which would only require one-fourth as much silicon for a given size of panel.</p> <p>That could not only reduce the cost of the individual panels, they say, but even more importantly it could allow for rapid expansion of solar panel manufacturing capacity. That’s because the expansion can be constrained by limits on how fast new plants can be built to produce the silicon crystal ingots that are then sliced like salami to make the wafers. These plants, which are generally separate from the solar cell manufacturing plants themselves, tend to be capital-intensive and time-consuming to build, which could lead to a bottleneck in the rate of expansion of solar panel production. Reducing wafer thickness could potentially alleviate that problem, the researchers say.</p> <p>The study looked at the efficiency levels of four variations of solar cell architecture, including PERC (passivated emitter and rear contact) cells and other advanced high-efficiency technologies, comparing their outputs at different thickness levels.
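</p>

<p>Because the silicon mass in a panel scales linearly with wafer thickness, the quoted savings follow from simple geometry. The check below uses the handbook density of crystalline silicon, 2.33 g/cm³, with the thicknesses given above:</p>

```python
SILICON_DENSITY = 2.33  # g/cm^3, handbook value for crystalline silicon

def silicon_grams_per_m2(thickness_um):
    """Silicon mass per square meter of wafer at a given thickness."""
    thickness_cm = thickness_um * 1e-4
    return SILICON_DENSITY * thickness_cm * 10_000  # 10,000 cm^2 per m^2

today = silicon_grams_per_m2(160)  # about 373 g of silicon per m^2
thin = silicon_grams_per_m2(40)    # about 93 g per m^2
print(round(today / thin, 2))      # prints 4.0
```

<p>Even the intermediate step to 100 micrometers would trim the silicon per panel by nearly 40 percent.</p>

<p>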
The team found there was in fact little decline in performance down to thicknesses as low as 40 micrometers, using today’s improved manufacturing processes.</p> <p>“We see that there’s this area (of the graphs of efficiency versus thickness) where the efficiency is flat,” Liu says, “and so that’s the region where you could potentially save some money.” Because of these advances in cell architecture, he says, “we really started to see that it was time to revisit the cost benefits.”</p> <p>Changing over the huge panel-manufacturing plants to adapt to the thinner wafers will be a time-consuming and expensive process, but the analysis shows the benefits can far outweigh the costs, Liu says. It will take time to develop the necessary equipment and procedures to allow for the thinner material, but with existing technology, he says, “it should be relatively simple to go down to 100 micrometers,” which would already provide some significant savings. Further improvements in technology such as better detection of microcracks before they grow could help reduce thicknesses further.</p> <p>In the future, the thickness could potentially be reduced to as little as 15 micrometers, he says. New technologies that grow thin wafers of silicon crystal directly rather than slicing them from a larger cylinder could help enable such further thinning, he says.</p> <p>Development of thin silicon has received little attention in recent years because the price of silicon has declined from its earlier peak. But, because of cost reductions that have already taken place in solar cell efficiency and other parts of the solar panel manufacturing process and supply chain, the cost of the silicon is once again a factor that can make a difference, he says.</p> <p>“Efficiency can only go up by a few percent. So if you want to get further improvements, thickness is the way to go,” Buonassisi says. 
But the conversion will require large capital investments for full-scale deployment.</p> <p>The purpose of this study, he says, is to provide a roadmap for those who may be planning expansion in solar manufacturing technologies. By making the path “concrete and tangible,” he says, it may help companies incorporate this in their planning. “There is a path,” he says. “It’s not easy, but there is a path. And for the first movers, the advantage is significant.”</p> <p>What may be required, he says, is for the different key players in the industry to get together and lay out a specific set of steps forward and agreed-upon standards, as the integrated circuit industry did early on to enable the explosive growth of that industry. “That would be truly transformative,” he says.</p> <p>Andre Augusto, an associate research scientist at Arizona State University who was not connected with this research, says “refining silicon and wafer manufacturing is the most capital-expense (capex) demanding part of the process of manufacturing solar panels. So in a scenario of fast expansion, the wafer supply can become an issue. Going thin solves this problem in part as you can manufacture more wafers per machine without increasing significantly the capex.” He adds that “thinner wafers may deliver performance advantages in certain climates,” performing better in warmer conditions.</p> <p>Renewable energy analyst Gregory Wilson of Gregory Wilson Consulting, who was not associated with this work, says “The impact of reducing the amount of silicon used in mainstream cells would be very significant, as the paper points out. The most obvious gain is in the total amount of capital required to scale the PV industry to the multi-terawatt scale required by the climate change problem. Another benefit is in the amount of energy required to produce silicon PV panels. 
This is because the polysilicon production and ingot growth processes that are required for the production of high efficiency cells are very energy intensive.”</p> <p>Wilson adds, “Major PV cell and module manufacturers need to hear from credible groups like Prof. Buonassisi’s at MIT, since they will make this shift when they can clearly see the economic benefits.”</p> <p>The team also included Sarah Sofia, Hannu Laine, Sarah Wieghold and Marius Peters at MIT and Michael Woodhouse at NREL. The work was partly supported by the U.S. Department of Energy, the Singapore-MIT Alliance for Research and Technology (SMART),&nbsp;and by a Total Energy Fellowship through the MIT Energy Initiative.</p> Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year.Research, School of Engineering, Energy, Solar, Nanoscience and nanotechnology, Materials Science and Engineering, Mechanical engineering, Renewable energy, Alternative energy, Sustainability, MIT Energy Initiative, Climate change, Department of Energy (DoE), Singapore-MIT Alliance for Research and Technology (SMART) Understanding combustion Assistant Professor Sili Deng is on a quest to understand the chemistry involved in combustion and develop strategies to make it cleaner. Thu, 23 Jan 2020 15:15:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>Much of the conversation around energy sustainability is dominated by clean-energy technologies like wind, solar, and thermal. 
However, with roughly 80 percent of energy use in the United States coming from fossil fuels, combustion remains the dominant method of energy conversion for power generation, electricity, and transportation.</p> <p>“People think of combustion as a dirty technology, but it’s currently the most feasible way to produce electricity and power,” explains Sili Deng, assistant professor of mechanical engineering and the Brit (1961) &amp; Alex (1949) d’Arbeloff Career Development Professor.</p> <p>Deng is working toward understanding the chemistry and flow that interact in combustion in an effort to improve technologies for current or near-future energy conversion applications. “My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” she adds.</p> <p>Deng’s interest in combustion stemmed from a conversation she had with a friend before applying to Tsinghua University for undergraduate study. “One day, I was talking about my dream school and major with a friend and she said ‘What if you could increase the efficiency of energy utilization by just 1 percent?’” recalls Deng. “Considering how much energy we use globally each year, you could make a huge difference.”</p> <p>This discussion inspired Deng to study combustion. After graduating with a bachelor’s degree in thermal engineering, she received her master’s and PhD from Princeton University. At Princeton, Deng focused on how the coupling effects of chemistry and flow influence combustion and emissions.</p> <p>“The details of combustion are much more complicated than our general understanding of fuel and air combining to form water, carbon dioxide, and heat,” Deng explains. 
“There are hundreds of chemical species and thousands of reactions involved, depending on the type of fuel, fuel-air mixing, and flow dynamics.”</p> <p>Along with her team at the <a href="" target="_blank">Deng Energy and Nanotechnology Group at MIT</a>, she hopes that understanding chemically reacting flow in the combustion process will result in new strategies to control the process of combustion and reduce or eliminate the soot generated in combustion.&nbsp;</p> <p>“My group utilizes both experimental and computational tools to build a fundamental understanding of the combustion process that can guide the design of combustors for high performance and low emissions,” Deng adds. Her team is also utilizing artificial intelligence algorithms along with physical models to predict — and hopefully control — the combustion process.</p> <p>By understanding and controlling the combustion process, Deng is uncovering more about how soot, combustion’s most notorious by-product, is created.</p> <p>“Once soot leaves the site of combustion, it is difficult to contain. There isn’t much you can do to prevent haze or smog from developing,” she explains.</p> <p>The production of soot starts within the flame itself — even on a small scale, such as burning a candle. As Deng describes it, a “chemical soup” of hydrocarbons, vapor, melting wax, and oxygen interacts to create soot particles visible as the yellow glow of a candle flame.</p> <p>“By understanding exactly how this soot is generated within a flame, we’re hoping to develop methods to reduce or eliminate it before it gets out of the combustion channel,” says Deng.</p> <p>Deng’s research on flames extends beyond the formation of soot. By developing a technology called flame synthesis, she is working on producing nanomaterials that can be used for renewable energy applications.</p> <p>The process of synthesizing nanomaterials via flames shares similarities with soot formation in flames. 
Instead of generating the byproducts of incomplete combustion, certain precursors are added to the flame, which result in the production of nanomaterials. One common example of using flame synthesis to create nanomaterials is the production of titanium dioxide, a white pigment often used in paint and sunscreen.&nbsp;</p> <p>“I’m hoping to create a similar type of reaction to develop new materials that can be used for things like renewable energy, water treatment, pollution reduction, and catalysts,” she explains. Her team has been tweaking the various parameters of combustion — from temperature to the type of fuel used — to create nanomaterials that could eventually be used to clean up other, more nefarious byproducts created in combustion.</p> <p>To be successful in her quest to make combustion cleaner, Deng acknowledges that collaboration will be key. “There’s an opportunity to combine the fundamental research on combustion that my lab is doing with the materials, devices, and products being developed across areas like materials science and automotive engineering,” she says.</p> <p>Since we may be decades away from transitioning to a grid powered by renewable resources like solar, wave, and wind, Deng is helping carve out an important role for fellow combustion scientists.</p> <p>“While clean-energy technologies are continuing to be developed, it’s crucial that we continue to work toward finding ways to improve combustion technologies,” she adds.</p> “My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” says Sili Deng, assistant professor of mechanical engineering at MIT.Photo: Tony PulsoneMechanical engineering, School of Engineering, Energy, Environment, Faculty, Oil and gas, Carbon, Emissions, Profile, Sustainability, Nanoscience and nanotechnology Reducing risk, empowering resilience to disruptive global change Workshop highlights how MIT research can guide adaptation at local, regional, and national scales. 
Thu, 23 Jan 2020 15:15:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>Five-hundred-year floods. Persistent droughts and heat waves. More devastating wildfires. As these and other planetary perils become more commonplace, they pose serious risks to natural, managed, and built environments around the world. Assessing the magnitude of these risks over multiple decades and identifying strategies to prepare for them at local, regional, and national scales will be essential to making societies and economies more resilient and sustainable.</p> <p>With that goal in mind, the <a href="">MIT Joint Program on the Science and Policy of Global Change</a> launched its Adaptation-at-Scale initiative (<a href="">AS-MIT</a>) in 2019, which seeks evidence-based solutions to global change-driven risks. Using its Integrated Global System Modeling (<a href="">IGSM</a>) framework, as well as a suite of resource and infrastructure assessment models, AS-MIT targets, diagnoses, and projects changing risks to life-sustaining resources under impending societal and environmental stressors, and evaluates the effectiveness of potential risk-reduction measures.</p> <p>In pursuit of these objectives, MIT Joint Program researchers are collaborating with other adaptation-at-scale thought leaders across MIT. And at a conference on Jan. 10 on the MIT campus, they showcased some of their most promising efforts in this space. 
Part of a series of MIT Joint Program workshops aimed at providing decision-makers with actionable information on key global change concerns, the conference covered risks and resilience strategies for food, energy, and water systems; urban-scale solutions; predicting the evolving risk of extreme events; and decision-making and early warning capabilities — and featured a lunch seminar on renewable energy for resilience and adaptation by an expert from the National Renewable Energy Laboratory.</p> <p><strong>Food, energy, and water systems</strong></p> <p><a href="">Greg Sixt</a>, research manager in the Abdul Latif Jameel Water and Food Systems Lab (<a href="">J-WAFS</a>), described the work of J-WAFS’ Alliance for Climate Change and Food Systems Research, <a href="">an emerging alliance</a> of premier research institutions and key stakeholders to collaboratively frame challenges, identify research paths, and fund and pursue convergence research on building more resilience across the food system, from production to supply chains to consumption.</p> <p>MIT Joint Program Deputy Director <a href="">Sergey Paltsev</a>, also a senior research scientist at the MIT Energy Initiative (MITEI), explored climate-related risks to energy systems. He highlighted physical risks, such as potential impacts of permafrost degradation on roads, airports, natural gas pipelines, and other infrastructure in the Arctic, and of an increase in extreme temperature, wind, and icing events on power distribution infrastructure in the U.S. Northeast.</p> <p>“No matter what we do in terms of climate mitigation, the physical risks will remain the same for decades because of inertia in the climate system,” says Paltsev. “Even with very aggressive emissions-reduction policies, decision-makers must take physical risks into consideration.”</p> <p>They must also account for <a href="">transition risks</a> — long-term financial and investment risks to fossil fuel infrastructure posed by climate policies. 
Paltsev showed how <a href="">energy scenarios</a> developed at MIT and elsewhere can enable decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy.</p> <p>MIT Joint Program Deputy Director <a href="">Adam Schlosser</a> discussed MIT Joint Program (JP) efforts to assess risks to, and optimal adaptation strategies for, water systems subject to drought, flooding, and other challenges impacting water availability and quality posed by a changing environment. Schlosser noted that in some cases, efficiency improvements can go a long way in meeting these challenges, as shown in <a href="">one JP study</a> that found improving municipal and industrial efficiencies was just as effective as climate mitigation in confronting projected water shortages in Asia. Finally, he introduced a new JP <a href="">project</a> funded by the U.S. Department of Energy that will explore how in U.S. floodplains, foresight could increase resilience to future forces, stressors, and disturbances imposed by nature and human activity.</p> <p>“In assessing how we avoid and adapt to risk, we need to think about all plausible futures,” says Schlosser. “Our approach is to take all [of those] futures, put them into our [integrated global] system of human and natural systems, and think about how we use water optimally.”</p> <p><strong>Urban-scale solutions</strong></p> <p><a href="">Brian Goldberg</a>, assistant director of the MIT Office of Sustainability, detailed MIT’s plans to sustain MIT campus infrastructure amid intensifying climate disruptions and impacts over the next 100 years. 
Toward that end, the <a href="">MIT Climate Resiliency Committee</a> is working to shore up multiple, interdependent layers of resiliency that include the campus site, infrastructure and utilities, buildings, and community, and creating modeling tools to evaluate flood risk.</p> <p>“We’re using the campus as a testbed to develop solutions, advance research, and ultimately grow a more climate-resilient campus,” says Goldberg. “Perhaps the models we develop and engage with at the campus scale can then influence the city or region scale and then be shared globally.”</p> <p>MIT Joint Program/MITEI Research Scientist <a href="">Mei Yuan</a> described an upcoming study to assess the potential of the building sector to reduce its greenhouse gas emissions through more energy-efficient design and intelligent telecommunications — and thereby lower climate-related risk to urban infrastructure. Yuan aims to achieve this objective by linking the program’s U.S. Regional Energy Policy (<a href="">USREP</a>) model with a detailed building sector model that explicitly represents energy-consuming technologies (e.g., for heating, cooling, lighting, and household appliances).&nbsp;</p> <p>“Incorporating this building sector model within an integrated framework that combines USREP with an hourly electricity dispatch model (EleMod) could enable us to simulate the supply and demand of electricity at finer spatial and temporal resolution,” says Yuan, “and thereby better understand how the power sector will need to adapt to future energy needs.”</p> <p><strong>Renewable energy for resilience and adaptation</strong></p> <p><a href="">Jill Engel-Cox</a>, director of NREL’s Joint Institute for Strategic Energy Analysis, presented several promising adaptation measures for energy resilience that incorporate renewables. 
These include placing critical power lines underground; increasing demand-side energy efficiency to decrease energy consumption and power system instability; diversifying generation so electric power distribution can be sustained when one power source is down; deploying distributed generation (e.g., photovoltaics, small wind turbines, energy storage systems) so that if one part of the grid is disconnected, other parts continue to function; and implementing smart grids and micro-grids.</p> <p>“Adaptation and resilience measures tend to be very localized,” says Engel-Cox. “So we need to come up with strategies that will work for particular locations and risks.”</p> <p>These include storm-proofing photovoltaics and wind turbine systems, deploying hydropower with greater flexibility to account for variability in water flow, incorporating renewables in planning for natural gas system outages, and co-locating wind and PV systems on agricultural land.</p> <p><strong>Extreme events</strong></p> <p>MIT Joint Program Principal Research Scientist <a href="">Xiang Gao</a> showed how a <a href="">statistical method</a> that she developed has produced predictions of the risk of <a href="">heavy precipitation</a>, heat waves, and other extreme weather events that are more consistent with observations than those of conventional climate models. Known as the “analog method,” the technique detects extreme events based on large-scale atmospheric patterns associated with such events.</p> <p>“Improved prediction of extreme weather events enabled by the analog method offers a promising pathway to provide meaningful climate mitigation and adaptation actions,” says Gao.</p> <p><a href="">Sai Ravela</a>, a principal research scientist at MIT’s Department of Earth, Atmospheric and Planetary Sciences, showed how artificial intelligence could be exploited to predict extreme events. 
Key methods that Ravela and his research group are developing combine climate statistics, atmospheric modeling, and physics to assess the risk of future extreme events. The group’s long-range predictions draw upon deep learning and small-sample statistics using local sensor data and global oscillations. Applying these methods, Ravela and his co-investigators are developing a model to assess the risk of extreme weather events to infrastructure, such as that of wind and flooding damage to a nuclear plant or city.&nbsp;</p> <p><strong>Decision-making and early warning capabilities</strong></p> <p>MIT Joint Program/MITEI Research Scientist <a href="">Jennifer Morris</a> explored uncertainty and decision-making for adaptation to global change-driven challenges ranging from coastal adaptation to grid resilience. Morris described the MIT Joint Program approach as a four-step process: quantify stressors and influences, evaluate vulnerabilities, identify response options and transition pathways, and develop decision-making frameworks. She then used the following Q&amp;A to show how this four-pronged approach can be applied to the <a href="">case of grid resilience</a>.</p> <p><strong>Q:</strong> Do human-induced changes in damaging weather events present a rising, widespread risk of premature failure in the nation’s power grid — and, if so, what are the cost-effective near-term actions to hedge against that risk?</p> <p><strong>A:</strong> First, identify critical junctures within the power grid, starting with large power transformers (LPTs). Next, use the analog approach (described above) to construct a distribution of expected changes in extreme heat wave events that would be damaging to LPTs under different climate scenarios. Next, use energy-economic and electric power models to assess electricity demand and economic costs related to LPT failure. 
And finally, make decisions under uncertainty to identify near-term actions to mitigate risks of LPT failure (e.g., upgrading or replacing LPTs).</p> <p><a href="">John Aldridge</a>, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, highlighted the group’s efforts to combine advanced remote sensing and decision support systems to assess the impacts of natural disasters, support hurricane evacuation decision-making, and guide proactive climate adaptation and resilience. Lincoln Laboratory is collaborating with MIT campus partners to develop the Climate Resilience Early Warning System Network (<a href="" target="_blank">CREWSNET</a>), which draws on MIT strengths in cutting-edge climate forecasting, impact models, and applied decision support tools to empower climate resilience and adaptation on a global scale.</p> <p>“From extreme event prediction to scenario-based risk analysis, this workshop showcased the core capabilities of the joint program and its partners across MIT that can&nbsp;advance scalable&nbsp;solutions to adaptation challenges across&nbsp;the globe,” says Adam&nbsp;Schlosser, who coordinated the day’s presentations.&nbsp;“Applying leading-edge modeling tools, our research is well-positioned to provide decision-makers with guidance and strategies to build a more resilient future.”</p> An Army Corps of Engineers flood model depicting the Ala Wai watershed after a 100-year rain event. The owner of a local design firm described the Ala Wai Flood Control Project as the largest climate impact project in Hawaii's modern history.Image: U.S. 
Army Corps of Engineers-Honolulu DistrictJoint Program on the Science and Policy of Global Change, Abdul Latif Jameel World Water and Food Security Lab (J-WAFS), MIT Energy Initiative, EAPS, Lincoln Laboratory, Energy, Greenhouse gases, Renewable energy, Climate, Climate change, Environment, Policy, Emissions, Pollution Students propose plans for a carbon-neutral campus Students in class 2.S999 (Solving for Carbon Neutrality at MIT) are charged with developing plans to make MIT’s campus carbon neutral by 2060. Fri, 17 Jan 2020 09:50:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>While so many faculty and researchers at MIT are developing technologies to reduce carbon emissions and increase energy sustainability, one class puts the power in students’ hands.</p> <p>In 2.S999 (Solving for Carbon Neutrality at MIT), teams of students are tasked with developing a plan to achieve carbon neutrality on MIT’s campus by 2060. “It’s a ‘roll up your sleeves and solve a real problem’ kind of class,” says Timothy Gutowski, professor of mechanical engineering and co-instructor for the class.</p> <p>In nearly every class, students hear from guest lecturers who offer their own expert views on energy sustainability and carbon emissions. In addition to faculty and staff from across MIT, guest lecturers include local government officials, industry specialists, and economists. 
Whether it’s the science and ethics behind climate change, the evolution of the electric grid, or the development of MIT’s upgraded Central Utilities Plant, these experts introduce students to considerations on a campus, regional, national, and global level.</p> <p>“It’s essential to expose students to these different perspectives so they understand the complexity and the multidisciplinary nature of this challenge,” says Julie Newman, director of MIT’s Office of Sustainability and co-instructor.</p> <p>In one class, students get the opportunity to embody different perspectives through a debate about the installation of an offshore wind farm near a small coastal town. Each student is given a particular role to play in a debate. Caroline Boone, a junior studying mechanical engineering, played the role of a beachfront property owner who objected to the installation.</p> <p>“It was a really good way of grasping how those negotiations happen in the real world,” recalls Boone. “The fact of the matter is, you’re going to have to work with groups who have their own interests — that requires compromise and negotiation.”</p> <p>Armed with these negotiation skills, along with insights from different experts, students are divided into teams and charged with developing a strategy that outlines year-by-year how MIT can achieve carbon neutrality by 2060. “The final project uses the campus as a test bed for engaging and exposing students to the complexity of solving for these global issues in their own backyard,” Newman adds.</p> <p>Student teams took a number of approaches in their strategies to achieve carbon neutrality. 
Tom Hubschman’s team focused on the immediate impact MIT could have through power purchase agreements — also known as PPAs.</p> <p>“Our team quickly realized that, given the harsh New England environment and the limited space on campus, building a giant solar or wind farm in the middle of Cambridge wasn’t a sound strategy,” says Hubschman, a mechanical engineering graduate student. Instead, his team built their strategy around replicating MIT’s current PPA that has resulted in the construction of a 650-acre solar farm in North Carolina.&nbsp;</p> <p>Boone’s team, meanwhile, took a different approach, developing a plan that didn’t include PPAs. “Our team was a bit contrarian in not having any PPAs, but we thought it was important to have that contrasting perspective,” she explains. Boone’s role within her team was to examine building energy use on campus. One takeaway from her research was the need for better controls and sensors to ensure campus buildings are running more efficiently.</p> <p>Regardless of their approach, each team had to deal with a level of uncertainty with regard to the efficiency of New England’s electric grid. “Right now, the electricity produced by MIT’s own power plant emits less carbon than the current grid,” adds Gutowski. “But the question is, as new regulations are put in place and new technologies are developed, when will there be a crossover in the grid emitting less carbon than our own power plant?” Students have to build this uncertainty into the predictive modeling for their proposed solutions.&nbsp;</p> <p>In the two years that the class has been offered, student projects have been helpful in shaping the Office of Sustainability’s own strategy. 
“These projects have reinforced our calculations and confirmed our strategy of using PPAs to contribute to greenhouse gas reduction off-site as we work toward developing on-site solutions,” explains Newman.</p> <p>This spring, Gutowski and Newman will work with a number of universities in South America on launching similar classes for their curricula. They will visit Ecuador, Chile, and Colombia, encouraging university administrators to task their students with solving for carbon neutrality on their own campuses.</p> Julie Newman, director of sustainability at MIT, says the final project for course 2.S999 “uses the campus as a test bed for engaging and exposing students to the complexity of solving [for] global issues in their own backyard.”Photo: Ken RichardsonMechanical engineering, School of Engineering, Classes and programs, Sustainability, Campus buildings and architecture, Climate change, Energy, Greenhouse gases, Students Zeroing in on decarbonization Wielding complex algorithms, nuclear science and engineering doctoral candidate Nestor Sepulveda spins out scenarios for combating climate change. Wed, 15 Jan 2020 00:00:00 -0500 Leda Zimmerman | Department of Nuclear Science and Engineering <p>To avoid the most destructive consequences of climate change, the world’s electric energy systems must stop producing carbon by 2050. It seems like an overwhelming technological, political, and economic challenge — but not to Nestor Sepulveda.</p> <p>“My work has shown me that we&nbsp;do&nbsp;have the means to tackle the problem, and we can start now,” he says. “I am optimistic.”</p> <p>Sepulveda’s research, first as a master’s student and now as a doctoral candidate in the MIT Department of Nuclear Science and Engineering (NSE), involves complex simulations that describe potential pathways to decarbonization. 
In work published last year in the journal&nbsp;<em>Joule,&nbsp;</em>Sepulveda and his co-authors made a powerful case for using a mix of renewable and “firm” electricity sources, such as nuclear energy, as the least costly, and most likely, route to a low- or no-carbon grid.</p> <p>These insights, which flow from a unique computational framework blending optimization and data science, operations research, and policy methodologies, have attracted interest from&nbsp;<em>The New York Times&nbsp;</em>and&nbsp;<em>The Economist,&nbsp;</em>as well as from such notable players in the energy arena as Bill Gates. For Sepulveda, the attention could not come at a more vital moment.</p> <p>“Right now, people are at extremes: on the one hand worrying that steps to address climate change might weaken the economy, and on the other advocating a Green New Deal to transform the economy into one that depends solely on solar, wind, and battery storage,” he says. “I think my data-based work can help bridge the gap and enable people to find a middle point where they can have a conversation.”</p> <p><strong>An optimization tool</strong></p> <p>The computational model Sepulveda is developing to generate this data, the centerpiece of his dissertation research, was sparked by classroom experiences at the start of his NSE master’s degree.</p> <p>“In courses like Nuclear Technology and Society [22.16], which covered the benefits and risks of nuclear energy, I saw that some people believed the solution for climate change was definitely nuclear, while others said it was wind or solar,” he says. “I began wondering how to determine the value of different technologies.”</p> <p>Recognizing that “absolutes exist in people’s minds, but not in reality,” Sepulveda sought to develop a tool that might yield an optimal solution to the decarbonization question. 
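The kind of tool described here can be illustrated, in drastically simplified form, as a least-cost generation-mix problem: choose how much energy each technology supplies so that demand is met and an emissions cap is respected. The sketch below is not Sepulveda’s model — the technologies, costs, limits, and single-period structure are invented for illustration — but it shows the basic optimization shape such frameworks build on, including why a “firm” low-carbon resource enters the mix once variable renewables hit their availability limits:

```python
# Toy least-cost, emissions-constrained generation mix (illustrative only;
# all numbers are hypothetical and far simpler than real planning models).
from scipy.optimize import linprog

techs = ["solar", "wind", "gas", "nuclear"]
cost = [40.0, 35.0, 50.0, 80.0]   # $/MWh served, hypothetical
emis = [0.0, 0.0, 0.4, 0.0]       # tCO2/MWh, hypothetical
demand = 100.0                    # MWh of demand to serve
co2_cap = 10.0                    # tCO2 allowed in total

# Variable-renewable output is capped (a crude stand-in for intermittency);
# the "firm" resources (gas, nuclear) can serve any share of demand.
bounds = [(0, 30), (0, 30), (0, 100), (0, 100)]

res = linprog(c=cost,
              A_ub=[emis], b_ub=[co2_cap],         # emissions cap
              A_eq=[[1, 1, 1, 1]], b_eq=[demand],  # supply meets demand
              bounds=bounds)

for tech, mwh in zip(techs, res.x):
    print(f"{tech:>7}: {mwh:5.1f} MWh")
print(f"total cost: ${res.fun:,.0f}")
```

In this toy instance, cheap wind and solar are used up to their limits, gas runs only until the emissions cap binds, and the firm zero-carbon resource covers the remainder — a miniature version of the renewables-plus-firm-resources result the study argues for, though the real analysis spans hourly operations, storage, and many technology-cost scenarios.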
His inaugural effort in modeling focused on weighing the advantages of utilizing advanced nuclear reactor designs against exclusive use of existing light-water reactor technology in the decarbonization effort.</p> <p>“I showed that in spite of their increased costs, advanced reactors proved more valuable to achieving the low-carbon transition than conventional reactor technology alone,” he says. This research formed the basis of Sepulveda’s master’s thesis in 2016, for a degree spanning NSE and the Technology and Policy Program. It also informed the MIT Energy Initiative’s report,&nbsp;“The Future of Nuclear Energy in a Carbon-Constrained World.”</p> <p><strong>The right stuff</strong></p> <p>Sepulveda comes to the climate challenge armed with a lifelong commitment to service, an appetite for problem-solving, and grit. Born in Santiago, he enlisted in the Chilean navy, completing his high school and college education at the national naval academy.</p> <p>“Chile has natural disasters every year, and the defense forces are the ones that jump in to help people, which I found really attractive,” he says. He opted for the most difficult academic specialty, electrical engineering, over combat and weaponry. Early in his career, the climate change issue struck him, he says, and for his senior project, he designed a ship powered by hydrogen fuel cells.</p> <p>After he graduated, the Chilean navy rewarded his performance with major responsibilities in the fleet, including outfitting a $100 million amphibious ship intended for moving marines and for providing emergency relief services. But Sepulveda was anxious to focus fully on sustainable energy, and petitioned the navy to allow him to pursue a master’s at MIT in 2014.</p> <p>It was while conducting research for this degree that Sepulveda confronted a life-altering health crisis: a heart defect that led to open-heart surgery. “People told me to take time off and wait another year to finish my degree,” he recalls. 
Instead, he decided to press on: “I was deep into ideas about decarbonization, which I found really fulfilling.”</p> <p>After graduating in 2016, he returned to naval life in Chile, but “couldn’t stop thinking about the potential of informing energy policy around the world and making a long-lasting impact,” he says. “Every day, looking in the mirror, I saw the big scar on my chest that reminded me to do something bigger with my life, or at least try.”</p> <p>Convinced that he could play a significant role in addressing the critical carbon problem if he continued his MIT education, Sepulveda successfully petitioned naval superiors to sanction his return to Cambridge, Massachusetts.</p> <p><strong>Simulating the energy transition</strong></p> <p>Since resuming studies here in 2018, Sepulveda has wasted little time. He is focused on refining his modeling tool to play out the potential impacts and costs of increasingly complex energy technology scenarios on achieving deep decarbonization. This has meant rapidly acquiring knowledge in fields such as economics, math, and law.</p> <p>“The navy gave me discipline, and MIT gave me flexibility of mind — how to look at problems from different angles,” he says.</p> <p>With mentors and collaborators such as Associate Provost and Japan Steel Industry Professor Richard Lester and MIT Sloan School of Management professors Juan Pablo Vielma and Christopher Knittel, Sepulveda has been tweaking his models. 
His simulations, which can involve more than 1,000 scenarios, factor in existing and emerging technologies, uncertainties such as the possible emergence of fusion energy, and different regional constraints, to identify optimal investment strategies for low-carbon systems and to determine what pathways generate the most cost-effective solutions.</p> <p>“The idea isn’t to say we need this many solar farms or nuclear plants, but to look at the trends and value the future impact of technologies for climate change, so we can focus money on those with the highest impact, and generate policies that push harder on those,” he says.</p> <p>Sepulveda hopes his models won’t just lead the way to decarbonization, but do so in a way that minimizes social costs. “I come from a developing nation, where there are other problems like health care and education, so my goal is to achieve a pathway that leaves resources to address these other issues.”</p> <p>As he refines his computations with the help of MIT’s massive computing clusters, Sepulveda has been building a life in the United States. He has found a vibrant Chilean community at MIT&nbsp;and discovered local opportunities for venturing out on the water, such as summer sailing on the Charles.</p> <p>After graduation, he plans to leverage his modeling tool for the public benefit, through direct interactions with policy makers (U.S. congressional staffers have already begun to reach out to him), and with businesses looking to bend their strategies toward a zero-carbon future.</p> <p>It is a future that weighs even more heavily on him these days: Sepulveda is expecting his first child. “Right now, we’re buying stuff for the baby, but my mind keeps going into algorithmic mode,” he says. 
“I’m so immersed in decarbonization that I sometimes dream about it.”</p> “In courses like Nuclear Technology and Society, which covered the benefits and risks of nuclear energy, I saw that some people believed the solution for climate change was definitely nuclear, while others said it was wind or solar,” says doctoral student Nestor Sepulveda. “I began wondering how to determine the value of different technologies.” Photo: Gretchen Ertl Pathways to a low-carbon future A new study looks at how the global energy mix could change over the next 20 years. Thu, 09 Jan 2020 13:30:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>When it comes to fulfilling ambitious energy and climate commitments, few nations successfully walk their talk. A case in point is the Paris Agreement initiated four years ago. Nearly 200 signatory nations submitted voluntary pledges to cut their contribution to the world’s greenhouse gas emissions by 2030, but <a href="">many are not on track</a> to fulfill these pledges. Moreover, only a small number of countries are now pursuing climate policies consistent with keeping global warming well below 2 degrees Celsius, the long-term target recommended by the Intergovernmental Panel on Climate Change (IPCC).
</p> <p>This growing discrepancy between current policies and long-term targets — combined with uncertainty about individual nations’ ability to fulfill their commitments due to administrative, technological, and cultural challenges — makes it increasingly difficult for scientists to project the future of the global energy system and its impact on the global climate. Nonetheless, these projections remain essential for decision-makers to assess the physical and financial risks of climate change and of efforts to transition to a low-carbon economy.</p> <p>Toward that end, several expert groups continue to produce energy scenarios and analyze their implications for the climate. In a <a href="">study</a> in the journal <em>Economics of Energy &amp; Environmental Policy</em>, <a href="">Sergey Paltsev</a>, deputy director of the <a href="">MIT Joint Program on the Science and Policy of Global Change</a> and a senior research scientist at the <a href="">MIT Energy Initiative</a>, collected projections of the global energy mix over the next two decades from several major energy-scenario producers. Aggregating results from scenarios developed by the MIT Joint Program, International Energy Agency, Shell, BP and ExxonMobil, and contrasting them with scenarios assessed by the IPCC that would be required to follow a pathway that limits global warming to 1.5 C, Paltsev arrived at three notable findings:</p> <p>1. Fossil fuels decline, but still dominate. Assuming current Paris Agreement pledges are maintained beyond 2030, the share of fossil fuels in the global energy mix declines from approximately 80 percent today to 73-76 percent in 2040. In scenarios consistent with the 2 C goal, this share decreases to 56-61 percent in 2040. Meanwhile, the share of wind and solar rises from 2 percent today to 6-13 percent (current pledges) and further to 17-26 percent (2 C scenarios) in 2040.</p> <p>2. Carbon capture waits in the wings.
The multiple scenarios also show a mixed future for fossil fuels as the globe shifts away from carbon-intensive energy sources. Coal use does not have a sustainable future unless combined with carbon capture and storage (CCS) technology, and most near-term projections show no large-scale deployment of CCS in the next 10-15 years. Natural gas consumption, however, is likely to increase in the next 20 years, but it is also projected to decline thereafter without CCS. For pathways consistent with the “well below 2 C” goal, CCS scale-up by midcentury is essential for all carbon-emitting technologies.</p> <p>3. Solar and wind thrive, but storage challenges remain. The scenarios show that energy-efficiency improvements are critical to the pace of the low-carbon transition, but there is little consensus on the magnitude of such improvements. They do, however, unequivocally point to successful upcoming decades for solar and wind energy. This positive outlook is due to declining costs and escalating research and innovation in addressing intermittency and long-term energy storage challenges.</p> <p>While the scenarios considered in this study project an increased share of renewables in the next 20 years, they do not indicate anything close to a complete decarbonization of the energy system during that time frame. To assess what happens beyond 2040, the study concludes that decision-makers should be drawing upon a range of projections of plausible futures, because the dominant technologies of the near term may not prevail over the long term.</p> <p>“While energy projections are becoming more difficult because of the widening gulf between current policies and stated goals, they remain stakeholders’ sharpest tool in assessing the near- and long-term physical and financial risks associated with climate change and the world’s ongoing transition to a low-carbon energy system,” says Paltsev.
“Combining the results from multiple sources provides additional insight into the evolution of the global energy mix.”</p> The AES Corporation, based in Virginia, installed the world’s largest solar-plus-storage system on the southern end of the Hawaiian island of Kauai. A scaled-down version was first tested at the National Renewable Energy Laboratory. Photo: Dennis Schroeder/NREL Preventing energy loss in windows Mechanical engineers are developing technologies that could prevent heat from entering or escaping windows, potentially preventing a massive loss of energy. Mon, 06 Jan 2020 15:30:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>In the quest to make buildings more energy efficient, windows present a particularly difficult problem. According to the U.S. Department of Energy, heat that either escapes or enters windows accounts for roughly 30 percent of the energy used to heat and cool buildings. Researchers are developing a variety of window technologies that could prevent this massive loss of energy.</p> <p>“The choice of windows in a building has a direct influence on energy consumption,” says Nicholas Fang, professor of mechanical engineering. “We need an effective way of blocking solar radiation.”</p> <p>Fang is part of a large collaboration that is working together to develop smart adaptive control and monitoring systems for buildings.
The research team, which includes researchers from the Hong Kong University of Science and Technology and Leon Glicksman, professor of building technology and mechanical engineering at MIT, has been tasked with helping Hong Kong achieve its ambitious goal to reduce carbon emissions by 40 percent by 2025.</p> <p>“Our idea is to adapt new sensors and smart windows in an effort to help achieve energy efficiency and improve thermal comfort for people inside buildings,” Fang explains.</p> <p>His contribution is the development of a smart material that can be placed on a window as a film that blocks heat from entering. The film remains transparent when the surface temperature is under 32 degrees Celsius, but turns milky when it exceeds 32 C. This change in appearance is due to thermochromic microparticles that change phases in response to heat. The smart window’s milky appearance can block up to 70 percent of solar radiation from passing through the window, translating to a 30 percent reduction in cooling load.&nbsp;</p> <p>In addition to this thermochromic material, Fang’s team is hoping to embed windows with sensors that monitor sunlight, luminance, and temperature. “Overall, we want an integral solution to reduce the load on HVAC systems,” he explains.</p> <p>Like Fang, graduate student Elise Strobach is working on a material that could significantly reduce the amount of heat that either escapes or enters through windows. She has developed a high-clarity silica aerogel that, when placed between two panes of glass, is 50 percent more insulating than traditional windows and lasts up to a decade longer.</p> <p>“Over the course of the past two years, we’ve developed a material that has demonstrated performance and is promising enough to start commercializing,” says Strobach, who is a PhD candidate in MIT’s Device Research Laboratory. 
To help in this commercialization, Strobach has co-founded the startup <a href="">AeroShield Materials</a>.&nbsp;</p> <p>Lighter than a marshmallow, AeroShield’s material comprises 95 percent air. The rest of the material is made up of silica nanoparticles just 1-2 nanometers across. This structure blocks all three modes of heat loss: conduction, convection, and radiation. Gas trapped inside the material’s tiny voids can no longer circulate and transfer energy through convection. Meanwhile, the silica nanoparticles absorb radiation and re-emit it back in the direction it came from.</p> <p>“The material’s composition allows for a really intense temperature gradient that keeps the heat where you want it, whether it’s hot or cold outside,” explains Strobach, who, along with AeroShield co-founder Kyle Wilke, was named one of <a href="">Forbes’ 30 Under 30 in Energy</a>. Commercialization of this research is being supported by the MIT Deshpande Center for Technological Innovation.</p> <p>Strobach also sees possibilities for combining AeroShield technologies with other window solutions being developed at MIT, including Fang’s work and research being conducted by Gang Chen, Carl Richard Soderberg Professor of Power Engineering, and research scientist Svetlana Boriskina.</p> <p>“Buildings represent one third of U.S. energy usage, so in many ways windows are low-hanging fruit,” explains Chen.</p> <p>Chen and Boriskina previously worked with Strobach on the first iteration of the AeroShield material for their project developing a solar thermal aerogel receiver. More recently, they have developed polymers that could be used in windows or building facades to trap or reflect heat, regardless of color.&nbsp;</p> <p>These polymers were partially inspired by stained-glass windows. “I have an optical background, so I’m always drawn to the visual aspects of energy applications,” says Boriskina.
“The problem is, when you introduce color it affects whatever energy strategy you are trying to pursue.”</p> <p>Using a mix of polyethylene and a solvent, Chen and Boriskina added various nanoparticles to provide color. Once stretched, the material becomes translucent and its structure changes: previously disordered carbon chains re-form as parallel lines, which are much better at conducting heat.</p> <p>While these polymers need further development for use in transparent windows, they could possibly be used in colorful, translucent windows that reflect or trap heat, ultimately leading to energy savings. “The material isn’t as transparent as glass, but it’s translucent. It could be useful for windows in places you don’t want direct sunlight to enter — like gyms or classrooms,” Boriskina adds.</p> <p>Boriskina is also using these materials for military applications. Through a three-year project funded by the U.S. Army, she is developing lightweight, custom-colored, and unbreakable polymer windows. These windows can provide passive temperature control and camouflage for portable shelters and vehicles.</p> <p>For any of these technologies to have a meaningful impact on energy consumption, researchers must improve scalability and affordability. “Right now, the cost barrier for these technologies is too high — we need to look into more economical and scalable versions,” Fang adds.&nbsp;</p> <p>If researchers are successful in developing manufacturable and affordable solutions, their window technologies could vastly improve building efficiency and lead to a substantial reduction in building energy consumption worldwide.</p> A smart window developed by Professor Nicholas Fang includes thermochromic material that turns frosty when exposed to temperatures of 32 C or higher, such as when a researcher touches the window with her hand.
Photo courtesy of the researchers. Tracking emissions in China Evaluating a 2014 policy change yields some good news and some concerns. Mon, 30 Dec 2019 11:10:01 -0500 Nancy W. Stauffer | MIT Energy Initiative <p>In January 2013, many people in Beijing experienced a multiweek period of severely degraded air, known colloquially as the “Airpocalypse,” which made them sick and kept them indoors. As part of its response, the central Chinese government accelerated implementation of tougher air pollution standards for power plants, with limits to take effect in July 2014. One key standard limited emissions of sulfur dioxide (SO<sub>2</sub>), which contributes to the formation of airborne particulate pollution and can cause serious lung and heart problems. The limits were introduced nationwide, but varied by location. Restrictions were especially stringent in certain “key” regions, defined as highly polluted and populous areas in Greater Beijing, the Pearl River Delta, and the Yangtze River Delta.</p> <p>All power plants had to meet the new standards by July 2014. So how did they do? “In most developing countries, there are policies on the books that look very similar to policies elsewhere in the world,” says&nbsp;<a href="">Valerie J. Karplus</a>, an assistant professor of global economics and management at the MIT Sloan School of Management. “But there have been few attempts to look systematically at plants’ compliance with environmental regulation. We wanted to understand whether policy actually changes behavior.”</p> <p><strong>Focus on power plants</strong></p> <p>For China, focusing environmental policies on power plants makes sense. Fully 60 percent of the country’s primary energy use is coal, and about half of it is used to generate electricity.
With that use comes a range of pollutant emissions. In 2007, China’s Ministry of Environmental Protection required thousands of power plants to install continuous emissions monitoring systems (CEMS) on their exhaust stacks and to upload hourly, pollutant-specific concentration data to a publicly available website.</p> <p>Among the pollutants tracked on the website was SO<sub>2</sub>. To Karplus and two colleagues — Shuang Zhang, an assistant professor of economics at the University of Colorado at Boulder, and Douglas Almond, a professor in the School of International and Public Affairs and the Department of Economics at Columbia University — the CEMS data on SO<sub>2</sub>&nbsp;emissions were an as-yet-untapped resource for exploring the on-the-ground impacts of the 2014 emissions standards, over time and plant-by-plant.</p> <p>To begin their study, Karplus, Zhang, and Almond examined changes in the CEMS data around July 2014, when the new regulations went into effect. Their study sample included 256 power plants in four provinces, among them 43 that they deemed “large,” with a generating capacity greater than 1,000 megawatts (MW). They examined the average monthly SO<sub>2</sub>&nbsp;concentrations reported by each plant starting in November 2013, eight months before the July 2014 policy deadline.</p> <p>Emissions levels from the 256 plants varied considerably. The researchers were interested in relative changes within individual facilities before and after the policy, so they determined changes relative to each plant’s average emissions — a calculation known as demeaning. For each plant, they calculated the average emissions level over the whole time period being considered. They then calculated how much that plant’s reading for each month was above or below that baseline. 
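This demeaning procedure, and the cross-plant monthly averaging it feeds into, can be sketched in a few lines. This is an illustrative sketch only, not the researchers' code, and the plants, months, and readings below are invented:

```python
# Sketch of the "demeaning" calculation: subtract each plant's own average
# SO2 concentration from its monthly readings, then average those
# deviations across plants, month by month. Data here are hypothetical.

def demean_by_plant(readings):
    """readings: {plant: {month: SO2 concentration}} -> per-plant
    deviations from that plant's average over the whole period."""
    deviations = {}
    for plant, series in readings.items():
        baseline = sum(series.values()) / len(series)  # plant's own average
        deviations[plant] = {m: c - baseline for m, c in series.items()}
    return deviations

def monthly_mean_deviation(deviations):
    """Average the per-plant deviations across all plants, by month."""
    months = {m for series in deviations.values() for m in series}
    return {
        m: sum(s[m] for s in deviations.values() if m in s)
           / sum(1 for s in deviations.values() if m in s)
        for m in sorted(months)
    }

# Two hypothetical plants, both declining after the July 2014 deadline:
readings = {
    "plant_A": {"2014-01": 300.0, "2014-07": 200.0, "2016-07": 100.0},
    "plant_B": {"2014-01": 600.0, "2014-07": 400.0, "2016-07": 200.0},
}
trend = monthly_mean_deviation(demean_by_plant(readings))
# trend is positive before the policy deadline and negative well after it
```

Because each plant is compared against its own baseline, big emitters and small emitters contribute comparably to the monthly average, which is what makes the before/after comparison meaningful.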
By taking the averages of those changes-from-baseline numbers at all plants in each month, they could see how much emissions from the group of plants changed over time.</p> <p>The demeaned CEMS concentrations are plotted in the first accompanying graph, labeled “SO<sub>2</sub> concentrations (demeaned).” At zero on the Y axis in Figure 1 in the slideshow above, levels at all plants — big emitters and small — are on average equal to their baseline. The plot shows that in January 2014 plants were well above their baseline, and by July 2016 they were well below it. So average plant-level SO<sub>2</sub>&nbsp;concentrations were declining slightly before the July 2014 compliance deadline, but they dropped far more dramatically after it.</p> <p><strong>Checking the reported data</strong></p> <p>Based on the CEMS data from all the plants, the researchers calculated that total SO<sub>2</sub>&nbsp;emissions fell by 13.9 percent in response to the imposition of the policy in 2014. “That’s a substantial reduction,” notes Karplus. “But are those reported CEMS readings accurate?”</p> <p>To find out, she, Zhang, and Almond compared the measured CEMS concentrations with SO<sub>2</sub>&nbsp;concentrations detected in the atmosphere by NASA’s Ozone Monitoring Instrument. “We believed that the satellite data could provide a kind of independent check on the policy response as captured by the CEMS measurements,” she says.</p> <p>For the comparison, they limited the analysis to their 43 1,000-MW power plants — large plants that should generate the strongest signal in the satellite observations. Figure 2 in the slideshow above shows data from both the CEMS and the satellite sources. Patterns in the two measures are similar, with substantial declines in the months just before and after July 2014.
That general agreement suggests that the CEMS measurements can serve as a good proxy for atmospheric concentrations of SO<sub>2</sub>.</p> <p>To double-check that outcome, the researchers selected 35 relatively isolated power plants whose capacity makes up at least half of the total capacity of all plants within a 35-kilometer radius. Using that restricted sample, they again compared the CEMS measurements and the satellite data. They found that the new emissions standards reduced both SO<sub>2</sub>&nbsp;measures. However, the SO<sub>2</sub>&nbsp;concentrations in the CEMS data fell by 36.8 percent after the policy, while concentrations in the satellite data fell by only 18.3 percent. So the CEMS measurements showed twice as great a reduction as the satellite data did. Further restricting the sample to isolated power plants with capacity larger than 1,000 MW produced similar results.</p> <p><strong>Key versus non-key regions</strong></p> <p>One possible explanation for the mismatch between the two datasets is that some firms overstated the reductions in their CEMS measurements. The researchers hypothesized that the difficulty of meeting targets would be higher in key regions, which faced the biggest cuts. In non-key regions, the limit fell from 400 to 200 milligrams per cubic meter (mg/m<sup>3</sup>). But in key regions, the limit went from 400 to 50 mg/m<sup>3</sup>. Firms may have been unable to make such a dramatic reduction in so short a time, so the incentive to manipulate their CEMS readings may have increased. For example, they may have put monitors on only a few of all their exhaust stacks, or turned monitors off during periods of high emissions.</p> <p>Figure 3 in the slideshow above shows results from analyzing non-key and key regions separately. At large, isolated plants in non-key regions, the CEMS measurements show a 29.3 percent reduction in SO<sub>2</sub>&nbsp;and the satellite data a 22.7 percent reduction. 
The ratio of the estimated post-policy declines is 77 percent — not too far out of line.</p> <p>But a comparable analysis of large, isolated plants in key regions produced very different results. The CEMS measurements showed a 53.6 percent reduction in SO<sub>2</sub>, while the satellite data showed no statistically significant change at all.</p> <p>One possible explanation is that power plants actually did decrease their SO<sub>2</sub>&nbsp;emissions after 2014, but at the same time nearby industrial facilities or other sources increased theirs, with the net effect being that the satellite data showed little or no change. However, the researchers examined emissions from neighboring high-emitting facilities during the same time period and found no contemporaneous jump in their SO<sub>2</sub>&nbsp;emissions. With that possibility dismissed, they concluded that manipulation of the CEMS data in regions facing the toughest emissions standards was “plausible,” says Karplus.</p> <p><strong>Compliance with the new standards</strong></p> <p>Another interesting question was how often the reported CEMS emissions levels were within the regulated limits. The researchers calculated the compliance rate at individual plants — that is, the fraction of time their emissions were at or below their limits — in non-key and key regions, based on their reported CEMS measurements. The results appear in Figure 4 in the slideshow above. In non-key regions, the compliance rate at all plants was about 90 percent in early 2014. It dropped a little in July 2014, when plants had to meet their (somewhat) stricter limits, and then went back up to almost 100 percent. In contrast, the compliance rate in key regions was almost 100 percent in early 2014 and then plummeted to about 50 percent at and after July 2014.</p> <p>Karplus, Zhang, and Almond interpret that result as an indication of the toughness of complying with the stringent new standards. 
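The compliance-rate calculation itself is simple. As a minimal, hypothetical sketch (the 400 and 50 mg/m^3 limits are from the article, but the monthly readings below are invented):

```python
# A plant's compliance rate: the fraction of reported readings at or
# below the applicable emissions limit. Readings here are hypothetical.

def compliance_rate(readings, limit):
    """Fraction of SO2 readings (mg/m^3) at or below the limit."""
    return sum(1 for r in readings if r <= limit) / len(readings)

# A hypothetical key-region plant: compliant under the old 400 mg/m^3
# limit, mostly out of compliance once the limit drops to 50 mg/m^3.
pre_policy = [320.0, 350.0, 390.0, 380.0]   # judged against 400 mg/m^3
post_policy = [120.0, 90.0, 60.0, 45.0]     # judged against 50 mg/m^3

before = compliance_rate(pre_policy, 400.0)   # 1.0
after = compliance_rate(post_policy, 50.0)    # 0.25
```

The same readings can yield a very different compliance rate depending on which limit applies, which is why the rate in key regions fell so sharply at the July 2014 deadline even as reported concentrations declined.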
“If you think about it from the plant’s perspective, complying with tighter standards is a lot harder than complying with more lenient standards, especially if plants have recently made investments to comply with prior standards, but those changes are no longer adequate,” she says. “So in these key regions, many plants fell out of compliance.”</p> <p>She makes another interesting observation. Their analyses had already produced evidence that firms in key areas may have falsified their reported CEMS measurements. “So that means they could be both manipulating their data and complying less,” she says.</p> <p><strong>Encouraging results plus insights for policymaking</strong></p> <p>Karplus stresses the positive outcomes of their study. She’s encouraged that the CEMS and satellite data both show emission levels dropping at most plants. Compliance rates were down at some plants in key regions, but that’s not surprising when the required cuts were large. And she notes that even though firms may not have complied, they still reduced their emissions to some extent as a result of the new standard.</p> <p>She also observes that, for the most part, there’s close correlation between the CEMS and satellite data. So the quality of the CEMS data isn’t all bad. And where it’s bad — where firms may have manipulated their measurements — it may have been because they’d been set a seemingly impossible task and timeline. “At some point, plant managers might just throw up their hands,” says Karplus. The lesson for policymakers may be to set emissions-reduction goals that are deep but long-term so that firms have enough time to make the necessary investment and infrastructure adjustments.</p> <p>To Karplus, an important practical implication of the study is “demonstrating that you can look at the alignment between ground and remote data sources to evaluate the impact of specific policies.” A series of tests confirmed the validity of their method and the robustness of their results. 
For example, they performed a comparable analysis focusing on July 2015, when there was no change in emissions standards. There was no evidence of the same effects. They accounted for SO<sub>2</sub>&nbsp;emitted by manufacturing facilities and other sources, and their results were unaffected. And they demonstrated that when clouds or other obstructions interfered with satellite observations, the resulting data gap had no impact on their results.</p> <p>The researchers note that their approach can be used for other short-lived industrial air pollutants and by any country seeking low-cost tools to improve data quality and policy compliance, especially when plants’ emissions are high to begin with. “Our work provides an illustration of how you can use satellite data to obtain an independent check on emissions from pretty much any high-emitting facility,” says Karplus. “And, over time, NASA will have instruments that can take measurements that are even more temporally and spatially resolved, which I think is quite exciting for environmental protection agencies and for those who would seek to improve the environmental performance of their energy assets.”</p> <p>This research was supported by a seed grant from the Samuel Tak Lee Real Estate Entrepreneurship Laboratory at MIT and by the U.S. National Science Foundation.</p> <div> <p><em>This article appears in the <a class="Hyperlink SCXW206095923 BCX0" href="" rel="noreferrer" style="margin: 0px; padding: 0px; user-select: text; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; text-decoration-line: none; color: inherit;" target="_blank">Autumn 2019 issue</a> of&nbsp;</em>Energy Futures<em>, the magazine of the MIT Energy Initiative.&nbsp;</em></p> </div> Assistant Professor Valerie Karplus and her collaborators have demonstrated that measurements of air pollutants taken by NASA satellites are often a good indicator of emissions on the ground. 
Their approach provides regulators with a low-cost tool to ensure that industrial firms are complying with emissions standards. Photo: Kelley Travers A new way to remove contaminants from nuclear wastewater Method concentrates radionuclides in a small portion of a nuclear plant’s wastewater, allowing the rest to be recycled. Thu, 19 Dec 2019 09:23:05 -0500 David L. Chandler | MIT News Office <p>Nuclear power continues to expand globally, propelled, in part, by the fact that it produces few greenhouse gas emissions while providing steady power output. But along with that expansion comes an increased need for dealing with the large volumes of water used for cooling these plants, which becomes contaminated with radioactive isotopes that require special long-term disposal.</p> <p>Now, a method developed at MIT provides a way of substantially reducing the volume of contaminated water that needs to be disposed of, instead concentrating the contaminants and allowing the rest of the water to be recycled through the plant’s cooling system. The proposed system is described in the journal <em>Environmental Science and Technology</em>, in a paper by graduate student Mohammad Alkhadra, professor of chemical engineering Martin Bazant, and three others.</p> <p>The method makes use of a process called shock electrodialysis, which uses an electric field to generate a deionization shockwave in the water. The shockwave pushes the electrically charged particles, or ions, to one side of a tube filled with charged porous material, so that a concentrated stream of contaminants can be separated out from the rest of the water. The group discovered that two radionuclide contaminants — isotopes of cobalt and cesium — can be selectively removed from water that also contains boric acid and lithium.
After the water stream is cleansed of its cobalt and cesium contaminants, it can be reused in the reactor.</p> <p>The shock electrodialysis process was initially developed by Bazant and his co-workers as a general method of removing salt from water, as demonstrated in their <a href="">first scalable prototype</a> four years ago. Now, the team has focused on this more specific application, which could help improve the economics and environmental impact of working nuclear power plants. In ongoing research, they are also continuing to develop a system for removing other contaminants, including lead, from drinking water.</p> <p>Not only is the new system inexpensive and scalable to large sizes, but in principle it also can deal with a wide range of contaminants, Bazant says. “It’s a single device that can perform a whole range of separations for any specific application,” he says.</p> <p>In their earlier desalination work, the researchers used measurements of the water’s electrical conductivity to determine how much salt was removed. In the years since then, the team has developed other methods for detecting and quantifying the details of what’s in the concentrated radioactive waste and the cleaned water.</p> <p>“We carefully measure the composition of all the stuff going in and out,” says Bazant, who is the E.G. Roos Professor of Chemical Engineering as well as a professor of mathematics. “This really opened up a new direction for our research.” They began to focus on separation processes that would be useful for health reasons or that would result in concentrating material that has high value, either for reuse or to offset disposal costs.</p> <p>The method they developed works for sea water desalination, but it is a relatively energy-intensive process for that application. The energy cost is dramatically lower when the method is used for ion-selective separations from dilute streams such as nuclear plant cooling water. 
For this application, which also requires expensive disposal, the method makes economic sense, he says. It also hits both of the team’s targets: dealing with high-value materials and helping to safeguard health. The scale of the application is also significant — a single large nuclear plant can circulate about 10 million cubic meters of water per year through its cooling system, Alkhadra says.</p> <p>For their tests of the system, the researchers used simulated nuclear wastewater based on a recipe provided by Mitsubishi Heavy Industries, which sponsored the research and is a major builder of nuclear plants. In the team’s tests, after a three-stage separation process, they were able to remove 99.5 percent of the cobalt radionuclides in the water while retaining about 43 percent of the water in cleaned-up form so that it could be reused. As much as two-thirds of the water can be reused if the cleanup level is cut back to 98.3 percent of the contaminants removed, the team found.</p> <p>While the overall method has many potential applications, the nuclear wastewater separation is “one of the first problems we think we can solve [with this method] that no other solution exists for,” Bazant says. No other practical, continuous, economic method has been found for separating out the radioactive isotopes of cobalt and cesium, the two major contaminants of nuclear wastewater, he adds.</p> <p>While the method could be used for routine cleanup, it could also make a big difference in dealing with more extreme cases, such as the millions of gallons of contaminated water at the damaged Fukushima Daiichi power plant in Japan, where the accumulation of that contaminated water has threatened to overwhelm the containment systems designed to prevent it from leaking out into the adjacent Pacific.
While the new system has so far only been tested at much smaller scales, Bazant says that such large-scale decontamination systems based on this method might be possible “within a few years.”</p> <p>The research team also included MIT postdocs Kameron Conforti and Tao Gao and graduate student Huanhuan Tian.</p> A small-scale device, seen here, was used in the lab to demonstrate the effectiveness of the new shockwave-based system for removing radioactive contaminants from the cooling water in nuclear power plants. Image courtesy of the researchers Research, School of Engineering, Chemical engineering, Energy, Water, Desalination, Mathematics, Nuclear science and engineering The race to develop renewable energy technologies Mechanical engineers rush to develop energy conversion and storage technologies from renewable sources such as wind, wave, solar, and thermal. Wed, 18 Dec 2019 11:45:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>In the early 20th century, just as electric grids were starting to transform daily life, an unlikely advocate for renewable energy voiced his concerns about burning fossil fuels. Thomas Edison expressed dismay over using combustion instead of renewable resources in a 1910 interview for Elbert Hubbard’s anthology, “Little Journeys to the Homes of the Great.”</p> <p>“This scheme of combustion to get power makes me sick to think of — it is so wasteful,” Edison said. “You see, we should utilize natural forces and thus get all of our power. Sunshine is a form of energy, and the winds and the tides are manifestations of energy. Do we use them? Oh, no! We burn up wood and coal, as renters burn up the front fence for fuel.”</p> <p>Over a century later, roughly 80 percent of global energy consumption still comes from burning fossil fuels. 
As the impact of climate change on the environment becomes increasingly severe, there is a mounting sense of urgency for researchers and engineers to develop scalable renewable energy solutions.</p> <p>“Even 100 years ago, Edison understood that we cannot replace combustion with a single alternative,” adds Reshma Rao PhD '19, a postdoc in MIT’s Electrochemical Energy Lab who included Edison’s quote in her doctoral thesis. “We must look to different solutions that might vary temporally and geographically depending on resource availability.”</p> <p>Rao is one of many researchers across MIT’s Department of Mechanical Engineering who have entered the race to develop energy conversion and storage technologies from renewable sources such as wind, wave, solar, and thermal.</p> <p><strong>Harnessing energy from waves</strong></p> <p>When it comes to renewable energy, waves have other resources beat in two respects. First, unlike solar, waves offer a consistent energy source regardless of time of day. Second, waves provide much greater energy density than wind because water is far denser than air.</p> <p>Despite these advantages, wave-energy harvesting is still in its infancy. Unlike wind and solar, there is no consensus in the field of wave hydrodynamics on how to efficiently capture and convert wave energy. Dick K.P. Yue, Philip J. Solondz Professor of Engineering, is hoping to change that.</p> <p>“My group has been looking at new paradigms,” explains Yue. “Rather than tinkering with small improvements, we want to develop a new way of thinking about the wave-energy problem.”</p> <p>One aspect of that paradigm is determining the optimal geometry of wave-energy converters (WECs). 
Graduate student Emma Edwards has been developing a systematic methodology to determine what kind of shape WECs should be.</p> <p>“If we can optimize the shape of WECs for maximizing extractable power, wave energy could move significantly closer to becoming an economically viable source of renewable energy,” says Edwards.&nbsp;</p> <p>Another aspect of the wave-energy paradigm Yue’s team is working on is finding the optimal configuration for WECs in the water. Grgur Tokić PhD '16, an MIT alum and current postdoc working in Yue’s group, is building a case for optimal configurations of WECs in large arrays, rather than as stand-alone devices.</p> <p>Before being placed in the water, WECs are tuned for their particular environment. This tuning involves considerations like predicted wave frequency and prevailing wind direction. According to Tokić and Yue, if WECs are configured in an array, this tuning could occur in real time, maximizing energy-harvesting potential.</p> <p>In an array, “sentry” WECs could gather measurements about waves such as amplitude, frequency, and direction. Using wave reconstructing and forecasting, these WECs could then communicate information about conditions to other WECs in the array wirelessly, enabling them to tune minute-by-minute in response to current wave conditions.</p> <p>“If an array of WECs can tune fast enough so they are optimally configured for their current environment, now we are talking serious business,” explains Yue. 
“Moving toward arrays opens up the possibilities of significant advances and gains many times over those of non-interacting, isolated devices.”</p> <p>By examining the optimal size and configuration of WECs using theoretical and computational methods, Yue’s group hopes to develop potentially game-changing frameworks for harnessing the power of waves.</p> <p><strong>Accelerating the discovery of photovoltaics </strong></p> <p>The amount of solar energy that reaches the Earth’s surface offers a tantalizing prospect in the quest for renewable energy. Every hour, an estimated 430 quintillion joules of energy is delivered to Earth from the sun. That’s the equivalent of one year’s worth of global energy consumption by humans.</p> <p>Tonio Buonassisi, professor of mechanical engineering, has dedicated his entire career to developing technologies that harness this energy and convert it into usable electricity. But time, he says, is of the essence. “When you consider what we are up against in terms of climate change, it becomes increasingly clear we are running out of time,” he says.</p> <p>For solar energy to have a meaningful impact, according to Buonassisi, researchers need to develop solar cell materials that are efficient, scalable, cost-effective, and reliable. These four variables pose a challenge for engineers — rather than develop a material that satisfies just one of these factors, they need to create one that ticks off all four boxes and can be moved to market as quickly as possible. “If it takes us 75 years to get a solar cell that does all of these things to market, it’s not going to help us solve this problem. We need to get it to market in the next five years,” Buonassisi adds.</p> <p>To accelerate the discovery and testing of new materials, Buonassisi’s team has developed a process that uses a combination of machine learning and high-throughput experimentation — a type of experimentation that enables a large quantity of materials to be screened at the same time. 
The result is a 10-fold increase in the speed of discovery and analysis for new solar cell materials.</p> <p>“Machine learning is our navigational tool,” explains Buonassisi. “It can de-bottleneck the cycle of learning so we can grind through material candidates and find one that satisfies all four variables.”</p> <p>Shijing Sun, a research scientist in Buonassisi’s group, used a combination of machine learning and high-throughput experiments to quickly assess and test perovskite solar cells.</p> <p>“We use machine learning to accelerate the materials discovery, and developed an algorithm that directs us to the next sampling point and guides our next experiment,” Sun says. Previously, it would take three to five hours to classify a set of solar cell materials. The machine learning algorithm can classify materials in just five minutes.</p> <p>Using this method, Sun and Buonassisi tested 96 compositions. Of those, two perovskite materials hold promise and will be tested further.</p> <p>By using machine learning as a tool for inverse design, the research team hopes to assess thousands of compounds that could lead to the development of a material that enables the large-scale adoption of solar energy conversion. “If in the next five years we can develop that material using the set of productivity tools we’ve developed, it can help us secure the best possible future that we can,” adds Buonassisi.</p> <p><strong>New materials to trap heat</strong></p> <p>While Buonassisi’s team is focused on developing solutions that directly convert solar energy into electricity, researchers including Gang Chen, Carl Richard Soderberg Professor of Power Engineering, are working on technologies that convert sunlight into heat. That thermal energy is then used to generate electricity.</p> <p>“For the past 20 years, I’ve been working on materials that convert heat into electricity,” says Chen. 
While much of this materials research is on the nanoscale, Chen and his team at the NanoEngineering Group are no strangers to large-scale experimental systems. They previously built a to-scale receiver system that used concentrating solar thermal power (CSP).</p> <p>In CSP, sunlight is used to heat up a thermal fluid, such as oil or molten salt. That fluid is then either used to generate electricity by running an engine, such as a steam turbine, or stored for later use.</p> <p>Over the course of a four-year project funded by the U.S. Department of Energy, Chen’s team built a CSP receiver at MIT’s Bates Research and Engineering Center in Middleton, Massachusetts. They developed the Solar Thermal Aerogel Receiver — nicknamed STAR.</p> <p>The system relied on mirrors known as Fresnel reflectors to direct sunlight to pipes containing thermal fluid. Typically, for fluid to effectively trap the heat generated by this reflected sunlight, it would need to be encased in a high-cost vacuum tube. In STAR, however, Chen’s team utilized a transparent aerogel that can trap heat at incredibly high temperatures — removing the need for expensive vacuum enclosures. While letting in over 95 percent of the incoming sunlight, the aerogel retains its insulating properties, preventing heat from escaping the receiver.</p> <p>In addition to being more efficient than traditional vacuum receivers, the aerogel receivers enabled new configurations for the CSP solar reflectors. 
The reflecting mirrors were flatter and more compact than conventionally used parabolic mirrors, resulting in material savings.</p> <p>“Cost is everything with energy applications, so the fact STAR was cheaper than most thermal energy receivers, in addition to being more efficient, was important,” adds Svetlana Boriskina, a research scientist working on Chen’s team.</p> <p>Since the project concluded in 2018, Chen's team has continued to explore solar thermal applications for the aerogel material used in STAR. Chen recently used the aerogel in a device that contained a heat-absorbing material. When placed on a roof on MIT’s campus, the heat-absorbing material, which was covered by a layer of the aerogel, reached a temperature of 220 degrees Celsius. The outside air temperature, for comparison, was 0 degrees Celsius. Unlike STAR, this new system doesn’t require Fresnel reflectors to direct sunlight to the thermal material.</p> <p>“Our latest work using the aerogel enables sunlight concentration without focusing optics to harness thermal energy,” explains Chen. “If you aren’t using focusing optics, you can develop a system that is easier to use and cheaper than traditional receivers.”</p> <p>The aerogel device could potentially be further developed into a system that powers heating and cooling systems in homes.</p> <p><strong>Solving the storage problem</strong></p> <p>While CSP receivers like STAR offer some energy storage capabilities, there is a push to develop more robust energy storage systems for renewable technologies. Storing energy for later use when resources aren’t supplying a consistent stream of energy — for example, when the sun is covered by clouds, or there is little-to-no wind — will be crucial for the adoption of renewable energy on the grid. To solve this problem, researchers are developing new storage technologies.</p> <p>Asegun Henry, Robert N. 
Noyce Career Development Professor, who like Chen has developed CSP technologies, has created a new storage system that has been dubbed “sun in a box.” Using two tanks, excess energy can be stored in white-hot molten silicon. When this excess energy is needed, mounted photovoltaic cells can be actuated into place to convert the white-hot light from the silicon back into electricity.</p> <p>“It’s a true battery that can work with any type of energy conversion,” adds Henry.</p> <p>Betar Gallant, ABS Career Development Professor, meanwhile, is exploring ways to improve the energy density of today’s electrochemical batteries by designing new storage materials that are more cost-effective and versatile for storing cleanly generated energy. Rather than develop these materials using metals that are extracted through energy-intensive mining, she aims to build batteries using more earth-abundant materials.</p> <p>“Ideally, we want to create a battery that can match the irregular supply of solar or wind energy, which peak at different times, without degrading, as today’s batteries do,” explains Gallant.</p> <p>In addition to working on lithium-ion batteries, like Gallant, Yang Shao-Horn, W.M. Keck Professor of Energy, and postdoc Reshma Rao are developing technologies that can directly convert renewable energy to fuels.</p> <p>“If we want to store energy at scale going beyond lithium-ion batteries, we need to use resources that are abundant,” Rao explains. In their electrochemical technology, Rao and Shao-Horn utilize one of the most abundant resources — liquid water.</p> <p>Using an active catalyst and electrodes, water is split into hydrogen and oxygen in a series of chemical reactions. The hydrogen becomes an energy carrier and can be stored for later use in a fuel cell. To convert the energy stored in the hydrogen back into electricity, the reactions are reversed. 
The only by-product of this reaction is water.</p> <p>“If we can get and store hydrogen sustainably, we can basically electrify our economy using renewables like wind, wave, or solar,” says Rao.</p> <p>Rao has broken down every fundamental reaction that takes place within this process. In addition to focusing on the electrode-electrolyte interface involved, she is developing next-generation catalysts to drive these reactions.</p> <p>“This work is at the frontier of the fundamental understanding of active sites catalyzing water splitting for hydrogen-based fuels from solar and wind to decarbonize transport and industry,” adds Shao-Horn.</p> <p><strong>Securing a sustainable future </strong></p> <p>While shifting from a grid powered primarily by fossil fuels to a grid powered by renewable energy seems like a herculean task, there have been promising developments in the past decade. A report released prior to the UN Global Climate Action Summit in September showed that, thanks to $2.6 trillion of investment, renewable energy capacity has quadrupled since 2010.</p> <p>In a statement after the release of the report, Inger Andersen, executive director of the UN Environment Program, stressed the correlation between investing in renewable energy and securing a sustainable future for humankind. “It is clear that we need to rapidly step up the pace of the global switch to renewables if we are to meet international climate and development goals,” Andersen said.</p> <p>No single conversion or storage technology will be responsible for the shift from fossil fuels to renewable energy. It will require a tapestry of complementary solutions from researchers both here at MIT and across the globe.</p> Postdoc Reshma Rao stands next to a pulsed laser deposition system, which is used to deposit well-defined thin films of catalyst materials. 
Photo: Tony PulsoneMechanical engineering, School of Engineering, Renewable energy, Alternative energy, Climate change, Energy, Energy storage, Oceanography and ocean engineering, Photovoltaics, Sustainability, Wind, Solar Anoushka Bose: Targeting a career in security studies and diplomacy Nuclear science and engineering and physics met political science to illuminate a new path. Tue, 17 Dec 2019 15:25:01 -0500 Leda Zimmerman | MIT Political Science <div> <p>Anoushka Bose arrived at MIT in 2016 intent on pursuing problems related to climate change and energy. But two years later, she found herself discussing arms control and international security with Russian foreign minister Sergei Lavrov during a policy forum connecting American and Russian students.</p> <p>“It was eye-opening for me,” says Bose, a double major in political science and physics. “I thought it was fascinating to see how politics and diplomacy work between countries that don't share the same motivations.”</p> <p>In the wake of this experience and a set of equally transformative internships, Bose is now on a new trajectory, moving purposefully toward a public-service career in nuclear policy and diplomacy.</p> <p><strong>Passion for policy and science</strong></p> <p>Growing up in the San Diego, California, area, Bose gravitated toward physics and chemistry in her STEM-oriented high school. But the extracurricular project that completely captivated her was her community's yearlong research and writing competition that traditionally focused on a historical topic. Bose's subject: the Clean Air Act.</p> <p>“This project substantively shaped my interests,” she says. Bose found it “enlightening” to study both the science behind air pollution and the political movement that helped nail down the legislation. 
“I realized I had passions for both the social sciences and science.”</p> <p>Bose initially leaned toward nuclear science and engineering at MIT because she saw “nuclear energy as the pinnacle solution to climate problems.” She later migrated toward physics, where she hoped to gain more latitude to pursue clean-energy policy questions as well.</p> <p>But it was her engagement with political science that propelled Bose on her current academic path.</p> <p>Venturing into 17.581 (Riots, Rebellions and Revolutions), taught by Roger Petersen, the Arthur and Ruth Sloan Professor of Political Science, Bose says, “a gate opened for me into national security.” With its hybrid focus on American and international politics, the class “gave me both knowledge and respect for the entire security enterprise of the U.S.”</p> <p>This class, along with 17.482-3 (U.S. Military Power), taught by Barry R. Posen, the Ford International Professor of Political Science, “kicked off several semesters dedicated to security studies,” says Bose. “This area seemed like it might be really fulfilling as a career.” The summer after her sophomore year, she grabbed a chance to test her premise.</p> <p><strong>The Washington experience</strong></p> <p>With the help of the MIT Washington DC Summer Internship Program, and Ernest J. Moniz, former U.S. Secretary of Energy and Cecil and Ida Green Professor of Physics and Engineering Systems, Bose landed an internship at the Nuclear Threat Initiative. Plunging into research about safeguarding nuclear materials in Central Asia, protecting against radiological challenges, and the potential impacts of a nuclear winter after a small-scale nuclear exchange, Bose strongly felt, “This is the kind of place where I want to be.”</p> <p>The initiative's mission also made an impact on Bose: “I thought maybe I should be exploring global nuclear safety, proliferation, and security issues, rather than energy,” she says. 
With this in mind, she seized an opportunity to dive even deeper into this area, applying for one of 20 U.S. spots in the Stanford-U.S. Russia Forum.</p> <p>Running September 2018 through April 2019, this project brought Bose together with a small group of U.S. and Russian students to discuss the Intermediate-Range Nuclear Forces (INF) Treaty, from which the Trump administration had decided to withdraw. Meeting virtually and then in person (in both Moscow and Washington) to present policy ideas, Bose and her partners tried to offer solutions that might prove politically beneficial to both sides.</p> <p>“From the policy-making side, I hadn't understood the power of individuals to shape what gets done,” she says. “It was really interesting working with the Russians, who often spoke bluntly, and who did not routinely view the U.S. as having pure motivations.”</p> <p>While laboring over the research and writing for this policy project, Bose continued to delve deeper into security studies at MIT. “I needed to gain knowledge and confidence in understanding international crises,” says Bose.</p> <p>Increasingly sure that she “wanted to do something involving diplomacy and international relations,” Bose secured another internship in Washington last summer, working on nuclear energy policy at the State Department. Even though she hoped to concentrate on weapons and proliferation, Bose was eager “to learn about the processes of government and bureaucracy.”</p> <p>The internship did not disappoint. Bose worked on bolstering U.S. nuclear energy business in countries around the world seeking nuclear power. “I had not internalized how the State Department on a daily basis uses nuclear energy as a policy thrust,” she says. She also helped develop U.S. nuclear cooperation accords with Argentina and Romania. “I was so excited to see something come out of my advocacy,” she says.</p> <p>These real-world experiences “sealed the deal” for Bose. 
“After last summer I knew I wanted to work in nuclear policy, focusing on security,” she says. Today, under the direction of political science Associate Professor Vipin Narang, she is delving into the issue of global noncompliance with nuclear materials — work for which she has been named a presidential fellow at the Center for the Study of the Presidency and Congress.</p> <p>She hasn't abandoned energy, though. She serves as president of the MIT Energy Club, devoting considerable time to hosting events as she finishes coursework for her double major. She is applying both to law school and for a full-time job next year in Washington in policy and/or diplomacy.</p> <p>In a world challenged by nationalism and conflict, Bose retains a sense of optimism and commitment to a larger goal — a safer world. “It's simple for me to believe in the power of cooperation and trust, especially after working alongside Russian students all year,” she says. “I learned that both sides deeply value nuclear security, and neither side wants a much more dangerous world where no one wins.”</p> </div> Anoushka Bose is moving purposefully toward a public-service career in nuclear policy and diplomacy.Political science, School of Humanities Arts and Social Sciences, Physics, Policy, Nuclear security and policy, Energy, International relations, Students, Undergraduate, School of Science, Government, Profile, Nuclear science and engineering, Global The uncertain role of natural gas in the transition to clean energy MIT study finds that challenges in measuring and mitigating leakage of methane, a powerful greenhouse gas, prove pivotal. Mon, 16 Dec 2019 10:43:54 -0500 David L. 
Chandler | MIT News Office <p>A new MIT study examines the opposing roles of natural gas in the battle against climate change — as a bridge toward a lower-emissions future, but also a contributor to greenhouse gas emissions.</p> <p>Natural gas, which is mostly methane, is viewed as a significant “bridge fuel” to help the world move away from the greenhouse gas emissions of fossil fuels, since burning natural gas for electricity produces about half as much carbon dioxide as burning coal. But methane is itself a potent greenhouse gas, and it currently leaks from production wells, storage tanks, pipelines, and urban distribution pipes for natural gas. Increasing its usage, as a strategy for decarbonizing the electricity supply, will also increase the potential for such “fugitive” methane emissions, although there is great uncertainty about how much to expect. Recent studies have documented the difficulty in even measuring today’s emissions levels.</p> <p>This uncertainty adds to the difficulty of assessing natural gas’ role as a bridge to a net-zero-carbon energy system, and in knowing when to transition away from it. But strategic choices must be made now about whether to invest in natural gas infrastructure. This inspired MIT researchers to quantify timelines for cleaning up natural gas infrastructure in the United States or accelerating a shift away from it, while recognizing the uncertainty about fugitive methane emissions.</p> <p>The study shows that in order for natural gas to be a major component of the nation’s effort to meet greenhouse gas reduction targets over the coming decade, present methods of controlling methane leakage would have to improve by anywhere from 30 to 90 percent. Given current difficulties in monitoring methane, achieving those levels of reduction may be a challenge. Methane is a valuable commodity, and therefore companies producing, storing, and distributing it already have some incentive to minimize its losses. 
Even so, intentional natural gas venting and flaring (which emits carbon dioxide) continue.</p> <p>The study also finds that policies that favor moving directly to carbon-free power sources, such as wind, solar, and nuclear, could meet the emissions targets without requiring such improvements in leakage mitigation, even though natural gas use would still be a significant part of the energy mix.</p> <p>The researchers compared several different scenarios for curbing methane from the electric generation system in order to meet a 2030 target of a 32 percent cut in carbon dioxide-equivalent emissions relative to 2005 levels, which is consistent with past U.S. commitments to mitigate climate change. The findings appear today in the journal <em>Environmental Research Letters</em>, in a paper by MIT postdoc Magdalena Klemun and Associate Professor Jessika Trancik.</p> <p>Methane is a much stronger greenhouse gas than carbon dioxide, though how much stronger depends on the timeframe considered. Methane traps heat far more effectively, but it doesn’t last as long once it’s in the atmosphere — persisting for decades rather than centuries. When averaged over a 100-year timeline, which is the comparison most widely used, methane is approximately 25 times more powerful than carbon dioxide. But averaged over a 20-year period, it is 86 times stronger.</p> <p>The actual leakage rates associated with the use of methane are widely distributed, highly variable, and very hard to pin down. Using figures from a variety of sources, the researchers found the overall range to be somewhere between 1.5 percent and 4.9 percent of the amount of gas produced and distributed. Some of this happens right at the wells, some occurs during processing and from storage tanks, and some is from the distribution system. 
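The two accounting horizons and the leakage range cited above can be combined in a quick back-of-envelope calculation. The sketch below is illustrative only: the 100-tonne gas supply is an assumed figure, not one from the study, and a full comparison would also count the carbon dioxide from combustion itself.

```python
# Back-of-envelope CO2-equivalent of fugitive methane, using the GWP
# values from the article (25x over 100 years, 86x over 20 years) and
# the 1.5-4.9 percent leakage range the researchers report.
GWP_100 = 25   # methane vs. carbon dioxide, 100-year average
GWP_20 = 86    # methane vs. carbon dioxide, 20-year average

def leaked_co2e(gas_tonnes: float, leak_rate: float, gwp: float) -> float:
    """Tonnes of CO2-equivalent attributable to the leaked fraction of a gas supply."""
    return gas_tonnes * leak_rate * gwp

# Illustrative 100-tonne supply (an assumption), at both ends of the range:
for rate in (0.015, 0.049):
    print(f"leak rate {rate:.1%}: "
          f"{leaked_co2e(100, rate, GWP_100):.1f} t CO2e (100-yr horizon), "
          f"{leaked_co2e(100, rate, GWP_20):.1f} t CO2e (20-yr horizon)")
```

Even at the low end of the leakage range, the 20-year accounting more than triples the warming attributed to the same leak, which is why the choice of horizon matters so much in the scenarios the study compares.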
Thus, a variety of different kinds of monitoring systems and mitigation measures may be needed to address the different conditions.</p> <p>“Fugitive emissions can be escaping all the way from where natural gas is being extracted and produced, all the way along to the end user,” Trancik says. “It’s difficult and expensive to monitor it along the way.”</p> <p>That in itself poses a challenge. “An important thing to keep in mind when thinking about greenhouse gases,” she says, “is that the difficulty in tracking and measuring methane is itself a risk.” If researchers are unsure how much there is and where it is, it’s hard for policymakers to formulate effective strategies to mitigate it. This study’s approach is to embrace the uncertainty instead of being hamstrung by it, Trancik says: the uncertainty itself should inform current strategies, by motivating investments in leak detection to reduce uncertainty, or a faster transition away from natural gas.</p> <p>“Emissions rates for the same type of equipment, in the same year, can vary significantly,” adds Klemun. “It can vary depending on which time of day you measure it, or which time of year. There are a lot of factors.”</p> <p>Much attention has focused on so-called “super-emitters,” but even these can be difficult to track down. “In many data sets, a small fraction of point sources contributes disproportionately to overall emissions,” Klemun says. “If it were easy to predict where these occur, and if we better understood why, detection and repair programs could become more targeted.” But achieving this will require additional data with high spatial resolution, covering wide areas and many segments of the supply chain, she says.</p> <p>The researchers looked at the whole range of uncertainties, from how much methane is escaping to how to characterize its climate impacts, under a variety of different scenarios. 
One approach places strong emphasis on replacing coal-fired plants with natural gas, for example; others increase investment in zero-carbon sources while still maintaining a role for natural gas.</p> <p>In the first approach, methane emissions from the U.S. power sector would need to be reduced by 30 to 90 percent from today’s levels by 2030, along with a 20 percent reduction in carbon dioxide. Alternatively, that target could be met through even greater carbon dioxide reductions, such as through faster expansion of low-carbon electricity, without requiring any reductions in natural gas leakage rates. The higher end of the published ranges reflects greater emphasis on methane’s short-term warming contribution.</p> <p>One question raised by the study is how much to invest in developing technologies and infrastructure for safely expanding natural gas use, given the difficulties in measuring and mitigating methane emissions, and given that virtually all scenarios for meeting greenhouse gas reduction targets call for ultimately phasing out, by mid-century, natural gas that doesn’t include carbon capture and storage. “A certain amount of investment probably makes sense to improve and make use of current infrastructure, but if you’re interested in really deep reduction targets, our results make it harder to make a case for that expansion right now,” Trancik says.</p> <p>The detailed analysis in this study should provide guidance for local and regional regulators as well as policymakers all the way to federal agencies, they say. The insights also apply to other economies relying on natural gas. 
The best choices and exact timelines are likely to vary depending on local circumstances, but the study frames the issue by examining a variety of possibilities that include the extremes in both directions — that is, toward investing mostly in improving the natural gas infrastructure while expanding its use, or accelerating a move away from it.</p> <p>The research was supported by the MIT Environmental Solutions Initiative. The researchers also received support from MIT’s Policy Lab at the Center for International Studies.</p> Methane is a potent greenhouse gas, and it currently leaks from production wells, storage tanks, pipelines, and urban distribution pipes for natural gas.IDSS, Research, Solar, Energy, Renewable energy, Alternative energy, Climate change, Technology and society, Oil and gas, Economics, Policy, MIT Energy Initiative, Emissions, Sustainability, ESI, Greenhouse gases Hacking into a sustainable energy future The 2019 MIT EnergyHack presented opportunities for students and companies to collaborate and solve problems facing the energy sector today. Wed, 11 Dec 2019 14:45:01 -0500 Taylor Tracy | Energy Club <p>During the third weekend in November, students from MIT and colleges across the globe convened on MIT’s campus to hack real-world challenges in the energy industry at the 2019 MIT EnergyHack. 
Hackers arrived at the Stata Center that Friday evening and had 36 hours to come up with a solution to the challenge they were assigned with their team members before presenting to company representatives, fellow hackers, and judges Sunday morning.</p> <p>This year, MIT’s only energy-centric hackathon, hosted by the <a href="">MIT Energy Club</a>, focused on transitioning society toward its sustainability goals for a low-carbon energy landscape, with corporate sponsors exclusive to the areas of renewable energy, energy storage, and sustainable materials manufacturing.</p> <p>Staying true to the theme, the leadership team, led by managing directors Supratim Das, a PhD and MBA dual-degree candidate in chemical engineering, and Jane Reed, a senior in physics and nuclear science and engineering, minimized waste by supplying hackers with reusable aluminum water bottles and bamboo utensil kits to decrease the use of plastic, and also communicated electronically instead of through printed materials.</p> <p>“We wanted the participants to come away recognizing the importance of engaging in sustainable actions in day-to-day life while being an agent to propagate the message of sustainability and action on climate change to their home countries,” says Das.</p> <p>Challenges were presented by Customer First Renewables, Iberdrola, Ionic Materials, NICE, Saint-Gobain, Toyota Research Institute, and The Energy Authority. Each challenge had a primary focus on finding ways to harness solar, wind, and energy-storage technologies to meet society’s growing energy demands worldwide.</p> <p>While lithium-ion batteries were a primary topic for several challenges, each challenge offered different core problems to tackle. During his keynote Friday night, Patrick Herring, research scientist at Toyota Research Institute, emphasized the need for collaboration in the battery storage energy sector for a sustainable future — particularly with electric-vehicle batteries. 
This tied into the Toyota Research Institute’s challenge, which had hackers consider the full lifespan of batteries.</p> <p>“The challenge that we presented for having some kind of second life for batteries grows out of a need that we see coming down the road, but we don’t really have a great solution — there’s not a great solution out there,” said Herring. “It’s good to start people thinking about it before it gets here.”</p> <p>This focus on the future was shared by many at the event, but not only regarding the future of energy on a global scale. “For us, it was a chance to meet a couple of hundred students and engineers in the world and learn about them and have them learn about us,” said Julia Di-Corleto, director of Saint-Gobain's research and development center in Massachusetts, when asked what takeaways her company sought to gain from presenting a challenge in the hackathon.</p> <p>The sentiment of collaborating with students beyond the EnergyHack was a common theme. “Something unique is to have the opportunity to really get in touch directly with the students and know what they want for the future, and share our project. I’m sure we're all listening to great ideas and maybe we can move forward [together],” said Roberto Mariscal, head of innovation at Iberdrola Spain. “The diversity of the people, it’s incredible. I have met people from all over the world in just half an hour, it’s fantastic. That’s something unique from MIT.”</p> <p>One team for each challenge advanced from the preliminary poster presentation judging session to the final presentation round, where they pitched their solutions to a crowded auditorium with all the event’s attendees.
Team Booth came in third, winning $1,000 for their solution to the Ionic Materials challenge; team Big Decentralized Energy came in second, winning $1,500 for their solution to the Iberdrola challenge; and team Synergy took first place, winning $2,000 for their solution to the Toyota Research Institute challenge. Solutions to the challenges can be viewed on the <a href="">MIT EnergyHack website</a>.</p> <p>The turnout for the event, now in its fifth year, speaks to its own sustainability and the growing interest in addressing energy issues. “It is indeed rare that you have over 150 students motivated about energy along with more than 10 corporate sponsors under the same roof, ready to listen to new ideas and make changes happen on a global scale,” says Das. “It is truly what MIT as a university stands for.”</p> Students gather in the Stata Center to hear the 2019 MIT EnergyHack challenges. Participants had 36 hours to come up with a solution to the challenge they were assigned with their team members before presenting to company representatives, fellow hackers, and judges.Photo: MIT EnergyHackSpecial events and guest speakers, Invention, Industry, Energy, Sustainability, Innovation and Entrepreneurship (I&E), Students, Student life, Contests and academic competitions Taking the carbon out of construction with engineered wood Substituting lumber for materials such as cement and steel could cut building emissions and costs. Wed, 11 Dec 2019 12:55:01 -0500 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>To meet the long-term goals of the Paris Agreement on climate change — keeping global warming well below 2 degrees Celsius and ideally capping it at 1.5 C — humanity will ultimately need to achieve net-zero emissions of greenhouse gases (GHGs) into the atmosphere. To date, emissions reduction efforts have largely focused on decarbonizing the two economic sectors responsible for the most emissions, electric power and transportation.
Other approaches aim to remove carbon from the atmosphere and store it through carbon capture technology, biofuel cultivation, and massive tree planting. &nbsp;</p> <p>As it turns out, planting trees is not the only way forestry can help in climate mitigation; how we use wood harvested from trees may also make a difference. Recent studies have shown that engineered wood products — composed of wood and various types of adhesive to enhance physical strength — involve far fewer carbon dioxide emissions than mineral-based building materials, and at lower cost. Now <a href="" target="_blank">new research</a> in the journal <em>Energy Economics</em> explores the potential environmental and economic impact in the United States of substituting lumber for energy-intensive building materials such as cement and steel, which account for <a href="" target="_blank">nearly 10 percent</a> of human-made GHG emissions and are among the hardest to reduce.</p> <p>“To our knowledge, this study is the first economy-wide analysis to evaluate the economic and emissions impacts of substituting lumber products for more CO<sub>2</sub>-intensive materials in the construction sector,” says the study’s lead author <a href="">Niven Winchester</a>, a research scientist at the MIT Joint Program on the Science and Policy of Global Change and Motu Economic and Public Policy Research. 
“There is no silver bullet to reduce GHGs, so exploiting a suite of emission-abatement options is required to mitigate climate change.”</p> <p>Comparing the economic and emissions impacts of replacing CO<sub>2</sub>-intensive building materials (e.g., steel and concrete) with lumber products in the United States under an economy-wide cap-and-trade policy consistent with the nation’s Paris Agreement GHG emissions-reduction pledge, the study found that the CO<sub>2</sub> intensity (tons of CO<sub>2</sub> emissions per dollar of output) of lumber production is about 20 percent less than that of fabricated metal products, under 50 percent that of iron and steel, and under 25 percent that of cement. In addition, shifting construction toward lumber products lowers the GDP cost of meeting the emissions cap by approximately $500 million and reduces the carbon price.</p> <p>The authors caution that these results only take into account emissions resulting from the use of fossil fuels in harvesting, transporting, fabricating, and milling lumber products, and neglect potential increases in atmospheric CO<sub>2</sub> associated with tree harvesting or beneficial long-term carbon sequestration provided by wood-based building materials.</p> <p>“The source of lumber, and the conditions under which it is grown and harvested, and the fate of wood products deserve further attention to develop a full accounting of the carbon implications of expanded use of wood in building construction,” they write. 
“Setting aside those issues, lumber products appear to be advantageous compared with many other building materials, and offer one potential option for reducing emissions from sectors like cement, iron and steel, and fabricated metal products — by reducing the demand for these products themselves.”</p> <p>Funded, in part, by Weyerhaeuser and the Softwood Lumber Board, the study develops and utilizes a customized economy-wide model that includes a detailed representation of energy production and use and represents production of construction, forestry, lumber, and mineral-based construction materials.</p> A 70-unit British Columbia lakeside resort hotel was built with local engineered wood products, including cross-laminated timber. New research explores the potential environmental and economic impact in the United States of substituting lumber for energy-intensive building products such as cement and steel.Photo: Province of British Columbia/FlickrResearch, Climate change, Greenhouse gases, Emissions, Climate, Environment, Energy, Economics, Policy, Carbon dioxide, Building, Sustainability, Materials Science and Engineering, Cement, Joint Program on the Science and Policy of Global Change Toward more efficient computing, with magnetic waves Circuit design offers a path to “spintronic” devices that use little electricity and generate practically no heat. Thu, 28 Nov 2019 13:59:59 -0500 Rob Matheson | MIT News Office <p>MIT researchers have devised a novel circuit design that enables precise control of computing with magnetic waves — with no electricity needed. The advance takes a step toward practical magnetic-based devices, which have the potential to compute far more efficiently than electronics.</p> <p>Classical computers rely on massive amounts of electricity for computing and data storage, and generate a lot of wasted heat. 
In search of more efficient alternatives, researchers have started designing magnetic-based “spintronic” devices, which use relatively little electricity and generate practically no heat.</p> <p>Spintronic devices leverage the “spin wave” — a quantum property of electrons — in magnetic materials with a lattice structure. This approach involves modulating the spin wave properties to produce some measurable output that can be correlated to computation. Until now, modulating spin waves has required injected electrical currents using bulky components that can cause signal noise and effectively negate any inherent performance gains.</p> <p>The MIT researchers developed a circuit architecture that uses only a nanometer-wide domain wall in layered nanofilms of magnetic material to modulate a passing spin wave, without any extra components or electrical current. In turn, the spin wave can be tuned to control the location of the wall, as needed. This provides precise control of two changing spin wave states, which correspond to the 1s and 0s used in classical computing. A paper describing the circuit design was published today in <em>Science</em>.</p> <p>In the future, pairs of spin waves could be fed into the circuit through dual channels, modulated for different properties, and combined to generate some measurable quantum interference — similar to how photon wave interference is used for quantum computing. Researchers hypothesize that such interference-based spintronic devices, like quantum computers, could execute highly complex tasks that conventional computers struggle with.</p> <p>“People are beginning to look for computing beyond silicon. Wave computing is a promising alternative,” says Luqiao Liu, a professor in the Department of Electrical Engineering and Computer Science (EECS) and principal investigator of the Spintronic Material and Device Group in the Research Laboratory of Electronics. 
“By using this narrow domain wall, we can modulate the spin wave and create these two separate states, without any real energy costs. We just rely on spin waves and intrinsic magnetic material.”</p> <p>Joining Liu on the paper are Jiahao Han, Pengxiang Zhang, and Justin T. Hou, three graduate students in the Spintronic Material and Device Group; and EECS postdoc Saima A. Siddiqui.</p> <p><strong>Flipping magnons</strong></p> <p>Spin waves are ripples of energy with small wavelengths. Chunks of the spin wave, which are essentially the collective spin of many electrons, are called magnons. While magnons are not true particles, like individual electrons, they can be measured similarly for computing applications.</p> <p>In their work, the researchers utilized a customized “magnetic domain wall,” a nanometer-sized barrier between two neighboring magnetic structures. They layered a pattern of cobalt/nickel nanofilms — each a few atoms thick — with certain desirable magnetic properties that can handle a high volume of spin waves. Then they placed the wall in the middle of a magnetic material with a special lattice structure, and incorporated the system into a circuit.</p> <p>On one side of the circuit, the researchers excited constant spin waves in the material. As the wave passes through the wall, its magnons immediately spin in the opposite direction: Magnons in the first region spin north, while those in the second region — past the wall —&nbsp;spin south. This causes a dramatic shift in the wave’s phase (angle) and a slight decrease in its magnitude (power).</p> <p>In experiments, the researchers placed a separate antenna on the opposite side of the circuit that detects and transmits an output signal. Results indicated that, at its output state, the phase of the input wave flipped 180 degrees.
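The two logic states can be pictured as identical waves offset by half a period. This toy numerical sketch (ordinary textbook wave arithmetic, not the researchers' model) shows why a 180-degree phase flip makes the two states cleanly distinguishable: the flipped wave exactly negates the reference wave.

```python
import math

def spin_wave(t, phase=0.0, amplitude=1.0, freq=1.0):
    """Toy sinusoidal stand-in for a spin wave's oscillation."""
    return amplitude * math.sin(2 * math.pi * freq * t + phase)

t = 0.125  # sample instant (an eighth of a period)
state_0 = spin_wave(t)                 # reference wave: logical "0"
state_1 = spin_wave(t, phase=math.pi)  # wall-flipped wave: logical "1"

# A 180-degree flip exactly negates the wave at every instant, so the
# two states interfere destructively; two in-phase waves would instead
# add constructively and double the amplitude.
assert abs(state_0 + state_1) < 1e-12
```

In a real device the readout antenna measures this phase (and a small magnitude change) rather than summing the waves directly, but the binary encoding rests on the same half-period offset.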
The wave’s magnitude — measured from highest to lowest peak —&nbsp;had also decreased by a significant amount.</p> <p><strong>Adding some torque</strong></p> <p>Then, the researchers discovered a mutual interaction between the spin wave and the domain wall that enabled them to efficiently toggle between two states. Without the domain wall, the circuit would be uniformly magnetized; with the domain wall, the circuit has a split, modulated wave.</p> <p>By controlling the spin wave, they found they could control the position of the domain wall. This relies on a phenomenon called “spin-transfer torque,” in which spinning electrons essentially jolt a magnetic material to flip its magnetic orientation.</p> <p>In the researchers’ work, they boosted the power of injected spin waves to induce a certain spin of the magnons. This actually draws the wall toward the boosted wave source. In doing so, the wall gets jammed under the antenna — effectively making it unable to modulate waves and ensuring uniform magnetization in this state.</p> <p>Using a special magnetic microscope, they showed that this method causes a micrometer-size shift in the wall, which is enough to position it anywhere along the material block. Notably, the mechanism of magnon spin-transfer torque was proposed, but not demonstrated, a few years ago. “There was good reason to think this would happen,” Liu says. “But our experiments prove what will actually occur under these conditions.”</p> <p>The whole circuit is like a water pipe, Liu says. The valve (domain wall) controls how the water (spin wave) flows through the pipe (material). “But you can also imagine making water pressure so high, it breaks the valve off and pushes it downstream,” Liu says.
“If we apply a strong enough spin wave, we can move the position of the domain wall — except it moves slightly upstream, not downstream.”</p> <p>Such innovations could enable practical wave-based computing for specific tasks, such as the signal-processing technique called the “fast Fourier transform.” Next, the researchers hope to build a working wave circuit that can execute basic computations. Among other things, they have to optimize materials, reduce potential signal noise, and further study how fast they can switch between states by moving around the domain wall. “That’s next on our to-do list,” Liu says.</p> An MIT-invented circuit uses only a nanometer-wide “magnetic domain wall” to modulate the phase and magnitude of a spin wave, which could enable practical magnetic-based computing — using little to no electricity.Image courtesy of the researchers, edited by MIT NewsResearch, Computer science and technology, Nanoscience and nanotechnology, Spintronics, electronics, Energy, Quantum computing, Materials Science and Engineering, Design, Research Laboratory of Electronics, Electrical Engineering & Computer Science (eecs), School of Engineering MIT Energy Initiative report charts pathways for sustainable personal transportation Technological innovations, policies, and behavioral changes will all be needed to reach Paris climate agreement targets. Tue, 19 Nov 2019 00:00:00 -0500 Kathryn Luu | MIT Energy Initiative <p>In our daily lives, we all make choices about how we travel and what type of vehicle we own or use. We consider these choices within the constraints of our current transportation system and weigh concerns including costs, convenience, and — increasingly — carbon emissions.
"<a href="" target="_blank">Insights into Future Mobility</a>," a multidisciplinary report released today by the <a href="" target="_blank">MIT Energy Initiative</a> (MITEI), explores how individual travel decisions will be shaped by complex interactions between technologies, markets, business models, government policies, and consumer preferences — and the potential consequences as personal mobility undergoes tremendous changes in the years ahead.</p> <p>The<em> </em>report is the culmination of MITEI’s three-year <a href="" target="_blank">Mobility of the Future</a> study, which is part of <a href="" target="_blank">MIT’s Plan for Action on Climate Change</a>. The report highlights the importance of near-term action to ensure the long-term sustainability of personal mobility. The researchers ultimately find that continued technological innovation is necessary and must be accompanied by cross-sector policies and changes to consumer behavior in order to meet Paris Agreement targets for greenhouse gas emissions reductions.</p> <p>“Understanding the future of personal mobility requires an integrated analysis of technology, infrastructure, consumer choice, and government policy,” says MITEI Director Robert C. Armstrong, a professor of chemical engineering at MIT. “The study team has examined how these different dimensions will develop and interact, and the report offers possible pathways toward achieving a more sustainable personal transportation system.”</p> <p>The study team of MIT faculty, researchers, and students focused on five main areas of inquiry. They investigated the potential impact of global climate policies on fleet composition and fuel consumption, and the outlook for vehicle ownership and travel, with a focus on the U.S. and China. 
They also researched characteristics and future market share of alternative fuel vehicles, including plug-in electric and hydrogen fuel cell vehicles, and infrastructure considerations for charging and fueling, particularly as they affect future demand. Another main area of focus was the future of urban mobility, especially the potentially disruptive role of ride-hailing services and autonomous vehicles.</p> <p>The researchers find that there is considerable opportunity for reducing emissions from personal mobility by improving powertrain efficiency and deploying alternative fuel vehicles in the coming decades. These changes must be accompanied by decarbonization of the production of the fuels and electricity that power these vehicles in order to reach global emissions mitigation targets and achieve cleaner air and other environmental and human health benefits.</p> <p>“Our analysis shows that reducing the carbon intensity of the light-duty vehicle fleet contributes to climate change mitigation goals, as part of the larger solution,” says Sergey Paltsev, deputy director of the MIT Joint Program on the Science and Policy of Global Change and senior research scientist at MITEI. “If we are to reach international goals for limiting temperature rise and other climate change-related impacts, we will need comprehensive climate policies that promote the adoption of alternative fuel vehicles in the transportation sector and simultaneously decarbonize the electricity sector.”</p> <p>Several factors influence an individual’s decision to adopt an alternative fuel vehicle, such as a battery electric vehicle. The researchers found that the most important, interrelated factors that impact alternative vehicle adoption include cost, driving range, and charging convenience. They conclude that as production volumes increase, battery costs and the purchase price of electric vehicles will decrease, which will in turn drive sales. 
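The cost-volume feedback just described is often modeled with a Wright's-law learning curve. The sketch below assumes a roughly 18 percent cost decline per doubling of cumulative production, a commonly cited historical figure for lithium-ion packs; the learning rate and starting cost are illustrative assumptions, not numbers from the MITEI report.

```python
# Wright's law: cost falls by a fixed fraction each time cumulative
# production doubles. Learning rate and starting cost are assumptions.
LEARNING_RATE = 0.18   # ~18% cheaper per doubling (illustrative)
START_COST = 200.0     # $/kWh at the reference production volume

def pack_cost(doublings: float) -> float:
    """Projected $/kWh after a given number of production doublings."""
    return START_COST * (1 - LEARNING_RATE) ** doublings

for d in range(4):
    print(d, round(pack_cost(d), 1))
# Three doublings of cumulative output already cut pack costs by ~45%,
# the kind of self-reinforcing adoption loop the report describes.
```

The compounding is the point: each doubling cuts cost, lower cost raises sales, and higher sales bring the next doubling closer.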
Improved batteries would extend the vehicle range, reinforcing the attractiveness of alternative fuel vehicles to consumers. Greater deployment of electric vehicles creates a larger market for publicly available charging infrastructure, which is critical for supporting charging convenience. Early government support for alternative fuel vehicles and charging and fueling infrastructure can help launch a self-reinforcing trajectory of adoption — and has already contributed to an increase in alternative fuel vehicle deployment.</p> <p>“We found that substantial uptake of battery electric vehicles is likely and that the extent and speed of this transition to electrification is sensitive to evolving battery costs, availability of charging infrastructure, and policy support,” says <a href="">William H. Green</a>, a professor of chemical engineering at MIT and the study chair. This large-scale deployment of battery electric vehicles is expected to help them reach total cost-of-ownership parity with internal combustion engine vehicles in approximately 10 years in the U.S. It should also lead to new business opportunities, including solutions for developing cost-effective methods of recycling batteries on an industrial scale.</p> <p>The researchers also examined the role of consumer attitudes toward car ownership and use in both established and emerging economies. In the U.S., the researchers analyzed trends in population and socioeconomic factors to estimate future demand for vehicles and vehicle travel. While many have argued that lower car ownership and use among millennials may lead to a reduced personal vehicle fleet in coming decades, the study team found that generational differences could be completely explained by differences in socioeconomics — meaning that there is no significant difference in preferences for vehicle ownership or use between millennials and previous generations. 
Therefore, the stock of light-duty vehicles and number of vehicle-miles traveled will likely increase by approximately 30 percent by 2050 in the U.S. In addition, the analysis indicates that “car pride” — the attribution of social status and personal image to owning and using a car — has an effect on car ownership as strong as that of income. An analysis of car pride across countries revealed that car pride is higher in emerging vehicle markets; among established markets, car pride is highest in the U.S.</p> <p>The adoption of new technologies and business models for personal mobility at scale will require major shifts in consumer perceptions and behaviors, notes Joanna Moody, research program manager of MITEI’s <a href="">Mobility Systems Center</a> and a coordinating author of the report. “Symbolic and emotional attachments to car ownership and use, particularly among individuals in emerging economies, could pose a significant barrier to the widespread adoption of more sustainable alternatives to privately owned vehicles powered by petroleum-based fuels,” Moody says. “We will need proactive efforts through public policy to establish new social norms to break down these barriers.”</p> <p>The researchers also looked at China, the largest market for new vehicle sales, to analyze how cities form transportation policies and to estimate how those local-level policies might impact the future size of China’s vehicle stock. To date, six major Chinese cities and one province have implemented car ownership restriction policies in response to severe congestion and air pollution. Our researchers found that if the six megacities continue with these restrictions, the country’s light-duty vehicle fleet could be 4 percent (12 million vehicles) smaller by 2030 than it would be without these restrictions. 
If the policies are adopted in more of China’s cities facing congestion and air pollution challenges, the fleet could be up to 10 percent (32 million vehicles) smaller in 2030 than it would be without those restrictions.</p> <p>Finally, the team explored how the introduction of low-cost, door-to-door autonomous vehicle (AV) mobility services will interact with existing modes of transportation in dense cities with incumbent public transit systems. They find that introducing this low-cost mobility service without restrictions can lead to increased congestion, travel times, and vehicle miles traveled — as well as reduced public transit ridership. However, these negative impacts can be mitigated if low-cost mobility services are introduced alongside policies such as “first/last mile” policies (using AVs to transport riders to and from public transit stations) or policies that reduce private vehicle ownership. The findings apply even to cities with vastly different levels of public transit service.</p> <p>Building on the research started under the&nbsp;Mobility of the Future&nbsp;study, MITEI has now launched a new Low-Carbon Energy Center, the <a href="" target="_blank">Mobility Systems Center</a>. Approaching mobility from a sociotechnical perspective, the center identifies key challenges, investigates current and potential future trends, and analyzes the societal and environmental impacts of emerging solutions for global passenger and freight mobility.</p> <p>The Mobility of the Future study received support from an external consortium of international companies with expertise in various aspects of the transportation sector, including energy, vehicle manufacturing, and infrastructure. 
The report, its findings, and analyses are solely the work of the MIT researchers.</p> <p>For more information or to access the<em> "</em>Insights into Future Mobility" report, visit&nbsp;<a href=""></a>.</p> MITEI’s "Insights into Future Mobility" report highlights the importance of near-term action to ensure the long-term sustainability of personal mobility.Image courtesy of the MIT Energy InitiativeMIT Energy Initiative (MITEI), Sustainability, Alternative energy, Oil and gas, Electric vehicles, Infrastructure, Policy, Research, Automobiles, Autonomous vehicles, Cities, Emissions, China, Energy, Joint Program on the Science and Policy of Global Change Researchers generate terahertz laser with laughing gas Device may enable “T-ray vision” and better wireless communication. Thu, 14 Nov 2019 13:59:59 -0500 Jennifer Chu | MIT News Office <p>Within the electromagnetic middle ground between microwaves and visible light lies terahertz radiation, and the promise of “T-ray vision.”</p> <p>Terahertz waves have frequencies higher than microwaves and lower than infrared and visible light. Where optical light is blocked by most materials, terahertz waves can pass straight through, similar to microwaves. If they were fashioned into lasers, terahertz waves might enable “T-ray vision,” with the ability to see through clothing, <a href="">book covers</a>, and other thin materials. Such technology could produce crisp, higher-resolution images than microwaves, and be far safer than X-rays.</p> <p>The reason we don’t see T-ray machines in, for instance, airport security lines and medical imaging facilities is that producing terahertz radiation requires very large, bulky setups or devices, many operating at ultracold temperatures, that produce terahertz radiation at a single frequency — not very useful, given that a wide range of frequencies is required to penetrate various materials.</p> <p>Now researchers from MIT, Harvard University, and the U.S. 
Army have built a compact device, the size of a shoebox, that works at room temperature to produce a terahertz laser whose frequency they can tune over a wide range. The device is built from commercial, off-the-shelf parts and is designed to generate terahertz waves by spinning up the energy of molecules in nitrous oxide, or, as it’s more commonly known, laughing gas.</p> <p>Steven Johnson, professor of mathematics at MIT, says that in addition to T-ray vision, terahertz waves can be used as a form of wireless communication, carrying information at a higher bandwidth than radar, for instance, and doing so across distances that scientists can now tune using the group’s device.</p> <p>“By tuning the terahertz frequency, you can choose how far the waves can travel through air before they are absorbed, from meters to kilometers, which gives precise control over who can ‘hear’ your terahertz communications or ‘see’ your terahertz radar,” Johnson says.&nbsp;“Much like changing the dial on your radio, the ability to easily tune a terahertz source is crucial to opening up new applications in wireless communications, radar, and spectroscopy.”</p> <p>Johnson and his colleagues have published their results today in the journal <em>Science</em>. Co-authors include MIT postdoc Fan Wang, along with Paul Chevalier, Arman Amirzhan, Marco Piccardo, and Federico Capasso of Harvard University, and Henry Everitt of the U.S. Army Combat Capabilities Development Command Aviation and Missile Center.</p> <p><strong>Molecular breathing room</strong></p> <p>Since the 1970s, scientists have experimented with generating terahertz waves using molecular gas lasers — setups in which a high-powered infrared laser is shot into a large tube filled with gas (typically methyl fluoride) whose molecules react by vibrating and eventually rotating. 
The rotating molecules can jump from one energy level to the next, the difference of which is emitted as a sort of leftover energy, in the form of a photon in the terahertz range. As more photons build up in the cavity, they produce a terahertz laser.</p> <p>Improving the design of these gas lasers has been hampered by unreliable theoretical models, the researchers say. In small cavities at high gas pressures, the models predicted that, beyond a certain pressure, the molecules would be too “cramped” to spin and emit terahertz waves. Partly for this reason, terahertz gas lasers typically used meters-long cavities and large infrared lasers.&nbsp;&nbsp;</p> <p>However, in the 1980s, Everitt found that he was able to produce terahertz waves in his laboratory using a gas laser that was much smaller than traditional devices, at pressures far higher than the models said was possible. This discrepancy was never fully explained, and work on terahertz gas lasers fell by the wayside in favor of other approaches.</p> <p>A few years ago, Everitt mentioned this theoretical mystery to Johnson when the two were collaborating on other work as part of MIT’s Institute for Soldier Nanotechnologies. Together with Everitt, Johnson and Wang took up the challenge, and ultimately formulated a new mathematical theory to describe the behavior of a gas in a molecular gas laser cavity. The theory also successfully explained how terahertz waves could be emitted, even from very small, high-pressure cavities.</p> <p>Johnson says that while gas molecules can vibrate at multiple frequencies and rotational rates in response to an infrared pump, previous theories discounted many of these vibrational states and assumed instead that a handful of vibrations were what ultimately mattered in producing a terahertz wave. 
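The terahertz photon energies come from the spacing between rotational levels: for a linear molecule treated as a rigid rotor, the J to J+1 transition frequency is approximately 2B(J+1), where B is the rotational constant. The sketch below uses a literature value of roughly 0.419 cm^-1 for nitrous oxide; the quantum numbers chosen are illustrative, not taken from the paper.

```python
# Rigid-rotor estimate of rotational transition frequencies for N2O.
# B is an approximate literature value; centrifugal distortion ignored.
C_CM_PER_S = 2.998e10              # speed of light in cm/s
B_N2O_CM = 0.419                   # N2O rotational constant, cm^-1 (approx.)
B_N2O_HZ = B_N2O_CM * C_CM_PER_S   # ~12.6 GHz

def transition_freq_hz(j: int) -> float:
    """Frequency of the J -> J+1 rotational transition (rigid rotor)."""
    return 2 * B_N2O_HZ * (j + 1)

# Pumping population into high-J states pushes emission into the
# terahertz band: the J = 39 -> 40 line lands near 1 THz.
print(transition_freq_hz(39) / 1e12)  # ~1.0 (THz)
```

Because successive lines are spaced by only 2B (about 25 GHz here), a tunable infrared pump that selects different rotational states can step the output across a wide terahertz range.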
If a cavity were too small, previous theories suggested that molecules vibrating in response to an incoming infrared laser would collide more often with each other, releasing their energy rather than building it up further to spin and produce terahertz.</p> <p>Instead, the new model tracked thousands of relevant vibrational and rotational states among millions of groups of molecules within a single cavity, using new computational tricks to make such a large problem tractable on a laptop computer. It then analyzed how those molecules would react to incoming infrared light, depending on their position and direction within the cavity.</p> <p>“We found that when you include all these other vibrational states that people had been throwing out, they give you a buffer,” Johnson says. “In simpler models, the molecules are rotating, but when they bang into other molecules they lose everything. Once you include all these other states, that doesn’t happen anymore. These collisions can transfer energy to other vibrational states, and sort of give you more breathing room to keep rotating and keep making terahertz waves.”</p> <p><strong>Laughing, dialed up</strong></p> <p>Once the team found that their new model accurately predicted what Everitt observed decades ago, they collaborated with Capasso’s group at Harvard to design a new type of compact terahertz generator by combining the model with new gases and a new type of infrared laser.</p> <p>For the infrared source, the researchers used a quantum cascade laser, or QCL — a more recent type of laser that is compact and also tunable.</p> <p>“You can turn a dial, and it changes the frequency of the input laser, and the hope was that we could use that to change the frequency of the terahertz coming out,” Johnson says.</p> <p>The researchers teamed up with Capasso, a pioneer in the development of QCLs, who provided a laser that produced a range of power that their theory predicted would work with a cavity the size of a pen (about 
1/1,000 the size of a conventional cavity). The researchers then looked for a gas to spin up.</p> <p>The team searched through libraries of gases to identify those that were known to rotate in a certain way in response to infrared light, eventually landing on nitrous oxide, or laughing gas, as an ideal and accessible candidate for their experiment.</p> <p>They ordered laboratory-grade nitrous oxide, which they pumped into a pen-sized cavity. When they sent infrared light from the QCL into the cavity, they found they could produce a terahertz laser. As they tuned the QCL, the frequency of terahertz waves also shifted, across a wide range.</p> <p>“These demonstrations confirm the universal concept of a terahertz molecular laser source which can be broadly tunable across its entire rotational states when pumped by a continuously tunable QCL,” Wang says.</p> <p>Since these initial experiments, the researchers have extended their mathematical model to include a variety of other gas molecules, such as carbon monoxide and ammonia, providing scientists with a menu of different terahertz generation options with different frequencies and tuning ranges, paired with a QCL matched to each gas. The group’s theoretical tools also enable scientists to tailor the cavity design to different applications. They are now pushing toward more focused beams and higher powers, with commercial development on the horizon.</p> <p>Johnson says scientists can refer to the group’s mathematical model to design new, compact and tunable terahertz lasers, using other gases and experimental parameters.</p> <p>“These gas lasers were for a long time seen as old technology, and people assumed these were huge, low-power, nontunable things, so they looked to other terahertz sources,” Johnson says. “Now we’re saying they can be small, tunable, and much more efficient. You could fit this in your backpack, or in your vehicle for wireless communication or high-resolution imaging. 
Because you don’t want a cyclotron in your car.”</p> <p>This research was supported in part by the U.S. Army Research Office and the National Science Foundation.</p> A new shoebox-sized laser produces terahertz waves (green squiggles) by using a special infrared laser (red) to rotate molecules of nitrous oxide, or laughing gas, packed in a pen-sized cavity (grey).Courtesy of Chad Scales, US Army Futures CommandEnergy, Mathematics, Physics, Research, School of Science, Wireless, National Science Foundation (NSF), Department of Defense (DOD) Historian of the hinterlands In overlooked spots on the map, MIT Professor Kate Brown examines the turbulence of the modern world. Tue, 12 Nov 2019 23:59:59 -0500 Peter Dizikes | MIT News Office <p>History can help us face hard truths. The places Kate Brown studies are particularly full of them. &nbsp;</p> <p>Brown, a historian in MIT’s Program in Science, Technology, and Society, has made a career out of studying what she calls “modernist wastelands” — areas suffering after years of warfare, social conflict, and even radioactive fallout from atomic accidents.&nbsp;</p> <p>Brown has spent years conducting research in the former Soviet Union, often returning to a large region stretching across the Poland-Ukraine border, which has been beset by two world wars, ethnic cleansing, purges, famine, and changes in power. 
It’s the setting for her acclaimed first book, “A Biography of No Place” (2004), a chronicle of the region’s conflicts and their consequences.</p> <p>The same region includes the site of the Chernobyl nuclear-reactor explosion, subject of Brown’s fourth and most recent book, “Manual for Survival: A Chernobyl Guide to the Future” (2019), which uncovers extensive new evidence about the effects of the disaster on the area and its people.&nbsp;</p> <p>“Progress [often] occurs in big capitals, but if you go to the hinterlands, you see what’s left in the wake of progress, and it’s usually a lot of destruction,” says Brown, speaking of areas that have suffered due to technological or economic changes.&nbsp;&nbsp;</p> <p>That does not apply only to the former Soviet Union and its former satellite states, to be sure. Brown, who considers herself a transnational historian, is also the author of 2013’s “Plutopia,” reconstructing life in and around the plutonium-producing plants in Richland, Washington, and Ozersk, Russia, which have both left a legacy of nuclear contamination.</p> <p>With a record of innovative and award-winning research over more than two decades in academia, Brown joined MIT with tenure, as a professor of science, technology, and society, in early 2019.</p> <p><strong>When “no place” is like home</strong></p> <p>The lesson that life can be tough in less-glamorous locales is one Brown says she learned early on. Brown grew up in Elgin, Illinois, once headquarters of the famous Elgin National Watch Company — although that changed.</p> <p>“The year I was born, 1965, the Elgin watch factory was shuttered, and they blew up the watch tower,” Brown says. “It was a company town, and that was the main business.
I grew up watching the supporting businesses close, and then regular clothing stores and grocery stores went bankrupt.”</p> <p>And while the changes in Elgin were very different (and less severe) than those in the places she has studied professionally, Brown believes her hometown milieu has shaped her work.</p> <p>“It was nothing near what I describe in wartime Ukraine, or Chernobyl, or one of plutonium plants, but I finally realized I was so interested in modernist wastelands because of my own background,” Brown says.</p> <p>Indeed, Brown notes, her mother moved four times in her life because of the “deindustrialized landscape,” from places like Aliquippa, Pennsylvania, and Detroit. And her parents, she says, “moved to Elgin thinking it was healthy, small-town America. So how many times do they have to jump? … What if you care about your family and community? What if you’re loyal?”</p> <p>As it happens, part of the direct impetus for Brown’s career came from her mother. One day in the 1980s, Brown recalls, she was talking to her parents and criticizing the superficial culture surrounding U.S.-Soviet relations. To which Brown’s mother responded, “Do something about it. Study Russian, change the world.”</p> <p>As an undergraduate at the University of Wisconsin, Brown soon “took everything Russian, Russian lit and translation, grammar, history, politics, and I just got hooked. Then I thought I should go study there.” In 1987, she spent a year abroad in Leningrad (now St. Petersburg). After graduating, Brown worked for a study-abroad program in the Soviet Union for three more years, helping students troubleshoot “pretty major problems, with housing and food and medical care,” as well as some cases where students had run afoul of Soviet authorities.&nbsp;</p> <p>Returning to the U.S., Brown entered the graduate program in history at the University of Washington while working as a journalist. 
She kept returning to the Ukraine borderlands region, collecting archival and observational material, and writing it up, for her dissertation “in the narrative mode of a first-person travelogue.”</p> <p>That did not fit the model of a typical PhD thesis. But Richard White, a prominent American historian with an openness toward innovative work, who was then at the University of Washington, advocated to keep the form of Brown’s work largely intact. She received her PhD, and more: Her thesis formed the basis of “A Biography of No Place,” which won the George Louis Beer Prize for International European History from the American Historical Association (AHA). Brown joined the faculty at the University of Maryland at Baltimore County before joining MIT.</p> <p><strong>A treasure island for research</strong></p> <p>In all of Brown’s books, a significant portion of the work, a bit atypically for academia, has continued to incorporate first-person material about her travels, experiences, and research, something she also regards as crucial.</p> <p>“Because these places are rarely visited, they’re hard to imagine for the readers,” Brown says. “That puts me in the narrative, though not for all of it.”</p> <p>Brown’s approach to history is also highly archival: She has unearthed key documents in all manner of local, regional, and national repositories. When she entered the profession, in the 1990s, many Soviet archives were just opening up, providing some rich opportunities for original research.&nbsp;</p> <p>“It’s amazing,” Brown says. “Over and over again I’ve been one of the first persons to walk into an archive and see what’s there. And that is just sort of a treasure island quality of historical research. 
Being a Soviet historian in the early 1990s, there was nothing else like it.”</p> <p>The archives continue to be profitable for Brown, yielding some of her key new insights in “Manual for Survival.” In assessing Chernobyl, Brown shows, local and regional studies of the disaster’s effects were often extensive and candid, but the official record became sanitized as it moved up the Soviet bureaucratic hierarchy.</p> <p>Brown’s combination of approaches to writing history has certainly produced extensive professional success. “Plutopia” was awarded the AHA’s Albert J. Beveridge and John H. Dunning prizes as the best book in American history and the Organization of American Historians’ Ellis H. Hawley Award, among others. Brown has also received Guggenheim Foundation and Carnegie Foundation fellowships.</p> <p>Brown is currently working on a new research project, examining overlooked forms of human knowledge about plants and the natural environment. She notes that there are many types of “indigenous knowledge and practices we have missed or rejected,” which could foster a more sustainable relationship between human society and the environment.</p> <p>It is a different type of topic than Brown’s previous work, although, like her other projects, this one recognizes that we have spent too long mishandling the environment, rather than prioritizing its care — another hard truth to consider.</p> Kate Brown is a professor in MIT's Program in Science, Technology, and Society.Image: Allegra BovermanSchool of Humanities Arts and Social Sciences, Faculty, Profile, Technology and society, Energy, History, Program in STS, Nuclear power and reactors, History of science, Science communications Enhanced nuclear energy online class aims to inform and inspire Revamped version of MITx MOOC includes new modules on nuclear security, nuclear proliferation, and quantum engineering. 
Thu, 24 Oct 2019 14:30:01 -0400 Leda Zimmerman | Department of Nuclear Science and Engineering <p>More than 3,000 users hailing from 137 countries signed up for the MIT Department of Nuclear Science and Engineering’s first massive open online course (MOOC), Nuclear Energy: Science, Systems and Society, which debuted last year on <em>MITx</em>. Now, after that roaring success, the course will be <a href="" target="_blank">offered again</a> in spring 2020, with key upgrades.</p> <p>“We had hoped there was an appetite in the general public for information about nuclear energy and technology,” says Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering and one of the course instructors. “We were fully confirmed by this first offering.”</p> <p>Unfolding over nine weeks, the MOOC provides a primer on nuclear energy and radiation and the wide-ranging applications of nuclear technology in medicine, security, energy, and research. It aims not just to educate, but to capture the interest of a distance-learning audience not necessarily well acquainted with physics and mathematics.</p> <p>“The MOOC builds on a tradition in our department of a first-year seminar that exposes students to a broad overview of the field,” says another instructor, Anne White, professor and head of the Department of Nuclear Science and Engineering.
“We set ourselves the challenge of translating the experience of being MIT first-years, who jump into something they know nothing about, and come out with excitement for the foundations of the field and its frontiers.”</p> <p>Before setting out to tackle this problem, the creative team — which also includes Michael Short, the Class of ’42 Career Development Assistant Professor of Nuclear Science and Engineering, and John Parsons, senior lecturer in the Finance Group at MIT Sloan School of Management — carefully reviewed existing online nuclear science offerings.</p> <p>“When we looked at MOOCs out in the world, a lot of them are wonderful, but highly technical,” says White. “We had a different vision of what MIT could accomplish, and that was reaching a big audience of virtual first-years.”</p> <p>For last year’s launch, the MOOC was structured around three modules. The first, taught by Short, introduced nuclear science at the atomic level. “We focused on the basics — the nucleus and particles, and the technologies that naturally emerge out of the study of the discipline,” says Buongiorno. This included a close look at ionizing radiation and how to measure it, with an invitation for online users to build a simple Geiger counter to measure radiation in their own backyards.</p> <p>The second module, led by Buongiorno and Parsons, delved into how nuclear reactors function, what makes nuclear energy attractive, issues of safety and waste, and questions of nuclear power plant economics and policy.</p> <p>The third module, taught by White, discussed magnetic fusion energy research, with a look at pioneering work at MIT and elsewhere dealing with high-magnetic-field fusion. “We lay the foundation first for fission power, and see a lot of enthusiasm about decarbonizing the grid in the short term,” says White. 
“We then present fusion power and MIT’s SPARC experiment, which really captures students’ imagination with its potential as a future energy source.”</p> <p>Translating key elements of nuclear science and technology syllabi from the MIT classroom setting to prerecorded video segments, slides, and online assessments for the MOOC proved a significant effort for the instructors.</p> <p>“Much of the material was drawn from classes we collectively taught, and it took nearly a year to develop this curriculum and make sure it was the right content, at the right level,” says Buongiorno. “It was a huge challenge to make this intelligible and attractive to a much broader audience than usual, people without a science background, or who might not be on the same page around energy.” It was, he adds, “more difficult than a typical class I teach.”</p> <p>The MOOC included opportunities for students to interact with each other and the instructors at key junctures through online write-in forums. Buongiorno and his colleagues had hoped to duplicate online the vibrant interactions of residential classrooms, and even offer office hours, but it proved infeasible. “Because of the geographic distribution of participants, it made no sense; half of the students would be excluded because the event would be taking place in the middle of the night.”</p> <p>The team, not content to rest on its laurels, is adding elements for the MOOC’s second run: R. Scott Kemp, the MIT Class of ’43 Associate Professor of Nuclear Science and Engineering, will teach a new module on nuclear security and nuclear proliferation, and Paola Cappellaro, the Esther and Harold E. Edgerton Associate Professor of Nuclear Science and Engineering, will offer a module on quantum engineering.</p> <p>In addition to this expansion, White envisions an eventual residential version of the course, where first-years could take the MOOC online and attend seminars on campus to receive MIT credit.
“Our goal as a department is not just educating majors in nuclear science and engineering, but creating classes appealing to students outside the major,” she says. “It’s in the pipeline.”</p> <p>Given rising concern about climate change, and the emergence of new technologies in fission and fusion, the timing of this MOOC seems propitious to its founding team.</p> <p>“We’d like to have an impact with the course on the greater debate about the use of nuclear energy as part of the solution for climate change,” says Buongiorno. “The public in this debate needs science-based input and facts about different technologies, which is one of our major objectives.” Adds White, “We believe the course will appeal to folks working in government, policy, industry, as well as to those who are simply curious about what’s happening at the frontiers of our field.”</p> “We’d like to have an impact with the course on the greater debate about the use of nuclear energy as part of the solution for climate change,” says Professor Jacopo Buongiorno.Nuclear science and engineering, School of Engineering, Sloan School of Management, Classes and programs, Education, teaching, academics, Design, Energy, Environment, Nuclear power and reactors, EdX, Physics, Fusion, Massive open online courses (MOOCs), Climate change, MITx Scientists observe a single quantum vibration under ordinary conditions Studying a common material at room temperature, researchers bring quantum behavior “closer to our daily life.” Sun, 06 Oct 2019 23:59:59 -0400 Jennifer Chu | MIT News Office <p>When a guitar string is plucked, it vibrates as any vibrating object would, rising and falling like a wave, as the laws of classical physics predict. But under the laws of quantum mechanics, which describe the way physics works at the atomic scale, vibrations should behave not only as waves, but also as particles. 
The same guitar string, when observed at a quantum level, should vibrate as individual units of energy known as phonons.</p> <p>Now scientists at MIT and the Swiss Federal Institute of Technology have for the first time created and observed a single phonon in a common material at room temperature.</p> <p>Until now, single phonons have only been observed at ultracold temperatures and in precisely engineered, microscopic materials that researchers must probe in a vacuum. In contrast, the team has created and observed single phonons in a piece of diamond sitting in open air at room temperature. The results, the researchers write in a paper published today in <em>Physical Review X</em>, “bring quantum behavior closer to our daily life.”</p> <p>“There is a dichotomy between our daily experience of what a vibration is — a wave — and what quantum mechanics tells us it must be — a particle,” says Vivishek Sudhir, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “Our experiment, because it is conducted at very tangible conditions, breaks this tension between our daily experience and what physics tells us must be the case.”</p> <p>The technique the team developed can now be used to probe other common materials for quantum vibrations. This may help researchers characterize the atomic processes in solar cells, as well as identify why certain materials are superconducting at high temperatures. 
From an engineering perspective, the team’s technique can be used to identify common phonon-carrying materials that may make ideal interconnects, or transmission lines, between the quantum computers of the future.</p> <p>“What our work means is that we now have access to a much wider palette of systems to choose from,” says Sudhir, one of the paper’s lead authors.</p> <p>Sudhir’s co-authors are Santiago Tarrago Velez, Kilian Seibold, Nils Kipfer, Mitchell Anderson, and Christophe Galland, of the Swiss Federal Institute of Technology.</p> <p><strong>“Democratizing quantum mechanics”</strong></p> <p>Phonons, the individual particles of vibration described by quantum mechanics, are also associated with heat. For instance, when a crystal, made from orderly lattices of interconnected atoms, is heated at one end, quantum mechanics predicts that heat travels through the crystal in the form of phonons, or individual vibrations of the bonds between molecules.</p> <p>Single phonons have been extremely difficult to detect, mainly because of their sensitivity to heat. Phonons are susceptible to any thermal energy that is greater than their own. If phonons are inherently low in energy, then exposure to higher thermal energies could trigger a material’s phonons to excite en masse, making detection of a single phonon a needle-in-a-haystack endeavor.</p> <p>The first efforts to observe single phonons did so with materials specially engineered to harbor very few phonons, at relatively high energies.
These researchers then submerged the materials in near-absolute-zero refrigerators Sudhir describes as “brutally, aggressively cold,” to ensure that the surrounding thermal energy was lower than the energy of the phonons in the material.</p> <p>“If that’s the case, then the [phonon] vibration cannot borrow energy from the thermal environment to excite more than one phonon,” Sudhir explains.</p> <p>The researchers then shot a pulse of photons (particles of light) into the material, hoping that one photon would interact with a single phonon. When that happens, the photon, in a process known as Raman scattering, should reflect back out at a different energy imparted to it by the interacting phonon. In this way, researchers were able to detect single phonons, though at ultracold temperatures, and in carefully engineered materials.</p> <p>“What we’ve done here is to ask the question, how do you get rid of this complicated environment you’ve created around this object, and bring this quantum effect to our setting, to see it in more common materials,” Sudhir says. “It’s like democratizing quantum mechanics in some sense.”</p> <p><strong>One in a million</strong></p> <p>For the new study, the team looked to diamond as a test subject. In diamond, phonons naturally operate at high frequencies, of tens of terahertz — so high that, at room temperature, the energy of a single phonon is higher than the surrounding thermal energy.</p> <p>“When this crystal of diamond sits at room temperature, phonon motion does not even exist, because there’s no energy at room temperature to excite anything,” Sudhir says.</p> <p>Within this vibrationally quiet mix of phonons, the researchers aimed to excite just a single phonon. They sent high-frequency laser pulses, consisting of 100 million photons each, into the diamond —&nbsp;a crystal made up of carbon atoms — on the off chance that one of them would interact and reflect off a phonon. 
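</p>

<p>Putting rough numbers on why diamond works at room temperature: the comparison below is a back-of-envelope sketch, using the standard textbook value of roughly 40 terahertz for diamond’s optical phonon rather than a figure from the paper itself.</p>

```python
# Back-of-envelope comparison of one diamond optical phonon's energy with
# the thermal energy available at room temperature. The ~40 THz phonon
# frequency is a standard textbook value, used here only for illustration.
import math

H = 6.626e-34          # Planck constant, J*s
K_B = 1.381e-23        # Boltzmann constant, J/K

nu_phonon = 40e12      # diamond optical phonon frequency, Hz
T = 300.0              # room temperature, K

E_phonon = H * nu_phonon      # energy of a single phonon, J
E_thermal = K_B * T           # characteristic thermal energy, J

# Bose-Einstein occupation: average number of thermally excited phonons
n_thermal = 1.0 / (math.exp(E_phonon / E_thermal) - 1.0)

print(f"phonon / thermal energy ratio: {E_phonon / E_thermal:.1f}")
print(f"average thermal occupation:    {n_thermal:.1e}")
```

<p>With the phonon quantum several times larger than the thermal energy, the average thermal occupation of the mode comes out on the order of a tenth of a percent, consistent with Sudhir’s point that the vibration sits essentially in its ground state at room temperature.</p>

<p>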
The team would then measure the decreased frequency of the photon involved in the collision — confirmation that it had indeed hit upon a phonon, though this operation wouldn’t be able to discern whether one or more phonons were excited in the process.</p> <p>To decipher the number of phonons excited, the researchers sent a second laser pulse into the diamond, as the phonon’s energy gradually decayed. For each phonon excited by the first pulse, this second pulse can de-excite it, taking away that energy in the form of a new, higher-energy photon. If only one phonon was initially excited, then one new, higher-frequency photon should be created.</p> <p>To confirm this, the researchers placed a semitransparent glass through which this new, higher-frequency photon would exit the diamond, along with two detectors on either side of the glass. Photons do not split, so if multiple phonons were excited then de-excited, the resulting photons should pass through the glass and scatter randomly into both detectors. If just one detector “clicks,” indicating the detection of a single photon, the team can be sure that that photon interacted with a single phonon.</p> <p>“It’s a clever trick we play to make sure we are observing just one phonon,” Sudhir says.</p> <p>The probability of a photon interacting with a phonon is about one in 10 billion. In their experiments, the researchers blasted the diamond with 80 million pulses per second — what Sudhir describes as a “train of millions of billions of photons” over several hours, in order to detect about 1 million photon-phonon interactions. 
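</p>

<p>The energy bookkeeping of the write and read steps described above can be sketched in a few lines. The 800-nanometer pump wavelength here is purely an illustrative assumption (the experiment’s actual laser parameters are in the paper); the ~40 terahertz phonon frequency is the standard textbook value for diamond.</p>

```python
# Energy bookkeeping for the write (Stokes) and read (anti-Stokes) steps.
# The 800 nm pump wavelength is an illustrative assumption, not a value
# taken from the experiment; ~40 THz is the textbook diamond phonon.
C = 2.998e8                    # speed of light, m/s

nu_pump = C / 800e-9           # pump photon frequency, Hz (assumed)
nu_phonon = 40e12              # diamond optical phonon frequency, Hz

# Write: the photon deposits one phonon's worth of energy and comes out
# red-shifted (Stokes scattering) -- the "decreased frequency" above.
nu_write = nu_pump - nu_phonon

# Read: the probe photon carries away the phonon's energy and comes out
# blue-shifted (anti-Stokes) -- the "new, higher-energy photon."
nu_read = nu_pump + nu_phonon

print(f"pump:  {nu_pump / 1e12:.1f} THz")
print(f"write: {nu_write / 1e12:.1f} THz (down-shifted)")
print(f"read:  {nu_read / 1e12:.1f} THz (up-shifted)")
```

<p>Because the down-shift and up-shift are both exactly one phonon’s worth of energy, detecting the up-shifted photon at a single detector is what certifies that precisely one phonon was created and then destroyed.</p>

<p>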
In the end, they found, with statistical significance, that they were able to create and detect a single quantum of vibration.</p> <p>“This is sort of an ambitious claim, and we have to be careful the science is rigorously done, with no room for reasonable doubt,” Sudhir says.</p> <p>When sending in their second laser pulse to verify that single phonons were indeed being created, the researchers delayed this pulse, sending it into the diamond as the excited phonon was beginning to ebb in energy. In this way, they were able to glean the manner in which the phonon itself decayed.</p> <p>“So, not only are we able to probe the birth of a single phonon, but also we’re able to probe its death,” Sudhir says. “Now we can say, ‘go use this technique to study how long it takes for a single phonon to die out in your material of choice.’ That number is very useful. If the time it takes to die is very long, then that material can support coherent phonons. If that’s the case, you can do interesting things with it, like thermal transport in solar cells, and interconnects between quantum computers.”</p> MIT researchers detect a single quantum vibration within a diamond sample (shown here) at room temperature.Image: Sabine GallandEnergy, Physics, Kavli Institute, Mechanical engineering, Materials Science and Engineering, Quantum computing, Research, School of Engineering President Reif speaks at MIT Climate Symposium Wed, 02 Oct 2019 13:27:14 -0400 MIT News Office <p><em>President L. Rafael Reif delivered the below introductory remarks at today’s “Progress in Climate Science” symposium.</em></p> <p>Good afternoon!&nbsp; I am delighted to be here with all of you.</p> <p>At MIT, persuading people to leave their labs and their classrooms to attend a daytime event is notoriously difficult. Attracting a crowd to fill Kresge Auditorium can feel almost impossible! So, the full house we have this afternoon is deeply significant.
It is a sobering mark of the urgency and importance of the subject matter and an inspiring sign of the breadth, depth, and passionate commitment of MIT’s climate action community.</p> <p>Also: A warm hello to everyone joining us via livestream! It is wonderful and fitting that the knowledge and ideas from this session are being shared around the world.</p> <p>This is the first in a series of six symposia.&nbsp; For the tremendous effort it took over many months to create this outstanding series, I want to express my thanks and admiration to the Climate Action Symposia Organizing Committee – and especially to its chair, Professor Paul Joskow. The challenges of dealing with climate change will take all of our collective talents and the best work of countless MIT minds and hands, so I hope we can maintain this terrific level of interest and attendance for all six in the series!</p> <p>These six symposia will help us take stock of all that the people of MIT have accomplished through MIT’s Climate Action Plan, and they will inform and inspire our plans going forward. In this work, I am grateful to Vice President for Research Maria Zuber for her leadership in creating the plan four years ago, in tracking our progress ever since, and in raising our sights for the future.</p> <p>I would also like to express my profound admiration for today’s keynote speaker, Professor Susan Solomon. Susan has an incomparable record of producing superb science on subjects from the depletion of the ozone layer to global warming: superb science that formed the springboard for policies that have literally changed the world. We were fortunate in 2012 when she joined our faculty. We are certainly fortunate to have her with us today. 
And, we could not ask for a more powerful and inspiring voice in our drive to increase fundamental knowledge and to accelerate progress towards a sustainable human society.</p> <p>Before we begin, I would like to acknowledge that this is a serious moment in the life of the MIT community. It is a moment for engaging intensely with each other on many urgent questions, including how we should raise funds for the work of the Institute and what principles should guide us. It is a time for serious debate – and for serious listening.</p> <p>In this era of growing fortunes and shrinking federal funds, it is clear that as a community, we need to consider many questions. We need to understand the changing nature of the donor population. We need to decide how to weigh the political, cultural and economic impacts of donors’ behavior – and much more.</p> <p>Questions like these are certainly relevant to how we fund MIT’s work on energy and the environment, the work of the people in this room.</p> <p>As members of MIT’s climate action community, we need to have serious conversations with one another about the best way to move forward. Our capacity for respectful argument has always been a signature strength of MIT. So I hope you will begin those conversations with each other in the days ahead, especially with the people who disagree with you.</p> <p>Considering those who will speak today, and looking out at all of you, I am conscious that this is a room full of climate experts – and that, as a former electrical engineer, I am not one of them. So I will offer just a few comments based on my observations and conversations here, in Washington, and in philanthropic circles, as I have been striving to build support for the work of climate science and solutions at MIT.</p> <p>Last June, our MIT Commencement speaker was the prominent philanthropist and former mayor of New York City Michael Bloomberg. 
On the day of his remarks, he announced a remarkable personal commitment: $500 million to launch a new national climate initiative, which he calls “Beyond Carbon.”</p> <p>He described an ambitious agenda of political action: taking necessary steps to close coal plants, to block the creation of new gas plants, to support the leadership of state and local politicians, to create incentives like a carbon tax and, in his words, to take the climate challenge “directly to the people.”</p> <p>As he explained to our MIT audience, in his view (and I quote), “At least for the foreseeable future, winning the battle against climate change will depend less on scientific advancement and more on political activism.” And just ten days ago, former US Vice President and climate action pioneer Al Gore published a piece in the New York Times. He made a similar argument about the need for political action, because, in his words, “We have the technology we need.”</p> <p>I am a big admirer of both Mayor Bloomberg and Vice President Gore. I am profoundly grateful for their early leadership and relentless activism. I appreciate their faith in the kind of technologies that have already been developed – some of them invented and advanced on this campus. I agree that unless and until society demands a change in policy, priorities, and behaviors, technology alone can’t save us. 
And I share their view that it is absolutely vital to build popular political support for climate action.</p> <p>But with the greatest respect, I would like to propose an additional perspective, because I am convinced that we also need to do a great many other important things, at the same time.</p> <ul> <li>We need to dramatically improve our ability to predict the localized impacts of climate change and to design solutions that allow coastal cities and other vulnerable areas to adapt to and survive them.</li> <li>We need to make solar cells and wind turbines more efficient and to produce them with less reliance on rare or costly materials.</li> <li>And, we need even better grid-scale storage options to handle intermittency.</li> <li>We need to find ways to expand the nation’s transmission infrastructure to support the efficient deployment of solar and wind.</li> <li>We need to make car batteries and carbon-free hydrogen cheaper and more efficient.</li> <li>We need better mass transit – and not just here in Boston!</li> <li>We need smaller, safer, more modern and less costly nuclear plants to supplement intermittent renewable sources.</li> <li>And, we need to address not only electricity and transportation, but also agriculture, manufacturing, buildings, and much more.</li> </ul> <p>In short, I am convinced that building broad and deep popular support for climate action would be much, much easier, and much more likely to succeed, if we could offer to society much more fine-grained scientific models and much less costly technological solutions. To break the impasse, I believe that, as a society, we must find ways to invest aggressively in advancing climate science and in making climate mitigation and adaptation technologies dramatically less expensive: inexpensive enough to win widespread political support, to be affordable for every society, and to deploy on a planetary scale.</p> <p>Many climate activists argue that the best path lies in political will. 
They note, for example, that the cost of renewables has been dropping for years and that once we put a tax on carbon, market incentives will keep pushing prices down and make non-carbon alternatives more attractive. That is clearly true. Less clear, however, is whether the carbon-cost hammer is enough to drive the nail of global societal change.&nbsp;</p> <p>In my view, it is crucial to understand that while passing a carbon tax would surely spur the development of cheaper low- and zero-carbon energy, developing cheaper low- and zero-carbon energy sources would make it much easier to pass a carbon tax! So, we need to do both, as fast as we can!</p> <p>Ordinarily, funding at the necessary scale would come with government leadership. Certainly, when we developed our Climate Action Plan in 2015, we expected to encounter reliable, long-term federal support.&nbsp;</p> <p>In the current political environment, I believe the answer, until government leadership becomes available, is private philanthropy – a conclusion that brings us back to the questions for our community that I highlighted at the start. I believe that those of us committed to this cause need to come together to seek out new ways to support the advanced science and technology that will enable political action to succeed on the path to a sustainable future for us all.</p> <p>I look forward to joining you in this urgent work. Thank you.</p> Community, Faculty, Staff, Students, Climate change, Alternative energy, Energy, Greenhouse gases, President L. Rafael Reif Helping lower-income households reap the benefits of solar energy Solstice makes community solar projects more accessible for people unable to invest in rooftop panels. Thu, 26 Sep 2019 00:00:00 -0400 Zach Winn | MIT News Office <p>Rooftop solar panels are a great way for people to invest in renewable energy while saving money on electricity. 
Unfortunately, the rooftop solar industry only serves a fraction of society.</p> <p>Many Americans are unable to invest in rooftop solar; they may be renters, lack the upfront money required for installation, or live in locations that don’t get enough sun. Some states have tried to address these limitations with community solar programs, which allow residents to invest in portions of large, remote solar projects and enjoy savings on their electricity bills each month.</p> <p>But as community solar projects have exploded in popularity in the last few years, higher-income households have been the main beneficiaries. That’s because most developers of community solar arrays require residents to have high credit scores and sign long-term contracts.</p> <p>Now the community solar startup Solstice is changing the system. The company recruits and manages customers for community solar projects while pushing developers for simpler, more inclusive contract terms. Solstice has also developed the EnergyScore, a proprietary customer qualification metric that approves a wider pool of residents for participation in community solar projects than the credit scores typically used by developers.</p> <p>“We’re always pushing our developer partners to be more inclusive and customer-friendly,” says Sandhya Murali MBA ’15, who co-founded Solstice with Stephanie Speirs MBA ’17. “We want them to design contracts that will be appealing to the customer and kind of a no-brainer.”</p> <p>To date, Solstice has helped about 6,400 households sign up for community solar projects. 
The founders say involving a more diverse pool of residents will be essential to continuing the industry’s breakneck growth.</p> <p>“We think it’s imperative that we figure out how to make this model of residential solar, which can save people money and has the power to impact millions of people across the country, scale quickly,” Murali says.</p> <p><strong>A more inclusive system</strong></p> <p>In 2014, Speirs had been working on improving access to solar energy in Pakistan and India as part of a fellowship with the global investment firm Acumen. But she realized developing countries weren’t the only places with unequal access to energy.</p> <p>“There are problems with solar in America,” Speirs says. “Eighty percent of people are locked out of the solar market because they can’t put solar on their rooftop. People who need solar savings the most in this country, low- to moderate-income Americans, are the least likely to get it.”</p> <p>Speirs was planning to come to MIT’s Sloan School of Management to pursue her MBA the following year, so she used a Sloan email list to see if anyone was interested in joining the early-stage venture. Murali agreed to volunteer, and although she graduated in 2015 as Speirs entered Sloan, Murali spent a lot of time on campus helping Speirs get the company off the ground. Speirs also received a fellowship from MIT's Legatum Center.</p> <p>“Steph’s time at Sloan was focused on Solstice, so we kind of became an MIT startup,” Murali says. “I would say MIT sort of adopted Solstice, and we’ve grown since then with support from the school.”</p> <p>Community solar is an effective way to include residents in solar projects who might not have the resources to invest in traditional rooftop solar panels. 
Speirs says there are no upfront costs associated with community solar projects, and residents can participate by investing in a portion of the planned solar array whether they own a home or not.</p> <p>When a developer has enough resident commitments for a project, they build a solar array in another location and the electricity it generates is sent to the grid. Residents receive a credit on their monthly electric bills for the solar power produced by their portion of the project.</p> <p>Still, there are aspects of the community solar industry that discourage many lower-income residents from participating. Solar array developers have traditionally required qualified customers to sign long contracts, sometimes lasting 30 years, and to agree to cancellation fees if they leave the contract prematurely.</p> <p>Solstice, which began as a nonprofit to improve access to solar energy for low-income Americans, advocates for customers, working with developers to reduce contract lengths, lower credit requirements, and eliminate cancellation fees.</p> <p>As they engaged with developers, Solstice’s founders realized the challenges associated with recruiting and managing customers for community solar projects were holding the industry back, so they decided to start a for-profit arm of the company to work with customers of all backgrounds and income levels.</p> <p>“Solstice’s obsession is how do we make it so easy and affordable to sign up for community solar such that everyone does it,” Speirs says.&nbsp;</p> <p>In 2016, Solstice was accepted into The Martin Trust Center for MIT Entrepreneurship’s delta v accelerator, where the founders began helping developers find customers for large solar projects. 
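The bill-credit mechanics described above are simple arithmetic. The sketch below uses made-up numbers; real credit rates, capacity factors, and subscriber discounts vary by state program and contract.

```python
# Illustrative sketch of a community solar bill credit.
# All rates below are hypothetical, for illustration only.

def monthly_bill_credit(share_kw, cap_factor, credit_rate, discount):
    """Net monthly savings for a subscriber's share of a solar array.

    share_kw    -- subscriber's share of the array's capacity (kW)
    cap_factor  -- capacity factor of the array (fraction of nameplate)
    credit_rate -- utility bill credit per kWh generated ($/kWh)
    discount    -- fraction of the credit the subscriber keeps as savings
                   (the remainder pays for the subscription itself)
    """
    hours_per_month = 730
    generated_kwh = share_kw * cap_factor * hours_per_month
    credit = generated_kwh * credit_rate
    return credit * discount

# A 3 kW share, 15% capacity factor, $0.20/kWh credit, 10% subscriber discount:
savings = monthly_bill_credit(3.0, 0.15, 0.20, 0.10)  # ≈ $6.57/month saved
```

The subscriber pays nothing up front; the savings come entirely out of the credit the utility applies for the power their share generates.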
The founders also began developing a web-based customer portal to make participation in projects as seamless as possible.</p> <p>But they realized those solutions didn’t directly address the biggest factor preventing lower-income Americans from investing in solar power.</p> <p>“To get solar in this country, you either have to be able to afford to put solar on your rooftop, which costs $10,000 to $30,000, or you have to have the right FICO score for community solar,” Speirs says, referring to a credit score used by community solar developers to qualify customers. “Your FICO score is your destiny in this country, yet FICO doesn’t measure whether you pay your utility bills on time, or your cell phone bills, or rental bills.”</p> <p>With this in mind, the founders teamed up with data scientists from MIT and Stanford University, including Christopher Knittel, the George P. Shultz Professor at MIT Sloan, to create a new qualification metric, the EnergyScore. The EnergyScore uses a machine learning system trained on data from nearly 875,000 consumer records, including things like utility payments, to predict payment behavior in community solar contracts. Solstice says it predicts future payment behavior more accurately than FICO credit scores, and it qualifies a larger portion of low-to-moderate income customers for projects.</p> <p><strong>Driving change</strong></p> <p>Last year, Solstice began handling the entire customer experience, from the initial education and sales to ongoing support during the life of contracts. To date, the company has helped find customers for solar projects in New York and Massachusetts with a combined capacity of 100 megawatts.</p> <p>And later this year, Solstice will begin qualifying customers with its EnergyScore, enabling a whole new class of Americans to participate in community solar projects. 
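A payment-behavior score of this kind is, at its core, a supervised classifier. The toy sketch below trains a tiny logistic-regression model on invented features and data; Solstice’s actual EnergyScore model, features, and training set are proprietary, so everything here is purely illustrative.

```python
# Toy sketch of a payment-behavior classifier in the spirit of an
# EnergyScore-style metric. Feature names and data are invented.
import math

def train_logistic(rows, labels, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression model with batch gradient descent."""
    n_feats = len(rows[0])
    w = [0.0] * n_feats
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feats
        gb = 0.0
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            for i, xi in enumerate(x):
                gw[i] += err * xi
            gb += err
        w = [wi - lr * gi / len(rows) for wi, gi in zip(w, gw)]
        b -= lr * gb / len(rows)
    return w, b

def score(w, b, x):
    """Predicted probability of on-time payment (0..1)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Features: [fraction of utility bills paid on time, years at current address]
data = [[0.95, 5.0], [0.90, 3.0], [0.40, 0.5], [0.30, 1.0]]
labels = [1, 1, 0, 0]  # 1 = paid community solar bills on time
w, b = train_logistic(data, labels)
```

The appeal of such a metric is that it can weigh signals FICO ignores, such as utility-payment history, when deciding who qualifies.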
One of the projects using the EnergyScore will put solar arrays on the rooftops of public housing buildings in New York City in partnership with the NYC Housing Authority.</p> <p>Ultimately, the founders believe including a broader swath of American households in community solar projects isn’t just the right thing to do, it’s also an essential part of the fight against climate change.</p> <p>“[Community solar] is a huge, untapped market, and we’re unnecessarily restricting ourselves by creating some of these contract barriers that make community solar remain in the hands of the wealthy,” Murali says. “We’re never going to scale community solar and make the impact on climate change we need to make if we don’t figure out how to make this form of solar work for everyone.”</p> Solstice works with solar developers to fund large, remote solar farms that communities can invest in.Image courtesy of SolsticeInnovation and Entrepreneurship (I&E), Startups, Alumni/ae, Technology and society, Depression, Martin Trust Center for MIT Entrepreneurship, Sloan School of Management, Energy, Solar, Renewable energy Greener and fairer: Balancing pollution, energy prices, and household income New research looks at how environmental taxes can work for everyone, in Spain and beyond. Wed, 25 Sep 2019 12:00:02 -0400 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>Governments that impose taxes on carbon dioxide and other greenhouse gas emissions can benefit from a cleaner, more climate-friendly environment and a revenue stream that can be tapped to lower other taxes and create jobs. But environmental taxes may also exact an excessive financial burden on low-income households, which spend a much greater fraction of their budgets than richer households do on heating oil, natural gas, and electricity. 
This concern has limited the use of green taxes in Spain, where emissions are taxed at levels far below average for the European Union, which seeks to lower emissions across the continent to fulfill its 2015 Paris Agreement climate pledge.</p> <p>Now a new <a href=";id=283">study</a> by researchers at the MIT Joint Program on the Science and Policy of Global Change, the University of Oldenburg in Germany, and the Basque Center for Climate Change in Spain shows that low-income households in Spain can actually benefit from environmental taxes if revenues are redistributed to all taxpayers. Using a computational model to assess the environmental and economic impacts of a green tax reform policy in which revenues are recycled in equal amounts to households in annual lump-sum payments, the researchers found that the policy significantly reduces emissions without imposing economic hardship on any segment of the population. The study appears in the journal <em>Economics of Energy and Environmental Policy.</em></p> <p>“There may be a tradeoff between efficiency and equity in climate policy design,” says <a href="">Xaquin Garcia-Muros</a>, a co-author of the study and postdoctoral associate at the MIT Joint Program. Noting the perfect can be the enemy of the good, as indicated by the <a href="">November 2018 Yellow Vest protests</a> against fuel tax hikes in France, he adds, “Governments that seek to introduce environmental policies need to show they can cut emissions equitably in order for the public to support them. 
Otherwise, climate mitigation measures will be rejected by public opinion, and attempts to tackle climate change will be unsuccessful.”</p> <p>The proposed policy includes a tax on carbon dioxide (CO<sub>2</sub>) of 40 euros per metric ton in all sectors (except transportation) not covered by the EU emissions trading system, tax increases on fossil fuels to match the EU average of 1.5 percent of GDP, and economy-wide taxes of 1,000 euros per metric ton on emissions of the air pollutants nitrogen oxides (NOx) and sulfur dioxide (SO<sub>2</sub>). In addition, it provides annual lump-sum rebates to private households regardless of household income.</p> <p>Combining a “computable general equilibrium” model of the Spanish economy with a “micro-simulation” sub-model that characterizes households of different income levels, the researchers determined the tax reform policy’s impact on pollution levels, energy prices, and household net income. They found that the policy would significantly reduce emissions of CO<sub>2</sub> (10 percent), NOx (13 percent), and SO<sub>2</sub> (20 percent); produce an estimated 7.3 billion euros in annual revenues; and enable annual lump-sum rebates of 400 euros. Most importantly, the rebates would offset the cost of the green taxes for the bottom half of income levels, with the poorest households receiving an average annual net benefit of 203 euros and the richest paying a net cost of 599 euros.</p> <p>“We expect similar results in other southern European and public transit-oriented countries,” says Garcia-Muros. 
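The redistributive logic is easy to see in miniature: every household gets the same rebate, so a household’s net outcome depends only on how much green tax it pays. In the sketch below, only the 400-euro rebate comes from the study; the per-decile tax burdens are invented for illustration (the extremes are chosen to match the study’s reported averages).

```python
# Minimal sketch of revenue recycling: a flat lump-sum rebate means
# households that pay less in energy taxes than the rebate come out ahead.
# Per-decile tax burdens below are hypothetical illustration values.

REBATE = 400  # euros per household per year (figure from the study)

def net_benefit(annual_energy_tax_paid):
    """Positive result = household gains from the reform."""
    return REBATE - annual_energy_tax_paid

# Hypothetical annual green-tax burden by income decile (euros), lowest first:
tax_by_decile = [197, 230, 280, 330, 380, 430, 500, 600, 750, 999]

nets = [net_benefit(t) for t in tax_by_decile]
# Lower deciles gain (e.g., 400 - 197 = +203 euros for the poorest);
# higher deciles pay a net cost (400 - 999 = -599 euros for the richest).
```

Because richer households spend more on energy in absolute terms, a flat rebate funded by their larger tax payments flips the policy from regressive to progressive.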
“But while results will differ for each country, all can benefit by ensuring that green tax policies accommodate economic inequality.” An earlier MIT Joint Program <a href="">study</a> showed how this principle can be applied in the design of carbon pricing policies in the United States.</p> Traffic in Madrid, SpainJoint Program on the Science and Policy of Global Change, Climate change, Greenhouse gases, Emissions, Environment, Energy, Economics, Policy, Carbon dioxide, Research, Sustainability Bridging the information gap in solar energy PhD student Elise Harrington studies the ways rural communities in Kenya and India learn about solar energy products and their options as consumers. Thu, 19 Sep 2019 23:59:59 -0400 Daysia Tolentino <p>Just 30 seconds into their walk to the town center of Kitale, in Kenya, where they would later conduct a focus group about locally available solar energy options, Elise Harrington and her research partner came across a vendor selling a counterfeit solar lantern. Because they had been studying these very products, they knew immediately it was a fake. But the seller assured them it was authentic and came with a warranty.</p> <p>They bought the lantern and presented it, along with a genuine version, to the members of focus groups. Few of them were able to tell the difference. It was an “eye-opening” discovery says Harrington, a doctoral student in the Department of Urban Studies and Planning who has been studying the ways that people in Kenya and India learn about solar products and make decisions about buying and maintaining them.</p> <p>While consumers in developed countries generally assume that a product such as a solar panel will come with a reliable warranty — and wouldn’t purchase the product if it didn’t — Harrington has learned through her fieldwork that this type of information isn’t necessarily communicated to consumers in the countries she’s studied. 
So far, her research indicates that people’s social relationships, for example with friends, family members, or trusted shop owners, play a critical role in the adoption of solar products, but that gaps remain in household knowledge when it comes to the more complex ideas of standards and after-sales services.</p> <p>“My research looks at not just whether solar energy products are available, but if they’re high quality and have services associated with them that will allow people to use them for a longer period of time,” Harrington says. She hopes that her findings can provide policymakers with information that will help them expand the use of clean energy while also serving communities that lack affordable, reliable electricity.</p> <p>The research combines Harrington’s interests in sustainability (she’s been involved in environmental issues “forever,” she says) with her love of travel (she’s learning Swahili and Hindi in her spare time). She’s also dedicated to her local community at MIT. As a graduate resident advisor in Simmons Hall, she can be found spending time with her undergraduate residents and even brewing them butterbeer during the occasional Harry Potter-themed event.</p> <p><strong>Equitable, reliable access</strong></p> <p>In many parts of the world including Kenya, a variety of different products provide electricity generated by solar power. These range from the ubiquitous solar lanterns that can power an LED light or charge a cell phone, to other types of solar home systems or microgrids that each provide varying amounts of power for different types of household devices.</p> <p>Advised by Associate Professor David Hsu, Harrington has studied how rural communities, first in India and now Kenya, can transition from a centralized electricity grid to these various types of home solar systems. 
During her trip to Kenya this past summer, she fielded two surveys focusing on solar “intermediaries” who interface between end-users, companies, and policymakers.</p> <p>“As adoption of solar products grows in rural areas, so does the need for energy services accessible by rural communities, and consumer protections that result in equitable and durable electricity service models,” she says.</p> <p>Harrington, who majored in architecture as an undergraduate at the University of Pennsylvania, has studied solar energy in several different contexts during her time at MIT. In her first year, she focused on electricity planning for rooftop solar systems in the United States, specifically the growth of distributed solar in Hawaii. Then, as a fellow at the MIT Tata Center for Technology and Design, she investigated household decision-making on solar microgrids in rural India, looking at how communities could use these small-scale systems as alternatives to the state-sanctioned electricity grid, which is often unreliable in rural areas.</p> <p>“One of the faculty members in our department said to us during our first year that if we came in doing exactly the same thing we wrote in our statement, then we have not been pushed enough. This idea really set the trajectory on the risks I was willing to take in my research,” says Harrington.</p> <p>Harrington is also a recent fellow in the Martin Family Society of Fellows for Sustainability, a community at MIT dedicated to environmental sustainability. Martin fellows are selected every academic year from across the Institute’s departments. 
“We get the opportunity to interact with each other, learn about each other’s research, and be a part of this network of people who can learn from one another and contribute to environmental and sustainable work inside and out of MIT,” says Harrington.</p> <p><strong>A GReAt way to find community</strong></p> <p>Harrington is a graduate resident advisor (GRA) in Simmons Hall, which she describes as one of the best things about her experience at MIT. As a GRA, she acts as a resource for residents whenever they have questions, challenges, or want to talk about exciting opportunities or events in their lives.</p> <p>She says being a GRA has increased the depth of her connections to the MIT community, and she appreciates that she can come back to that after a long, hard day of work and spend time with the Simmons community.</p> <p>“From my perspective, so much of MIT’s entrepreneurial and creative spirit is housed in the undergraduate population here. Without being a GRA, I don’t think I would get to know that side of MIT as much,” says Harrington.</p> <p>She says she learns as much from her undergraduate residents as they do from her, especially about thinking ahead and managing stress. As a GRA, she hosts a range of events for them, her favorite being the aforementioned annual Harry Potter gathering where she and her partner dress up in costumes in addition to brewing up beverages for Simmons residents.</p> <p><strong>The benefits of downtime</strong></p> <p>In her spare time, Harrington likes to stay active, physically and mentally. She takes yoga classes in Boston and says it's one of the best ways to end a difficult day. She also enjoys going for runs, walks, and hikes in the outdoors.</p> <p>One of Harrington’s favorite activities is playing cards and board games with friends, which she says is a fantastic way to take her mind off of research. 
On the weekends, she likes to try out new games; her current favorite is Mission to Mars, which she describes as a Settlers of Catan-type board game in space, but with a bit more randomness. In general, she loves games that are accessible for everyone, so that players can just sit down with a group of people and figure it out as they go.</p> <p>“Games, hiking, different things that get you out, they help. I find when I take a true break like that, I can work so much better when I go back to it,” Harrington says.</p> Elise HarringtonImage: Jake BelcherProfile, Urban studies and planning, School of Architecture and Planning, Graduate, postdoctoral, Students, Energy, Developing countries, India, Africa, Sustainability, Renewable energy, Solar Study: Even short-lived solar panels can be economically viable Research shows that, contrary to accepted rule of thumb, a 10- or 15-year lifetime can be good enough. Thu, 19 Sep 2019 11:00:00 -0400 David L. Chandler | MIT News Office <p>A new study shows that, contrary to widespread belief within the solar power industry, new kinds of solar cells and panels don’t necessarily have to last for 25 to 30 years in order to be economically viable in today’s market.</p> <p>Rather, solar panels with initial lifetimes of as little as 10 years can sometimes make economic sense, even for grid-scale installations — thus potentially opening the door to promising new solar photovoltaic technologies that have been considered insufficiently durable for widespread use.</p> <p>The new findings are described in <a href="" target="_blank">a paper</a> in the journal <em>Joule</em>, by Joel Jean, a former MIT postdoc and CEO of startup company <a href="">Swift Solar</a>; Vladimir Bulović, professor of electrical engineering and computer science and director of MIT.nano; and Michael Woodhouse of the National Renewable Energy Laboratory (NREL) in Colorado.</p> <p>“When you talk to people in the solar field, they say any new solar panel has to last 25 
years,” Jean says. “If someone comes up with a new technology with a 10-year lifetime, no one is going to look at it. That’s considered common knowledge in the field, and it’s kind of crippling.”</p> <p>Jean adds that “that’s a huge barrier, because you can’t prove a 25-year lifetime in a year or two, or even 10.” That presumption, he says, has left many promising new technologies stuck on the sidelines, as conventional crystalline silicon technologies overwhelmingly dominate the commercial solar marketplace. But, the researchers found, that does not need to be the case.</p> <p>“We have to remember that ultimately what people care about is not the cost of the panel; it’s the levelized cost of electricity,” he says. In other words, it’s the actual cost per kilowatt-hour delivered over the system’s useful lifetime, including the cost of the panels, inverters, racking, wiring, land, installation labor, permitting, grid interconnection, and other system components, along with ongoing maintenance costs.</p> <p>Part of the reason that the economics of the solar industry look different today than in the past is that the cost of the panels (also known as modules) has plummeted so far that now the “balance of system” costs — that is, everything except the panels themselves — exceed the cost of the panels. That means that, as long as newer solar panels are electrically and physically compatible with the racking and electrical systems, it can make economic sense to replace the panels with newer, better ones as they become available, while reusing the rest of the system.</p> <p>“Most of the technology is in the panel, but most of the cost is in the system,” Jean says. “Instead of having a system where you install it and then replace everything after 30 years, what if you replace the panels earlier and leave everything else the same? 
One of the reasons that might work economically is if you’re replacing them with more efficient panels,” which is likely to be the case as a wide variety of more efficient and lower-cost technologies are being explored around the world.</p> <p>He says that what the team found in their analysis is that “with some caveats about financing, you can, in theory, get to a competitive cost, because your new panels are getting better, with a lifetime as short as 15 or even 10 years.”</p> <p>Although the costs of solar cells have come down year by year, Bulović says, “the expectation that one had to demonstrate a 25-year lifetime for any new solar panel technology has stayed as a tautology. In this study we show that as the solar panels get less expensive and more efficient, the cost balance significantly changes.”</p> <p>He says that one aim of the new paper is to alert the researchers that their new solar inventions can be cost-effective even if relatively short lived, and hence may be adopted and deployed more rapidly than expected. At the same time, he says, investors should know that they stand to make bigger profits by opting for efficient solar technologies that may not have been proven to last as long, knowing that periodically the panels can be replaced by newer, more efficient ones.&nbsp;</p> <p>“Historical trends show that solar panel technology keeps getting more efficient year after year, and these improvements are bound to continue for years to come,” says Bulović. Perovskite-based solar cells, for example, when first developed less than a decade ago, had efficiencies of only a few percent. But recently their record performance exceeded 25 percent efficiency, compared to 27 percent for the record silicon cell and about 20 percent for today’s standard silicon modules, according to Bulović. 
Importantly, in novel device designs, a perovskite solar cell can be stacked on top of another perovskite, silicon, or thin-film cell, to raise the maximum achievable efficiency limit to over 40 percent, which is well above the 30 percent fundamental limit of today’s silicon solar technologies. But perovskites have issues with longevity of operation and have not yet been shown to be able to come close to meeting the 25-year standard.</p> <p>Bulović hopes the study will “shift the paradigm of what has been accepted as a global truth.” Up to now, he says, “many promising technologies never even got a start, because the bar is set too high” on the need for durability.</p> <p>For their analysis, the team looked at three different kinds of solar installations: a typical 6-kilowatt residential system, a 200-kilowatt commercial system, and a large 100-megawatt utility-scale system with solar tracking. They used NREL benchmark parameters for U.S. solar systems and a variety of assumptions about future progress in solar technology development, financing, and the disposal of the initial panels after replacement, including recycling of the used modules. The models were validated using four independent tools for calculating the levelized cost of electricity (LCOE), a standard metric for comparing the economic viability of different sources of electricity.</p> <p>In all three installation types, they found, depending on the particulars of local conditions, replacement with new modules after 10 to 15 years could in many cases provide economic advantages while maintaining the many environmental and emissions-reduction benefits of solar power. 
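The replacement argument can be illustrated with a simplified levelized-cost calculation: discounted lifetime cost divided by discounted lifetime energy. The parameters below are invented round numbers, not the NREL benchmarks the paper actually uses; the point is only that cheaper, improving panels combined with reused balance-of-system hardware can beat a single long-lived module.

```python
# Toy LCOE comparison: keep one 25-year panel vs. replace cheaper,
# improving panels every 10 years on the same mounting hardware.
# All costs, yields, and rates here are illustrative assumptions.

def lcoe(cash_flows, energy_kwh, rate=0.06):
    """Levelized cost of electricity: discounted cost / discounted energy.

    cash_flows -- cost incurred in each year (index 0 = installation year)
    energy_kwh -- energy delivered in each year
    """
    cost = sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))
    energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy_kwh))
    return cost / energy

YEARS = 25
BOS = 1.0    # balance-of-system cost per watt (shared by both cases)
OPEX = 0.01  # annual operations & maintenance cost per watt

# Case A: one 25-year panel at $0.35/W yielding 1.6 kWh per watt per year.
a_costs = [BOS + 0.35] + [OPEX] * YEARS
a_energy = [0.0] + [1.6] * YEARS

# Case B: a 10-year panel at $0.25/W, replaced in years 10 and 20 by
# panels that are cheaper and 15% more efficient each generation.
b_costs = [BOS + 0.25] + [OPEX] * YEARS
b_energy = [0.0] + [1.6] * YEARS
for repl_year, panel_cost, gain in ((10, 0.20, 1.15), (20, 0.15, 1.15 ** 2)):
    b_costs[repl_year] += panel_cost
    for t in range(repl_year + 1, YEARS + 1):
        b_energy[t] = 1.6 * gain

lcoe_keep = lcoe(a_costs, a_energy)
lcoe_replace = lcoe(b_costs, b_energy)  # lower, under these assumptions
```

Under these particular assumptions the replacement strategy yields a lower cost per kilowatt-hour, echoing the paper’s finding that short-lived but cheap and improving modules can be competitive.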
The basic requirement for cost-competitiveness is that any new solar technology installed in the U.S. should start with a module efficiency of at least 20 percent, a cost of no more than 30 cents per watt, and a lifetime of at least 10 years, with the potential to improve on all three.</p> <p>Jean points out that the solar technologies that are considered standard today, mostly silicon-based but also thin-film variants such as cadmium telluride, “were not very stable in the early years. The reason they last 25 to 30 years today is that they have been developed for many decades.” The new analysis may now open the door for some of the promising newer technologies to be deployed at sufficient scale to build up similar levels of experience and improvement over time, and to make an impact on climate change earlier than they could without module replacement, he says.</p> <p>“This could enable us to launch ideas that would have died on the vine” because of the perception that greater longevity was essential, Bulović says.</p> <p>The study was supported by the Tata-MIT GridEdge Solar research program.</p> A new study shows that replacing solar panels after just 10 or 15 years, using the existing mountings and control systems, can make economic sense, contrary to industry expectations that a 25-year lifetime is necessary.Research, Photovoltaics, Solar, Energy, School of Engineering, Electrical Engineering & Computer Science (eecs) Collaboration adds an extra dimension to undergraduate research Students on UROP teams agree that teamwork speeds up the research. Wed, 18 Sep 2019 10:25:01 -0400 Kathryn O'Neill | MIT Energy Initiative <p>Grace Bryant is a junior at MIT, but it wasn’t until this summer that she got a chance to team up with students outside her major through the Undergraduate Research Opportunities Program (UROP), supported by the MIT Energy Initiative (MITEI). 
She says she found the experience eye-opening.</p> <p>“I rarely interact with people doing something different from what I study,” says Bryant, who is majoring in urban studies and planning with computer science. “Talking to people with other majors about what they think their careers will look like was pretty cool, and something I don’t think I would have had without this experience.”</p> <p>Every summer, UROP students work with faculty on groundbreaking, real-world research; roughly 90 percent of MIT undergraduates will do a UROP before they graduate. Most undertake individual projects, but for those who team up with other undergraduates there are often added benefits — the chance to collaborate, learn from peers, and literally lend a hand — reflecting the kind of experience they’re likely to find in the workplace.</p> <p>“You never know who is going to change your perspective on your own work,” says Rachel Shulman, the undergraduate academic coordinator for MITEI, which funded 22 UROP students this summer, including multiple teams. “Energy is by definition multidisciplinary.”</p> <p>“It's a realistic working environment,” says William Lynch, a research specialist in the Research Laboratory of Electronics (RLE) who supervised two MITEI UROP students on a project focused on extending battery life. “In industry, people work together in teams.”</p> <p><strong>A helping hand</strong></p> <p>Some of the payoffs of collaboration are obvious. One of Lynch’s advisees, PJ Hernandez, was at work this summer and suddenly noticed their lab partner, Jackson Gray, struggling to wire a circuit with one hand; he’d recently broken his wrist. Hernandez had often turned to Gray for help on their project because he had a stronger background in electronics. Helping him build the circuit provided a chance to return the favor.</p> <p>“I’m really lucky there is another UROP,” says Hernandez, a senior majoring in electrical engineering. 
“Jackson has been helping me understand a lot.”</p> <p>Gray says working with Hernandez was great for him too — and not just because of his bad wrist. “We can work through the math together to be sure we’re not doing something fundamentally wrong,” says Gray, a junior in electrical engineering. “It’s useful just to have someone to question you and make you justify your ideas.”</p> <p>James Kirtley, professor of electrical engineering and principal investigator for the RLE project, says he likes to team up students for just this reason. “The very best teachers are students, so it is reasonable to expect that the experienced student will teach the less experienced students what he or she knows,” he says. “And the ambitious but less experienced student will, by asking questions, prod the more experienced student to think more broadly about the problem.”</p> <p>For Hernandez and Gray, the problem was how to develop an improved cell voltage balancer, a device used to extend the life of batteries by working to ensure that cells remain evenly charged as the battery cycles (charges and discharges current). They were hoping to improve on existing designs, since most balancers today work by dissipating extra charge as heat. As Gray explains, “If the battery management system sees that some cells are more charged than others, it will just waste that energy.”</p> <p>Gray says he hopes to find a way to balance batteries more efficiently — perhaps by moving charge from one cell to another — in part because batteries are so important to his hobbies. 
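</p> <p>The contrast between today’s dissipative balancers and the charge-shuttling approach Gray describes can be put in a few lines of code. This is an illustrative toy model, not the RLE design; the cell capacities and converter efficiency are made-up numbers:</p>

```python
# Toy model of two cell-balancing strategies for a series battery string.
# Passive (dissipative) balancing bleeds every cell down to the weakest one;
# active balancing shuttles surplus charge between cells instead.

def passive_balance(cells_mah):
    """Bleed every cell down to the weakest; the surplus is lost as heat."""
    floor = min(cells_mah)
    wasted = sum(c - floor for c in cells_mah)
    return [floor] * len(cells_mah), wasted

def active_balance(cells_mah, efficiency=0.9):
    """Move surplus from high cells to low ones, losing only converter losses."""
    mean = sum(cells_mah) / len(cells_mah)
    surplus = sum(c - mean for c in cells_mah if c > mean)
    wasted = surplus * (1 - efficiency)
    kept = sum(cells_mah) - wasted
    return [kept / len(cells_mah)] * len(cells_mah), wasted

cells = [2900, 3000, 3100, 3050]  # mAh remaining in each cell (made up)
_, heat_passive = passive_balance(cells)
_, heat_active = active_balance(cells)
print(f"passive balancing wastes {heat_passive:.1f} mAh of charge as heat")
print(f"active balancing wastes  {heat_active:.1f} mAh in conversion losses")
```

<p>Even this crude model shows why moving charge between cells, rather than burning off the excess, preserves far more usable capacity.</p> <p>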
“I enjoy working on electric vehicles and small robots, both of which use lithium ion batteries,” a major focus of the project, he says.</p> <p>Hernandez’s interest in the project stems more from an interest in environmentalism, since making batteries more efficient should reduce waste: “Reducing our carbon footprint, reducing energy consumption, is really important,” they say.</p> <p><strong>Learning from others</strong></p> <p>Hernandez and Gray bolstered each other coming from the same field, but UROPs from different majors gain additional benefits from teaming up — as Bryant discovered by working with Yeva Yin, a junior in business analytics, and Luis Garcia, a senior math major, on a project for David Hsu, associate professor of urban and environmental planning.</p> <p>Hsu’s project follows up on research conducted over a decade ago that showed that electricity rates are higher in areas where the local utility has spent money on lobbying. Hsu hypothesizes that this connection has grown in the wake of the Citizens United ruling by the U.S. Supreme Court, which declared corporate spending on political candidates to be protected free speech — a decision that has led to a huge increase in such spending.</p> <p>Hsu employed the UROP team to gather data on state and federal campaign contributions, examine the voting patterns of utility regulators, and dig into the biographies of regulators to see what industries and companies they came from and went to after their service. The team also gathered information about the rates requested by companies, the cases presented for those rates, and the rates ultimately set for electricity—all public information.</p> <p>Hsu divvied up tasks so that each student took a different dive through the material, and says each individual’s work really complemented the others’. “I like to give each student a piece to be responsible for and make it overlap with the larger project,” Hsu says. 
“It gives students more independence and more ownership … They can learn more than they would by themselves.”</p> <p>“We all have different ideas and strengths, and that helps in coming up with different ways to approach topics,” says Yin. For example, she says she often uses applied skills in business analytics but knows less about the underlying theory; Garcia has had almost the exact opposite experience as a math major.</p> <p>“Studying math, there’s a lot of theory,” Garcia says. “So it’s easier for me to come up with a plan and visualize it. But when it comes time to implement the plan, that’s a newer experience.”</p> <p>Garcia investigated lobbying data — the amount of money donated by whom and to whom — and he says he learned a lot. “Working with real-world data … you have to decide what you won’t need, what’s actually important,” he says. By contrast, in math, “nothing is a strong judgment call,” he says.</p> <p><strong>Expanding horizons</strong></p> <p>All the students on UROP teams agree that collaboration speeds up the research. As Bryant remarks, “If you have a lot of work on your plate, you can redistribute the work, which is super useful.”</p> <p>Bryant also says the UROP gave her new insight into American government and finance. “I just really wasn’t aware of how the energy system was regulated. I get electricity in my house, and that’s it. It’s really exciting to have that insight into how that system works and how it plays into the larger economy.”</p> <p>Garcia says the lessons he’s learned about utility lobbying and regulation are helping him decide his next career steps. “I’m maybe going into public policy or political science, so I feel like having exposure to this type of work could be really helpful,” he says.</p> <p>Teaming up on a UROP isn’t just valuable in terms of research and education, as Bryant discovered. In her case, talking about Hsu’s project led to a discussion about how government works and how big corporations behave. 
This, in turn, led to a thoughtful conversation about career options.</p> <p>“We talked about careers, and it’s a conversation I haven’t had with people outside my major,” Bryant says, noting that she and her fellow UROPs discussed the trade-offs of going into well-paid jobs in industry versus focusing on a career that gives back to one’s community. “There was this whole ethical portion of the discussion,” she says. “It was pretty influential in how I think about jobs now.”</p> <p>According to Shulman, this kind of experience is just what MITEI hopes to foster by sponsoring team-based undergraduate research. “I’m a big believer in serendipity,” she says. “How can we engender serendipity? You throw people together who might not otherwise have met each other.”</p> PJ Hernandez (left) and Jackson Gray (right) build circuits in the lab of Professor James Kirtley (background) as part of their summer energy UROP. One of the problems they worked on together was how to develop an improved cell voltage balancer, a device used to extend the life of batteries by working to ensure that cells remain evenly charged as the battery cycles.Photo: Kelley TraversUndergraduate Research Opportunities Program (UROP), Research Laboratory of Electronics, School of Engineering, School of Architecture and Planning, Energy, Electrical engineering and computer science (EECS), MIT Energy Initiative (MITEI), Classes and programs, Urban studies and planning Cody Friesen PhD ’04 awarded $500,000 Lemelson-MIT Prize Materials scientist recognized for social, economic, and environmentally-sustaining inventions that impact millions of people around the world. Wed, 18 Sep 2019 10:10:01 -0400 Stephanie Martinovich | Lemelson-MIT Program <p>Cody Friesen PhD ’04, an associate professor of materials science at Arizona State University and founder of both Fluidic Energy and Zero Mass Water, was awarded the 2019 $500,000 Lemelson-MIT Prize for invention. 
Friesen has dedicated his career to inventing solutions that address two of the biggest challenges to social and economic advancement in the developing world: access to fresh water and reliable energy. His renewable water and energy technologies help fight climate change while providing valuable resources to underserved communities.</p> <p>Friesen’s first company, Fluidic Energy, was formed to commercialize and deploy the world’s first, and only, rechargeable metal-air battery, which can withstand many thousands of discharges. The technology has provided backup power during approximately 1 million long-duration outages, while simultaneously offsetting thousands of tons of carbon dioxide emissions. The batteries are currently being used as a secondary energy source on four continents at thousands of critical load sites and in dozens of microgrids. Several million people have benefited from access to reliable energy as a result of the technology. Fluidic Energy has been renamed NantEnergy, with Patrick Soon-Shiong investing significantly in the continued global expansion of the technology.</p> <p>Currently, Friesen’s efforts are focused on addressing the global water crisis through his company, Zero Mass Water. Friesen invented SOURCE Hydropanels, which are solar panels that make drinking water from sunlight and air. The invention is a true leapfrog technology and can make drinking water in dry conditions with as low as 5 percent relative humidity. SOURCE has been deployed in 33 countries spanning six continents. The hydropanels are providing clean drinking water in communities, refugee camps, government offices, hotels, hospitals, schools, restaurants, and homes around the world.</p> <p>“As inventors, we have a responsibility to ensure our technology serves all of humanity, not simply the elite,” says Friesen. 
“At the end of the day, our work is about impact, and this recognition propels us forward as we deploy SOURCE Hydropanels to change the human relationship to water across the globe.”</p> <p>Friesen joins a long lineage of inventors to receive the Lemelson-MIT Prize, which for 25 years has been the largest cash prize for invention in the United States. He will be donating his prize to a project with Conservation International to provide clean drinking water via SOURCE Hydropanels to the Bahia Hondita community in Colombia.</p> <p>“Cody’s inventive spirit, fueled by his strong desire to help improve the lives of people everywhere, is an inspiring role model for future generations,” says Michael Cima, faculty director for the Lemelson-MIT Program and associate dean of innovation for the MIT School of Engineering. “Water scarcity is a prominent global issue, which Cody is combating through technology and innovation. We are excited that the use of this award will further elevate his work.”</p> <p>“Cody Friesen embodies what it means to be an impact inventor,” notes Carol Dahl, executive director at the Lemelson Foundation. “His inventions are truly improving lives, take into account environmental considerations, and have become the basis for companies that impact millions of people around the world each year. We are honored to recognize Dr. Friesen as this year’s LMIT Prize winner.”</p> <p>Friesen will speak at EmTech MIT, the annual conference on emerging technologies hosted by <em>MIT Technology Review</em> at the MIT Media Lab on Sept. 18 at 5 p.m.</p> Cody Friesen is the winner of the 2019 Lemelson-MIT Prize for invention. 
Photo: Zero Mass WaterLemelson-MIT, School of Engineering, DMSE, Alumni/ae, Awards, honors and fellowships, Batteries, Energy, Water, Solar, Materials Science and Engineering, Global New approach suggests path to emissions-free cement MIT researchers find a way to eliminate carbon emissions from cement production — a major global source of greenhouse gases. Mon, 16 Sep 2019 14:59:59 -0400 David L. Chandler | MIT News Office <p>It’s well known that the production of cement — the world’s leading construction material — is a major source of greenhouse gas emissions, accounting for about 8 percent of all such releases. If cement production were a country, it would be the world’s third-largest emitter.</p> <p>A team of researchers at MIT has come up with a new way of manufacturing the material that could eliminate these emissions altogether, and could even make some other useful products in the process.</p> <p>The findings are being reported today in the journal <em>PNAS</em> in <a href="" target="_blank">a paper</a> by Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering at MIT, with postdoc Leah Ellis, graduate student Andres Badel, and others.</p> <p>“About 1 kilogram of carbon dioxide is released for every kilogram of cement made today,” Chiang says. That adds up to 3 to 4 gigatons (billions of tons) each of cement and of carbon dioxide emissions produced annually, and that amount is projected to grow. The number of buildings worldwide is expected to double by 2060, which is equivalent to “building one new New York City every 30 days,” he says. And the commodity is now very cheap to produce: It costs only about 13 cents per kilogram, which he says makes it cheaper than bottled water.</p> <p>So it’s a real challenge to find ways of reducing the material’s carbon emissions without making it too expensive. 
Chiang and his team have spent the last year searching for alternative approaches, and hit on the idea of using an electrochemical process to replace the current fossil-fuel-dependent system.</p> <p>Ordinary Portland cement, the most widely used standard variety, is made by grinding up limestone and then cooking it with sand and clay at high heat, which is produced by burning coal. The process produces carbon dioxide in two different ways: from the burning of the coal, and from gases released from the limestone during the heating. Each of these produces roughly equal contributions to the total emissions. The new process would eliminate or drastically reduce both sources, Chiang says. Though they have demonstrated the basic electrochemical process in the lab, it will require more work to reach industrial scale.</p> <p>First of all, the new approach could eliminate the use of fossil fuels for the heating process, substituting electricity generated from clean, renewable sources. “In many geographies renewable electricity is the lowest-cost electricity we have today, and its cost is still dropping,” Chiang says. In addition, the new process produces the same cement product. The team realized that trying to gain acceptance for a new type of cement — something that many research groups have pursued in different ways — would be an uphill battle, considering how widely used the material is around the world and how reluctant builders can be to try new, relatively untested materials.</p> <p>The new process centers on the use of an electrolyzer, something that many people have encountered as part of high school chemistry classes, where a battery is hooked up to two electrodes in a glass of water, producing bubbles of oxygen from one electrode and bubbles of hydrogen from the other as the electricity splits the water molecules into their constituent atoms. 
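</p> <p>The underlying limestone arithmetic can be checked with standard molar masses. In this back-of-envelope sketch (standard atomic weights; illustrative only, not data from the paper), decomposing CaCO<sub>3</sub> releases the same CO<sub>2</sub> whether the route is thermal or electrochemical:</p>

```python
# Back-of-envelope check of cement's process emissions using standard
# molar masses (g/mol). Thermal calcining: CaCO3 -> CaO + CO2; the
# electrochemical route instead yields Ca(OH)2, but frees the same CO2.
M_Ca, M_C, M_O, M_H = 40.08, 12.01, 16.00, 1.008

M_CaCO3 = M_Ca + M_C + 3 * M_O        # limestone, ~100.1 g/mol
M_CO2 = M_C + 2 * M_O                 # carbon dioxide, ~44.0 g/mol
M_CaOH2 = M_Ca + 2 * (M_O + M_H)      # calcium hydroxide, ~74.1 g/mol

# CO2 released per kilogram of limestone decomposed:
co2_per_kg_limestone = M_CO2 / M_CaCO3
print(f"{co2_per_kg_limestone:.2f} kg CO2 per kg CaCO3")  # ~0.44

# With fuel combustion contributing a roughly equal share, as the article
# notes, the total approaches the quoted ~1 kg of CO2 per kg of cement.
```

<p>At roughly 0.44 kilograms of CO<sub>2</sub> per kilogram of limestone from the chemistry alone, plus a comparable share from fuel, the total approaches the figure Chiang cites.</p> <p>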
Importantly, the electrolyzer’s oxygen-evolving electrode produces acid, while the hydrogen-evolving electrode produces a base.</p> <p>In the new process, the pulverized limestone is dissolved in the acid at one electrode and high-purity carbon dioxide is released, while calcium hydroxide, generally known as lime, precipitates out as a solid at the other. The calcium hydroxide can then be processed in another step to produce the cement, which is mostly calcium silicate.</p> <p>The carbon dioxide, in the form of a pure, concentrated stream, can then be easily sequestered, harnessed to produce value-added products such as a liquid fuel to replace gasoline, or used for applications such as oil recovery or even in carbonated beverages and dry ice. The result is that no carbon dioxide is released to the environment from the entire process, Chiang says. By contrast, the carbon dioxide emitted from conventional cement plants is highly contaminated with nitrogen oxides, sulfur oxides, carbon monoxide and other material that make it impractical to “scrub” to make the carbon dioxide usable.</p> <p>Calculations show that the hydrogen and oxygen also emitted in the process could be recombined, for example in a fuel cell, or burned to produce enough energy to fuel the whole rest of the process, Ellis says, producing nothing but water vapor.</p> <p><img alt="" src="/sites/" style="width: 500px; height: 348px;" /></p> <p><em><span style="font-size:10px;">In a demonstration of the basic chemical reactions used in the new process, electrolysis takes place in neutral water. Dyes show how acid (pink) and base (purple) are produced at the positive and negative electrodes. A variation of this process can be used to convert calcium carbonate (CaCO<sub>3</sub>) into calcium hydroxide (Ca(OH)<sub>2</sub>), which can then be used to make Portland cement without producing any greenhouse gas emissions. 
Cement production currently causes 8 percent of global carbon emissions.</span></em></p> <p>In their laboratory demonstration, the team carried out the key electrochemical steps required, producing lime from the calcium carbonate, but on a small scale. The process looks a bit like shaking a snow-globe, as it produces a flurry of suspended white particles inside the glass container as the lime precipitates out of the solution.</p> <p>While the technology is simple and could, in principle, be easily scaled up, a typical cement plant today produces about 700,000 tons of the material per year. “How do you penetrate an industry like that and get a foot in the door?” asks Ellis, the paper’s lead author. One approach, she says, is to try to replace just one part of the process at a time, rather than the whole system at once, and “in a stepwise fashion” gradually add other parts.</p> <p>The initial proposed system the team came up with is “not because we necessarily think we have the exact strategy” for the best possible approach, Chiang says, “but to get people in the electrochemical sector to start thinking more about this,” and come up with new ideas. “It’s an important first step, but not yet a fully developed solution.”</p> <p>The research was partly supported by the Skolkovo Institute of Science and Technology.</p> In a demonstration of the basic chemical reactions used in the new process, electrolysis takes place in neutral water. Dyes show how acid (pink) and base (purple) are produced at the positive and negative electrodes. A variation of this process can be used to convert calcium carbonate (CaCO3) into calcium hydroxide (Ca(OH)2), which can then be used to make Portland cement without producing any greenhouse gas emissions. 
Cement production currently causes 8 percent of global carbon emissions.Image: Felice FrankelCement, Research, School of Engineering, Materials Science and Engineering, DMSE, Civil and environmental engineering, Energy, Emissions, Sustainability, Cities, Concrete, Climate change, Greenhouse gases, Manufacturing Department of Nuclear Science and Engineering spreads its wings New 22-ENG undergraduate degree provides expansive vision of nuclear studies and nuclear careers. Fri, 13 Sep 2019 12:40:01 -0400 Leda Zimmerman | Nuclear science and engineering <p>After a nearly five-year effort, fueled by the passionate persistence of faculty and students, the Department of Nuclear Science and Engineering (NSE) began offering a new degree this fall: 22-ENG, a program that offers the same fundamentals in the discipline as Course 22, but with considerably more flexibility in course selection. Institute faculty approved the new degree in April.</p> <p>“I’m very relieved,” says junior Colt Hermesch, who is in the naval ROTC program. “I discovered I really liked quantum physics late in my academic career, and by switching to the new degree, I can take what I’m interested in and still major in nuclear.”</p> <p>This is precisely the kind of response anticipated by Michael Short ’05, SM and PhD ’10, undergraduate chair of the department, and the Class of ’42 Associate Professor of Nuclear Science and Engineering. In 2014, the department tasked Short with reforming the undergraduate curriculum. He says that 22-ENG was motivated largely by mounting student demand for a path of study in nuclear science and engineering that doesn’t lead exclusively to traditional jobs in the nuclear industry.</p> <p>“Every year I’ve been advising, increasing numbers of students have expressed interest in focusing their nuclear studies on topics like materials, robotics, policy, or sustainability,” says Short. 
“These are hybrid fields, fields of the future, but in spite of this growing demand, until now we have had no mechanism in the department for helping them.”</p> <p>The new flex degree will allow students, once they have completed a cluster of required courses, to focus in such areas as nuclear medicine, clean energy technologies, policy, fusion, plasma science, nuclear computation, nuclear materials, and modeling/simulation. Working with their advisors, undergraduates will be able to map out a customized suite of classes in a nuclear-relevant discipline of their creation.</p> <p>Sophomore Analyce Hernandez is wasting no time in taking advantage of this new degree option. “I was so excited to hear about it, and rushed to lay out a course map with my adviser,” she says. Hernandez was worried about double majoring in Course 22 and physics because of the formidable class requirements. “With 22-ENG, I don’t have to choose one over the other.”</p> <p>Short says he has seen too many students forced to make comparably tough choices. Some depart from their true passion in nuclear science for other majors they believe can more easily secure them jobs at Google or Facebook. Others “were leaving our major because they can’t pursue subjects they become deeply invested in, often through lab work,” says Short. 
“People should not be penalized for having gigantic passions that don’t fit into one of our boxes.”</p> <p>He believes that with its larger menu of topics, trimmed requirements, and connection to real-world applications of NSE, 22-ENG will both retain students who might be on the fence about majoring in NSE, and attract new candidates to the field.</p> <p><strong>Seeking alternatives</strong></p> <p>Junior Daniel Korsun personifies the kind of passionate student Short hopes to persuade to commit to NSE.</p> <p>“I became interested in NSE as a freshman, and quickly realized I wanted to pursue fusion, both for my education and as a career,” Korsun says. But it became clear to him that Course 22 would not allow him to break out and explore fusion at the depth he desired. “The current degree is rigorous and demanding, which is great, but it primarily prepares students for traditional nuclear careers or doctorates in fission,” he says.</p> <p>In conversations with fellow undergrads, Korsun discovered that he was not alone in yearning for alternatives: “A lot of my friends were also interested in pursuing different subfields within NSE, such as materials science and sustainability, but the coursework just didn’t support them.”</p> <p>Last spring, Korsun decided to act. Working with NSE academic administrator Brandy Baker, Korsun developed suggestions for a flexible degree parallel to those offered by mechanical engineering and physics. “We laid out a reasonable course load, retaining the core requirements, but added choices for specialization,” says Korsun. “We sent the proposal off to Professor Short, and he loved the idea.”</p> <p>The flex degree idea resonated powerfully for Short because he had been pressing to create something like it for years. “When I started here as an undergraduate in 2001, I worked in Ron Ballinger’s nuclear materials lab, and I loved it — I knew immediately it was my calling,” he says. 
“But when I began looking for NSE classes that could help get me further into this research, there weren’t any.”</p> <p>Short’s solution was to major in both nuclear engineering and materials science. “It was intellectually stimulating, and miserable in terms of work/life balance,” he says. Others in his cohort who wanted deep immersion in a subdiscipline of nuclear chose instead to change majors and minor in nuclear engineering. So when Short joined the NSE faculty in 2013, he sought opportunities to make the curriculum more welcoming to undergraduates.</p> <p>That moment arrived the next year, when he was charged with rethinking the undergraduate curriculum. “I said great, I’ve got stuff I’ve wanted to do for a decade,” says Short.</p> <p>Some of Short’s initiatives were implemented quickly. He found ways to bring hands-on learning to early classes, ensuring multiple modes of engagement with the fundamentals of NSE. Short and his students alike had found the previous theory-first, applications-later curricular framework “boring.” Short also sought to trim certain classes from Course 22 that he felt were not essential to mastering the central tenets of nuclear engineering. Eliminating what he calls “dangling ends in the curriculum” — advanced courses like waves and vibrations, and analog electronics — could make room for electives that offered students the chance for immersion in nuclear domains that link more directly to careers. 
Short had the outlines for the new flex degree.</p> <p>With the impetus of students like Korsun, and “after much collegial debate,” according to Short, the NSE faculty added 22-ENG to its curriculum.</p> <p><strong>In sync with institutional reforms</strong></p> <p>With its debut right around the corner, the new degree promises to position NSE at the vanguard of other large-scale changes at the Institute.</p> <p>For one, 22-ENG will feature a track for computation “to use the latest advances in computing to solve problems related to nuclear,” says Short. This focus area was suggested by Anantha P. Chandrakasan, dean of the School of Engineering, who wanted to create an explicit bridge not just to computer science, but to the new Schwarzman College of Computing.</p> <p>“We’ll be first at the front door, since we’ll launch our track before the new college even starts,” says Short. “When students come for computer science, they will know they can direct their studies toward nuclear.”</p> <p>Adds Dennis Whyte, Hitachi America Professor of Engineering and former head of nuclear science and engineering, “As MIT implements new opportunities for our undergraduates, such as the new college of computing, 22-ENG will serve to grow the evolving demands of our undergraduate student population for a multi-disciplinary education.”</p> <p>The new major explicitly sets out to span fields. It will, for instance, create a focus area in policy and economics, tying NSE more closely to the School of Humanities, Arts, and Social Sciences. “More than any other engineering discipline, nuclear is inseparable from the social sciences, because when you switch on a nuclear plant, everyone takes notice,” says Short. 
“Every step we take is ultra-scrutinized by ethicists and political scientists, as it should be.” Beyond the specialty track, Short sees “the social problems of nuclear as inseparable from the program as basic nuclear physics,” and will be working to integrate humanities and social sciences into some of the department’s core courses.</p> <p>Benchmarks for the success of NSE curriculum changes will emerge not just in the form of higher enrollment, Short anticipates, but in feedback from future employers of NSE students.</p> <p>“As we send out students with blended skillsets, capable of working in multidisciplinary ways, employers will say, ‘Wow, they’re sending us students both technically expert and well-rounded,’” says Short.</p> <p>This is the type of graduate, he notes, who could save the nuclear industry, which is sorely challenged economically. “Whether they select advanced reactors or fusion reactors, utilities, policy, or advocacy, our students could move the industry beyond the 1960s,” says Short. “They could give new meaning and substance to the department’s motto, ‘science, systems and society.’”</p> "Whether they select advanced reactors or fusion reactors, utilities, policy, or advocacy, our students could move the industry beyond the 1960s,” says MIT Professor Michael Short. “They could give new meaning and substance to the department’s motto, ‘science, systems and society.’”Photo: Gretchen ErtlNuclear science and engineering, School of Engineering, Classes and programs, Education, teaching, academics, Design, Energy, Environment, Nuclear power and reactors, Physics Exotic physics phenomenon is observed for first time Observation of the predicted non-Abelian Aharonov-Bohm Effect may offer step toward fault-tolerant quantum computers. Thu, 05 Sep 2019 13:59:59 -0400 David L. 
Chandler | MIT News Office <p>An exotic physical phenomenon, involving optical waves, synthetic magnetic fields, and time reversal, has been directly observed for the first time, following decades of attempts. The new finding could lead to realizations of what are known as topological phases, and eventually to advances toward fault-tolerant quantum computers, the researchers say.</p> <p>The new finding involves the non-Abelian Aharonov-Bohm Effect and is <a href="" target="_blank">reported today</a> in the journal <em>Science</em> by MIT graduate student Yi Yang, MIT visiting scholar Chao Peng (a professor at Peking University), MIT graduate student Di Zhu, Professor Hrvoje Buljan at University of Zagreb in Croatia, Francis Wright Davis Professor of Physics John Joannopoulos at MIT, Professor Bo Zhen at the University of Pennsylvania, and MIT professor of physics Marin Soljačić.</p> <p>The finding relates to gauge fields, which describe transformations that particles undergo. Gauge fields fall into two classes, known as Abelian and non-Abelian. The Aharonov-Bohm Effect, named after the theorists who predicted it in 1959, confirmed that gauge fields — beyond being a pure mathematical aid — have physical consequences.</p> <p>But the observations only worked in Abelian systems, or those in which gauge fields are commutative — that is, they take place the same way both forward and backward in time. In 1975, Tai-Tsun Wu and Chen-Ning Yang generalized the effect to the non-Abelian regime as a thought experiment. Nevertheless, it remained unclear whether it would ever be possible to observe the effect in a non-Abelian system. Physicists lacked ways of creating the effect in the lab, and also lacked ways of detecting the effect even if it could be produced. 
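</p> <p>The Abelian/non-Abelian distinction comes down to whether the operations in question commute. A generic numerical illustration, using ordinary 3-D rotations rather than the experiment’s actual gauge fields, makes the point:</p>

```python
import math

# Generic illustration of the Abelian vs. non-Abelian distinction:
# Abelian means operations commute (order never matters); non-Abelian
# means they generally do not. Ordinary 3-D rotations are a familiar
# non-Abelian example (a standalone analogy, not the experiment itself).

def matmul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
Rx = [[1, 0, 0], [0, c, -s], [0, s, c]]   # rotate 45 degrees about x
Rz = [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # rotate 45 degrees about z

# Scalars (Abelian): multiplication commutes.
assert 2.0 * 3.0 == 3.0 * 2.0

# Rotations (non-Abelian): Rx then Rz differs from Rz then Rx.
close = all(abs(matmul(Rx, Rz)[i][j] - matmul(Rz, Rx)[i][j]) < 1e-9
            for i in range(3) for j in range(3))
print("rotations commute?", close)  # order matters, so this is False
```

<p>Scalar multiplication is Abelian because the order of factors never matters; rotations about different axes, like non-Abelian gauge transformations, give different results depending on the order in which they are applied.</p> <p>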
Now, both of those puzzles have been solved, and the observations carried out successfully.</p> <p>The effect has to do with one of the strange and counterintuitive aspects of modern physics, the fact that virtually all fundamental physical phenomena are time-invariant. That means that the details of the way particles and forces interact can run either forward or backward in time, and a movie of how the events unfold can be run in either direction, so there’s no way to tell which is the real version. But a few exotic phenomena violate this time symmetry.</p> <p>Creating the Abelian version of the Aharonov-Bohm effect requires breaking time-reversal symmetry, a challenging task in itself, Soljačić says. But to achieve the non-Abelian version of the effect requires breaking this time-reversal symmetry multiple times, and in different ways, making it an even greater challenge.</p> <p>To produce the effect, the researchers used photon polarization. Then, they produced two different kinds of time-reversal breaking. They used fiber optics to produce two types of gauge fields that affected the geometric phases of the optical waves, first by sending them through a crystal biased by powerful magnetic fields, and second by modulating them with time-varying electrical signals, both of which break the time-reversal symmetry. They were then able to produce interference patterns that revealed the differences in how the light was affected when sent through the fiber-optic system in opposite directions, clockwise or counterclockwise. Without the breaking of time-reversal invariance, the beams should have been identical, but instead, their interference patterns revealed specific sets of differences as predicted, demonstrating the details of the elusive effect.</p> <p>The original, Abelian version of the Aharonov-Bohm effect “has been observed with a series of experimental efforts, but the non-Abelian effect has not been observed until now,” Yang says.
The finding “allows us to do many things,” he says, opening the door to a wide variety of potential experiments, including classical and quantum physical regimes, to explore variations of the effect.</p> <p>The experimental approach devised by this team “might inspire the realization of exotic topological phases in quantum simulations using photons, polaritons, quantum gases, and superconducting qubits,” Soljačić says. For photonics itself, this could be useful in a variety of optoelectronic applications, he says. In addition, the non-Abelian gauge fields that the group was able to synthesize produced a non-Abelian Berry phase, and “combined with interactions, it may potentially one day serve as a platform for fault-tolerant topological quantum computation,” he says.</p> <p>At this point, the experiment is primarily of interest for fundamental physics research, with the aim of gaining a better understanding of some basic underpinnings of modern physical theory. The many possible practical applications “will require additional breakthroughs going forward,” Soljačić says.</p> <p>For one thing, for quantum computation, the experiment would need to be scaled up from one single device to likely a whole lattice of them. And instead of the beams of laser light used in their experiment, it would require working with a source of single individual photons. But even in its present form, the system could be used to explore questions in topological physics, which is a very active area of current research, Soljačić says.</p> <p>“The non-Abelian Berry phase is a theoretical gem that is the doorway to understanding many intriguing ideas in contemporary physics,” says Ashvin Vishwanath, a professor of physics at Harvard University, who was not associated with this work. “I am glad to see it getting the experimental attention it deserves in the current work, which reports a well-controlled and characterized realization. 
I expect this work to stimulate progress both directly as a building block for more complex architectures, and also indirectly in inspiring other realizations.”</p> Images showing interference patterns (top) and a Wilson loop (bottom) were produced by the researchers to confirm the presence of non-Abelian gauge fields created in the research.Image courtesy of the researchersResearch, School of Science, Physics, Light, Energy, Quantum computing, Research Laboratory of Electronics Letter regarding the first of six climate change symposia Wed, 04 Sep 2019 16:18:14 -0400 MIT News Office <p><em>The following letter was sent to the MIT community by President L. Rafael Reif.</em></p> <p>To the members of the MIT community,</p> <p>In keeping with MIT’s broad and intensive efforts outlined in our <a href="">Plan for Action on Climate Change</a>, last spring I wrote to let the community know that, this academic year, we will host six symposia focused on climate change and its urgent global challenges.</p> <p><em>The symposia topics, times and locations appear at the end of this short note, and future details will be available at <a href=""></a>.</em></p> <p>Featuring leading experts from MIT and elsewhere, the six symposia will explore the frontier of climate science and policy, highlight innovative efforts to decarbonize everything from electricity to transportation, and consider how research universities can best accelerate progress.</p> <p>The challenge of dramatically stepping up the pace of decarbonization while making sure this transition is sustainable and equitable across society will take all of our collective talents – and the best work of countless MIT minds and hands. We are eager for the symposia to help galvanize our community and plant the seeds for future research, policy and innovation in climate solutions. To that end, I hope many of you will make time to attend.</p> <p>I write now to invite you to the first symposium in the series.
So we can estimate attendance, please <a href="">register here</a>.</p> <p><strong>Progress in Climate Science</strong><br /> Wednesday, October 2<br /> 1:00–4:00 pm<br /> Kresge Auditorium (<a href="">Building W16</a>)</p> <p>Thanks to the leadership of Professor Kerry Emanuel, himself an expert on the science of climate change, this first symposium will include two panels – one on Frontiers in Climate Science, one on Climate Risks – and will begin with keynote remarks from a pioneering climate researcher and eminent member of the MIT faculty, Professor Susan Solomon. It will be my honor to provide the opening remarks.</p> <p>I look forward to seeing many of you on October 2.</p> <p>Sincerely,</p> <p>L. Rafael Reif</p> Community, Faculty, Staff, Climate change, Students, Alternative energy, Energy, Greenhouse gases, Special events and guest speakers, President L. Rafael Reif What’s the best way to cut vehicle greenhouse-gas emissions? Study finds that in some locations, lightweight gas-powered cars could have a bigger emissions-reducing impact than electric ones. Mon, 26 Aug 2019 09:45:43 -0400 David L. Chandler | MIT News Office <p>Policies to encourage reductions in greenhouse gas emissions tend to stress the need to switch as many vehicles as possible to electric power. But a new study by MIT and the Ford Motor Company finds that depending on the location, in some cases an equivalent or even bigger reduction in emissions could be achieved by switching to lightweight conventional (gas-powered) vehicles instead — at least in the near term.</p> <p>The study looked at a variety of factors that can affect the relative performance of these vehicles, including the role of low temperatures in reducing battery performance, regional differences in average number of miles driven annually, and the different mix of generating sources in different parts of the U.S. 
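The interplay of those factors can be sketched with a rough back-of-the-envelope comparison. Everything below is a hypothetical illustration, not data from the study: the grid intensities, EV efficiency, and cold-weather penalty are placeholder numbers (the 8,887 grams of CO2 per gallon figure is the standard EPA value for gasoline combustion).

```python
# Hypothetical per-mile CO2 comparison: battery-electric vs. lightweight
# gasoline. All parameter values are illustrative placeholders.

def ev_grams_per_mile(grid_g_per_kwh, kwh_per_mile, cold_penalty=1.0):
    """Charging emissions per mile; cold_penalty > 1 models the
    battery-efficiency loss at low temperatures."""
    return grid_g_per_kwh * kwh_per_mile * cold_penalty

def gas_grams_per_mile(g_co2_per_gallon, miles_per_gallon):
    """Tailpipe emissions per mile for a gasoline vehicle."""
    return g_co2_per_gallon / miles_per_gallon

# A cleaner coastal grid vs. a coal-heavy, colder Midwest grid (made-up values).
ev_coast = ev_grams_per_mile(grid_g_per_kwh=300, kwh_per_mile=0.30)
ev_midwest = ev_grams_per_mile(grid_g_per_kwh=750, kwh_per_mile=0.30,
                               cold_penalty=1.2)

# Lightweight gasoline car at a hypothetical 40 mpg; 8887 g CO2/gallon is
# the standard EPA combustion figure for gasoline.
lightweight = gas_grams_per_mile(8887, 40)

print(ev_coast, lightweight, ev_midwest)  # the ordering flips by region
```

With the cleaner grid the EV comes out ahead; with the carbon-heavy grid plus a cold-weather penalty, the lightweight gasoline car wins, mirroring the regional flip the study describes.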
The results are being published today in the journal <em>Environmental Science &amp; Technology, </em>in a paper by MIT Principal Research Scientist Randolph Kirchain, recent graduate Di Wu PhD ’18, graduate student Fengdi Guo, and three researchers from Ford.</p> <p>The study combined a variety of datasets to examine the relative impact of different vehicle choices down to a county-by-county level across the nation. It showed that while electric vehicles provide the greatest impact in reducing greenhouse gas emissions for most of the country, especially on both coasts and in the south, significant parts of the Midwest had the opposite result, with lightweight gasoline-powered vehicles achieving a greater reduction.</p> <p>The biggest factor leading to that conclusion was the mix of generating sources going into the grid in different regions, Kirchain says. That mix is “cleaner” on both the East and West coasts, with higher usage of renewable energy sources and relatively low-emissions natural gas, while in the upper Midwest there is still a much higher proportion of coal-burning power plants. That means that even though electric vehicles produce no greenhouse emissions while they are being driven, the process of recharging the car’s batteries results in significant emissions.</p> <p>In those locations, buying a lightweight car, defined as one whose structure is built largely from aluminum or specialized lightweight steel, would actually result in fewer emissions than buying a comparable electric car, the study found.</p> <p>The research was made possible by Ford’s collection of vehicle-performance data from about 30,000 cars, over a total of about 300 million miles of driving. 
They come from conventional midsize gasoline cars, and the researchers used standard modeling techniques to calculate the performance of equivalent vehicles that were either hybrid-electric, battery-electric, or lightweight versions of conventional cars.</p> <p>“We tried to add as much spatial resolution as possible, compared to other studies in the literature, to try to get a sense of the combined effects” of the various factors of temperature, the grid, and driving conditions, Kirchain explains. That combination of data showed, among other things, that “some of the areas with more carbon-heavy grids also happen to be colder, and somewhat more rural,” he says. “All three of those things can tilt emissions in a negative way for electric vehicles” in terms of their impact on reducing emissions. The combined effects are strongest in parts of Wisconsin and Michigan, where lightweight cars would have a significant advantage over EVs in reducing emissions, the study showed.</p> <p>The impact of cold weather on battery performance, he says, “is something that is discussed in the EV literature, but not as much in the popular discussions of the topic.” Conversely, gasoline-powered vehicles suffer an efficiency penalty in urban driving, but they have lower emissions in regions that are more rural and spread out.</p> <p>The car-performance data the team had to work with, thanks to the collaboration with Ford researchers, “was unique,” Kirchain says. “In the past, a ‘large’ study of this type would be a few dozen vehicles,” and those would mainly come from people who volunteered to share their data and therefore were more likely to be concerned about environmental impact.
The extensive Ford data, by contrast, provide “a broader cross-section of drivers and driving conditions.”</p> <p>Kirchain stresses that the intent of this study is not in any way to minimize the importance of switching over ground transportation to electric power in order to curb greenhouse emissions. “We’re not trying to undermine the fact that electrification is the long-term solution — and the short-term solution for most of the country,” he says. But over the next few decades, which is considered a critical period in determining the planet’s climate outcomes, it’s important to know what measures will actually be most effective in reducing carbon emissions in order to set policies and incentives that will produce the best outcomes, he says.</p> <p>The relative advantage of lightweight vehicles compared to electric ones, according to their modeling, “goes down over time, as the grid improves,” he says. “But it doesn’t go away completely until you get to close to 2050 or so.”</p> <p>Lightweight aluminum is now used in the Ford F-150 pickup truck, and in the all-electric Tesla sedans. Currently, there are no high-volume lightweight gasoline-powered midsize cars on the market in the U.S., but they could be built if incentives similar to those used to encourage the production of electric cars were in place, Kirchain suggests.</p> <p>Right now, he says, the U.S. has “a patchwork of regulations and incentives that are providing extra incentives for electrification.” But there are certain parts of the country, he says, where it would make more sense to provide incentives “for any option that provides sufficient fuel savings, not just for electrification,” he says.</p> <p>“At least for the north central part of the country, policymakers should consider a more nuanced approach,” he adds.</p> <p>“This is a significant advance,” says Heather MacLean, professor of civil and mineral engineering at the University of Toronto, who was not associated with this work. 
This study, she says, “illustrates the importance of the regional disaggregation in the analysis, and that if it were absent results would be incorrect. This is an unequivocal call for regional policies that use the latest research to build rational agendas, rather than prescribing overarching global solutions.”</p> <p>This study “demonstrates the complexity in elucidating the electrification benefits for lightweighted vehicles,” says Gregory Keoleian, director of the Center for Sustainable Systems&nbsp; at the University of Michigan, who was not connected to this study. He adds that “The contributions of regional effects such as climate, grid carbon intensities and driving characteristics were carefully mapped to inform carbon reduction strategy for the auto sector.”&nbsp;&nbsp;</p> <p>The research team included Robert De Kleine, Hyung Chul Kim, and Timothy Wallington of the Research and Innovation Center of Ford Motor Company, in Dearborn, Michigan.</p> A new study finds that depending on the location, in some cases an equivalent or even bigger reduction in emissions could be achieved by switching to lightweight conventional (gas-powered) vehicles instead of electric power vehicles.Image: MIT NewsEmissions, Research, Climate change, Efficiency, Energy, Industry, Sustainability, Transportation, Materials Science and Engineering, Automobiles, School of Engineering Artificial intelligence could help data centers run far more efficiently MIT system “learns” how to optimally allocate workloads across thousands of servers to cut costs, save energy. Wed, 21 Aug 2019 16:31:11 -0400 Rob Matheson | MIT News Office <p>A novel system developed by MIT researchers automatically “learns” how to schedule data-processing operations across thousands of servers — a task traditionally reserved for imprecise, human-designed algorithms. 
Doing so could help today’s power-hungry data centers run far more efficiently.</p> <p>Data centers can contain tens of thousands of servers, which constantly run data-processing tasks from developers and users. Cluster scheduling algorithms allocate the incoming tasks across the servers, in real time, to efficiently utilize all available computing resources and get jobs done fast.</p> <p>Traditionally, however, humans fine-tune those scheduling algorithms, based on some basic guidelines (“policies”) and various tradeoffs. They may, for instance, code the algorithm to get certain jobs done quickly or split resources equally between jobs. But workloads&nbsp;—&nbsp;meaning groups of combined tasks — come in all sizes. Therefore, it’s virtually impossible for humans to optimize their scheduling algorithms for specific workloads and, as a result, they often fall short of their true efficiency potential.</p> <p>The MIT researchers instead offloaded all of the manual coding to machines. In a paper being presented at SIGCOMM, they describe a system that leverages “reinforcement learning” (RL), a trial-and-error machine-learning technique, to tailor scheduling decisions to specific workloads in specific server clusters.</p> <p>To do so, they built novel RL techniques that could train on complex workloads. In training, the system tries many possible ways to allocate incoming workloads across the servers, eventually finding an optimal tradeoff between utilizing computational resources and achieving quick processing speeds. No human intervention is required beyond a simple instruction, such as, “minimize job-completion times.”</p> <p>Compared to the best handwritten scheduling algorithms, the researchers’ system completes jobs about 20 to 30 percent faster, and twice as fast during high-traffic times. Mostly, however, the system learns how to compact workloads efficiently to leave little waste.
Results indicate the system could enable data centers to handle the same workload at higher speeds, using fewer resources.</p> <p>“If you have a way of doing trial and error using machines, they can try different ways of scheduling jobs and automatically figure out which strategy is better than others,” says Hongzi Mao, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “That can improve the system performance automatically. And any slight improvement in utilization, even 1 percent, can save millions of dollars and a lot of energy in data centers.”</p> <p>“There’s no one-size-fits-all to making scheduling decisions,” adds co-author Mohammad Alizadeh, an EECS professor and researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “In existing systems, these are hard-coded parameters that you have to decide up front. Our system instead learns to tune its schedule policy characteristics, depending on the data center and workload.”</p> <p>Joining Mao and Alizadeh on the paper are postdocs Malte Schwarzkopf and Shaileshh Bojja Venkatakrishnan, and graduate research assistant Zili Meng, all of CSAIL.</p> <p><strong>RL for scheduling</strong></p> <p>Typically, data processing jobs come into data centers represented as graphs of “nodes” and “edges.” Each node represents some computation task that needs to be done, where the larger the node, the more computation power needed. The edges link tasks that depend on one another. Scheduling algorithms assign nodes to servers, based on various policies.</p> <p>But traditional RL systems are not accustomed to processing such dynamic graphs. These systems use a software “agent” that makes decisions and receives a feedback signal as a reward. Essentially, it tries to maximize its rewards for any given action to learn an ideal behavior in a certain context.
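As a heavily simplified, hypothetical illustration of that trial-and-error idea in a scheduling setting, a bandit-style agent can learn which of several servers finishes jobs fastest. Decima's actual agent is a graph neural network over job graphs; everything here, including the server speeds and the epsilon-greedy rule, is made up for illustration:

```python
import random

random.seed(0)

# Three simulated servers with speeds (jobs/sec) the agent does not know.
SERVER_SPEEDS = [1.0, 2.0, 4.0]

def completion_time(server, job_size):
    """Simulated time to finish a job; the reward is its negative."""
    return job_size / SERVER_SPEEDS[server]

value = [0.0] * len(SERVER_SPEEDS)  # running average reward per server
count = [0] * len(SERVER_SPEEDS)

for _ in range(2000):
    job_size = random.uniform(0.5, 1.5)
    if random.random() < 0.1:   # occasionally explore a random server
        server = random.randrange(len(SERVER_SPEEDS))
    else:                       # otherwise exploit the best estimate so far
        server = max(range(len(SERVER_SPEEDS)), key=lambda s: value[s])
    reward = -completion_time(server, job_size)
    count[server] += 1
    # Incremental running-average update of this server's reward estimate.
    value[server] += (reward - value[server]) / count[server]

best = max(range(len(SERVER_SPEEDS)), key=lambda s: value[s])
print("learned fastest server:", best)
```

Decima faces a far harder version of this problem: its actions assign nodes of a dependency graph to servers, and the reward depends on whole job pipelines rather than single independent jobs.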
They can, for instance, help robots learn to perform a task like picking up an object by interacting with the environment, but that involves processing video or images through a simpler, fixed grid of pixels.</p> <p>To build their RL-based scheduler, called Decima, the researchers had to develop a model that could process graph-structured jobs, and scale to a large number of jobs and servers. Their system’s “agent” is a scheduling algorithm that leverages a graph neural network, commonly used to process graph-structured data. To come up with a graph neural network suitable for scheduling, they implemented a custom component that aggregates information across paths in the graph — such as quickly estimating how much computation is needed to complete a given part of the graph. That’s important for job scheduling, because “child” (lower) nodes cannot begin executing until their “parent” (upper) nodes finish, so anticipating future work along different paths in the graph is central to making good scheduling decisions.</p> <p>To train their RL system, the researchers simulated many different graph sequences that mimic workloads coming into data centers. The agent then makes decisions about how to allocate each node of the graph to a server. For each decision, a component computes a reward based on how well it did at a specific task —&nbsp;such as minimizing the average time it took to process a single job. The agent keeps going, improving its decisions, until it gets the highest reward possible.</p> <p><strong>Baselining workloads</strong></p> <p>One concern, however, is that some workload sequences are more difficult than others to process, because they have larger tasks or more complicated structures. Those will always take longer to process — and, therefore, the reward signal will always be lower — than simpler ones.
But that doesn’t necessarily mean the system performed poorly: It could make good time on a challenging workload but still be slower than it would be on an easier workload. That variability in difficulty makes it challenging for the model to decide what actions are good or not.</p> <p>To address that, the researchers adapted a technique called “baselining” in this context. This technique takes averages of scenarios with a large number of variables and uses those averages as a baseline to compare future results. During training, they computed a baseline for every input sequence. Then, they let the scheduler train on each workload sequence multiple times. Next, the system took the average performance across all of the decisions made for the same input workload. That average is the baseline against which the model could then compare its future decisions to determine if its decisions are good or bad. They refer to this new technique as “input-dependent baselining.”</p> <p>That innovation, the researchers say, is applicable to many different computer systems. “This is a general way to do reinforcement learning in environments where there’s this input process that affects the environment, and you want every training event to consider one sample of that input process,” he says. “Almost all computer systems deal with environments where things are constantly changing.”</p> <p>Aditya Akella, a professor of computer science at the University of Wisconsin at Madison, whose group has designed several high-performance schedulers, found the MIT system could help further improve their own policies. “Decima can go a step further and find opportunities for [scheduling] optimization that are simply too onerous to realize via manual design/tuning processes,” Akella says. “The schedulers we designed achieved significant improvements over techniques used in production in terms of application performance and cluster efficiency, but there was still a gap with the ideal improvements we could possibly achieve.
Decima shows that an RL-based approach can discover [policies] that help bridge the gap further. Decima improved on our techniques by [roughly] 30 percent, which came as a huge surprise.”</p> <p>Right now, their model is trained on simulations that try to recreate incoming online traffic in real time. Next, the researchers hope to train the model on real-time traffic, which could potentially crash the servers. So, they’re currently developing a “safety net” that will stop their system when it’s about to cause a crash. “We think of it as training wheels,” Alizadeh says. “We want this system to continuously train, but it has certain training wheels that if it goes too far we can ensure it doesn’t fall over.”</p> A novel system by MIT researchers automatically “learns” how to allocate data-processing operations across thousands of servers.Research, Computer science and technology, Algorithms, Artificial intelligence, Machine learning, Internet, Networks, Data, Energy, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering Using Wall Street secrets to reduce the cost of cloud infrastructure “Risk-aware” traffic engineering could help service providers such as Microsoft, Amazon, and Google better utilize network infrastructure. Sun, 18 Aug 2019 23:59:59 -0400 Rob Matheson | MIT News Office <p>Stock market investors often rely on financial risk theories that help them maximize returns while minimizing financial loss due to market fluctuations. These theories help investors maintain a balanced portfolio to ensure they’ll never lose more money than they’re willing to part with at any given time.</p> <p>Inspired by those theories, MIT researchers in collaboration with Microsoft have developed a “risk-aware” mathematical model that could improve the performance of cloud-computing networks across the globe.
Notably, cloud infrastructure is extremely expensive and consumes a lot of the world’s energy.</p> <p>Their model takes into account failure probabilities of links between data centers worldwide — akin to predicting the volatility of stocks. Then, it runs an optimization engine to allocate traffic through optimal paths to minimize loss, while maximizing overall usage of the network.</p> <p>The model could help major cloud-service providers — such as Microsoft, Amazon, and Google — better utilize their infrastructure. The conventional approach is to keep links idle to handle unexpected traffic shifts resulting from link failures, which is a waste of energy, bandwidth, and other resources. The new model, called TeaVar, on the other hand, guarantees that for a target percentage of time — say, 99.9 percent — the network can handle all data traffic, so there is no need to keep any links idle. During the remaining 0.1 percent of the time, the model also keeps the data dropped as low as possible.</p> <p>In experiments based on real-world data, the model supported three times the throughput of traditional traffic-engineering methods, while maintaining the same high level of network availability. A <a href="" target="_blank">paper</a> describing the model and results will be presented at the ACM SIGCOMM conference this week.</p> <p>Better network utilization can save service providers millions of dollars, but benefits will “trickle down” to consumers, says co-author Manya Ghobadi, the TIBCO Career Development Assistant Professor in the MIT Department of Electrical Engineering and Computer Science and a researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL).</p> <p>“Having greater utilized infrastructure isn’t just good for cloud services — it’s also better for the world,” Ghobadi says. “Companies don’t have to purchase as much infrastructure to sell services to customers.
Plus, being able to efficiently utilize datacenter resources can save enormous amounts of energy consumption by the cloud infrastructure. So, there are benefits both for the users and the environment at the same time.”</p> <p>Joining Ghobadi on the paper are her students Jeremy Bogle and Nikhil Bhatia, both of CSAIL; Ishai Menache and Nikolaj Bjorner of Microsoft Research; and Asaf Valadarsky and Michael Schapira of Hebrew University.</p> <p><strong>On the money</strong></p> <p>Cloud service providers use networks of fiber-optic cables running underground, connecting data centers in different cities. To route traffic, the providers rely on “traffic engineering” (TE) software that optimally allocates data bandwidth — the amount of data that can be transferred at one time — through all network paths.</p> <p>The goal is to ensure maximum availability to users around the world. But that’s challenging when some links can fail unexpectedly, due to drops in optical signal quality resulting from outages or lines cut during construction, among other factors. To stay robust to failure, providers keep many links at very low utilization, lying in wait to absorb full data loads from downed links.</p> <p>Thus, there’s a tricky tradeoff between network availability and utilization, which would enable higher data throughput. And that’s where traditional TE methods fail, the researchers say. They find optimal paths based on various factors, but never quantify the reliability of links. “They don’t say, ‘This link has a higher probability of being up and running, so that means you should be sending more traffic here,’” Bogle says. “Most links in a network are operating at low utilization and aren’t sending as much traffic as they could be sending.”</p> <p>The researchers instead designed a TE model that adapts core mathematics from “conditional value at risk,” a risk-assessment measure that quantifies the average financial loss in worst-case scenarios.
With investing in stocks, if you have a one-day 99 percent conditional value at risk of $50, your expected loss in the worst-case 1 percent of scenarios on that day is $50. But 99 percent of the time, you’ll do much better. That measure is used for investing in the stock market — which is notoriously difficult to predict.</p> <p>“But the math is actually a better fit for our cloud infrastructure setting,” Ghobadi says. “Mostly, link failures are due to the age of equipment, so the probabilities of failure don’t change much over time. That means our probabilities are more reliable, compared to the stock market.”</p> <p><strong>Risk-aware model</strong></p> <p>In networks, data bandwidth shares are analogous to invested “money,” and network links, with their different probabilities of failure, are the “stocks” with uncertain, changing values. Using the underlying formulas, the researchers designed a “risk-aware” model that, like its financial counterpart, guarantees data will reach its destination 99.9 percent of the time, but keeps traffic loss at a minimum during the 0.1 percent worst-case failure scenarios. That allows cloud providers to tune the availability-utilization tradeoff.</p> <p>The researchers statistically mapped three years’ worth of network signal strength from Microsoft’s network that connects its data centers to a probability distribution of link failures. The input is the network topology in a graph, with source-destination flows of data connected through lines (links) and nodes (cities), with each link assigned a bandwidth.</p> <p>Failure probabilities were obtained by checking the signal quality of every link every 15 minutes. If the signal quality ever dipped below a receiving threshold, they considered that a link failure. Anything above meant the link was up and running. From that, the model generated an average time that each link was up or down, and calculated a failure probability — or “risk” — for each link at each 15-minute time window.
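The core calculation behind that $50 example is simple enough to sketch directly. The loss values below are hypothetical, and in TeaVar's setting "loss" is dropped traffic rather than dollars:

```python
# Minimal sketch of conditional value at risk (CVaR): the average loss
# over the worst (1 - beta) fraction of scenarios. Values are hypothetical.

def cvar(losses, beta=0.99):
    """Average of the worst (1 - beta) share of the given losses."""
    ordered = sorted(losses, reverse=True)           # worst losses first
    k = max(1, int(round(len(ordered) * (1 - beta))))
    return sum(ordered[:k]) / k

# 1,000 hypothetical daily outcomes: 990 mild $1 days and 10 bad $50 days.
losses = [1.0] * 990 + [50.0] * 10
print(cvar(losses, beta=0.99))  # → 50.0, matching the one-day $50 example
```

TeaVar's optimization then chooses bandwidth allocations so that this tail-average loss stays as small as possible while the 99.9 percent case carries full traffic.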
From those data, it was able to predict when risky links would fail at any given window of time.</p> <p>The researchers tested the model against other TE software on simulated traffic sent through networks from Google, IBM, AT&amp;T, and others that span the world. The researchers created various failure scenarios based on their probability of occurrence. Then, they sent simulated and real-world data demands through the network and cued their models to start allocating bandwidth.</p> <p>The researchers’ model kept reliable links working to near full capacity, while steering data clear of riskier links. Compared with traditional approaches, their model ran three times as much data through the network, while still ensuring all data got to its destination. The code is <a href="" target="_blank">freely available on GitHub</a>.</p> MIT researchers have developed a “risk-aware” model that could improve the performance of cloud-computing networks across the U.S.Research, Computer science and technology, Algorithms, Energy, Data, Internet, Networks, Finance, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering Removing carbon dioxide from power plant exhaust MIT researchers are developing a battery that could both capture carbon dioxide in power plant exhaust and convert it to a solid ready for safe disposal. Mon, 29 Jul 2019 13:30:01 -0400 Nancy W. Stauffer | MIT Energy Initiative <div> <p>Reducing carbon dioxide (CO<sub>2</sub>) emissions from power plants is widely considered an essential component of any climate change mitigation plan. Many research efforts focus on developing and deploying carbon capture and sequestration (CCS) systems to keep CO<sub>2</sub>&nbsp;emissions from power plants out of the atmosphere. But separating the captured CO<sub>2</sub>&nbsp;and converting it back into a gas that can be stored can consume up to 25 percent of a plant’s power-generating capacity.
In addition, the CO<sub>2</sub>&nbsp;gas is generally injected into underground geological formations for long-term storage — a disposal method whose safety and reliability remain unproven.&nbsp;</p> </div> <div> <p>A better approach would be to convert the captured CO<sub>2</sub>&nbsp;into useful products such as value-added fuels or chemicals. To that end, attention has focused on electrochemical processes — in this case, a process in which chemical reactions release electrical energy, as in the discharge of a battery. The ideal medium in which to conduct electrochemical conversion of CO<sub>2</sub>&nbsp;would appear to be water. Water can provide the protons (positively charged particles) needed to make fuels such as methane. But running such “aqueous” (water-based) systems requires large energy inputs, and only a small fraction of the products formed are typically those of interest.&nbsp;</p> </div> <div> <p><a class="Hyperlink SCXW146141016 BCX0" href="" rel="noreferrer" style="margin: 0px; padding: 0px; user-select: text; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; text-decoration-line: none; color: inherit;" target="_blank">Betar Gallant</a>, an assistant professor of mechanical engineering, and her group at MIT have therefore been focusing on non-aqueous (water-free) electrochemical reactions — in particular, those that occur inside lithium-CO<sub>2</sub>&nbsp;batteries.&nbsp;</p> </div> <div> <p>Research into lithium-CO<sub>2</sub>&nbsp;batteries&nbsp;is&nbsp;in its very early stages, according to Gallant, but interest in them is growing because CO<sub>2</sub>&nbsp;is used up in the chemical reactions that occur on one of the electrodes as the battery is being discharged. However, CO<sub>2</sub>&nbsp;isn’t very reactive. Researchers have tried to speed things up by using different electrolytes and electrode materials. 
Despite such efforts, the need to use expensive metal catalysts to elicit electrochemical activity has persisted.&nbsp;</p> </div> <div> <p>Given the lack of progress, Gallant wanted to try something different. “We were interested in trying to bring a new chemistry to bear on the problem,” she says. And enlisting the help of the sorbent molecules that so effectively capture CO<sub>2</sub>&nbsp;in CCS seemed like a promising way to go.&nbsp;</p> </div> <div> <p><strong>Rethinking amine&nbsp;</strong></p> </div> <div> <p>The sorbent molecule used in CCS is an amine, a derivative of ammonia. In CCS, exhaust is bubbled through an amine-containing solution, and the amine chemically binds the CO<sub>2</sub>, removing it from the exhaust gases. The CO<sub>2</sub> — now in liquid form — is then separated from the amine and converted back to a gas for disposal.&nbsp;</p> </div> <div> <p>In CCS, those last steps require high temperatures, which are attained using some of the electrical output of the power plant. Gallant wondered whether her team could instead use electrochemical reactions to separate the CO<sub>2</sub>&nbsp;from the amine — and then continue the reaction to make a solid, CO<sub>2</sub>-containing product. If so, the disposal process would be simpler than it is for gaseous CO<sub>2</sub>. The CO<sub>2</sub>&nbsp;would be more densely packed, so it would take up less space, and it couldn’t escape, so it would be safer. Better still, additional electrical energy could be extracted from the device as it discharges and forms the solid material. “The vision was to put a battery-like device into the power plant waste stream to sequester the captured CO<sub>2</sub>&nbsp;in a stable solid, while harvesting the energy released in the process,” says Gallant.&nbsp;</p> </div> <div> <p>Research on CCS technology has generated a good understanding of the carbon-capture process that takes place inside a CCS system. 
When CO<sub>2</sub>&nbsp;is added to an amine solution, molecules of the two species spontaneously combine to form an “adduct,” a new chemical species in which the original molecules remain largely intact. In this case, the adduct forms when a carbon atom in a CO<sub>2</sub>&nbsp;molecule chemically bonds with a nitrogen atom in an amine molecule. As they combine, the CO<sub>2</sub>&nbsp;molecule is reconfigured: It changes from its original, highly stable, linear form to a “bent” shape with a negative charge — a highly reactive form that’s ready for further reaction.&nbsp;</p> </div> <div> <p>In her scheme, Gallant proposed using electrochemistry to break apart the CO<sub>2</sub>-amine adduct — right at the carbon-nitrogen bond. Cleaving the adduct at that bond would separate the two pieces: the amine in its original, unreacted state, ready to capture more CO<sub>2</sub>, and the bent, chemically reactive form of CO<sub>2</sub>, which might then react with the electrons and positively charged lithium ions that flow during battery discharge. The outcome of that reaction could be the formation of lithium carbonate (Li<sub>2</sub>CO<sub>3</sub>), which would deposit on the carbon electrode.&nbsp;<br /> &nbsp;<br /> At the same time, the reactions on the carbon electrode should promote the flow of electrons during battery discharge — even without a metal catalyst. “The discharge of the battery would occur spontaneously,” Gallant says. “And we’d break the adduct in a way that allows us to renew our CO<sub>2</sub>&nbsp;absorber while taking CO<sub>2</sub>&nbsp;to a stable, solid form.”&nbsp;</p> </div> <div> <p><strong>A process of discovery</strong>&nbsp;</p> </div> <div> <p>In 2016, Gallant and mechanical engineering doctoral student Aliza Khurram began to explore that idea.&nbsp;</p> </div> <div> <p>Their first challenge was to develop a novel electrolyte. 
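The capture-and-cleave scheme described above can be summarized schematically. The first line is the well-known amine capture step, written for a generic secondary amine R<sub>2</sub>NH; the second is the conventional lithium-CO<sub>2</sub> battery discharge reaction reported in the literature, shown for context only, since the product stoichiometry of the amine-mediated pathway may differ:

```latex
\begin{align*}
\mathrm{R_2NH} + \mathrm{CO_2} &\longrightarrow \mathrm{R_2N\!-\!COOH}
  && \text{(adduct: new C--N bond; bent, activated CO}_2\text{)} \\
4\,\mathrm{Li} + 3\,\mathrm{CO_2} &\longrightarrow 2\,\mathrm{Li_2CO_3} + \mathrm{C}
  && \text{(conventional Li--CO}_2\text{ discharge)}
\end{align*}
```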
A lithium-CO<sub>2</sub>&nbsp;battery consists of two electrodes — an anode made of lithium and a cathode made of carbon — and an electrolyte, a solution that helps carry charged particles back and forth between the electrodes as the battery is charged and discharged. For their system, they needed an electrolyte made of amine plus captured CO<sub>2</sub>&nbsp;dissolved in a solvent — and it needed to promote chemical reactions on the carbon cathode as the battery discharged.&nbsp;</p> </div> <div> <p>They started by testing possible solvents. They mixed their CO<sub>2</sub>-absorbing amine with a series of solvents frequently used in batteries and then bubbled CO<sub>2</sub>&nbsp;through the resulting solution to see if CO<sub>2</sub>&nbsp;could be dissolved at high concentrations in this unconventional chemical environment. None of the amine-solvent solutions exhibited observable changes when the CO<sub>2</sub>&nbsp;was introduced, suggesting that they might all be viable solvent candidates.&nbsp;</p> </div> <div> <p>However, for any electrochemical device to work, the electrolyte must be spiked with a salt to provide positively charged ions. Because it’s a lithium battery, the researchers started by adding a lithium-based salt — and the experimental results changed dramatically. With most of the solvent candidates, adding the salt instantly caused the mixture either to form solid precipitates or to become highly viscous — outcomes that ruled them out as viable solvents. The sole exception was the solvent dimethyl sulfoxide, or DMSO. Even when the lithium salt was present, the DMSO could dissolve the amine and CO<sub>2</sub>.&nbsp;</p> </div> <div> <p>“We found that — fortuitously — the lithium-based salt was important in enabling the reaction to proceed,” says Gallant. 
“There’s something about the positively charged lithium ion that chemically coordinates with the amine-CO<sub>2</sub>&nbsp;adduct, and together those species make the electrochemically reactive species.”&nbsp;</p> </div> <div> <p><strong>Exploring battery behavior during discharge</strong>&nbsp;</p> </div> <div> <p>To examine the discharge behavior of their system, the researchers set up an electrochemical cell consisting of a lithium anode, a carbon cathode, and their special electrolyte — for simplicity, already loaded with CO<sub>2</sub>. They then tracked discharge behavior at the carbon cathode.&nbsp;</p> </div> <div> <p>As they had hoped, their special electrolyte actually promoted the discharge reaction in the test cell. “With the amine incorporated into the DMSO-based electrolyte along with the lithium salt and the CO<sub>2</sub>, we see very high capacities and significant discharge voltages — almost three volts,” says Gallant. Based on those results, they concluded that their system functions as a lithium-CO<sub>2</sub>&nbsp;battery with capacities and discharge voltages competitive with those of state-of-the-art lithium-gas batteries.&nbsp;</p> </div> <div> <p>The next step was to confirm that the reactions were indeed separating the amine from the CO<sub>2</sub>&nbsp;and further continuing the reaction to make CO<sub>2</sub>-derived products. To find out, the researchers used a variety of tools to examine the products that formed on the carbon cathode.&nbsp;</p> </div> <div> <p>In one test, they produced images of the post-reaction cathode surface using a scanning electron microscope (SEM). Immediately evident were spherical formations with a characteristic size of 500 nanometers, regularly distributed on the surface of the cathode. According to Gallant, the observed spherical structure of the discharge product was similar to the shape of Li<sub>2</sub>CO<sub>3</sub>&nbsp;observed in other lithium-based batteries.
Those spheres were not evident in SEM images of the “pristine” carbon cathode taken before the reactions occurred.&nbsp;<br /> &nbsp;<br /> Other analyses confirmed that the solid deposited on the cathode was Li<sub>2</sub>CO<sub>3</sub>. It included only CO<sub>2</sub>-derived materials; no amine molecules or products derived from them were present. Taken together, those data provide strong evidence that the electrochemical reduction of the CO<sub>2</sub>-loaded amine occurs through the selective cleavage of the carbon-nitrogen bond.&nbsp;</p> </div> <div> <p>“The amine can be thought of as effectively switching on the reactivity of the CO<sub>2</sub>,” says Gallant. “That’s exciting because the amine commonly used in CO<sub>2</sub>&nbsp;capture can then perform two critical functions. It can serve as the absorber, spontaneously retrieving CO<sub>2</sub>&nbsp;from combustion gases and incorporating it into the electrolyte solution. And it can activate the CO<sub>2</sub>&nbsp;for further reactions that wouldn’t be possible if the amine were not there.”&nbsp;<br /> &nbsp;<br /> <strong>Future directions</strong>&nbsp;</p> </div> <div> <p>Gallant stresses that the work to date represents just a proof-of-concept study. “There’s a lot of fundamental science still to understand,” she says, before the researchers can optimize their system.&nbsp;</p> </div> <div> <p>She and her team are continuing to investigate the chemical reactions that take place in the electrolyte as well as the chemical makeup of the adduct that forms — the “reactant state” on which the subsequent electrochemistry is performed. They are also examining the detailed role of the salt composition.&nbsp;</p> </div> <div> <p>In addition, there are practical concerns to consider as they think about device design. One persistent problem is that the solid deposit quickly clogs up the carbon cathode, so further chemical reactions can’t occur. 
In one configuration they’re investigating — a rechargeable battery design — the cathode is uncovered during each discharge-charge cycle. Reactions during discharge deposit the solid Li<sub>2</sub>CO<sub>3</sub>, and reactions during charging lift it off, putting the lithium ions and CO<sub>2</sub>&nbsp;back into the electrolyte, ready to react and generate more electricity. However, the captured CO<sub>2</sub>&nbsp;is then back in its original gaseous form in the electrolyte. Sealing the battery would lock that CO<sub>2</sub>&nbsp;inside, away from the atmosphere — but only so much CO<sub>2</sub>&nbsp;can be stored in a given battery, so the overall impact of using batteries to capture CO<sub>2</sub>&nbsp;emissions would be limited in this scenario.&nbsp;</p> </div> <div> <p>The second configuration the researchers are investigating — a discharge-only setup — addresses that problem by never allowing the gaseous CO<sub>2</sub>&nbsp;to re-form. “We’re mechanical engineers, so what we’re really keen on doing is developing an industrial process where you can somehow mechanically or chemically harvest the solid as it forms,” Gallant says. “Imagine if by mechanical vibration you could gently remove the solid from the cathode, keeping it clear for sustained reaction.” Placed within an exhaust stream, such a system could continuously remove CO<sub>2</sub>&nbsp;emissions, generating electricity and perhaps producing valuable solid materials at the same time.&nbsp;</p> </div> <div> <p>Gallant and her team are now working on both configurations of their system. “We don’t know which is better for applications yet,” she says. While she believes that practical lithium-CO<sub>2</sub>&nbsp;batteries are still years away, she’s excited by the early results, which suggest that developing novel electrolytes to pre-activate CO<sub>2</sub>&nbsp;could lead to alternative CO<sub>2</sub>&nbsp;reaction pathways. 
And she and her group are already working on some.&nbsp;</p> </div> <div> <p>One goal is to replace the lithium with a metal that’s less costly and more earth-abundant, such as sodium or calcium. With&nbsp;<a class="Hyperlink SCXW146141016 BCX0" href="" rel="noreferrer" style="margin: 0px; padding: 0px; user-select: text; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; text-decoration-line: none; color: inherit;" target="_blank">seed funding</a>&nbsp;from the MIT Energy Initiative, the team has already begun looking at a system based on calcium, a material that’s not yet well-developed for battery applications. If the calcium-CO<sub>2</sub>&nbsp;setup works as they predict, the solid that forms would be calcium carbonate — a type of rock now widely used in the construction industry.&nbsp;</p> </div> <div> <p>In the meantime, Gallant and her colleagues are pleased that they have found what appears to be a new class of reactions for capturing and sequestering CO<sub>2</sub>. “CO<sub>2</sub>&nbsp;conversion has been widely studied over many decades,” she says, “so we’re excited to think we may have found something that’s different and provides us with a new window for exploring this topic.”&nbsp;</p> </div> <div> <p>This research was supported by startup funding from the&nbsp;<a href="">MIT Department of Mechanical Engineering</a>.&nbsp;<a class="Hyperlink SCXW146141016 BCX0" href="" rel="noreferrer" style="margin: 0px; padding: 0px; user-select: text; -webkit-user-drag: none; -webkit-tap-highlight-color: transparent; text-decoration-line: none; color: inherit;" target="_blank">Mingfu He</a>, a postdoc in mechanical engineering, also contributed to the research. 
Work on a calcium-based battery is being supported by the MIT Energy Initiative&nbsp;<a href="">Seed Fund Program</a>.</p> </div> <div> <p><em>This article appears in the&nbsp;<a href="">Spring 2019</a>&nbsp;issue of </em>Energy Futures<em>, the magazine of the MIT Energy Initiative.</em></p> </div> MIT Assistant Professor Betar Gallant (left) and graduate student Aliza Khurram are developing a novel battery that could both capture carbon dioxide in power plant exhaust and convert it to a solid ready for safe disposal. Photo: Stuart DarschMIT Energy Initiative (MITEI), Mechanical engineering, School of Engineering, Carbon dioxide, Carbon Emissions, Carbon sequestration, Climate change, Research, Batteries, Sustainability, Carbon, Energy, Faculty, Students, Graduate, postdoctoral A vision of nuclear energy buoyed by molten salt NSE graduate student Kieran Dolan tackles a critical technical challenge to fluoride-salt-cooled high-temperature nuclear reactors. Wed, 24 Jul 2019 14:00:01 -0400 Leda Zimmerman | Department of Nuclear Science and Engineering <p>Years before he set foot on the MIT campus, Kieran P. Dolan participated in studies conducted at MIT's Nuclear Reactor Laboratory (NRL). 
As an undergraduate student majoring in nuclear engineering at the University of Wisconsin at Madison, Dolan worked on components and sensors for MIT Reactor (MITR)-based experiments integral to designing fluoride-salt-cooled high-temperature nuclear reactors, known as FHRs.</p> <p>Today, as a second-year doctoral student in MIT's Department of Nuclear Science and Engineering, Dolan is a hands-on investigator at the NRL, deepening his research engagement with this type of next-generation reactor.</p> <p>"I've been interested in advanced reactors for a long time, so it's been really nice to stay with this project and learn from people working here on-site," says Dolan.&nbsp;</p> <p>This series of studies on FHRs is part of a multiyear collaboration among MIT, the University of Wisconsin at Madison, and the University of California at Berkeley, funded by an Integrated Research Project (IRP) Grant from the U.S. Department of Energy (DOE). The nuclear energy community sees great promise in the FHR concept because molten salt transfers heat very efficiently, enabling such advanced reactors to run at higher temperatures and with several unique safety features compared to the current fleet of water-cooled commercial reactors.<br /> &nbsp;<br /> "Molten salt reactors offer an approach to nuclear energy that is both economically viable and safe," says Dolan.</p> <p>For the purposes of the FHR project, the MITR reactor simulates the likely operating environment of a working advanced reactor, complete with high temperatures in the experimental capsules. The FHR concept Dolan has been testing envisions billiard-ball-sized composites of fuel particles suspended within a circulating flow of molten salt — a special blend of lithium fluoride and beryllium fluoride called flibe. 
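A physics aside, using standard textbook reactions rather than anything stated in the article: the lithium in flibe absorbs fission neutrons through two well-known channels, which is the origin of the tritium problem discussed below:

```latex
\begin{align*}
{}^{6}\mathrm{Li} + n &\;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} && \text{(thermal neutrons; exothermic)} \\
{}^{7}\mathrm{Li} + n &\;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n' && \text{(fast neutrons; endothermic)}
\end{align*}
```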
This salt river constantly absorbs and distributes the heat produced by the fuel's fission reactions.&nbsp;<br /> &nbsp;<br /> But there is a formidable technical challenge to the salt coolants used in FHRs. "The salt reacts with the neutrons released during fission, and produces tritium," explains Dolan. "Tritium is one of hydrogen’s isotopes, which are notorious for permeating metal." Tritium is a potential hazard if it gets into water or air. "The worry is that tritium might escape as a gas through an FHR's heat exchanger or other metal components."</p> <p>There is a potential workaround to this problem: graphite, which can trap fission products and suck up tritium before it escapes the confines of a reactor. "While people have determined that graphite can absorb a significant quantity of hydrogen, no one knows with certainty where the tritium is going to end up in the reactor,” says Dolan. So, he is focusing his doctoral research on MITR experiments to determine how effectively graphite performs as a sponge for tritium — a critical element required to model tritium transport in the complete reactor system.&nbsp;&nbsp;</p> <p>"We want to predict where the tritium goes and find the best solution for containing it and extracting it safely, so we can achieve optimal performance in flibe-based reactors," he says.</p> <p>While it's early, Dolan has been analyzing the results of three MITR experiments subjecting various types of specialized graphite samples to neutron irradiation in the presence of molten salt. "Our measurements so far indicate a significant amount of tritium retention by graphite," he says. "We're in the right ballpark."</p> <p>Dolan never expected to be immersed in the electrochemistry of salts, but it quickly became central to his research portfolio. Enthused by math and physics during high school in Brookfield, Wisconsin, he swiftly oriented toward nuclear engineering in college. 
"I liked the idea of making useful devices, and I was especially interested in nuclear physics with practical applications, such as power plants and energy," he says.</p> <p>At UW Madison, he earned a spot in an engineering physics material research group engaged in the FHR project, and he assisted in purifying flibe coolants, designing and constructing probes for measuring salt's corrosive effect on reactor parts, and experimenting on the electrochemical properties of molten fluoride salts. Working with&nbsp;<a href="">Exelon Generation</a>&nbsp;as a reactor engineer after college convinced him he was more suited for research in next-generation projects than in the day-to-day maintenance and operation of a commercial nuclear plant.&nbsp;</p> <p>"I was interested in innovation and improving things," he says. "I liked being part of the FHR IRP, and while I didn't have a passion for electrochemistry, I knew it would be fun working on a solution that could advance a new type of reactor."</p> <p>Familiar with the goals of the FHR project, MIT facilities, and personnel, Dolan was able to jump rapidly into studies analyzing MITR's irradiated graphite samples. Under the supervision of&nbsp;<a href="">Lin-wen Hu</a>, his advisor and NRL research director, as well as MITR engineers&nbsp;<a href="">David Carpenter</a>&nbsp;and&nbsp;<a href="">Gordon Kohse</a>, Dolan came up to speed in reactor protocol. He's found on-site participation in experiments thrilling.</p> <p>"Standing at the top of the reactor as it starts and the salt heats up, anticipating when the tritium comes out, manipulating the system to look at different areas, and then watching the measurements come in — being involved with that is really interesting in a hands-on way," he says.&nbsp;</p> <p>For the immediate future, "the main focus is getting data," says Dolan. 
But eventually "the data will predict what happens to tritium in different conditions, which should be the main driving force determining what to do in actual commercial FHR reactor designs."</p> <p>For Dolan, contributing to this next phase of advanced reactor development would prove the ideal next step following his doctoral work. This past summer, Dolan interned at&nbsp;<a href="">Kairos Power</a>, a nuclear startup company formed by the UC Berkeley collaborators on two DOE-funded FHR IRPs. Kairos Power continues to develop FHR technology by leveraging major strategic investments that the DOE has made at universities and national laboratories, and has recently started collaborating with MIT.&nbsp;&nbsp;</p> <p>"I've built up a lot of experience in FHRs so far, and there's a lot of interest at MIT and beyond in reactors using molten salt concepts," he says. "I will be happy to apply what I've learned to help accelerate a new generation of safe and efficient reactors."</p> "I've been interested in advanced reactors for a long time, so it's been really nice to stay with this project and learn from people working here on site," says Kieran Dolan. Photo: Gretchen ErtlNuclear science and engineering, School of Engineering, Profile, Students, Nuclear power and reactors, Energy, graduate, Graduate, postdoctoral, Nuclear Reactor Lab, Renewable energy Enriching solid-state batteries MIT researchers demonstrate a method to make a smaller, safer, and faster lithium-rich ceramic electrolyte. 
Thu, 11 Jul 2019 13:05:01 -0400 Denis Paiste | Materials Research Laboratory <p>Researchers at MIT have come up with a new pulsed laser deposition technique to make thinner lithium electrolytes using less heat, promising faster charging and potentially higher-voltage solid-state lithium ion batteries.</p> <p>Key to the new technique for processing the solid-state battery electrolyte is alternating layers of the active electrolyte lithium garnet component (chemical formula, Li<sub>6.25</sub>Al<sub>0.25</sub>La<sub>3</sub>Zr<sub>2</sub>O<sub>12</sub>, or LLZO) with layers of lithium nitride (chemical formula <a href=";p_value=26134-62-3">Li<sub>3</sub>N</a>). First, these layers are built up like a wafer cookie using a pulsed laser deposition process at about 300 degrees Celsius (572 degrees Fahrenheit). Then they are heated to 660 C and slowly cooled, a process known as annealing.</p> <p>During the annealing process, nearly all of the nitrogen atoms burn off into the atmosphere and the lithium atoms from the original nitride layers fuse into the lithium garnet, forming a single lithium-rich, ceramic thin film. The extra lithium content in the garnet film allows the material to retain the cubic structure needed for positively charged lithium ions (cations) to move quickly through the electrolyte. The findings were reported in a&nbsp;<em>Nature Energy&nbsp;</em><a href="">paper</a>&nbsp;published online recently by MIT Associate Professor Jennifer L. M. Rupp and her students Reto Pfenninger, Michal M. Struzik, Inigo Garbayo, and collaborator Evelyn Stilp.</p> <p>“The really cool new thing is that we found a way to bring the lithium into the film at deposition by using lithium nitride as an internal lithiation source,” Rupp, the work's senior author, says. 
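The role of the lithium nitride layers described above can be written schematically; this is an interpretive summary of the annealing step, not an equation from the paper:

```latex
2\,\mathrm{Li_3N} \;\xrightarrow{\;\approx 660\,^{\circ}\mathrm{C}\;}\;
6\,\mathrm{Li}\;(\text{into the garnet film}) \;+\; \mathrm{N_2}\!\uparrow
```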
Rupp holds joint MIT appointments in the departments of Materials Science and Engineering and Electrical Engineering and Computer Science.</p> <p>“The second trick to the story is that we use lithium nitride, which is close in bandgap to the laser that we use in the deposition, whereby we have a very fast transfer of the material, which is another key factor to not lose lithium to evaporation during a pulsed laser deposition,” Rupp explains.</p> <p><strong>Safer technology</strong></p> <p>Lithium batteries with commonly used electrolytes made by combining a liquid and a polymer can pose a fire risk when the liquid is exposed to air. Solid-state batteries are desirable because they replace the commonly used liquid polymer electrolytes in consumer lithium batteries with a solid material that is safer. “So we can kick that out, bring something safer in the battery, and decrease the electrolyte component in size by a factor of 100 by going from the polymer to the ceramic system,” Rupp explains.</p> <p>Although other methods to produce lithium-rich ceramic materials on larger pellets or tapes, heated using a process called sintering, can yield a dense microstructure that retains a high lithium concentration, they require higher heat and result in bulkier material. The new technique pioneered by Rupp and her students produces a thin film that is about 330 nanometers thick (less than 1.5 hundred-thousandths of an inch). “Having a thin film structure instead of a thick ceramic is attractive for battery electrolyte in general because it allows you to have more volume in the electrodes, where you want to have the active storage capacity. 
So the holy grail is: be thin and be fast,” she says.</p> <p>Compared to the classic ceramic coffee mug, which under high magnification shows metal oxide particles with a grain size of tens to hundreds of microns, the lithium (garnet) oxide thin films processed using Rupp’s methods show nanometer scale grain structures that are one-thousandth to one-ten-thousandth the size. That means Rupp can engineer thinner electrolytes for batteries. “There is no need in a solid-state battery to have a large electrolyte,” she says.</p> <p><strong>Faster ionic conduction</strong></p> <p>Instead, what is needed is an electrolyte with faster conductivity. Lithium ion conductivity is measured in siemens per centimeter. The new multilayer deposition technique produces a lithium garnet (LLZO) material that shows the fastest ionic conductivity yet for a lithium-based electrolyte compound, about 2.9 x 10<sup>-5</sup> siemens (0.000029 siemens) per centimeter. This ionic conductivity is competitive with solid-state lithium battery thin film electrolytes based on LIPON (lithium phosphorus oxynitride electrolytes) and adds a new film electrolyte material to the landscape.</p> <p>“Having the lithium electrolyte as a solid-state very fast conductor allows you to dream out loud of anything else you can do with fast lithium motion,” Rupp says.</p> <p>A battery’s energy is stored in its electrodes. The work points the way toward higher-voltage batteries based on lithium garnet electrolytes, both because its lower processing temperature opens the door to using materials for higher-voltage cathodes that would be unstable at higher processing temperatures, and because its smaller electrolyte size allows physically larger cathode volume in the same battery size.</p> <p>Co-authors Michal Struzik and Reto Pfenninger carried out processing and Raman spectroscopy measurements on the lithium garnet material.
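A quick back-of-the-envelope calculation shows what a 330-nanometer film at 2.9 x 10<sup>-5</sup> siemens per centimeter buys: the area-specific resistance is about 1 ohm-square-centimeter. The 500-micrometer pellet used for comparison here is a hypothetical thickness for a sintered sample, not a number from the article:

```python
def area_specific_resistance(thickness_cm, conductivity_s_per_cm):
    """Area-specific resistance (ohm * cm^2) of a planar electrolyte
    layer: ASR = L / sigma."""
    return thickness_cm / conductivity_s_per_cm

SIGMA = 2.9e-5   # ionic conductivity from the article, S/cm
NM = 1e-7        # 1 nanometer in centimeters
UM = 1e-4        # 1 micrometer in centimeters

film_asr = area_specific_resistance(330 * NM, SIGMA)    # ~1.1 ohm*cm^2
pellet_asr = area_specific_resistance(500 * UM, SIGMA)  # ~1,700 ohm*cm^2

# The thin film cuts the electrolyte's resistance contribution by a
# factor of roughly 1,500 relative to the hypothetical pellet.
ratio = pellet_asr / film_asr
```

The same conductivity in a film geometry thus contributes far less cell resistance, which is the practical payoff of "be thin and be fast."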
Those Raman measurements were key to showing the material’s fast conduction at room temperature, as well as to understanding the evolution of its different structural phases.</p> <p>“One of the main challenges in understanding the development of the crystal structure in LLZO was to develop appropriate methodology. We have proposed a series of experiments to observe development of the crystal structure in the [LLZO] thin film from disordered or 'amorphous' phase to fully crystalline, highly conductive phase utilizing Raman spectroscopy upon thermal annealing under controlled atmospheric conditions,” says co-author&nbsp;<a href="">Struzik</a>, who was a postdoc working at ETH Zurich and MIT with Rupp’s group, and is now a professor at Warsaw University of Technology in Poland. “That allowed us to observe and understand how the crystal phases are developed and, as a consequence, the ionic conductivity improved,” he explains.</p> <p>Their work shows that during the annealing process, lithium garnet evolves from the amorphous phase in the initial multilayer processed at 300 C through progressively higher temperatures to a low-conducting tetragonal phase in a temperature range from about 585 C to 630 C, and to the desired highly conducting cubic phase after annealing at 660 C. Notably, this temperature of 660 C to achieve the highly conducting phase in the multilayer approach is nearly 400 C lower than the 1,050 C needed to achieve it with prior sintering methods using pellets or tapes.</p> <p>“One of the greatest challenges facing the realization of solid-state batteries lies in the ability to fabricate such devices.
It is tough to bring the manufacturing costs down to meet commercial targets that are competitive with today's liquid-electrolyte-based lithium-ion batteries, and one of the main reasons is the need to use high temperatures to process the ceramic solid electrolytes,” says Professor Peter Bruce, the Wolfson Chair of the Department of Materials at Oxford University, who was not involved in this research.</p> <p>“This important paper reports a novel and imaginative approach to addressing this problem by reducing the processing temperature of garnet-based solid-state batteries by more than half — that is, by hundreds of degrees,” Bruce adds. “Normally, high temperatures are required to achieve sufficient solid-state diffusion to intermix the constituent atoms of ceramic electrolyte. By interleaving lithium layers in an elegant nanostructure the authors have overcome this barrier.”</p> <p>After demonstrating the novel processing and high conductivity of the lithium garnet electrode, the next step will be to test the material in an actual battery to explore how the material reacts with a battery cathode and how stable it is. “There is still a lot to come,” Rupp predicts.</p> <p><strong>Understanding aluminum dopant sites</strong></p> <p>A small fraction of aluminum is added to the lithium garnet formulation because aluminum is known to stabilize the highly conductive cubic phase in this high-temperature ceramic. The researchers complemented their Raman spectroscopy analysis with another technique, known as negative-ion time-of-flight secondary ion mass spectrometry (TOF-SIMS), which shows that the aluminum retains its position at what were originally the interfaces between the lithium nitride and lithium garnet layers before the heating step expelled the nitrogen and fused the material.</p> <p>“When you look at large-scale processing of pellets by sintering, then everywhere where you have a grain boundary, you will find close to it a higher concentration of aluminum. 
So we see a replica of that in our new processing, but on a smaller scale at the original interfaces,” Rupp says. “These little things are what adds up, also, not only to my excitement in engineering but my excitement as a scientist to understand phase formations, where that goes and what that does,” Rupp says.</p> <p>“Negative TOF-SIMS was indeed challenging to measure since it is more common in the field to perform this experiment with focus on positively charged ions,” explains&nbsp;<a href="">Pfenninger</a>, who worked at ETH Zurich and MIT with Rupp’s group. “However, for the case of the negatively charged nitrogen atoms we could only track it in this peculiar setup. The phase transformations in thin films of LLZO have so far not been investigated in temperature-dependent Raman spectroscopy — another insight towards the understanding thereof.”</p> <p>The paper’s other authors are&nbsp;<a href="">Inigo Garbayo</a>, who is now at&nbsp;<a href="">CIC EnergiGUNE</a>&nbsp;in Minano, Spain, and Evelyn Stilp, who was then with&nbsp;<a href="">Empa</a>, Swiss Federal Laboratories for Materials Science and Technology, in Dubendorf, Switzerland.</p> <p>Rupp began this research while serving as a professor of electrochemical materials at&nbsp;<a href="">ETH Zurich</a>&nbsp;(the Swiss Federal Institute of Technology) before she joined the MIT faculty in February 2017. MIT and ETH have jointly filed for two&nbsp;<a href="">patents</a>&nbsp;on the multi-layer lithium garnet/lithium nitride processing. This new processing method, which allows precise control of lithium concentration in the material, can also be applied to other lithium oxide films such as lithium titanate and lithium cobaltate that are used in battery electrodes. “That is something we invented. 
That’s new in ceramic processing,” Rupp says.</p> <p>“It is a smart idea to use Li<sub>3</sub>N as a lithium source during preparation of the garnet layers, as lithium loss is a critical issue during thin film preparation otherwise,” comments University Professor&nbsp;<a href="">Jürgen Janek</a>&nbsp;at Justus Liebig University Giessen in Germany. Janek, who was not involved in this research, adds that “the quality of the data and the analysis is convincing.”&nbsp;<br /> <br /> “This work is an exciting first step in preparing one of the best oxide-based solid electrolytes in an intermediate temperature range,” Janek says. “It will be interesting to see whether the intermediate temperature of about 600 degrees C is sufficient to avoid side reactions with the electrode materials.”</p> <p>Oxford Professor Bruce notes the novelty of the approach, adding “I'm not aware of similar nanostructured approaches to reduce diffusion lengths in solid-state synthesis.”<br /> <br /> “Although the paper describes specific application of the approach to the formation of lithium-rich and therefore highly conducting garnet solid electrolytes, the methodology has more general applicability, and therefore significant potential beyond the specific examples provided in the paper,” Bruce says. Commercialization may be needed to demonstrate this approach at larger scale, he suggests.</p> <p>While the immediate impact of this work is likely to be on batteries, Rupp predicts another decade of exciting advances based on applications of her processing techniques to devices for neuromorphic computing, artificial intelligence, and fast gas sensors. 
“The moment the lithium is in a small solid-state film, you can use the fast motion to trigger other electrochemistry,” she says.</p> <p>Several companies have already expressed interest in using the new electrolyte approach.&nbsp;“It’s good for me to work with strong players in the field so they can push out the technology faster than anything I can do,” Rupp says.</p> <p>This work was funded by the MIT Lincoln Laboratory, the Thomas Lord Foundation,&nbsp;<a href="">Competence Center Energy and Mobility</a>, and Swiss Electrics.</p> MIT Associate Professor Jennifer Rupp stands in front of a pulsed laser deposition chamber, in which her team developed a new lithium garnet electrolyte material with the fastest reported ionic conductivity of its type. The technique produces a thin film about 330 nanometers thick. “Having the lithium electrolyte as a solid-state very fast conductor allows you to dream out loud of anything else you can do with fast lithium motion,” Rupp says. Photo: Denis Paiste/Materials Research Laboratory Pathways to a low-carbon China Study projects a key role for carbon capture and storage. Mon, 08 Jul 2019 15:50:01 -0400 Mark Dwortzan | Joint Program on the Science and Policy of Global Change <p>Fulfilling the ultimate goal of the 2015 Paris Agreement on climate change — keeping global warming well below 2 degrees Celsius, if not 1.5 C — will be impossible without dramatic action from the world’s largest emitter of greenhouse gases, China. Toward that end, China began in 2017 developing an emissions trading scheme (ETS), a national carbon dioxide market designed to enable the country to meet its initial Paris pledge with the greatest efficiency and at the lowest possible cost. 
China’s pledge, or nationally determined contribution (NDC), is to reduce its CO<sub>2</sub>&nbsp;intensity of gross domestic product (emissions produced per unit of economic activity)&nbsp;by 60 to 65 percent in 2030 relative to 2005, and to peak CO<sub>2</sub>&nbsp;emissions around 2030.</p> <p>When it’s rolled out, China’s carbon market will initially cover the electric power sector (which currently produces more than 3 billion tons of CO<sub>2</sub>) and likely set CO<sub>2</sub>&nbsp;emissions intensity targets (e.g., grams of CO<sub>2</sub> per kilowatt hour) to ensure that its short-term NDC is fulfilled. But to help the world achieve the long-term 2 C and 1.5 C Paris goals, China will need to continually decrease these targets over the course of the century.</p> <p>A new study of China’s long-term power generation mix under the nation’s ETS projects that until 2065, renewable energy sources will likely expand to meet these targets; after that, carbon capture and storage (CCS) could be deployed to meet the more stringent targets that follow. Led by researchers at the MIT Joint Program on the Science and Policy of Global Change, the <a href="">study</a> appears in the journal <em>Energy Economics.</em></p> <p>“This research provides insight into the level of carbon prices and mix of generation technologies needed for China to meet different CO<sub>2</sub> intensity targets for the electric power sector,” says <a href="">Jennifer Morris</a>, lead author of the study and a research scientist at the MIT Joint Program. 
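</p>

<p>The arithmetic behind an intensity pledge of this kind is simple to make concrete. In the sketch below, the 2005 baseline value is an assumed round number chosen only for illustration, not a figure from the study; the 60 to 65 percent reduction range is the NDC described above, and the same formula applies whether intensity is measured per unit of GDP or per kilowatt-hour.</p>

```python
# Illustrative only: the baseline below is an assumed round number,
# not a figure from the study. The 60-65 percent reduction range is
# China's NDC as described in the article.
BASELINE_2005 = 1000.0  # assumed 2005 CO2 intensity (e.g., g CO2/kWh)

def ndc_target(baseline, cut):
    """Intensity implied by reducing `baseline` by the fraction `cut`."""
    return baseline * (1.0 - cut)

low_cut, high_cut = 0.60, 0.65
print(f"2030 target band: {ndc_target(BASELINE_2005, high_cut):.0f}"
      f"-{ndc_target(BASELINE_2005, low_cut):.0f} "
      "(same units as the baseline)")
```

<p>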
“We find that coal CCS has the potential to play an important role in the second half of the century, as part of a portfolio that also includes renewables and possibly nuclear power.”</p> <p>To evaluate the impacts of multiple potential ETS pathways — different starting carbon prices and rates of increase — on the deployment of CCS technology, the researchers enhanced the MIT Economic Projection and Policy Analysis (<a href="">EPPA</a>) model to include the joint program’s latest assessments of the costs of low-carbon power generation technologies in China. Among the technologies included in the model are natural gas, nuclear, wind, solar, coal with CCS, and natural gas with CCS. Assuming that power generation prices are the same across the country for any given technology, the researchers identify different ETS pathways in which CCS could play a key role in lowering the emissions intensity of China’s power sector, particularly for targets consistent with achieving the long-term 2 C and 1.5 C Paris goals by 2100.</p> <p>The study projects a two-stage transition — first to renewables, and then to coal CCS. The transition from renewables to CCS is driven by two factors. First, at higher levels of penetration, renewables incur increasing costs related to accommodating the intermittency challenges posed by wind and solar. This paves the way for coal CCS. 
Second, as experience with building and operating CCS technology is gained, CCS costs decrease, allowing the technology to be rapidly deployed at scale after 2065 and replace renewables as the primary power generation technology.</p> <p>The study shows that carbon prices of $35-40 per ton of CO<sub>2</sub>&nbsp;make CCS technologies coupled with coal-based generation cost-competitive against other modes of generation, and that carbon prices higher than $100 per ton of CO<sub>2</sub>&nbsp;allow for a significant expansion of CCS.</p> <p>“Our study is at the aggregate level of the country,” says Sergey Paltsev, deputy director of the joint program. “We recognize that the cost of electricity varies greatly from province to province in China, and hope to include interactions between provinces in our future modeling to provide deeper understanding of regional differences. At the same time, our current results provide useful insights to decision-makers in designing more substantial emissions mitigation pathways.”</p> Coal-fired electric plant, Henan Province, China Photo: V.T. Polywoda/Flickr Making wireless communication more energy efficient Along with studying theory, &quot;it&#039;s also important to me that the work we are doing will help to solve real-world problems,” says LIDS student Omer Tanovic. Wed, 03 Jul 2019 13:50:01 -0400 Greta Friar | Laboratory for Information and Decision Systems <p>Omer Tanovic, a PhD candidate in the Department of Electrical Engineering and Computer Science, joined the Laboratory for Information and Decision Systems (LIDS) because he loves studying theory and turning research questions into solvable math problems. 
But Omer says that his engineering background — before coming to MIT he received undergraduate and master’s degrees in electrical engineering and computer science at the University of Sarajevo in Bosnia-Herzegovina — has taught him never to lose sight of the intended applications of his work, or the practical parameters for implementation.</p> <p>“I love thinking about things on the abstract math level, but it’s also important to me that the work we are doing will help to solve real-world problems,” Omer says. “Instead of building circuits, I am creating algorithms that will help make better circuits.”</p> <p>One real-world problem that captured Omer’s attention during his PhD is power efficiency in wireless operations. The success of wireless communications has led to massive infrastructure expansion in the United States and around the world. This has included many new cell towers and base stations. As these networks and the volume of information they handle grow, they consume an increasingly hefty amount of power, some of which goes to powering the system as it’s supposed to, but much of which is lost as heat due to energy inefficiency. This is a problem both for companies such as mobile network operators, which have to pay large utility bills to cover their operational costs, and for society at large, as the sector’s greenhouse gas emissions rise.</p> <p>These concerns are what motivate Omer in his research. Most of the projects that he has worked on at MIT seek to design signal processing systems, optimized to different measures, that will increase power efficiency while ensuring that the output signal (what you hear when talking to someone on the phone, for instance) is true to the original input (what was said by the person on the other end of the call).</p> <p>His latest project seeks to address the power efficiency problem by decreasing the peak-to-average power ratio (PAPR) of wireless communication signals. 
In the broadest sense, PAPR is an indirect indicator of how much power is required to send and receive a clear signal across a network. The lower this ratio is, the more energy-efficient the transmission. In particular, much of the power consumed in cellular networks is dedicated to power amplifiers, which collect low-power electronic input and convert it to a higher-power output, such as picking up a weak radio signal generated inside a cell phone and amplifying it so that, when emitted by an antenna, it is strong enough to reach a cell tower. This ensures that the signal is robust enough to maintain adequate signal-to-noise ratio over the communication link. Power amplifiers are at their most efficient when operating near their saturation level, at maximum output power. However, because cellular network technology has evolved in a way that accommodates a huge volume and variety of information across the network — resulting in far less uniform signals than in the past — modern communication standards require signals with big peak-to-average power ratios. This means that a radio frequency transmitter must be designed such that the underlying power amplifier can handle peaks much higher than the average power being transmitted, and therefore, most of the time, the power amplifier is working inefficiently — far from its saturation level.</p> <p>“Every cell tower has to have some kind of PAPR reduction algorithm in place in order to operate. But the algorithms they use are developed with little or no guarantees on improving system performance,” Omer says. 
“A common conception is that optimal algorithms, which would certainly improve system performance, are either too expensive to implement — in terms of power or computational capacity — or cannot be implemented at all.”</p> <p>Omer, who is supervised by LIDS Professor Alexandre Megretski, designed an algorithm that can decrease the PAPR of a modern communication signal, which would allow the power amplifier to operate closer to its maximum efficiency, thus reducing the amount of energy lost in the process. To create this system he first considered it as an optimization problem, the conditions of which meant that any solution would not be implementable, as it would require infinite latency, meaning an infinite delay before transmitting the signal. However, Omer showed that the underlying optimal system, even though of infinite latency, has a desirable fading-memory property, and so he could create an approximation with finite latency — an acceptable lag time. From this, he developed a way to best approximate the optimal system. The approximation, which is implementable, allows tradeoffs between precision and latency, so that real-time realizations of the algorithm can improve power efficiency without adding too much transmission delay or too much distortion to the signal. Omer applied this system using standardized test signals for 4G communication and found that, on average, he could get around 50 percent reduction in the peak-to-average power ratio while satisfying standard measures of quality of digital communication signals.</p> <p>Omer’s algorithm, along with improving power efficiency, is also computationally efficient. “This is important in order to ensure that the algorithm is not just theoretically implementable, but also practically implementable,” Omer says, once again stressing that abstract mathematical solutions are only valuable if they cohere to real-world parameters. 
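</p>

<p>The quantity at stake is easy to make concrete. The short NumPy sketch below computes the PAPR of a multicarrier-style test signal and then applies hard envelope clipping, the crudest of the guarantee-free heuristics described above. It is a minimal illustration, not Tanovic's algorithm, and the test signal is an OFDM-flavored stand-in rather than a standards-compliant 4G waveform.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# A multicarrier-style test signal: summing many independently modulated
# subcarriers produces occasional large peaks -- the behavior that forces
# power amplifiers to back off from their efficient operating point.
n_samples, n_subcarriers = 4096, 64
symbols = (rng.choice([-1.0, 1.0], (n_subcarriers, 1))
           + 1j * rng.choice([-1.0, 1.0], (n_subcarriers, 1)))
t = np.arange(n_samples) / n_samples
carriers = np.exp(2j * np.pi * np.arange(n_subcarriers)[:, None] * t)
signal = (symbols * carriers).sum(axis=0)

def clip(x, max_amplitude):
    """Hard-clip the envelope: scale down any sample exceeding the limit."""
    mag = np.abs(x)
    scale = np.minimum(1.0, max_amplitude / np.maximum(mag, 1e-12))
    return x * scale

# Clip at twice the RMS amplitude; distortion is the price of the lower peak.
limit = 2.0 * np.sqrt((np.abs(signal) ** 2).mean())
clipped = clip(signal, limit)
print(f"PAPR before: {papr_db(signal):.1f} dB, after: {papr_db(clipped):.1f} dB")
```

<p>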
Microchip real estate in communications is a limited commodity, so the algorithm cannot take up much space, and its mathematical operations have to be executed quickly, as latency is a critical factor in wireless communications. Omer believes that the algorithm could be adapted to solve other engineering problems with similar frameworks, including envelope tracking and model predictive control.</p> <p>While he has been working on this project, Omer has made a home for himself at MIT. Two of his three sons were born here in Cambridge — in fact, the youngest was born on campus, in the stairwell of Omer and his wife’s graduate housing building. “The neighbors slept right through it,” Omer says with a laugh.</p> <p>Omer quickly became an active member of the LIDS community when he arrived at MIT. Most notably, he was part of the LIDS student conference and student social committees, where, in addition to helping run the annual LIDS Student Conference, a signature lab event now in its 25th year, he also helped to organize monthly lunches, gatherings, and gaming competitions, including a semester-long challenge dubbed the OLIDSpics (an homage to the Olympic Games). He says that being on the committees was a great way to engage with and contribute to the LIDS community, a group for which he is grateful.</p> <p>“At MIT, and especially at LIDS, you can learn something new from everyone you speak to. I’ve been in many places, and this is the only place where I’ve experienced a community like that,” Omer says.</p> <p>As Omer’s time at LIDS draws to an end, he is still debating what to do next. On one hand, his love of solving real-world problems is drawing him toward industry. He spent four summers during his PhD interning at companies including the Mitsubishi Electric Research Lab. 
He enjoyed the fast pace of industry, being able to see his solutions implemented relatively quickly.</p> <p>On the other hand, Omer is not sure he could ever leave academia for long; he loves research and is also truly passionate about teaching. Omer, who grew up in Bosnia-Herzegovina, began teaching in his first year of high school, at a math camp for younger children. He has been teaching in one form or another ever since.</p> <p>At MIT, Omer has taught both undergraduate- and graduate-level courses, including as an instructor-G, an appointment only given to advanced students who have demonstrated teaching expertise. He has won two teaching awards, the MIT School of Engineering Graduate Student Extraordinary Teaching and Mentoring Award in 2018 and the MIT EECS Carlton E. Tucker Teaching Award in 2017.</p> <p>The magnitude of Omer’s love for teaching is clear when he speaks about working with students: “That moment when you explain something to a student and you see them really understand the concept is priceless. No matter how much energy you have to spend to make that happen, it’s worth it,” Omer says.</p> <p>In communications, power efficiency is key, but when it comes to research and teaching, there’s no limit to Omer’s energy.</p> Omer Tanovic says that his engineering background has taught him never to lose sight of the intended applications of his work, or the practical parameters for implementation. Experiments show dramatic increase in solar cell output Method for collecting two electrons from each photon could break through theoretical solar-cell efficiency limit. Wed, 03 Jul 2019 12:59:59 -0400 David L. 
Chandler | MIT News Office <p>In any conventional silicon-based solar cell, there is an absolute limit on overall efficiency, based partly on the fact that each photon of light can only knock loose a single electron, even if that photon carried twice the energy needed to do so. But now, researchers have demonstrated a method for getting high-energy photons striking silicon to kick out two electrons instead of one, opening the door for a new kind of solar cell with greater efficiency than was thought possible.</p> <p>While conventional silicon cells have an absolute theoretical maximum efficiency of about 29.1 percent conversion of solar energy, the new approach, developed over the last several years by researchers at MIT and elsewhere, could bust through that limit, potentially adding several percentage points to that maximum output. The results are described today in the journal <em>Nature</em>, in a paper by graduate student Markus Einzinger, professor of chemistry Moungi Bawendi, professor of electrical engineering and computer science Marc Baldo, and eight others at MIT and at Princeton University.</p> <p>The basic concept behind this new technology has been known for decades, and the first demonstration that the principle could work was carried out by some members of this team <a href="" target="_blank">six years ago</a>. But actually translating the method into a full, operational silicon solar cell took years of hard work, Baldo says.</p> <p>That initial demonstration “was a good test platform” to show that the idea could work, explains Daniel Congreve PhD ’15, an alumnus now at the Rowland Institute at Harvard, who was the lead author in that prior report and is a co-author of the new paper. 
Now, with the new results, “we’ve done what we set out to do” in that project, he says.</p> <p>The original study demonstrated the production of two electrons from one photon, but it did so in an organic photovoltaic cell, which is less efficient than a silicon solar cell. It turned out that transferring the two electrons from a top collecting layer made of tetracene into the silicon cell “was not straightforward,” Baldo says. Troy Van Voorhis, a professor of chemistry at MIT who was part of that original team, points out that the concept was first proposed back in the 1970s, and says wryly that turning that idea into a practical device “only took 40 years.”</p> <p>The key to splitting the energy of one photon into two electrons lies in a class of materials that possess “excited states” called excitons, Baldo says: In these excitonic materials, “these packets of energy propagate around like the electrons in a circuit,” but with quite different properties than electrons. “You can use them to change energy — you can cut them in half, you can combine them.” In this case, they were going through a process called singlet exciton fission, which is how the light’s energy gets split into two separate, independently moving packets of energy. The material first absorbs a photon, forming an exciton that rapidly undergoes fission into two excited states, each with half the energy of the original state.</p> <p>But the tricky part was then coupling that energy over into the silicon, a material that is not excitonic. This coupling had never been accomplished before.</p> <p>As an intermediate step, the team tried coupling the energy from the excitonic layer into a material called quantum dots. “They’re still excitonic, but they’re inorganic,” Baldo says. “That worked; it worked like a charm,” he says. 
By understanding the mechanism taking place in that material, he says, “we had no reason to think that silicon wouldn’t work.”</p> <p>What that work showed, Van Voorhis says, is that the key to these energy transfers lies in the very surface of the material, not in its bulk. “So it was clear that the surface chemistry on silicon was going to be important. That was what was going to determine what kinds of surface states there were.” That focus on the surface chemistry may have been what allowed this team to succeed where others had not, he suggests.</p> <p>The key was in a thin intermediate layer. “It turns out this tiny, tiny strip of material at the interface between these two systems [the silicon solar cell and the tetracene layer with its excitonic properties] ended up defining everything. It’s why other researchers couldn’t get this process to work, and why we finally did.” It was Einzinger “who finally cracked that nut,” he says, by using a layer of a material called hafnium oxynitride.</p> <p>The layer is only a few atoms thick, or just 8 angstroms (ten-billionths of a meter), but it acted as a “nice bridge” for the excited states, Baldo says. That finally made it possible for the single high-energy photons to trigger the release of two electrons inside the silicon cell. That produces a doubling of the amount of energy produced by a given amount of sunlight in the blue and green part of the spectrum. Overall, that could produce an increase in the power produced by the solar cell — from a theoretical maximum of 29.1 percent, up to a maximum of about 35 percent.</p> <p>Actual silicon cells are not yet at their maximum, and neither is the new material, so more development needs to be done, but the crucial step of coupling the two materials efficiently has now been proven. “We still need to optimize the silicon cells for this process,” Baldo says. For one thing, with the new system those cells can be thinner than current versions. 
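</p>

<p>A back-of-envelope calculation shows why the gain appears in the blue and green part of the spectrum: fission can only pay off for photons carrying at least twice the silicon bandgap. The sketch below assumes a textbook bandgap of roughly 1.1 electron volts, a value not quoted in the article, and converts the doubled bandgap into a cutoff wavelength.</p>

```python
# Back-of-envelope check, not a number from the paper: singlet fission
# can only yield two electrons when the photon carries at least twice
# the silicon bandgap. The bandgap below (~1.1 eV) is a standard
# textbook figure, an assumption rather than an article-quoted value.
HC_EV_NM = 1239.84      # Planck constant x speed of light, in eV*nm
SI_BANDGAP_EV = 1.1     # approximate bandgap of crystalline silicon

def fission_cutoff_nm(bandgap_ev):
    """Longest wavelength whose photon energy is at least 2x the bandgap."""
    return HC_EV_NM / (2.0 * bandgap_ev)

cutoff = fission_cutoff_nm(SI_BANDGAP_EV)
print(f"Photons below roughly {cutoff:.0f} nm can yield two electrons")
# A cutoff near 564 nm sits at the green edge of the visible spectrum,
# consistent with the article's claim that the extra current comes from
# the blue and green part of sunlight.
```

<p>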
Work also needs to be done on stabilizing the materials for durability. Overall, commercial applications are probably still a few years off, the team says.</p> <p>Other approaches to improving the efficiency of solar cells tend to involve adding another kind of cell, such as a perovskite layer, over the silicon. Baldo says “they’re building one cell on top of another. Fundamentally, we’re making one cell — we’re kind of turbocharging the silicon cell. We’re adding more current into the silicon, as opposed to making two cells.”</p> <p>The researchers have measured one special property of hafnium oxynitride that helps it transfer the excitonic energy. “We know that hafnium oxynitride generates additional charge at the interface, which reduces losses by a process called electric field passivation. If we can establish better control over this phenomenon, efficiencies may climb even higher,” Einzinger says. So far, no other material they’ve tested can match its properties.</p> <p>The research was supported as part of the MIT Center for Excitonics, funded by the U.S. Department of Energy.</p> Diagram depicts the process of “singlet fission,” which is the first step toward producing two electrons from a single incoming photon of light. Image courtesy of the researchers