MIT News - Nanoscience and nanotechnology - MIT.nano MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community. en Thu, 05 Mar 2020 23:59:59 -0500 Novel method for easier scaling of quantum devices System “recruits” defects that usually cause disruptions, using them to instead carry out quantum operations. Thu, 05 Mar 2020 23:59:59 -0500 Rob Matheson | MIT News Office <p>In an advance that may help researchers scale up quantum devices, an MIT team has developed a method to “recruit” neighboring quantum bits made of nanoscale defects in diamond, so that instead of causing disruptions they help carry out quantum operations.</p> <p>Quantum devices perform operations using quantum bits, called “qubits,” that can represent the two states corresponding to classic binary bits — a 0 or 1 — or a “quantum superposition” of both states simultaneously. The unique superposition state can enable quantum computers to solve problems that are practically impossible for classical computers, potentially spurring breakthroughs in biosensing, neuroimaging, machine learning, and other applications.</p> <p>One promising qubit candidate is a defect in diamond, called a nitrogen-vacancy (NV) center, which holds electrons that can be manipulated by light and microwaves. In response, the defect emits photons that can carry quantum information. Because of their solid-state environments, however, NV centers are always surrounded by many other unknown defects with different spin properties, called “spin defects.” When the measurable NV-center qubit interacts with those spin defects, the qubit loses its coherent quantum state — “decoheres”—&nbsp;and operations fall apart. Traditional solutions try to identify these disrupting defects to protect the qubit from them.</p> <p>In a paper published Feb. 
25 in <em>Physical Review Letters</em>, the researchers describe a method that uses an NV center to probe its environment and uncover the existence of several nearby spin defects. Then, the researchers can pinpoint the defects’ locations and control them to achieve a coherent quantum state — essentially leveraging them as additional qubits.</p> <p>In experiments, the team generated and detected quantum coherence among three electronic spins — scaling up the size of the quantum system from a single qubit (the NV center) to three qubits (adding two nearby spin defects). The findings demonstrate a step forward in scaling up quantum devices using NV centers, the researchers say.</p> <p>“You always have unknown spin defects in the environment that interact with an NV center. We say, ‘Let’s not ignore these spin defects, which [if left alone] could cause faster decoherence. Let’s learn about them, characterize their spins, learn to control them, and ‘recruit’ them to be part of the quantum system,’” says lead co-author Won Kyu Calvin Sun, a graduate student in the Department of Nuclear Science and Engineering and a member of the Quantum Engineering group.
“Then, instead of using a single NV center [or just] one qubit, we can then use two, three, or four qubits.”</p> <p>Joining Sun on the paper are lead author Alexandre Cooper ’16 of Caltech; Jean-Christophe Jaskula, a research scientist in the MIT Research Laboratory of Electronics (RLE) and member of the Quantum Engineering group at MIT; and Paola Cappellaro, a professor in the Department of Nuclear Science and Engineering, a member of RLE, and head of the Quantum Engineering group at MIT.</p> <p><strong>Characterizing defects</strong></p> <p>NV centers occur where carbon atoms in two adjacent places in a diamond’s lattice structure are missing — one atom is replaced by a nitrogen atom, and the other space is an empty “vacancy.” The NV center essentially functions as an atom, with a nucleus and surrounding electrons that are extremely sensitive to tiny variations in surrounding electrical, magnetic, and optical fields. Sweeping microwaves across the center, for instance, changes, and thus controls, the spin states of the nucleus and electrons.</p> <p>Spins are measured using a type of magnetic resonance spectroscopy. This method plots the frequencies of electron and nucleus spins in megahertz as a “resonance spectrum” that can dip and spike, like a heart monitor. The spins of an NV center under certain conditions are well known. But the surrounding spin defects are unknown and difficult to characterize.</p> <p>In their work, the researchers identified, located, and controlled two electron-nuclear spin defects near an NV center. They first sent microwave pulses at specific frequencies to control the NV center. Simultaneously, they sent another microwave pulse that probed the surrounding environment for other spins. They then observed the resonance spectrum of the spin defects interacting with the NV center.</p> <p>The spectrum dipped in several spots when the probing pulse interacted with nearby electron-nuclear spins, indicating their presence.
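The probing idea can be illustrated with a toy model (this is not the authors' analysis code, and the defect frequencies below are invented for illustration): the simulated spectrum dips, with a Lorentzian lineshape, wherever the probe frequency matches a hidden spin's transition, and finding the local minima reveals the hidden spins.

```python
def toy_spectrum(freqs_mhz, defect_freqs_mhz, linewidth_mhz=0.5):
    """Toy resonance spectrum: the signal dips (Lorentzian lineshape) wherever
    the probe frequency matches a hidden spin's transition. Illustrative only."""
    spectrum = []
    for f in freqs_mhz:
        signal = 1.0
        for f0 in defect_freqs_mhz:
            signal -= 0.4 / (1.0 + ((f - f0) / linewidth_mhz) ** 2)
        spectrum.append(signal)
    return spectrum

# Probe 0-20 MHz in 0.1 MHz steps; two hypothetical defects at 6.0 and 13.5 MHz.
freqs = [0.1 * i for i in range(201)]
spec = toy_spectrum(freqs, [6.0, 13.5])

# Local minima well below the baseline reveal the hidden spins.
dips = [freqs[i] for i in range(1, len(freqs) - 1)
        if spec[i] < spec[i - 1] and spec[i] < spec[i + 1] and spec[i] < 0.9]
print([round(f, 1) for f in dips])  # dips near the two defect frequencies
```

In the experiment, of course, the dip positions are not known in advance; they are the measurement.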
The researchers then swept a magnetic field across the area at different orientations. For each orientation, the defect would “spin” at different energies, causing different dips in the spectrum. Basically, this allowed them to measure each defect’s spin in relation to each magnetic orientation. They then plugged the energy measurements into a model equation with unknown parameters. This equation is used to describe the quantum interactions of an electron-nuclear spin defect under a magnetic field. Then, they could solve the equation to successfully characterize each defect.</p> <p><strong>Locating and controlling</strong></p> <p>After characterizing the defects, the next step was to characterize the interaction between the defects and the NV center, which would simultaneously pinpoint their locations. To do so, they again swept the magnetic field at different orientations, but this time looked for changes in energies describing the interactions between the two defects and the NV center. The stronger the interaction, the closer they were to one another. They then used those interaction strengths to determine where the defects were located, in relation to the NV center and to each other. That generated a good map of the locations of all three defects in the diamond.</p> <p>Characterizing the defects and their interaction with the NV center allows for full control, which involves a few more steps to demonstrate. First, they pumped the NV center and surrounding environment with a sequence of pulses of green light and microwaves that helped put the three qubits in a well-known quantum state. Then, they used another sequence of pulses that ideally entangled the three qubits briefly, and then disentangled them, which enabled them to detect the three-spin coherence of the qubits.</p> <p>The researchers verified the three-spin coherence by measuring a major spike in the resonance spectrum.
The recorded spike was essentially the sum of the frequencies of the three qubits. If, for instance, the three qubits had little or no entanglement, there would have been four separate spikes of smaller height.</p> <p>“We come into a black box [environment with each NV center]. But when we probe the NV environment, we start seeing dips and wonder which types of spins give us those dips. Once we [figure out] the spin of the unknown defects, and their interactions with the NV center, we can start controlling their coherence,” Sun says. “Then, we have full universal control of our quantum system.”</p> <p>Next, the researchers hope to better understand other environmental noise surrounding qubits. That will help them develop more robust error-correcting codes for quantum circuits. Furthermore, because on average the process of NV center creation in diamond creates numerous other spin defects, the researchers say they could potentially scale up the system to control even more qubits. “It gets more complex with scale. But if we can start finding NV centers with more resonance spikes, you can imagine starting to control larger and larger quantum systems,” Sun says.</p> An MIT team found a way to “recruit” normally disruptive quantum bits (qubits) in diamond to, instead, help carry out quantum operations. This approach could be used to help scale up quantum computing systems. Image: Christine Daniloff, MITResearch, Computer science and technology, Quantum computing, Nuclear science and engineering, Nanoscience and nanotechnology, Sensors, Research Laboratory of Electronics, Materials Science and Engineering, Physics, School of Engineering Undergraduate Teaching Lab wins SafetyStratus College and University Health and Safety Award The award is given annually by the American Chemical Society.
Tue, 03 Mar 2020 15:20:01 -0500 Danielle Randall Doughty | Department of Chemistry <p>The Department of Chemistry’s&nbsp;<a href="" target="_blank">Undergraduate Teaching Laboratory</a>&nbsp;has been awarded the&nbsp;<a href="" target="_blank">2020 SafetyStratus College and University Health and Safety Award</a> by the American Chemical Society (ACS). ACS gives this award in recognition of the most comprehensive chemical safety programs in undergraduate education.</p> <p>The process of submitting the Undergraduate Teaching Laboratory for this illustrious prize was initiated by Whitney Hess, manager of safety systems and programs at MIT.nano, who worked diligently with laboratory Director John Dolhun to complete the comprehensive application required for the award.</p> <div> <p>“We feel very honored; this represents our collective efforts to strive for the highest standards in fostering a safe teaching lab and in enhancing the chemical safety education of our undergraduates,” said Dolhun and Hess in a joint statement. “We are motivated to continuously improve our lab operations and safety, in alignment with the research advances happening in the chemistry department that inspire our lab modules.”</p> <div> <p>Winners of the SafetyStratus College and University Health and Safety Award receive a $1,000 honorarium and an engraved plaque, which are presented at the 2020 CHAS Awards Symposium at the ACS Fall national meeting. The recipients will deliver a 15- to 20-minute presentation on any topic pertaining to chemical safety at the symposium.</p> <div> <p>MIT previously received this award in 2005 and 1991.</p> </div> </div> </div> Whitney Hess (left) and John DolhunPhoto: Danielle DoughtyChemistry, Awards, honors and fellowships, School of Science, Safety, Health, MIT.nano, Nanoscience and nanotechnology Making a remarkable material even better Aerogels for solar devices and windows are more transparent than glass. Tue, 25 Feb 2020 13:10:01 -0500 Nancy W.
Stauffer | MIT Energy Initiative <p>In recent decades, the search for high-performance thermal insulation for buildings has prompted manufacturers to turn to aerogels. Invented in the 1930s, these remarkable materials are translucent, ultraporous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow, making them ideal for keeping heat inside on a cold winter day and outside when summer temperatures soar.</p> <p>Five years ago, researchers led by&nbsp;<a href="">Evelyn Wang</a>, a professor and head of the Department of Mechanical Engineering, and Gang Chen, the Carl Richard Soderberg Professor in Power Engineering, set out to add one more property to that list. They aimed to make a silica aerogel that was truly transparent.</p> <p>“We started out trying to realize an optically transparent, thermally insulating aerogel for solar thermal systems,” says Wang. Incorporated into a solar thermal collector, a slab of aerogel would allow sunshine to come in unimpeded but prevent heat from coming back out — a key problem in today’s systems. And if the transparent aerogel were sufficiently clear, it could be incorporated into windows, where it would act as a good heat barrier but still allow occupants to see out.</p> <p>When the researchers started their work, even the best aerogels weren’t up to those tasks. “People had known for decades that aerogels are a good thermal insulator, but they hadn’t been able to make them very optically transparent,” says Lin Zhao PhD ’19 of mechanical engineering. “So in our work, we’ve been trying to understand exactly why they’re not very transparent, and then how we can improve their transparency.”</p> <p><strong>Aerogels: opportunities and challenges</strong></p> <p>The remarkable properties of a silica aerogel are the result of its nanoscale structure. To visualize that structure, think of holding a pile of small, clear particles in your hand. 
Imagine that the particles touch one another and slightly stick together, leaving gaps between them that are filled with air. Similarly, in a silica aerogel, clear, loosely connected, nanoscale silica particles form a three-dimensional solid network within an overall structure that is mostly air. Because of all that air, a silica aerogel has an extremely low density — in fact, one of the lowest densities of any known bulk material — yet it’s solid and structurally strong, though brittle.</p> <p>If a silica aerogel is made of transparent particles and air, why isn’t it transparent? Because the light that enters doesn’t all pass straight through. It is diverted whenever it encounters an interface between a solid particle and the air surrounding it. Figure 1 in the slideshow above illustrates the process. When light enters the aerogel, some is absorbed inside it. Some — called direct transmittance — travels straight through. And some is redirected along the way by those interfaces. It can be scattered many times and in any direction, ultimately exiting the aerogel at an angle. If it exits from the surface through which it entered, it is called diffuse reflectance; if it exits from the other side, it is called diffuse transmittance.</p> <p>To make an aerogel for a solar thermal system, the researchers needed to maximize the total transmittance: the direct plus the diffuse components. And to make an aerogel for a window, they needed to maximize the total transmittance and simultaneously minimize the fraction of the total that is diffuse light. “Minimizing the diffuse light is critical because it’ll make the window look cloudy,” says Zhao. “Our eyes are very sensitive to any imperfection in a transparent material.”</p> <p><strong>Developing a model</strong></p> <p>The sizes of the nanoparticles and the pores between them have a direct impact on the fate of light passing through an aerogel. 
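The light bookkeeping described above (direct transmittance, diffuse reflectance, diffuse transmittance, absorption) can be sketched numerically. This toy model is not the researchers' radiative-transport code: it uses a simple Beer-Lambert decay for the direct beam and, purely as an illustrative assumption, splits the scattered light evenly between diffuse reflectance and diffuse transmittance.

```python
import math

def toy_transmittance(thickness_mm, absorb_per_mm, scatter_per_mm):
    """Toy energy bookkeeping for light entering an aerogel slab.
    The direct beam decays exponentially (Beer-Lambert); light removed by
    scattering is split 50/50 between diffuse reflectance and diffuse
    transmittance. The even split is an arbitrary simplification -- the
    real model solves the radiative transport equation per wavelength."""
    extinction = absorb_per_mm + scatter_per_mm
    direct = math.exp(-extinction * thickness_mm)
    removed = 1.0 - direct
    scattered = removed * scatter_per_mm / extinction
    diffuse_transmit = 0.5 * scattered          # exits the far side
    total_transmit = direct + diffuse_transmit
    haze = diffuse_transmit / total_transmit    # diffuse share of transmitted light
    return total_transmit, haze

# Hypothetical coefficients: weak absorption, modest scattering, 10-mm slab.
total, haze = toy_transmittance(10.0, 0.001, 0.02)
print(round(total, 3), round(haze, 3))
```

Even this crude sketch captures the design tension: scattering barely reduces total transmittance, but it feeds directly into haze, which is what makes a window look cloudy.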
But figuring out that interaction by trial and error would require synthesizing and characterizing too many samples to be practical. “People haven’t been able to systematically understand the relationship between the structure and the performance,” says Zhao. “So we needed to develop a model that would connect the two.”</p> <p>To begin, Zhao turned to the radiative transport equation, which describes mathematically how the propagation of light (radiation) through a medium is affected by absorption and scattering. It is generally used for calculating the transfer of light through the atmospheres of Earth and other planets. As far as Wang knows, it has not been fully explored for the aerogel problem.</p> <p>Both scattering and absorption can reduce the amount of light transmitted through an aerogel, and light can be scattered multiple times. To account for those effects, the model decouples the two phenomena and quantifies them separately — and for each wavelength of light.</p> <p>Based on the sizes of the silica particles and the density of the sample (an indicator of total pore volume), the model calculates light intensity within an aerogel layer by determining its absorption and scattering behavior using predictions from electromagnetic theory. Using those results, it calculates how much of the incoming light passes directly through the sample and how much of it is scattered along the way and comes out diffuse.</p> <p>The next task was to validate the model by comparing its theoretical predictions with experimental results.</p> <p><strong>Synthesizing aerogels</strong></p> <p>Working in parallel, graduate student Elise Strobach of mechanical engineering had been learning how best to synthesize aerogel samples — both to guide development of the model and ultimately to validate it.
In the process, she produced new insights on how to synthesize an aerogel with a specific desired structure.</p> <p>Her procedure starts with a common silicon compound called silane, which chemically reacts with water to form an aerogel. During that reaction, tiny nucleation sites occur where particles begin to form. How fast they build up determines the end structure. To control the reaction, she adds a catalyst, ammonia. By carefully selecting the ammonia-to-silane ratio, she gets the silica particles to grow quickly at first and then abruptly stop growing when the precursor materials are gone — a means of producing particles that are small and uniform. She also adds a solvent, methanol, to dilute the mixture and control the density of the nucleation sites, and thus the pores between the particles.</p> <p>The reaction between the silane and water forms a gel containing a solid nanostructure with interior pores filled with the solvent. To dry the wet gel, Strobach needs to get the solvent out of the pores and replace it with air — without crushing the delicate structure. She puts the aerogel into the pressure chamber of a critical point dryer and floods liquid CO<sub>2</sub>&nbsp;into the chamber. The liquid CO<sub>2</sub>&nbsp;flushes out the solvent and takes its place inside the pores. She then slowly raises the temperature and pressure inside the chamber until the liquid CO<sub>2</sub>&nbsp;transforms to its supercritical state, where the liquid and gas phases can no longer be differentiated. Slowly venting the chamber releases the CO<sub>2</sub>&nbsp;and leaves the aerogel behind, now filled with air. She then subjects the sample to 24 hours of annealing — a standard heat-treatment process — which slightly reduces scatter without sacrificing the strong thermal insulating behavior.
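The drying step hinges on crossing the CO<sub>2</sub> critical point, a condition that can be captured in a couple of lines. The critical-point constants below are standard physical values; the stage temperatures and pressures are illustrative, not the settings actually used in the lab.

```python
# CO2 critical point (standard values): about 31.1 C and 7.38 MPa.
CO2_CRITICAL_T_C = 31.1
CO2_CRITICAL_P_MPA = 7.38

def co2_is_supercritical(temp_C, pressure_MPa):
    """True once both temperature and pressure exceed the critical point,
    where the liquid and gas phases can no longer be differentiated."""
    return temp_C > CO2_CRITICAL_T_C and pressure_MPa > CO2_CRITICAL_P_MPA

# Stage conditions below are illustrative, not the actual recipe.
print(co2_is_supercritical(20.0, 6.0))   # flushing with liquid CO2: False
print(co2_is_supercritical(40.0, 8.5))   # after heating and pressurizing: True
```

Because the supercritical fluid has no liquid-gas interface, venting it avoids the capillary forces that would otherwise collapse the nanoscale pores.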
Even with the 24 hours of annealing, her novel procedure shortens the required aerogel synthesis time from several weeks to less than four days.</p> <p><strong>Validating and using the model</strong></p> <p>To validate the model, Strobach fabricated samples with carefully controlled thicknesses, densities, and pore and particle sizes — as determined by small-angle X-ray scattering — and used a standard spectrophotometer to measure the total and diffuse transmittance.</p> <p>The data confirmed that, based on measured physical properties of an aerogel sample, the model could calculate total transmittance of light as well as a measure of clarity called haze, defined as the fraction of total transmittance that is made up of diffuse light.</p> <p>The exercise confirmed simplifying assumptions made by Zhao in developing the model. Also, it showed that the radiative properties are independent of sample geometry, so his model can simulate light transport in aerogels of any shape. And it can be applied not just to aerogels, but to any porous materials.</p> <p>Wang notes what she considers the most important insight from the modeling and experimental results: “Overall, we determined that the key to getting high transparency and minimal haze — without reducing thermal insulating capability — is to have particles and pores that are really small and uniform in size,” she says.</p> <p>One analysis demonstrates the change in behavior that can come with a small change in particle size. Many applications call for using a thicker piece of transparent aerogel to better block heat transfer. But increasing thickness may decrease transparency. With their samples, as long as particle size is small, increasing thickness to achieve greater thermal insulation will not significantly decrease total transmittance or increase haze.</p> <p><strong>Comparing aerogels from MIT and elsewhere</strong></p> <p>How much difference does their approach make? 
“Our aerogels are more transparent than glass because they don’t reflect — they don’t have that glare spot where the glass catches the light and reflects to you,” says Strobach.</p> <p>To Zhao, a main contribution of their work is the development of general guidelines for material design, as demonstrated by Figure 4 in the slideshow above. Aided by such a “design map,” users can tailor an aerogel for a particular application. Based on the contour plots, they can determine the combinations of controllable aerogel properties — namely, density and particle size — needed to achieve a targeted haze and transmittance outcome for many applications.</p> <p><strong>Aerogels in solar thermal collectors</strong></p> <p>The researchers have already demonstrated the value of their new aerogels for solar thermal energy conversion systems, which convert sunlight into thermal energy by absorbing radiation and transforming it into heat. Current solar thermal systems can produce thermal energy at so-called intermediate temperatures — between 120 and 220 degrees Celsius — which can be used for water and space heating, steam generation, industrial processes, and more. Indeed, in 2016, U.S. consumption of thermal energy exceeded the total electricity generation from all renewable sources.</p> <p>However, state-of-the-art solar thermal systems rely on expensive optical systems to concentrate the incoming sunlight, specially designed surfaces to absorb radiation and retain heat, and costly and difficult-to-maintain vacuum enclosures to keep that heat from escaping. To date, the costs of those components have limited market adoption.</p> <p>Zhao and his colleagues thought that using a transparent aerogel layer might solve those problems. Placed above the absorber, it could let through incident solar radiation and then prevent the heat from escaping.
So it would essentially replicate the natural greenhouse effect that’s causing global warming — but to an extreme degree, on a small scale, and with a positive outcome.</p> <p>To try it out, the researchers designed an aerogel-based solar thermal receiver. The device consists of a nearly “blackbody” absorber (a thin copper sheet coated with black paint that absorbs all radiant energy that falls on it), and above it a stack of optimized, low-scattering silica aerogel blocks, which efficiently transmit sunlight and suppress conduction, convection, and radiation heat losses simultaneously. The nanostructure of the aerogel is tailored to maximize its optical transparency while maintaining its ultralow thermal conductivity. With the aerogel present, there is no need for expensive optics, surfaces, or vacuum enclosures.</p> <p>After extensive laboratory tests of the device, the researchers decided to test it “in the field” — in this case, on the roof of an MIT building. On a sunny day in winter, they set up their device, fixing the receiver toward the south and tilted 60 degrees from horizontal to maximize solar exposure. They then monitored its performance between 11 a.m. and 1 p.m. Despite the cold ambient temperature (less than 1 C) and the presence of clouds in the afternoon, the temperature of the absorber started increasing right away and eventually stabilized above 220 C.</p> <p>To Zhao, the performance already demonstrated by the artificial greenhouse effect opens up what he calls “an exciting pathway to the promotion of solar thermal energy utilization.” Already, he and his colleagues have demonstrated that it can generate steam at temperatures above 120 C.
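The rooftop result can be sanity-checked with a one-line energy balance: at stagnation, the solar flux absorbed through the aerogel equals the heat lost through an effective loss coefficient. Every number below is an illustrative assumption, not a value measured in the paper.

```python
def stagnation_temp_C(solar_flux, transmittance, absorptance, loss_coeff, ambient_C):
    """Steady-state absorber temperature where absorbed flux equals loss:
    tau * alpha * G = U * (T - T_amb), so T = T_amb + tau * alpha * G / U.
    solar_flux in W/m^2, loss_coeff in W/(m^2*K)."""
    return ambient_C + transmittance * absorptance * solar_flux / loss_coeff

# Illustrative numbers (not the paper's measurements): ~800 W/m^2 winter sun,
# high aerogel transmittance, near-blackbody absorber, and a low loss
# coefficient made possible by the insulating aerogel stack.
T_stag = stagnation_temp_C(800.0, 0.95, 0.95, 3.0, 1.0)
print(round(T_stag))  # climbs above 200 C for these assumed values
```

The point of the sketch is the scaling: with the aerogel suppressing losses (small U), even modest winter sunlight drives the absorber well above ambient, consistent with the sub-1 C day on which the device stabilized above 220 C.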
In collaboration with researchers at the Indian Institute of Technology Bombay, they are now exploring possible process steam applications in India and performing field tests of a low-cost, completely passive solar autoclave for sterilizing medical equipment in rural communities.</p> <p><strong>Windows and more</strong></p> <p>Strobach has been pursuing another promising application for the transparent aerogel — in windows. “In trying to make more transparent aerogels, we hit a regime in our fabrication process where we could make things smaller, but it didn’t result in a significant change in the transparency,” she says. “But it did make a significant change in the clarity,” a key feature for a window.</p> <p>The availability of an affordable, thermally insulating window would have several impacts, says Strobach. Every winter, windows in the United States lose enough energy to power over 50 million homes. That wasted energy costs the economy more than $32 billion a year and generates about 350 million tons of CO<sub>2</sub> — more than is emitted by 76 million cars. Consumers can choose high-efficiency triple-pane windows, but they’re so expensive that they’re not widely used.</p> <p>Analyses by Strobach and her colleagues showed that replacing the air gap in a conventional double-pane window with an aerogel pane could be the answer. The result could be a double-pane window that is 40 percent more insulating than traditional ones and 85 percent as insulating as today’s triple-pane windows — at less than half the price. Better still, the technology could be adopted quickly. 
The aerogel pane is designed to fit within the current two-pane manufacturing process that’s ubiquitous across the industry, so it could be manufactured at low cost on existing production lines with only minor changes.</p> <p>Guided by Zhao’s model, the researchers are continuing to improve the performance of their aerogels, with a special focus on increasing clarity while maintaining transparency and thermal insulation. In addition, they are considering other traditional low-cost systems that would — like the solar thermal and window technologies — benefit from sliding in an optimized aerogel to create a high-performance heat barrier that lets in abundant sunlight.</p> <p>This research was supported by the Full-Spectrum Optimized Conversion and Utilization of Sunlight program of the U.S. Department of Energy’s Advanced Research Projects Agency–Energy; the Solid-State Solar Thermal Energy Conversion Center, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences; and the MIT Tata Center for Technology and Design. Elise Strobach received funding from the National Science Foundation Graduate Research Fellowship Program. Lin Zhao PhD ’19 is now an optics design engineer at 3M in St. 
Paul, Minnesota.&nbsp;</p> <p><em>This article appears in the&nbsp;<a href="" target="_blank">Autumn 2019</a>&nbsp;issue of&nbsp;</em><a href="" target="_blank">Energy Futures</a><em>, the magazine of the MIT Energy Initiative.&nbsp;</em></p> MIT Professor Evelyn Wang (right), graduate student Elise Strobach (left), and their colleagues have been performing theoretical and experimental studies of low-cost silica aerogels optimized to serve as a transparent heat barrier in specific devices.Photo: Stuart DarschMIT Energy Initiative, Mechanical engineering, Energy, School of Engineering, Tata Center, Research, Nanoscience and nanotechnology, Materials Science and Engineering A material’s insulating properties can be tuned at will Most materials have a fixed ability to conduct heat, but applying voltage to this thin film changes its thermal properties drastically. Mon, 24 Feb 2020 14:50:23 -0500 David Chandler | MIT News Office <p>Materials whose electronic and magnetic properties can be significantly changed by applying electrical inputs form the backbone of all of modern electronics. But achieving the same kind of tunable control over the thermal conductivity of any material has been an elusive quest.</p> <p>Now, a team of researchers at MIT has made a major leap forward. They have designed a long-sought device, which they refer to as an “electrical heat valve,” that can vary the thermal conductivity on demand.
They demonstrated that the material’s ability to conduct heat can be “tuned” by a factor of 10 at room temperature.</p> <p>This technique could potentially open the door to new technologies for controllable insulation in smart windows, smart walls, smart clothing, or even new ways of harvesting the energy of waste heat.&nbsp;</p> <p>The findings are reported today in the journal <em>Nature Materials</em>, in a paper by MIT professors Bilge Yildiz and Gang Chen, recent graduates Qiyang Lu PhD ’18 and Samuel Huberman PhD ’18, and six others at MIT and at Brookhaven National Laboratory.</p> <p>Thermal conductivity describes how well heat can transfer through a material. For example, it’s the reason you can easily pick up a hot frying pan with a wooden handle, because of wood’s low thermal conductivity, but you might get burned picking up a similar frying pan with a metal handle, which has high thermal conductivity.</p> <p>The researchers used a material called strontium cobalt oxide (SCO), which can be made in the form of thin films. By adding oxygen to SCO in a crystalline form called brownmillerite, thermal conductivity increased. Adding hydrogen to it caused conductivity to decrease.</p> <p>The process of adding or removing oxygen and hydrogen can be controlled simply by varying a voltage applied to the material. In essence, the process is electrochemically driven. Overall, at room temperature, the researchers found this process provided a tenfold variation in the material’s heat conduction. Such an order-of-magnitude range of electrically controllable variation has never been seen in any material before, the researchers say.</p> <p>In most known materials, thermal conductivity is invariable — wood never conducts heat well, and metals never conduct heat poorly. As such, when the researchers found that adding certain atoms into the molecular structure of a material could actually increase its thermal conductivity, it was an unexpected result. 
If anything, adding the extra atoms — or, more specifically, ions, atoms stripped of some electrons, or with excess electrons, to give them a net charge — should make conductivity worse (which, it turned out, was the case when adding hydrogen, but not oxygen).</p> <p>“It was a surprise to me when I saw the result,” Chen says. But after further studies of the system, he says, “now we have a better understanding” of why this unexpected phenomenon happens.</p> <p>It turns out that inserting oxygen ions into the structure of the brownmillerite SCO transforms it into what’s known as a perovskite structure — one that has an even more highly ordered structure than the original. “It goes from a low-symmetry structure to a high-symmetry one. It also reduces the amount of so-called oxygen vacancy defect sites. These together lead to its higher heat conduction,” Yildiz says.</p> <p>Heat is conducted readily through such highly ordered structures, while it tends to be scattered and dissipated by highly irregular atomic structures. Introducing hydrogen ions, by contrast, causes a more disordered structure.</p> <p>“We can introduce more order, which increases thermal conductivity, or we can introduce more disorder, which gives rise to lower conductivity. We could figure this out by performing computational modeling, in addition to our experiments,” Yildiz explains.</p> <p>While the thermal conductivity can be varied by about a factor of 10 at room temperature, at lower temperatures the variation is even greater, she adds.</p> <p>The new method makes it possible to continuously vary that degree of order, in both directions, simply by varying a voltage applied to the thin-film material. The material is either immersed in an ionic liquid (essentially a liquid salt) or in contact with a solid electrolyte that supplies either negative oxygen ions or positive hydrogen ions (protons) into the material when the voltage is turned on.
In the liquid electrolyte case, the source of oxygen and hydrogen is hydrolysis of water from the surrounding air.</p> <p>“What we have shown here is really a demonstration of the concept,” Yildiz explains. The fact that they require the use of a liquid electrolyte medium for the full range of hydrogenation and oxygenation makes this version of the system “not easily applicable to an all-solid-state device,” which would be the ultimate goal, she says. Further research will be needed to produce a more practical version. “We know there are solid-state electrolyte materials” that could theoretically be substituted for the liquids, she says. The team is continuing to explore these possibilities, and has demonstrated working devices with solid electrolytes as well.</p> <p>Chen says “there are many applications where you want to regulate heat flow.” For example, for energy storage in the form of heat, such as from a solar-thermal installation, it would be useful to have a container that could be highly insulating to retain the heat until it’s needed, but which then could be switched to be highly conductive when it comes time to retrieve that heat. “The holy grail would be something we could use for energy storage,” he says. “That’s the dream, but we’re not there yet.”</p> <p>But this finding is so new that there may also be a variety of other potential uses. This approach, Yildiz says, “could open up new applications we didn’t think of before.” And while the work was initially confined to the SCO material, “the concept is applicable to other materials, because we know we can oxygenate or hydrogenate a range of materials electrically, electrochemically,” she says. 
In addition, although this research focused on changing the thermal properties, the same process actually has other effects as well, Chen says: “It not only changes thermal conductivity, but it also changes optical properties.”</p> <p>“This is a truly innovative and novel way for using ion insertion and extraction in solids to tune or switch thermal conductivity,” says Juergen Fleig, a professor of chemical technology and analytics at the University of Vienna, Austria, who was not involved in this work. “The measured effects (caused by two phase transitions) are not only quite large but also bi-directional, which is exciting. I’m also impressed that the processes work so well at room temperature, since such oxide materials are usually operated at much higher temperatures.”</p> <p>Yongjie Hu, an associate professor of mechanical and aerospace engineering at the University of California at Los Angeles, who also was not involved in this work, says “Active control over thermal transport is fundamentally challenging. This is a very exciting study and represents an important step to achieve the goal. It is the first report that has looked in detail at the structures and thermal properties of tri-state phases, and may open up new venues for thermal management and energy applications.”</p> <p>The research team also included Hantao Zhang, Qichen Song, Jayue Wang and Gulin Vardar at MIT, and Adrian Hunt and Iradwikanari Waluyo at Brookhaven National Laboratory in Upton, New York. The work was supported by the National Science Foundation and the U.S. 
Department of Energy.</p> Researchers found that strontium cobalt oxide (SCO) naturally occurs in an atomic configuration called brownmillerite (center), but when oxygen ions are added to it (right), it becomes more orderly and more heat conductive, and when hydrogen ions are added (left) it becomes less orderly and less heat conductive.Image: courtesy of the researchersMechanical engineering, Nuclear science and engineering, DMSE, Materials Science and Engineering, Research, Energy, Renewable energy, Nanoscience and nanotechnology, School of Engineering Mirrored chip could enable handheld dark-field microscopes Simple chip powered by quantum dots allows standard microscopes to visualize difficult-to-image biological organisms. Mon, 24 Feb 2020 11:29:35 -0500 Jennifer Chu | MIT News Office <p>Do a Google search for dark-field images, and you’ll discover a beautifully detailed world of microscopic organisms set in bright contrast to their midnight-black backdrops. Dark-field microscopy can reveal intricate details of translucent cells and aquatic organisms, as well as faceted diamonds and other precious stones that would otherwise appear very faint or even invisible under a typical bright-field microscope.</p> <p>Scientists generate dark-field images by fitting standard microscopes with often costly components to illuminate the sample stage with a hollow, highly angled cone of light. When a translucent sample is placed under a dark-field microscope, the cone of light scatters off the sample’s features to create an image of the sample on the microscope’s camera, in bright contrast to the dark background.</p> <p>Now, engineers at MIT have developed a small, mirrored chip that helps to produce dark-field images, without dedicated expensive components. The chip is slightly larger than a postage stamp and as thin as a credit card. 
When placed on a microscope’s stage, the chip emits a hollow cone of light that can be used to generate detailed dark-field images of algae, bacteria, and similarly translucent tiny objects.</p> <p><img alt="Credit: Cecile Chazot" src="/sites/" style="width: 570px; height: 380px;" /></p> <p><em style="font-size: 10px;">Credit: Cecile Chazot</em></p> <p>The new optical chip can be added to standard microscopes as an affordable, downsized alternative to conventional dark-field components. The chip may also be fitted into hand-held microscopes to produce images of microorganisms in the field.</p> <p>“Imagine you’re a marine biologist,” says Cecile Chazot, a graduate student in MIT’s Department of Materials Science and Engineering. “You normally have to bring a big bucket of water into the lab to analyze. If the sample is bad, you have to go back out to collect more samples. If you have a hand-held, dark-field microscope, you can check a drop in your bucket while you’re out at sea, to see if you can go home or if you need a new bucket.”</p> <p>Chazot is the lead author of a paper detailing the team’s new design, published today in the journal <em>Nature Photonics. </em>Her co-authors are Sara Nagelberg, Igor Coropceanu, Kurt Broderick, Yunjo Kim, Moungi Bawendi, Peter So, and Mathias Kolle of MIT, along with Christopher Rowlands at Imperial College London and Maik Scherer of Papierfabrik Louisenthal GmbH in Germany.</p> <p><strong>Forever fluorescent</strong></p> <p>In an ongoing effort, members of Kolle’s lab are designing materials and devices that exhibit long-lasting “structural colors” that do not rely on dyes or pigmentation. Instead, they employ nano- and microscale structures that reflect and scatter light much like tiny prisms or soap bubbles. 
They can therefore appear to change colors depending on how their structures are arranged or manipulated.</p> <p>Structural color can be seen in the iridescent wings of beetles and butterflies, the feathers of birds, as well as fish scales and some flower petals. Inspired by examples of structural color in nature, Kolle has been investigating various ways to manipulate light from a microscopic, structural perspective.</p> <p>As part of this effort, he and Chazot designed a small, three-layered chip that they originally intended to use as a miniature laser. The middle layer functions as the chip’s light source, made from a polymer infused with quantum dots — tiny nanoparticles that emit light when excited with fluorescent light. Chazot likens this layer to a glowstick bracelet, where the reaction of two chemicals creates the light; except here no chemical reaction is needed — just a bit of blue light will make the quantum dots shine in bright orange and red colors.</p> <p>“In glowsticks, eventually these chemicals stop emitting light,” Chazot says. “But quantum dots are stable. If you were to make a bracelet with quantum dots, they would be fluorescent for a very long time.”</p> <p>Over this light-generating layer, the researchers placed a Bragg mirror — a structure made from alternating nanoscale layers of transparent materials, with distinctly different refractive indices, meaning the degrees to which the layers refract incoming light.</p> <p>The Bragg mirror, Kolle says, acts as a sort of “gatekeeper” for the photons that are emitted by the quantum dots. The arrangement and thicknesses of the mirror’s layers are such that it lets photons escape up and out of the chip, but only if the light arrives at the mirror at high angles. Light arriving at lower angles is bounced back down into the chip.</p> <p>The researchers added a third feature below the light-generating layer to recycle the photons initially rejected by the Bragg mirror. 
This third layer is molded out of solid, transparent epoxy coated with a reflective gold film and resembles a miniature egg crate, pocked with small wells, each measuring about 4 microns in diameter.</p> <p>Chazot lined this surface with a thin layer of highly reflective gold — an optical arrangement that acts to catch any light that reflects back down from the Bragg mirror, and ping-pong that light back up, likely at a new angle that the mirror would let through. The design for this third layer was inspired by the microscopic scale structure in the wings of the <em>Papilio</em> butterfly.</p> <p>“The butterfly’s wing scales feature really intriguing egg crate-like structures with a Bragg mirror lining, which gives them their iridescent color,” Chazot says.</p> <p><strong>An optical shift</strong></p> <p>The researchers originally designed the chip as an array of miniature laser sources, thinking that its three layers could work together to create tailored laser emission patterns.</p> <p>“The initial project was to build an assembly of individually switchable coupled microscale lasing cavities,” says Kolle, associate professor of mechanical engineering at MIT. “But when Cecile made the first surfaces we realized that they had a very interesting emission profile, even without the lasing.”</p> <p>When Chazot looked at the chip under a microscope, she noticed something curious: The chip emitted photons only at high angles, forming a hollow cone of light. It turns out the Bragg mirror had just the right layer thicknesses to let photons pass through only when they arrived at the mirror at a certain (high) angle.</p> <p>“Once we saw this hollow cone of light, we wondered: ‘Could this device be useful for something?’” Chazot says. 
“And the answer was: Yes!”</p> <p>As it turns out, they had incorporated the capabilities of multiple expensive, bulky dark-field microscope components into a single small chip.</p> <p>Chazot and her colleagues used well-established theoretical optical concepts to model the chip’s optical properties and optimize its performance for this newly found task. They fabricated multiple chips, each producing a hollow cone of light with a tailored angular profile. &nbsp;</p> <p>“Regardless of the microscope you’re using, among all these tiny little chips, one will work with your objective,” Chazot says.</p> <p>To test the chips, the team collected samples of seawater as well as nonpathogenic strains of the bacteria <em>E. coli</em>, and placed each sample on a chip that they set on the platform of a standard bright-field microscope. With this simple setup, they were able to produce clear and detailed dark-field images of individual bacterial cells, as well as microorganisms in seawater, which were close to invisible under bright-field illumination.</p> <p>In the near future, these dark-field illumination chips could be mass-produced and tailored for even simple, high school-grade microscopes, to enable imaging of low-contrast, translucent biological samples. In combination with other work in Kolle’s lab, the chips may also be incorporated into miniaturized dark-field imaging devices for point-of-care diagnostics and bioanalytical applications in the field. &nbsp;</p> <p>“This is a wonderful story of discovery-based innovation that has the potential for widespread impact in science and education through outfitting garden-variety microscopes with this technology,” says James Burgess, program manager for the Institute for Soldier Nanotechnologies, Army Research Office. 
“Additionally, the ability to obtain superior contrast in imaging of biological and inorganic materials under optical magnification could be incorporated into systems for identification of new biological threats and toxins in Army Medical Center laboratories and on the battlefield.”</p> <p>This research was supported, in part, by the National Science Foundation, the U.S. Army Research Office, and the National Institutes of Health.</p> Imaging, Microscopy, Materials Science and Engineering, Mechanical engineering, Quantum Dots, Bacteria, Biology, Microbes, Photonics, Nanoscience and nanotechnology, Research, DMSE, School of Engineering, National Science Foundation (NSF), National Institutes of Health (NIH) MIT continues to advance toward greenhouse gas reduction goals Investments in energy efficiency projects, sustainable design elements essential as campus transforms. Fri, 21 Feb 2020 14:20:01 -0500 Nicole Morell | Office of Sustainability <p>At MIT, making a better world often starts on campus. That’s why, as the Institute works to find solutions to complex global problems, MIT has taken important steps to grow and transform its physical campus: adding new capacity, capabilities, and facilities to better support student life, education, and research. But growing and transforming the campus relies on resource and energy use — use that can exacerbate the complex global problem of climate change. This raises the question: How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change?</p> <p>It’s a question — and a challenge — that MIT is committed to tackling.</p> <p><strong>Tracking toward 2030 goals</strong></p> <p>Guided by the <a href="" target="_blank">2015 Plan for Action on Climate Change</a>, MIT continues to work toward a goal of a minimum of 32 percent reduction in campus greenhouse gas emissions by 2030. 
As reported in the MIT Office of Sustainability’s (MITOS) <a href="!2019%20ghg%20emissions" target="_blank">climate action plan update</a>, campus greenhouse gas (GHG) emissions rose by 2 percent in 2019, in part due to a longer cooling season as well as the new MIT.nano facility coming fully online. Despite this, overall net emissions are 18 percent below the 2014 baseline, and MIT continues to track toward its 2030 goal.</p> <p>Joe Higgins, vice president for campus services and stewardship, is optimistic about MIT’s ability to not only meet, but exceed this current goal. “With this growth [to campus], we are discovering unparalleled opportunities to work toward carbon neutrality by collaborating with key stakeholders across the Institute, tapping into the creativity of our faculty, students, and researchers, and partnering with industry experts. We are committed to making steady progress toward achieving our GHG reduction goal,” he says.</p> <p><strong>New growth to campus </strong></p> <p>This past year marked the first full year of operation for the new MIT.nano facility. This facility includes many energy-intensive labs that necessitate high ventilation rates to meet the requirements of a nanotechnology cleanroom fabrication laboratory. As a result, the facility’s energy demands and GHG emissions can be much higher than those of a traditional science building. In addition, this facility — among others — uses specialty research gases that can act as potent greenhouse gases. Still, the 214,000-square-foot building has a number of sustainable, high-energy-efficiency design features, including an innovative air filtering process to support cleanroom standards while minimizing energy use. 
For these sustainable design elements, the facility was recognized with an International Institute for Sustainable Laboratories (I2SL) 2019 <a href="" target="_blank">Go Beyond Award</a>.</p> <p>In 2020, MIT.nano will be joined by new residential and multi-use buildings in both West Campus and Kendall Square, with the Vassar Street Residence and Kendall Square Sites 4 and 5 set to be completed. In keeping with MIT’s target for LEED v4 Gold Certification for new projects, these buildings were designed for high energy efficiency to minimize emissions and include a number of other sustainability measures, from green roofs to high-performance building envelopes. With new construction on campus, integrated design processes allow for sustainability and energy efficiency strategies to be adopted at the outset.</p> <p><strong>Energy efficiency on an established campus</strong></p> <p>For years, MIT has been keenly focused on increasing the energy efficiency and reducing emissions of its existing buildings, but as the campus grows, reducing emissions of current buildings through deep energy enhancements is an increasingly important part of offsetting emissions from new growth.</p> <p>To best accomplish this, the Department of Facilities — in close collaboration with the Office of Sustainability — has developed and rolled out a governance structure that relies on cross-functional teams to create new standards and policies, identify opportunities, develop projects, and assess progress relevant to building efficiency and emissions reduction. 
“Engaging across campus and across departments is essential to building out MIT’s full capacity to advance emissions reductions,” explains Director of Sustainability Julie Newman.</p> <p>These cross-functional teams — which include Campus Construction; Campus Services and Maintenance; Environment, Health, and Safety; Facilities Engineering; the Office of Sustainability; and Utilities — have focused on a number of strategies in the past year, including both building-wide and targeted energy strategies that have revealed priority candidates for energy retrofits to drive efficiency and minimize emissions.</p> <p>Carlo Fanone, director of facilities engineering, explains that “the cross-functional teams play an especially critical role at MIT, since we are a district energy campus. We supply most of our own energy, we distribute it, and we are the end users, so the teams represent a holistic approach that looks at all three of these elements equally — supply, distribution, and end-use — and considers energy solutions that address any or all of these elements.” Fanone notes that MIT has also identified 25 facilities on campus that have a high energy-use intensity and a high greenhouse gas emissions footprint. These 25 buildings account for up to 50 percent of energy consumption on the MIT campus. “Going forward,” Fanone says, “we are focusing our energy work on these buildings and on other energy enhancements that could have a measurable impact on the progress toward MIT’s 2030 goal.”</p> <p>Armed with these data, the Department of Facilities last year led retrofits for smart lighting and mechanical systems upgrades, as well as smart building management systems, in a number of buildings across campus. 
These building audits will continue to guide future projects focused on improving and optimizing energy elements such as heat recovery, lighting, and building systems controls.</p> <p>In addition to building-level efficiency improvements, MIT’s <a href="">Central Utilities Plant</a> upgrade is expected to contribute significantly to the reduction of on-campus emissions in upcoming years. The upgraded plant — set to be completed this year — will incorporate more efficient equipment and state-of-the-art controls. Between this upgrade, a fuel switch improvement made in 2015, and the building-level energy improvements, regulated pollutant emissions on campus are expected to fall by more than 25 percent and campus greenhouse gas emissions by 10 percent from 2014 levels, helping to offset a projected 10 percent increase in greenhouse gas emissions due to energy demands created by new growth.</p> <p><strong>Climate research and action on campus</strong></p> <p>As MIT explores energy efficiency opportunities, the campus itself plays an important role as an incubator for new ideas.</p> <p>MITOS director Julie Newman and professor of mechanical engineering Timothy Gutowski are once again teaching 11.S938 / 2.S999 (Solving for Carbon Neutrality at MIT) this semester. “The course, along with others that have emerged across campus, provides students the opportunity to devise ideas and solutions for real-world challenges while connecting them back to campus. It also gives the students a sense of ownership on this campus, sharing ideas to chart the course for carbon-neutral MIT,” Newman says.</p> <p>Also on campus, a new energy storage project is being developed to test the feasibility and scalability of using different battery storage technologies to redistribute electricity provided by variable renewable energy. 
Funded by a Campus Sustainability Incubator Fund grant and led by Jessika Trancik, associate professor in the Institute for Data, Systems, and Society, the project aims to test software approaches to synchronizing energy demand and supply and evaluate the performance of different energy-storage technologies against these use cases. It has the benefit of connecting on-campus climate research with climate action. “Building this storage testbed, and testing technologies under real-world conditions, can inform new algorithms and battery technologies and act as a multiplier, so that the lessons we learn at MIT can be applied far beyond campus,” says Trancik of the project.</p> <p><strong>Supporting on-campus efforts</strong></p> <p>MIT’s work toward emissions reductions already extends beyond campus as the Institute continues to benefit from its 25-year commitment to purchase electricity generated through its <a href="" target="_self">Summit Farms Power Purchase Agreement</a> (PPA), which enabled the construction of a 650-acre, 60-megawatt solar farm in North Carolina. Through the purchase of 87,300 megawatt-hours of solar power, MIT was able to offset over 30,000 metric tons of greenhouse gas emissions from its on-campus operations in 2019.</p> <p>The Summit Farms PPA model has provided inspiration for similar projects around the country and has also demonstrated what MIT can accomplish through partnership. 
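As a rough sanity check on the Summit Farms figures quoted above (87,300 megawatt-hours offsetting over 30,000 metric tons), the implied avoided-emissions factor of the displaced grid electricity can be computed directly. This is an illustrative back-of-envelope calculation using only the numbers in this article, not an official carbon accounting:

```python
# Figures quoted above for MIT's 2019 Summit Farms purchase.
mwh_purchased = 87_300   # megawatt-hours of solar power purchased
tons_offset = 30_000     # metric tons of GHG emissions offset (stated as "over 30,000")

# Implied avoided-emissions factor of the electricity being displaced.
tons_per_mwh = tons_offset / mwh_purchased
print(f"~{tons_per_mwh:.2f} metric tons CO2e avoided per MWh purchased")  # ~0.34
```

The result, roughly a third of a metric ton of CO2-equivalent per megawatt-hour, is simply what the two quoted figures imply about the grid power the solar generation displaces.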
MIT continues to explore the possibility of collaborating on similar large power-purchase agreements, possibly involving other local institutions and city governments.</p> <p><strong>Looking ahead</strong></p> <p>As the campus continues to work toward reducing emissions, Fanone notes that a comprehensive approach will help MIT address the challenge of growing a campus while reducing emissions.</p> <p>“District-level energy solutions, additional renewables, coupled with energy enhancements within our buildings, will allow MIT to offset growth and meet our 2030 GHG goals,” says Fanone. Adds Newman, “It’s an exciting time that MIT is now positioned to put the steps in place to respond to this global crisis at the local level.”</p> How can an institution like MIT grow, and simultaneously work to lessen its greenhouse gas emissions and contributions to climate change?Photo: Maia Weinstock Sustainability, MIT.nano, Facilities, Campus buildings and architecture, Campus development, IDSS, Mechanical engineering, Climate change, Energy, Greenhouse gases, Community Deblina Sarkar joins the MIT Media Lab faculty New research group aims to bridge the gap between nanotechnology and synthetic biology. Wed, 19 Feb 2020 12:45:01 -0500 MIT Media Lab <p>Engineer and physicist Deblina Sarkar has been appointed as an assistant professor and AT&amp;T Career Development Chair Professor in the Media Arts and Sciences program at the MIT Media Lab. 
Her research group, <a href="">Nano-Cybernetic Biotrek</a>, is a novel interdisciplinary effort, bringing together engineering, applied physics, and biology with two distinct goals: first, to develop disruptive technologies for ultra-low power nanoelectronic computation; and second, to merge such next-generation technologies with living matter to create new paradigms for life-machine symbiosis.&nbsp;</p> <p>Sarkar earned her PhD in nanoelectronics, and then made what she calls a “drastic decision” to shift from electrical engineering to neuroscience. “During the last year of my PhD I realized that I wanted to diversify — the brain was on my mind, as was physics.” She applied for postdocs and research positions all over the country, but couldn’t find a good fit until she learned about Ed Boyden’s Synthetic Neurobiology group. “Most said it would be too challenging. Ed’s response was unique. I told him my background was totally different to what he does, but that I wanted to learn and could bring fresh ideas. Ed wrote back in five minutes saying sure, let’s talk.”</p> <p>Postdoc work with Boyden followed, during which time Sarkar applied her knowledge of physics and engineering to develop technology that achieves super-resolution mapping of the brain’s building blocks using a conventional optical microscope with biomolecular retention, allowing users to decipher the nanoscale structure of biomolecules not otherwise accessible with existing technologies.&nbsp;</p> <p>“First as a graduate student, and then as a postdoc at MIT, Deblina demonstrated the fantastic ability to develop new technologies that confront major problems. She fuses together interdisciplinary thinking about biology, health, physics, and devices in a new way,” says Boyden. 
“I'm excited about her future work, which will boldly stitch together these disciplines to result in new technologies that change how living systems interface to machines.”</p> <p>The freedom to think in enterprising and unusual ways about fields with little historic overlap excited Sarkar, and contributed to her decision to continue her work at the Media Lab. Above all, she wanted her group’s mission to encompass a sense of adventure. And the name, Nano-Cybernetic Biotrek? She knows it’s a mouthful, but it was important to Sarkar that the name capture the spirit of what she’s trying to do.</p> <p>“If I had gone to a different department in a different school, my group would definitely have a more boring name,” she laughs. “Here, I felt I had ample room for creativity. If I break down the group name: nano, of course, because we are building nanoscale devices. Cybernetic, in the case of my group, means to use technology to control either inanimate computing systems or biological systems or their hybrids. Bio stands for biology. Trek, because the goal of the group is to set foot into untraversed territories of science — it’s an adventure.”</p> <p>So, how does one conduct research in Nano-Cybernetic Biotrek? What does a project look like? Current research projects include <a href="">developing nanoelectronic transistors using meta-materials</a> to achieve extreme energy efficiency and computing performance; creating next-generation nanomachines that can effectively camouflage and trick the body into thinking that it is its own part, causing a paradigm shift in <a href="">life-machine synergism</a>; building <a href="">nanoantennae</a> that can non-invasively and remotely monitor and modulate biological systems; <a href="">nanoscale mapping</a> of bio-molecular building blocks of the brain; and ultra-sensitive electrical biosensors <a href="">for point-of-care applications</a>.&nbsp;</p> <p>“The thing is, biology and nature, they have optimized certain things really well. 
In my postdoc research, I showed how the arrangement of biomolecules in the brain influences the way information is processed,” says Sarkar. “With biology, the difficulty is that it doesn’t provide us with the opportunity to build something from scratch. We’re limited by what nature has built already, which has been optimized for a certain function, which is not necessarily what we want. In contrast, the versatility of electronics is that they can be built from scratch to perform functions beyond the capabilities of biology. Our long-term goal is to achieve seamless integration of nanoelectronics into our biological systems to incorporate functionalities not otherwise enabled by biology — thus transcending existing biological limitations.”&nbsp;&nbsp;</p> <p>The group starts each research project with feasibility studies, then performs more-rigorous calculations to create simulations to arrive at a robust design. Then the design is brought to nanofab to build the experimental infrastructure.</p> <p>Sarkar looks for students and researchers who are excited about contributing to the adventurous and interdisciplinary ethos of the group. “My students and postdocs all have very different backgrounds: materials science, electrical, mechanical, and biological engineering. The main thing I look for in my group members is, whatever research they did previously, they are strong in their fundamentals. I believe someone who has done extremely well in their own field can also come to a new field and contribute. None of my students are doing exactly what they did in their undergrad. They contribute their expertise to new types of projects. 
I believe that someone who is enthusiastic and passionate can excel in any field!”</p> Engineer and physicist Deblina Sarkar has been appointed as an assistant professor at the MIT Media Lab, where she will lead the Nano-Cybernetic Biotrek group.Photo Courtesy of the MIT Media Lab.Media Lab, Nanoscience and nanotechnology, Synthetic biology, Faculty, School of Architecture and Planning Correcting the “jitters” in quantum devices A new study suggests a path to more efficient error correction, which may help make quantum computers and sensors more practical. Tue, 18 Feb 2020 00:00:00 -0500 David L. Chandler | MIT News Office <p>Labs around the world are racing to develop new computing and sensing devices that operate on the principles of quantum mechanics and could offer dramatic advantages over their classical counterparts. But these technologies still face several challenges, and one of the most significant is how to deal with “noise” — random fluctuations that can eradicate the data stored in such devices.</p> <p>A new approach developed by researchers at MIT could provide a significant step forward in quantum error correction. The method involves fine-tuning the system to address the kinds of noise that are the most likely, rather than casting a broad net to try to catch all possible sources of disturbance.</p> <p>The analysis is described in the journal <em>Physical Review Letters</em>, in a paper by MIT graduate student David Layden, postdoc Mo Chen, and professor of nuclear science and engineering Paola Cappellaro.</p> <p>“The main issues we now face in developing quantum technologies are that current systems are small and noisy,” says Layden. Noise, meaning unwanted disturbance of any kind, is especially vexing because many quantum systems are inherently highly sensitive, a feature underlying some of their potential applications.</p> <p>And there’s another issue, Layden says, which is that quantum systems are affected by any observation. 
So, while one can detect that a classical system is drifting and apply a correction to nudge it back, things are more complicated in the quantum world. “What's really tricky about quantum systems is that when you look at them, you tend to collapse them,” he says.</p> <p>Classical error correction schemes are based on redundancy. For example, in a communication system subject to noise, instead of sending a single bit (1 or 0), one might send three copies of each (111 or 000). Then, if the three bits don’t match, that shows there was an error. The more copies of each bit get sent, the more effective the error correction can be.</p> <p>The same essential principle could be applied to adding redundancy in quantum bits, or “qubits.” But, Layden says, “If I want to have a high degree of protection, I need to devote a large part of my system to doing these sorts of checks. And this is a nonstarter right now because we have fairly small systems; we just don’t have the resources to do particularly useful quantum error correction in the usual way.” So instead, the researchers found a way to target the error correction very narrowly at the specific kinds of noise that were most prevalent.</p> <p>The quantum system they’re working with consists of carbon nuclei near a particular kind of defect in a diamond crystal called a nitrogen vacancy center. These defects behave like single, isolated electrons, and their presence enables the control of the nearby carbon nuclei.</p> <p>But the team found that the overwhelming majority of the noise affecting these nuclei came from one single source: random fluctuations in the nearby defects themselves. This noise source can be accurately modeled, and suppressing its effects could have a major impact, as other sources of noise are relatively insignificant.</p> <p>“We actually understand quite well the main source of noise in these systems,” Layden says. 
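The classical redundancy scheme described above (send three copies of each bit and let a majority vote outvote a single flipped copy) can be sketched in a few lines. This is purely an illustration of the classical repetition-code idea, not the quantum protocol developed in the paper:

```python
def encode(bit):
    """Repetition code: protect one bit by sending three copies (0 -> [0, 0, 0])."""
    return [bit] * 3

def decode(received):
    """Majority vote: a single flipped copy is outvoted by the other two."""
    return 1 if sum(received) >= 2 else 0

# One corrupted copy is corrected; two flips would still fool the vote.
assert decode(encode(1)) == 1
assert decode([1, 0, 1]) == 1  # middle copy flipped by noise, still decoded as 1
assert decode([0, 0, 1]) == 0
```

Sending more copies of each bit makes the vote more robust, which is exactly the resource cost Layden describes: stronger protection means devoting more of the system to redundancy.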
"So we don't have to cast a wide net to catch every hypothetical type of noise."</p> <p>The team came up with a different error correction strategy, tailored to counter this particular, dominant source of noise. As Layden describes it, the noise comes from “this one central defect, or this one central ‘electron,’ which has a tendency to hop around at random. It jitters.”</p> <p>That jitter, in turn, is felt by all those nearby nuclei, in a predictable way that can be corrected.</p> <p>“The upshot of our approach is that we’re able to get a fixed level of protection using far fewer resources than would otherwise be needed,” he says. “We can use a much smaller system with this targeted approach.”</p> <p>The work so far is theoretical, and the team is actively working on a lab demonstration of this principle in action. If it works as expected, this could make up an important component of future quantum-based technologies of various kinds, the researchers say, including quantum computers that could potentially solve previously unsolvable problems, quantum communications systems that could be immune to snooping, and highly sensitive sensor systems.</p> <p>“This is a component that could be used in a number of ways,” Layden says. “It’s as though we’re developing a key part of an engine. We’re still a ways from building a full car, but we’ve made progress on a critical part.”</p> <p>"Quantum error correction is the next challenge for the field," says Alexandre Blais, a professor of physics at the University of Sherbrooke, in Canada, who was not associated with this work.
"The complexity of current quantum error correcting codes is, however, daunting as they require a very large number of qubits to robustly encode quantum information."</p> <p>Blais adds, "We have now come to realize that exploiting our understanding of the devices in which quantum error correction is to be implemented can be very advantageous.&nbsp;This work makes an important contribution in this direction by showing that a common type of error can be corrected for in a much more efficient manner than expected. For quantum computers to become practical we need more ideas like this.​"</p> <p>The research was supported by the U.S. Army Research Office and the National Science Foundation.</p> In a diamond crystal, three carbon atom nuclei (shown in blue) surround an empty spot called a nitrogen vacancy center, which behaves much like a single electron (shown in red). The carbon nuclei act as quantum bits, or qubits, and it turns out the primary source of noise that disturbs them comes from the jittery “electron” in the middle. By understanding the single source of that noise, it becomes easier to compensate for it, the researchers found.Image: David LaydenStudents, Research, Graduate, postdoctoral, School of Engineering, Nuclear science and engineering, Quantum computing, Research Laboratory of Electronics, Nanoscience and nanotechnology, National Science Foundation (NSF) SENSE.nano awards seed grants in optoelectronics, interactive manufacturing The mission of SENSE.nano is to foster the development and use of novel sensors, sensing systems, and sensing solutions. Thu, 13 Feb 2020 16:40:01 -0500 MIT.nano <p>SENSE.nano has announced the recipients of the third annual SENSE.nano seed grants. 
This year’s grants serve to advance innovations in sensing technologies for augmented and virtual realities (AR/VR) and advanced manufacturing systems.</p> <p>A center of excellence powered by MIT.nano, SENSE.nano received substantial interest in its 2019 call for proposals, making for stiff competition. Proposals were reviewed and evaluated by a committee of thought leaders from industry and academia, and were selected for funding after significant discussion. Ultimately, two projects were awarded $75,000 each to further research related to detecting movement in molecules and monitoring machine health.</p> <p>“SENSE.nano strives to convey the breadth and depth of sensing research at MIT,” says Brian Anthony, co-leader of SENSE.nano, associate director of MIT.nano, and a principal research scientist in the Department of Mechanical Engineering. “As we work to grow SENSE.nano’s research footing and to attract partners, it is encouraging to know that so much important research — in sensors, sensor systems, and sensor science and engineering — is taking place at the Institute.”</p> <p>The projects receiving grants are:</p> <p><strong>P. Donald Keathley and Karl Berggren: Nanostructured optical-field samplers for visible to near-infrared time-domain spectroscopy</strong></p> <p>Research Scientist Phillip “Donnie” Keathley and Professor Karl Berggren from the Department of Electrical Engineering and Computer Science are developing a field-sampling technique using nanoscale structures and light waves to sense vibrational motion of molecules. Keathley is a member of Berggren’s quantum nanostructures and nanofabrication group in the Research Laboratory of Electronics (RLE).
The two are investigating an all-on-chip nanoantenna device for sampling weak, sub-femtojoule-level electric fields in the near-infrared and visible spectra.</p> <p>Current technology for sampling these spectra of optical energy requires a large apparatus — there is no compact device with enough sensitivity to detect the low-energy signals. Keathley and Berggren propose using plasmonic nanoantennas for measuring low-energy pulses. This technology could have significant impacts on the medical and food-safety industries by revolutionizing the accurate detection and identification of chemicals and bio-chemicals.</p> <p><strong>Jeehwan Kim: Interactive manufacturing enabled by simultaneous sensing and recognition</strong></p> <p>Jeehwan Kim, associate professor with a dual appointment in mechanical engineering and materials science and engineering, proposes an ultra-sensitive sensor system using neuromorphic chips to improve advanced manufacturing through real-time monitoring of machines. Machine failures compromise productivity and increase costs. Sensors that can instantly process data to provide real-time feedback would be a valuable tool for preventive maintenance of factory machines.</p> <p>Kim’s group, also part of RLE, aims to develop single-crystalline gallium nitride sensors that, when connected to AI chips, will create a feedback loop with the factory machines. Failure patterns would be recognized by the AI hardware, creating an intelligent manufacturing system that can predict and prevent failures. These sensors will have the sensitivity to navigate noisy factory environments, be small enough to form dense arrays, and have the power efficiency to be used on a large number of manufacturing machines.</p> <p>The mission of SENSE.nano is to foster the development and use of novel sensors, sensing systems, and sensing solutions in order to provide previously unimaginable insight into the condition of our world.
Two new calls for seed grant proposals will open later this year in conjunction with the Immersion Lab NCSOFT collaboration and then with the SENSE.nano 2020 symposium.</p> <p>In addition to seed grants and the annual conference, SENSE.nano recently launched Talk SENSE — a monthly series for MIT students to further engage with these topics and connect with experts working in sensing technologies.</p> A center of excellence powered by MIT.nano, SENSE.nano received substantial interest in its 2019 call for proposals, making for stiff competition.Photo: David SellaMIT.nano, Mechanical engineering, Electrical engineering and computer science (EECS), Materials Science and Engineering, Nanoscience and nanotechnology, Awards, honors and fellowships, Augmented and virtual reality, Computer science and technology, Artificial intelligence, Research, Funding, Grants, Sensors, School of Engineering, Research Laboratory of Electronics Bubble-capturing surface helps get rid of foam Bubbly buildup can hinder many industrial processes, but a new method can reduce or even eliminate it. Tue, 11 Feb 2020 23:59:59 -0500 David L. Chandler | MIT News Office <p>In many industrial processes, such as in bioreactors that produce fuels or pharmaceuticals, foam can get in the way. Frothy bubbles can take up a lot of space, limiting the volume available for making the product and sometimes gumming up pipes and valves or damaging living cells. Companies spend an estimated $3 billion a year on chemical additives called defoamers, but these can affect the purity of the product and may require extra processing steps for their removal.</p> <p>Now, researchers at MIT have come up with a simple, inexpensive, and completely passive system for reducing or eliminating the foam buildup, using bubble-attracting sheets of specially textured mesh that make bubbles collapse as fast as they form. 
The new process is described in the journal <em>Advanced Materials Interfaces</em>, in <a href="" target="_blank">a paper</a> by recent graduate Leonid Rapoport PhD ’18, visiting student Theo Emmerich, and professor of mechanical engineering Kripa Varanasi.</p> <p>The new system uses surfaces the researchers call “aerophilic,” which attract and shed bubbles of air or gas in much the same way that hydrophilic (water-attracting) surfaces cause droplets of water to cling to a surface, spread out, and fall away, Varanasi explains.</p> <p>“Foams are everywhere” in industrial processes, he says, including beer brewing, paper making, oil and gas production and processing, biofuel generation, shampoo and cosmetics production, and chemical processing.</p> <p>Also, “It’s one of the main challenges in cell culture or in bioreactors,” he adds. To promote cell growth, various gases are typically diffused through the water or other liquid medium. But this can lead to a buildup of foam, and as the tiny bubbles burst they can produce shear forces that can damage or kill the cells, so controlling the foam is essential.</p> <p><img alt="" src="/sites/" style="width: 500px; height: 324px;" /></p> <p><span style="font-size:10px;"><em>In two beakers with identical streams of bubbles, inserting a piece of the new textured material developed by the MIT team (on right) causes the foam buildup at the top of the beaker to dissipate almost completely, whereas a similar material without the special surface texture (at left) leaves the foam undisturbed. Courtesy of the Varanasi Lab</em></span></p> <p>The usual way of dealing with the foam problem is by adding chemicals such as glycols or alcohols, which typically then need to be filtered out again. But that adds cost and extra processing steps, and can affect the chemistry of the product. So, the team asked, “How can you get rid of foams without having to add chemicals? 
That was our challenge,” Varanasi says.</p> <p>To tackle the problem, they captured high-speed video to study how bubbles react when they strike a surface. They found that the bubbles tend to bounce away like a rubber ball, bouncing several times before eventually sticking in place, just as droplets of liquid do when they hit a surface, only upside down. (The bubbles are rising, so they bounce downward.)</p> <p>“In order to effectively capture the impacting bubble, we had to understand how the liquid film separating it from the surface drains,” says Rapoport. “And we had to start at square one because there wasn’t even an established metric to measure how good a surface is at capturing impacting bubbles. Ultimately, we were able to understand the physics behind what causes a bubble to bounce away, and that understanding drove the design process.”</p> <p>The team came up with a flat device that has a set of carefully designed surface textures at a variety of size scales. The surface was tuned so that bubbles would adhere right away without bouncing, and quickly spread out and dissipate to make way for the next bubble instead of accumulating as foam.</p> <p>“The key to quickly capturing bubbles and controlling foam turned out to be a three-layered system with features of progressively finer sizes,” says Emmerich. These features help to trap a very thin layer of air along the surface of a material. This trapped air layer, known as a plastron, is similar to the air film held by the feathers of some diving birds, which helps keep the animals dry underwater. In this case, the plastron helps to make the bubbles stick to the surface and dissipate.</p> <p>The net effect is to reduce the time it takes for a bubble to stick to the surface by a hundredfold, Varanasi says.
In tests, the bouncing time was reduced from hundreds of milliseconds to just a few milliseconds.</p> <p>To test the idea in the lab, the team built a device containing a bubble-capturing surface and inserted it into a beaker that had bubbles rising through it. They placed that beaker next to an identical one containing foaming suds with a sheet of the same size, but without the textured material. In the beaker with the bubble-capturing surface, the foam quickly dissipated down to almost nothing, while a full layer of foam stayed in place in the other beaker.</p> <p>Such bubble-capturing surfaces could easily be retrofitted to many industrial processing facilities that currently rely on defoaming chemicals, Varanasi says. He speculated that in the longer run, such a method might even be used as a way to capture methane seeping from melting permafrost as the world warms. This could both prevent some of that potent greenhouse gas from making it into the atmosphere, and at the same time provide a source of fuel. At this point that possibility is “pie in the sky,” he says, but in principle it could work.</p> <p>Unlike many new technology developments, this system is simple enough that it could be readily implemented, Varanasi says. “It’s ready to go. … We look forward to working with industry.”</p> <p>The work was supported by the Bill and Melinda Gates Foundation.</p> In a beaker with a constant stream of bubbles, inserting a piece of the new textured material developed by the MIT team (gray object extending into the surface at top) causes the foam buildup at the top of the beaker to dissipate almost completely within ten minutes.Courtesy of the Varanasi LabResearch, Nanoscience and nanotechnology, Mechanical engineering, School of Engineering, Surface engineering Researchers develop a roadmap for growth of new solar cells Starting with higher-value niche markets and then expanding could help perovskite-based solar panels become competitive with silicon. 
Thu, 06 Feb 2020 10:57:11 -0500 David L. Chandler | MIT News Office <p>Materials called perovskites show strong potential for a new generation of solar cells, but they’ve had trouble gaining traction in a market dominated by silicon-based solar cells. Now, a study by researchers at MIT and elsewhere outlines a roadmap for how this promising technology could move from the laboratory to a significant place in the global solar market.</p> <p>The “technoeconomic” analysis shows that by starting with higher-value niche markets and gradually expanding, solar panel manufacturers could avoid the very steep initial capital costs that would be required to make perovskite-based panels directly competitive with silicon for large utility-scale installations at the outset. Rather than making a prohibitively expensive initial investment of hundreds of millions or even billions of dollars to build a plant for utility-scale production, the team found that starting with more specialized applications could be accomplished with a more realistic initial capital investment on the order of $40 million.</p> <p>The results are described in a paper in the journal <em>Joule</em> by MIT postdoc Ian Mathews, research scientist Marius Peters, professor of mechanical engineering Tonio Buonassisi, and five others at MIT, Wellesley College, and Swift Solar Inc.</p> <p>Solar cells based on perovskites — a broad category of compounds characterized by a certain arrangement of their molecular structure — could provide dramatic improvements in solar installations. Their constituent materials are inexpensive, and they could be manufactured in a roll-to-roll process, like printing a newspaper, onto lightweight and flexible backing material. This could greatly reduce costs associated with transportation and installation, although the cells still require further work to improve their durability.
Other promising new solar cell materials are also under development in labs around the world, but none has yet made inroads in the marketplace.</p> <p>“There have been a lot of new solar cell materials and companies launched over the years,” says Mathews, “and yet, despite that, silicon remains the dominant material in the industry and has been for decades.”</p> <p>Why is that the case? “People have always said that one of the things that holds new technologies back is that the expense of constructing large factories to actually produce these systems at scale is just too much,” he says. “It’s difficult for a startup to cross what’s called ‘the valley of death,’ to raise the tens of millions of dollars required to get to the scale where this technology might be profitable in the wider solar energy industry.”</p> <p>But there are a variety of more specialized solar cell applications where the special qualities of perovskite-based solar cells, such as their light weight, flexibility, and potential for transparency, would provide a significant advantage, Mathews says. By focusing on these markets initially, a startup solar company could build up to scale gradually, leveraging the profits from the premium products to expand its production capabilities over time.</p> <p>Describing the literature on perovskite-based solar cells being developed in various labs, he says, “They’re claiming very low costs. But they’re claiming it once your factory reaches a certain scale. And I thought, we’ve seen this before — people claim a new photovoltaic material is going to be cheaper than all the rest and better than all the rest. That’s great, except we need to have a plan as to how we actually get the material and the technology to scale.”</p> <p>As a starting point, he says, “We took the approach that I haven’t really seen anyone else take: Let’s actually model the cost to manufacture these modules as a function of scale. 
So if you just have 10 people in a small factory, how much do you need to sell your solar panels at in order to be profitable? And once you reach scale, how cheap will your product become?”</p> <p>The analysis confirmed that trying to leap directly into the marketplace for rooftop solar or utility-scale solar installations would require very large upfront capital investment, he says. But “we looked at the prices people might get in the internet of things, or the market in building-integrated photovoltaics. People usually pay a higher price in these markets because they’re more of a specialized product. They’ll pay a little more if your product is flexible or if the module fits into a building envelope.” Other potential niche markets include self-powered microelectronics devices.</p> <p>Such applications would make the entry into the market feasible without needing massive capital investments. “If you do that, the amount you need to invest in your company is much, much less, on the order of a few million dollars instead of tens or hundreds of millions of dollars, and that allows you to more quickly develop a profitable company,” he says.</p> <p>“It’s a way for them to prove their technology, both technically and by actually building and selling a product and making sure it survives in the field,” Mathews says, “and also, just to prove that you can manufacture at a certain price point.”</p> <p>Already, there are a handful of startup companies working to try to bring perovskite solar cells to market, he points out, although none of them yet has an actual product for sale. The companies have taken different approaches, and some seem to be embarking on the kind of step-by-step growth approach outlined by this research, he says. “Probably the company that’s raised the most money is a company called Oxford PV, and they’re looking at tandem cells,” which incorporate both silicon and perovskite cells to improve overall efficiency. 
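<p>A toy version of the scale-dependent cost model Mathews describes might look like the following sketch (all numbers here are invented for illustration; the analysis in <em>Joule</em> is far more detailed):</p>

```python
def min_price_per_watt(annual_mw, capex, fixed_opex,
                       unit_cost=0.10, capex_life_yr=10):
    # Minimum sustainable selling price ($/W): per-unit materials cost
    # plus annualized capital and fixed operating costs, spread over
    # the factory's yearly output.
    watts_per_year = annual_mw * 1e6
    annualized_capex = capex / capex_life_yr
    return unit_cost + (annualized_capex + fixed_opex) / watts_per_year

# Small specialty line: modest capex, but a high price is needed to break even.
niche = min_price_per_watt(annual_mw=1, capex=4e6, fixed_opex=1e6)

# Utility-scale plant: huge upfront capex, but a low price once at scale.
utility = min_price_per_watt(annual_mw=1000, capex=500e6, fixed_opex=20e6)

assert niche > utility  # niche customers must pay a premium
```

<p>With these invented inputs, the small line breaks even at around $1.50 per watt on a few million dollars of capital, while the sub-$0.20-per-watt price needed to compete with silicon appears only at utility scale. That gap is the step-by-step growth path the study quantifies.</p>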
Another company, Swift Solar, was started by Joel Jean PhD ’17 (who is also a co-author of this paper) and others, and is working on flexible perovskites. And there’s a company called Saule Technologies, working on printable perovskites.</p> <p>Mathews says the kind of technoeconomic analysis the team used in its study could be applied to a wide variety of other new energy-related technologies, including rechargeable batteries and other storage systems, or other types of new solar cell materials.</p> <p>“There are many scientific papers and academic studies that look at how much it will cost to manufacture a technology once it’s at scale,” he says. “But very few people actually look at how much does it cost at very small scale, and what are the factors affecting economies of scale? And I think that can be done for many technologies, and it would help us accelerate how we get innovations from lab to market.”</p> <p>The research team also included MIT alumni Sarah Sofia PhD ’19 and Sin Cheng Siah PhD ’15, Wellesley College student Erica Ma, and former MIT postdoc Hannu Laine. The work was supported by the European Union’s Horizon 2020 research and innovation program, the Martin Family Society of Fellows for Sustainability, the U.S. Department of Energy, Shell, through the MIT Energy Initiative, and the Singapore-MIT Alliance for Research and Technology.</p> Perovskites, a family of materials defined by a particular kind of molecular structure as illustrated here, have great potential for new kinds of solar cells.
A new study from MIT shows how these materials could gain a foothold in the solar marketplace.Image: Christine Daniloff, MITResearch, School of Engineering, Energy, Solar, Nanoscience and nanotechnology, Materials Science and Engineering, Mechanical engineering, National Science Foundation (NSF), Renewable energy, Alternative energy, Sustainability, Artificial intelligence, Machine learning, MIT Energy Initiative, Singapore-MIT Alliance for Research and Technology (SMART) Engineers mix and match materials to make new stretchy electronics Next-generation devices made with new “peel and stack” method may include electronic chips worn on the skin. Wed, 05 Feb 2020 13:00:00 -0500 Jennifer Chu | MIT News Office <p>At the heart of any electronic device is a cold, hard computer chip, covered in a miniature city of transistors and other semiconducting elements. Because computer chips are rigid, the electronic devices that they power, such as our smartphones, laptops, watches, and televisions, are similarly inflexible.</p> <p>Now a process developed by MIT engineers may be the key to manufacturing flexible electronics with multiple functionalities in a cost-effective way.</p> <p>The process is called “remote epitaxy” and involves growing thin films of semiconducting material on a large, thick wafer of the same material, which is covered in an intermediate layer of graphene. Once the researchers grow a semiconducting film, they can peel it away from the graphene-covered wafer and then reuse the wafer, which itself can be expensive depending on the type of material it’s made from. In this way, the team can copy and peel away any number of thin, flexible semiconducting films, using the same underlying wafer.</p> <p>In a paper published today in the journal <em>Nature</em>, the researchers demonstrate that they can use remote epitaxy to produce freestanding films of any functional material.
More importantly, they can stack films made from these different materials, to produce flexible, multifunctional electronic devices.</p> <p>The researchers expect that the process could be used to produce stretchy electronic films for a wide variety of uses, including virtual reality-enabled contact lenses, solar-powered skins that mold to the contours of your car, electronic fabrics that respond to the weather, and other flexible electronics that seemed until now to be the stuff of Marvel movies.</p> <p>“You can use this technique to mix and match any semiconducting material to have new device functionality, in one flexible chip,” says Jeehwan Kim, an associate professor of mechanical engineering at MIT. “You can make electronics in any shape.”</p> <p>Kim’s co-authors include Hyun S. Kum, Sungkyu Kim, Wei Kong, Kuan Qiao, Peng Chen, Jaewoo Shim, Sang-Hoon Bae, Chanyeol Choi, Luigi Ranno, Seungju Seo, Sangho Lee, Jackson Bauer, and Caroline Ross from MIT, along with collaborators from the University of Wisconsin at Madison, Cornell University, the University of Virginia, Penn State University, Sun Yat-Sen University, and the Korea Atomic Energy Research Institute.</p> <div class="cms-placeholder-content-video"></div> <p><strong>Buying time</strong></p> <p>Kim and his colleagues reported their <a href="">first results</a> using remote epitaxy in 2017. Then, they were able to produce thin, flexible films of semiconducting material by first placing a layer of graphene on a thick, expensive wafer made from a combination of exotic metals. They flowed atoms of each metal over the graphene-covered wafer and found the atoms formed a film on top of the graphene, in the same crystal pattern as the underlying wafer. 
The graphene provided a nonstick surface from which the researchers could peel away the new film, leaving the graphene-covered wafer, which they could reuse.</p> <p>In 2018, the team showed that they could use remote epitaxy to make semiconducting materials from metals in groups 3 and 5 of the periodic table, but not from group 4. The reason, they found, <a href="">boiled down to polarity</a>, or the respective charges between the atoms flowing over graphene and the atoms in the underlying wafer.</p> <p>Since this realization, Kim and his colleagues have tried a number of increasingly exotic semiconducting combinations. As reported in this new paper, the team used remote epitaxy to make flexible semiconducting films from complex oxides — chemical compounds made from oxygen and at least two other elements. Complex oxides are known to have a wide range of electrical and magnetic properties, and some combinations can generate a current when physically stretched or exposed to a magnetic field.</p> <p>Kim says the ability to manufacture flexible films of complex oxides could open the door to new energy-harvesting devices, such as sheets or coverings that stretch in response to vibrations and produce electricity as a result. Until now, complex oxide materials have only been manufactured on rigid, millimeter-thick wafers, with limited flexibility and therefore limited energy-generating potential.</p> <p>The researchers did have to tweak their process to make complex oxide films. They initially found that when they tried to make a complex oxide such as strontium titanate (a compound of strontium, titanium, and three oxygen atoms), the oxygen atoms that they flowed over the graphene tended to bind with the graphene’s carbon atoms, etching away bits of graphene instead of following the underlying wafer’s pattern and binding with strontium and titanium.
As a surprisingly simple fix, the researchers added a second layer of graphene.</p> <p>“We saw that by the time the first layer of graphene is etched off, oxide compounds have already formed, so elemental oxygen, once it forms these desired compounds, does not interact as heavily with graphene,” Kim explains. “So two layers of graphene buys some time for this compound to form.”</p> <p><strong>Peel and stack</strong></p> <p>The team used their newly tweaked process to make films from multiple complex oxide materials, peeling off each 100-nanometer-thin layer as it was made. They were also able to stack together layers of different complex oxide materials and effectively glue them together by heating them slightly, producing a flexible, multifunctional device.</p> <p>“This is the first demonstration of stacking multiple nanometers-thin membranes like LEGO blocks, which has been impossible because all functional electronic materials exist in a thick wafer form,” Kim says.</p> <p>In one experiment, the team stacked together films of two different complex oxides: cobalt ferrite, known to expand in the presence of a magnetic field, and PMN-PT, a material that generates voltage when stretched. When the researchers exposed the multilayer film to a magnetic field, the two layers worked together to both expand and produce a small electric current.&nbsp;</p> <p>The results demonstrate that remote epitaxy can be used to make flexible electronics from a combination of materials with different functionalities, which previously were difficult to combine into one device. In the case of cobalt ferrite and PMN-PT, each material has a different crystalline pattern. Kim says that traditional epitaxy techniques, which grow materials at high temperatures on one wafer, can only combine materials if their crystalline patterns match. 
He says that with remote epitaxy, researchers can make any number of different films, using different, reusable wafers, and then stack them together, regardless of their crystalline pattern.</p> <p>“The big picture of this work is, you can combine totally different materials in one place together,” Kim says. “Now you can imagine a thin, flexible device made from layers that include a sensor, computing system, a battery, a solar cell, so you could have a flexible, self-powering, internet-of-things stacked chip.”</p> <p>The team is exploring various combinations of semiconducting films and is working on developing prototype devices, such as something Kim is calling an “electronic tattoo” — a flexible, transparent chip that can attach and conform to a person’s body to sense and wirelessly relay vital signs such as temperature and pulse.</p> <p>“We can now make thin, flexible, wearable electronics with the highest functionality,” Kim says. “Just peel off and stack up.”</p> <p>The research was the outcome of close collaboration between the researchers at MIT and at the University of Wisconsin at Madison, which was supported by the Defense Advanced Research Projects Agency.</p> With a new technique, MIT researchers can peel and stack thin films of metal oxides — chemical compounds that can be designed to have unique magnetic and electronic properties. The films can be mixed and matched to create multi-functional, flexible electronic devices, such as solar-powered skins and electronic fabrics.Image: Felice Frankelelectronics, Graphene, Mechanical engineering, Research, Nanoscience and nanotechnology, Materials Science and Engineering, DMSE, School of Engineering New electrode design may lead to more powerful batteries An MIT team has devised a lithium metal anode that could improve the longevity and energy density of future batteries. Mon, 03 Feb 2020 10:59:59 -0500 David L. 
Chandler | MIT News Office <p>New research by engineers at MIT and elsewhere could lead to batteries that can pack more power per pound and last longer, based on the long-sought goal of using pure lithium metal as one of the battery’s two electrodes, the anode.</p> <p>The new electrode concept comes from the laboratory of Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. It is described today in the journal <em>Nature</em>, in a paper co-authored by Yuming Chen and Ziqiang Wang at MIT, along with 11 others at MIT and in Hong Kong, Florida, and Texas.</p> <p>The design is part of a concept for developing safe all-solid-state batteries, dispensing with the liquid or polymer gel usually used as the electrolyte material between the battery’s two electrodes. An electrolyte allows lithium ions to travel back and forth during the charging and discharging cycles of the battery, and an all-solid version could be safer than liquid electrolytes, which have high volatility and have been the source of explosions in lithium batteries.</p> <p>“There has been a lot of work on solid-state batteries, with lithium metal electrodes and solid electrolytes,” Li says, but these efforts have faced a number of issues.</p> <p>One of the biggest problems is that when the battery is charged up, atoms accumulate inside the lithium metal, causing it to expand. The metal then shrinks again during discharge, as the battery is used.
These repeated changes in the metal’s dimensions, somewhat like the process of inhaling and exhaling, make it difficult for the solids to maintain constant contact, and tend to cause the solid electrolyte to fracture or detach.</p> <p>Another problem is that none of the proposed solid electrolytes are truly chemically stable while in contact with the highly reactive lithium metal, and they tend to degrade over time.</p> <p>Most attempts to overcome these problems have focused on designing solid electrolyte materials that are absolutely stable against lithium metal, which turns out to be difficult.&nbsp; Instead, Li and his team adopted an unusual design that utilizes two additional classes of solids, “mixed ionic-electronic conductors” (MIEC) and “electron and Li-ion insulators” (ELI), which are absolutely chemically stable in contact with lithium metal.</p> <p>The researchers developed a three-dimensional nanoarchitecture in the form of a honeycomb-like array of hexagonal MIEC tubes, partially infused with the solid lithium metal to form one electrode of the battery, but with extra space left inside each tube. When the lithium expands in the charging process, it flows into the empty space in the interior of the tubes, moving like a liquid even though it retains its solid crystalline structure. This flow, entirely confined inside the honeycomb structure, relieves the pressure from the expansion caused by charging, but without changing the electrode’s outer dimensions or the boundary between the electrode and electrolyte. The other material, the ELI, serves as a crucial mechanical binder between the MIEC walls and the solid electrolyte layer.</p> <p>“We designed this structure that gives us three-dimensional electrodes, like a honeycomb,” Li says. 
The void spaces in each tube of the structure allow the lithium to “creep backward” into the tubes, “and that way, it doesn’t build up stress to crack the solid electrolyte.” The expanding and contracting lithium inside these tubes moves in and out, sort of like a car engine’s pistons inside their cylinders. Because these structures are built at nanoscale dimensions (the tubes are about 100 to 300 nanometers in diameter, and tens of microns in height), the result is like “an engine with 10 billion pistons, with lithium metal as the working fluid,” Li says.</p> <p>Because the walls of these honeycomb-like structures are made of chemically stable MIEC, the lithium never loses electrical contact with the material, Li says. Thus, the whole solid battery can remain mechanically and chemically stable as it goes through its cycles of use. The team has proved the concept experimentally, putting a test device through 100 cycles of charging and discharging without producing any fracturing of the solids.</p> <p><img alt="" src="/sites/" style="width: 500px; height: 440px;" /></p> <p><em><span style="font-size:10px;">Reversible Li metal plating and stripping in a carbon tubule with&nbsp;an inner diameter of 100nm. Courtesy of the researchers.</span></em></p> <p>Li says that though many other groups are working on what they call solid batteries, most of those systems actually work better with some liquid electrolyte mixed with the solid electrolyte material. “But in our case,” he says, “it’s truly all solid. There is no liquid or gel in it of any kind.”&nbsp;&nbsp;</p> <p>The new system could lead to safe anodes that weigh only a quarter as much as their conventional counterparts in lithium-ion batteries, for the same amount of storage capacity. If combined with new concepts for lightweight versions of the other electrode, the cathode, this work could lead to substantial reductions in the overall weight of lithium-ion batteries. 
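The “10 billion pistons” analogy above can be sanity-checked with a rough back-of-the-envelope calculation; the tube spacing and electrode area below are assumptions made for illustration and are not taken from the paper.

```python
# Order-of-magnitude check of the "10 billion pistons" analogy.
# Assumed (hypothetical) figures: 300 nm center-to-center tube spacing
# (tube diameter plus wall) and a 10 cm^2 electrode footprint.
tube_pitch_m = 300e-9
electrode_area_m2 = 10e-4  # 10 cm^2
tubes = electrode_area_m2 / tube_pitch_m ** 2
print(f"roughly {tubes:.0e} tubes")  # on the order of 1e10, i.e. ~10 billion
```

With these assumed numbers the density comes out near a billion tubes per square centimeter, so an electrode of a few square centimeters would indeed hold billions of nanoscale “pistons.”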
For example, the team hopes it could lead to cellphones that could be charged just once every three days, without making the phones any heavier or bulkier.</p> <p>One new concept for a lighter cathode was described by another team led by Li, in a paper that appeared last month in the journal <em>Nature Energy</em>, co-authored by MIT postdoc Zhi Zhu and graduate student Daiwei Yu. The material would reduce the use of nickel and cobalt, which are expensive, toxic, and widely used in present-day cathodes. The new cathode does not rely only on the capacity contribution from these transition metals in battery cycling. Instead, it would rely more on the redox capacity of oxygen, which is much lighter and more abundant. But in this process the oxygen ions become more mobile, which can cause them to escape from the cathode particles. The researchers used a high-temperature surface treatment with molten salt to produce a protective surface layer on particles of manganese- and lithium-rich metal-oxide, so the amount of oxygen loss is drastically reduced.</p> <p>Even though the surface layer is very thin, just 5 to 20 nanometers thick on a 400 nanometer-wide particle, it provides good protection for the underlying material. “It’s almost like immunization,” Li says, against the destructive effects of oxygen loss in batteries used at room temperature. The present versions provide at least a 50 percent improvement in the amount of energy that can be stored for a given weight, with much better cycling stability.</p> <p>The team has only built small lab-scale devices so far, but “I expect this can be scaled up very quickly,” Li says.
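The thinness of that protective layer can be put in geometric terms with a quick estimate; treating the particle as an ideal sphere is an assumption made here for illustration, since the article gives only the diameter and thickness figures.

```python
# Volume fraction occupied by a surface shell of thickness t on a
# spherical particle of diameter d: 1 - ((d/2 - t) / (d/2))**3.
d = 400e-9  # particle diameter, 400 nm (from the article)
for t in (5e-9, 20e-9):  # shell thickness, 5-20 nm (from the article)
    shell_fraction = 1 - ((d / 2 - t) / (d / 2)) ** 3
    print(f"t = {t * 1e9:.0f} nm -> shell is {shell_fraction:.0%} of particle volume")
```

Even the thickest coating in the stated range occupies well under a third of the particle’s volume, which suggests the “immunization” layer adds relatively little inactive material.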
The materials needed, mostly manganese, are significantly cheaper than the nickel or cobalt used by other systems, so these cathodes could cost as little as a fifth as much as the conventional versions.</p> <p>The research teams included researchers from MIT, Hong Kong Polytechnic University, the University of Central Florida, the University of Texas at Austin, and Brookhaven National Laboratory in Upton, New York. The work was supported by the National Science Foundation.</p> New research by engineers at MIT and elsewhere could lead to batteries that can pack more power per pound and last longer. Credit: MIT News MIT.nano receives international sustainability award “Go Beyond” Award celebrates commitment to excellence in efficiency and sustainability. Tue, 28 Jan 2020 15:20:01 -0500 MIT.nano <p>MIT.nano, the campus facility for nanoscience and nanotechnology research, has been awarded the International Institute for Sustainable Laboratories (I2SL) 2019 “Go Beyond” Award for excellence in sustainability in laboratory and other high-technology facility projects.</p> <p>In selecting the recipients, I2SL looks for projects that “go beyond the facility itself to consider shared resources, infrastructure and services, and neighboring communities, as well as contributing to increased use of energy-efficient and environmentally-sustainable designs, systems, and products.”</p> <p>Designed by Wilson HGA and completed in 2018, the 216,000 square-foot facility, located in the heart of MIT’s campus, is a shared resource for MIT faculty, students, and researchers, as well as external academic and industry users.
MIT.nano offers state-of-the-art equipment and environmental controls that would be challenging for individual labs or departments to afford or maintain on their own.</p> <p>“To meet MIT’s goal of designing the most energy-efficient academic cleanroom, we benchmarked against 16 national facilities to establish energy-use drivers and identify best-in-class measures for energy reduction,” explains Samir Srouji, design principal at Wilson HGA. “The design anticipates a 51 percent source energy savings and 50 percent reduction in CO<sub>2</sub> emissions, a true feat for a cleanroom project.”</p> <p>MIT.nano has 47,000 square feet of cleanroom suites that make up two, two-story spaces in the center of the facility. The majority of the cleanroom area under filter is rated ISO 5 (i.e., Class 100), meaning the air is&nbsp;continuously filtered and replaced every 15-30 seconds to maintain a standard that allows no more than 100 particles of 0.5 microns or larger within a cubic foot of air.</p> <p>Despite such resource-intensive technical requirements, MIT and Wilson HGA achieved high sustainability metrics by implementing 60 energy conservation measures (ECM), six of which are considered “go beyond” ECMs, meaning they are not standard practice in cleanroom design and significantly reduce energy use. 
These measures are:</p> <ul> <li>glycol “run-around” heat recovery from exhaust;</li> <li>variable-volume exhaust and make-up air;</li> <li>condenser heat recovery from sub-cooling chiller;</li> <li>100 percent filter coverage in cleanroom ceiling to lower fan static;</li> <li>variable air volume (VAV) recirculation air handling unit (RAHU), based on occupancy and particle counters; and</li> <li>reheat in RAHUs, avoiding central reheat of all make-up air.</li> </ul> <p>No other cleanroom to date has implemented more than three “go beyond” ECMs, according to Wilson HGA.</p> <p>“MIT.nano is the most technically complex building on campus with thousands of monitoring points spread throughout the facility,” says Dennis Grimard, managing director at MIT.nano. “These points help maintain MIT.nano’s sustainability goals by constantly monitoring the building’s health and operation. Significant resources have also been committed from MIT’s Department of Facilities to ensure the building continues to operate properly.”</p> <p>MIT has made increased efficiency and reduced waste a priority over the past several years, including the creation of the <a href="">Office of Sustainability</a> in 2013. One of the ways MIT is carrying out this commitment is by ensuring new buildings and renovations, from the earliest design stages, are focused on efficiency and sustainability in their energy, water, waste-handling, and other systems.</p> <p>“MIT faces the unique challenge of a growing campus paired with ambitious goals in reducing emissions while increasing investments in energy efficiency,” says Julie Newman, director of sustainability at MIT. “The MIT.nano design team boldly approached this challenge by designing a best-in-class&nbsp;particle-free lab that integrates sustainable and high-performance design standards while concurrently preparing for a changing climate.”</p> <p>MIT.nano boasts a 40 percent water use reduction and over 90 percent of construction waste was diverted. 
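The cleanroom figures quoted above translate into more familiar engineering units; the conversions below are standard (ISO 14644-1 caps an ISO 5 space at 3,520 particles of 0.5 µm or larger per cubic meter, which is what the older “Class 100” per-cubic-foot limit works out to).

```python
# Air-change rate implied by replacing the air every 15-30 seconds.
for interval_s in (15, 30):
    print(f"every {interval_s} s -> {3600 // interval_s} air changes per hour")

# Class 100 limit (100 particles >= 0.5 um per cubic foot) in SI units.
FT3_PER_M3 = 35.3147
print(f"Class 100 ~= {100 * FT3_PER_M3:.0f} particles per cubic meter")  # ~3531
```

At 120-240 air changes per hour, the cleanroom turns its air over two orders of magnitude faster than a typical office HVAC system, which is why such spaces dominate a building’s energy budget.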
The facility is on track to meet the U.S. Green Building Council’s Leadership in Energy and Environmental Design (<a href="">LEED</a>) Platinum certification. In order to reach this level, buildings must attain 80 or more points based on compliance with different aspects of sustainability. It is the highest LEED certification possible.</p> <p>The Go Beyond Award is the latest honor for MIT.nano. The building has previously received the <a href="" target="_blank">53rd&nbsp;annual Lab of the Year Award</a> from&nbsp;<em>R&amp;D Magazine</em> and the <a href="" target="_blank">2019 Education Facility Design Award of Merit</a>, presented by the American Institute of Architects Committee on Architecture for Education.</p> Designed by Wilson HGA and completed in 2018, the 216,000 square-foot MIT.nano building, located in the heart of MIT’s campus, is a shared resource for MIT faculty, students, and researchers, as well as external academic and industry users. Photo: Wilson Architects For cheaper solar cells, thinner really is better Solar panel costs have dropped lately, but slimming down silicon wafers could lead to even lower costs and faster industry expansion. Sun, 26 Jan 2020 23:59:59 -0500 David L. Chandler | MIT News Office <p>Costs of solar panels have plummeted over the last several years, leading to rates of solar installations far greater than most analysts had expected.
But with most of the potential areas for cost savings already pushed to the extreme, further cost reductions are becoming more challenging to find.</p> <p>Now, researchers at MIT and at the National Renewable Energy Laboratory (NREL) have outlined a pathway to slashing costs further, this time by slimming down the silicon cells themselves.</p> <p>Thinner silicon cells have been explored before, especially around a dozen years ago when the cost of silicon peaked because of supply shortages. But this approach suffered from some difficulties: The thin silicon wafers were too brittle and fragile, leading to unacceptable levels of losses during the manufacturing process, and they had lower efficiency. The researchers say there are now ways to begin addressing these challenges through the use of better handling equipment and some recent developments in solar cell architecture.</p> <p>The new findings are detailed in a paper in the journal <em>Energy and Environmental Science</em>, co-authored by MIT postdoc Zhe Liu, professor of mechanical engineering Tonio Buonassisi, and five others at MIT and NREL.</p> <p>The researchers describe their approach as “technoeconomic,” stressing that at this point economic considerations are as crucial as the technological ones in achieving further improvements in affordability of solar panels.</p> <p>Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year, the researchers say. 
Today’s silicon photovoltaic cells, the heart of these solar panels, are made from wafers of silicon that are 160 micrometers thick, but with improved handling methods, the researchers propose this could be shaved down to 100 micrometers —&nbsp; and eventually as little as 40 micrometers or less, which would only require one-fourth as much silicon for a given size of panel.</p> <p>That could not only reduce the cost of the individual panels, they say, but even more importantly it could allow for rapid expansion of solar panel manufacturing capacity. That’s because the expansion can be constrained by limits on how fast new plants can be built to produce the silicon crystal ingots that are then sliced like salami to make the wafers. These plants, which are generally separate from the solar cell manufacturing plants themselves, tend to be capital-intensive and time-consuming to build, which could lead to a bottleneck in the rate of expansion of solar panel production. Reducing wafer thickness could potentially alleviate that problem, the researchers say.</p> <p>The study looked at the efficiency levels of four variations of solar cell architecture, including PERC (passivated emitter and rear contact) cells and other advanced high-efficiency technologies, comparing their outputs at different thickness levels. 
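The silicon savings implied by those thicknesses follow directly from the ratios. The sketch below assumes, purely for illustration, that silicon usage scales linearly with wafer thickness and ignores kerf (sawing) losses.

```python
# Relative silicon usage per panel at thinner wafer thicknesses,
# assuming usage scales linearly with thickness (kerf losses ignored).
baseline_um = 160  # today's standard wafer thickness
for target_um in (100, 40, 15):  # thicknesses discussed in the article
    fraction = target_um / baseline_um
    print(f"{target_um} um -> {fraction:.0%} of today's silicon per panel")
# 40 um corresponds to one-fourth of the baseline, as the article notes.
```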
The team found there was in fact little decline in performance down to thicknesses as low as 40 micrometers, using today’s improved manufacturing processes.</p> <p>“We see that there’s this area (of the graphs of efficiency versus thickness) where the efficiency is flat,” Liu says, “and so that’s the region where you could potentially save some money.” Because of these advances in cell architecture, he says, “we really started to see that it was time to revisit the cost benefits.”</p> <p>Changing over the huge panel-manufacturing plants to adapt to the thinner wafers will be a time-consuming and expensive process, but the analysis shows the benefits can far outweigh the costs, Liu says. It will take time to develop the necessary equipment and procedures to allow for the thinner material, but with existing technology, he says, “it should be relatively simple to go down to 100 micrometers,” which would already provide some significant savings. Further improvements in technology such as better detection of microcracks before they grow could help reduce thicknesses further.</p> <p>In the future, the thickness could potentially be reduced to as little as 15 micrometers, he says. New technologies that grow thin wafers of silicon crystal directly rather than slicing them from a larger cylinder could help enable such further thinning, he says.</p> <p>Development of thin silicon has received little attention in recent years because the price of silicon has declined from its earlier peak. But, because of cost reductions that have already taken place in solar cell efficiency and other parts of the solar panel manufacturing process and supply chain, the cost of the silicon is once again a factor that can make a difference, he says.</p> <p>“Efficiency can only go up by a few percent. So if you want to get further improvements, thickness is the way to go,” Buonassisi says. 
But the conversion will require large capital investments for full-scale deployment.</p> <p>The purpose of this study, he says, is to provide a roadmap for those who may be planning expansion in solar manufacturing technologies. By making the path “concrete and tangible,” he says, it may help companies incorporate this in their planning. “There is a path,” he says. “It’s not easy, but there is a path. And for the first movers, the advantage is significant.”</p> <p>What may be required, he says, is for the different key players in the industry to get together and lay out a specific set of steps forward and agreed-upon standards, as the integrated circuit industry did early on to enable the explosive growth of that industry. “That would be truly transformative,” he says.</p> <p>Andre Augusto, an associate research scientist at Arizona State University who was not connected with this research, says “refining silicon and wafer manufacturing is the most capital-expense (capex) demanding part of the process of manufacturing solar panels. So in a scenario of fast expansion, the wafer supply can become an issue. Going thin solves this problem in part as you can manufacture more wafers per machine without increasing significantly the capex.” He adds that “thinner wafers may deliver performance advantages in certain climates,” performing better in warmer conditions.</p> <p>Renewable energy analyst Gregory Wilson of Gregory Wilson Consulting, who was not associated with this work, says “The impact of reducing the amount of silicon used in mainstream cells would be very significant, as the paper points out. The most obvious gain is in the total amount of capital required to scale the PV industry to the multi-terawatt scale required by the climate change problem. Another benefit is in the amount of energy required to produce silicon PV panels. 
This is because the polysilicon production and ingot growth processes that are required for the production of high efficiency cells are very energy intensive.”</p> <p>Wilson adds, “Major PV cell and module manufacturers need to hear from credible groups like Prof. Buonassisi’s at MIT, since they will make this shift when they can clearly see the economic benefits.”</p> <p>The team also included Sarah Sofia, Hannu Lane, Sarah Wieghold, and Marius Peters at MIT and Michael Woodhouse at NREL. The work was partly supported by the U.S. Department of Energy, the Singapore-MIT Alliance for Research and Technology (SMART),&nbsp;and by a Total Energy Fellowship through the MIT Energy Initiative.</p> Currently, 90 percent of the world’s solar panels are made from crystalline silicon, and the industry continues to grow at a rate of about 30 percent per year. Understanding combustion Assistant Professor Sili Deng is on a quest to understand the chemistry involved in combustion and develop strategies to make it cleaner. Thu, 23 Jan 2020 15:15:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>Much of the conversation around energy sustainability is dominated by clean-energy technologies like wind, solar, and thermal.
However, with roughly 80 percent of energy use in the United States coming from fossil fuels, combustion remains the dominant method of energy conversion for power generation, electricity, and transportation.</p> <p>“People think of combustion as a dirty technology, but it’s currently the most feasible way to produce electricity and power,” explains Sili Deng, assistant professor of mechanical engineering and the Brit (1961) &amp; Alex (1949) d’Arbeloff Career Development Professor.</p> <p>Deng is working toward understanding how chemistry and flow interact in combustion in an effort to improve technologies for current or near-future energy conversion applications. “My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” she adds.</p> <p>Deng’s interest in combustion stemmed from a conversation she had with a friend before applying to Tsinghua University for undergraduate study. “One day, I was talking about my dream school and major with a friend and she said ‘What if you could increase the efficiency of energy utilization by just 1 percent?’” recalls Deng. “Considering how much energy we use globally each year, you could make a huge difference.”</p> <p>This discussion inspired Deng to study combustion. After graduating with a bachelor’s degree in thermal engineering, she received her master’s and PhD from Princeton University. At Princeton, Deng focused on how the coupling effects of chemistry and flow influence combustion and emissions.</p> <p>“The details of combustion are much more complicated than our general understanding of fuel and air combining to form water, carbon dioxide, and heat,” Deng explains.
“There are hundreds of chemical species and thousands of reactions involved, depending on the type of fuel, fuel-air mixing, and flow dynamics.”</p> <p>Along with her team at the <a href="" target="_blank">Deng Energy and Nanotechnology Group at MIT</a>, she hopes that understanding chemically reacting flow in the combustion process will result in new strategies to control the process of combustion and reduce or eliminate the soot generated in combustion.&nbsp;</p> <p>“My group utilizes both experimental and computational tools to build a fundamental understanding of the combustion process that can guide the design of combustors for high performance and low emissions,” Deng adds. Her team is also utilizing artificial intelligence algorithms along with physical models to predict — and hopefully control — the combustion process.</p> <p>By understanding and controlling the combustion process, Deng is uncovering more about how soot, combustion’s most notorious by-product, is created.</p> <p>“Once soot leaves the site of combustion, it is difficult to contain. There isn’t much you can do to prevent haze or smog from developing,” she explains.</p> <p>The production of soot starts within the flame itself — even on a small scale, such as burning a candle. As Deng describes it, a “chemical soup” of hydrocarbons, vapor, melting wax, and oxygen interacts to create the soot particles visible as the yellow glow of a candle flame.</p> <p>“By understanding exactly how this soot is generated within a flame, we’re hoping to develop methods to reduce or eliminate it before it gets out of the combustion channel,” says Deng.</p> <p>Deng’s research on flames extends beyond the formation of soot. By developing a technology called flame synthesis, she is working on producing nanomaterials that can be used for renewable energy applications.</p> <p>The process of synthesizing nanomaterials via flames shares similarities with the soot formation in flames.
Instead of generating the byproducts of incomplete combustion, certain precursors are added to the flame, which result in the production of nanomaterials. One common example of using flame synthesis to create nanomaterials is the production of titanium dioxide, a white pigment often used in paint and sunscreen.&nbsp;</p> <p>“I’m hoping to create a similar type of reaction to develop new materials that can be used for things like renewable energy, water treatment, pollution reduction, and catalysts,” she explains. Her team has been tweaking the various parameters of combustion — from temperature to the type of fuel used — to create nanomaterials that could eventually be used to clean up other, more nefarious byproducts created in combustion.</p> <p>To be successful in her quest to make combustion cleaner, Deng acknowledges that collaboration will be key. “There’s an opportunity to combine the fundamental research on combustion that my lab is doing with the materials, devices, and products being developed across areas like materials science and automotive engineering,” she says.</p> <p>Since we may be decades away from transitioning to a grid powered by renewable resources like solar, wave, and wind, Deng is helping carve out an important role for fellow combustion scientists.</p> <p>“While clean-energy technologies are continuing to be developed, it’s crucial that we continue to work toward finding ways to improve combustion technologies,” she adds.</p> “My goal is to find out how to make the combustion process more efficient, reliable, safe, and clean,” says Sili Deng, assistant professor of mechanical engineering at MIT. Photo: Tony Pulsone MIT graduate students lead conference on microsystems and nanotechnology Student committee puts together research showcase while balancing coursework, qualifying
exams, and extracurriculars. Wed, 22 Jan 2020 12:30:01 -0500 Amanda Stoll | MIT.nano <p>Organizing the Microsystems Annual Research Conference (<a href="">MARC</a>) is no small feat. Each January during the MIT Independent Activities Period, more than 200 students, faculty, staff, postdocs, and industry members come together at an off-campus site to explore technical achievements and research ideas at the forefront of microsystems and nanotechnology.</p> <p>The secret to MARC’s success year after year? A student committee that handles every aspect of coordinating and executing the showcase event, to be held this year in late January in New Hampshire. Chaired by MIT doctoral students Mayuran Saravanapavanantham and Rachel Yang, the 2020 student leaders arrange for research talks, poster presentations, keynote lectures, and all of the logistics for transporting scores of attendees to and from the conference.</p> <p>Part research symposium and part networking event, MARC strives to share new research directions, identify job opportunities, and help participants refine their technical communications skills — with a bit of skiing and snowshoeing on the side.</p> <p>“MARC has many moving parts that have to be managed simultaneously,” says Saravanapavanantham, a third-year graduate student in Professor Vladimir Bulović’s Organic and Nanostructured Electronics (ONE) Lab. “In planning a large, off-campus conference, it’s really important to have a strong, dedicated committee. MARC caters to a broad audience, so we have to make sure we tie everything together to keep everyone engaged.”</p> <p>Saravanapavanantham and Yang have each participated in two previous MARC conferences. 
Together, they have been overseeing a student committee of 13 individuals over the past six months to recreate positive elements of previous MARCs and generate new solutions to old challenges.</p> <p><strong>A strong foundation</strong></p> <p>Now in its 16th year, the conference has expanded significantly since its inception. It grew out of the semesterly VLSI Research Reviews, which began in 1984 under the Microsystems Research Center. From there, it evolved into a faculty-run research review that became known as the Microsystems Technology Laboratories (MTL) Annual Student Review. In 2005, the event was rebranded as MARC and became the student-run conference that exists today. This year, the conference is co-sponsored for the first time by MTL and MIT.nano.</p> <p>The organizing committee comprises eight core committee members and five session chairs. The core committee takes on logistical responsibilities such as finding speakers, building the agenda, and working with vendors, while the session chairs focus on abstract submissions from over 90 MIT student presenters.</p> <p>To keep on track, the team follows a strict timeline passed down from previous MARC co-chairs. 
The committee must monitor not only registration deadlines, but also hotel reservations, transportation, printing of materials, and abstract reviews.</p> <p>This year, the responsibilities are broken into eight categories, each chaired by a different PhD student: Navid Abedzadeh of the Department of Electrical Engineering and Computer Science (EECS) is managing photography; Jessica Boles of EECS is managing communications training; Elaine McVay of EECS is managing social activities; Rishabh Mittal of EECS is managing registration logistics; Jatin Patil of the Department of Materials Science and Engineering is managing the website and proceedings; Morteza Sarmadi of the Department of Mechanical Engineering (MechE) is managing outreach; Jay Sircar of MechE is managing winter activities and transportation; and Miaorong Wang of EECS is managing audio/visual presentations.</p> <p><strong>Appealing to a wide audience</strong></p> <p>“One of the greatest challenges of MARC is promoting the event to encourage students, faculty, staff, and industry members to attend,” explains Yang, a second-year graduate student working on power magnetics design in Professor David Perreault’s power electronics group. “This year, we dedicated more efforts to our marketing, including adding an outreach chair to our committee.”</p> <p>Their efforts paid off with 20 faculty and PIs registered to attend, a significant increase from previous years. The MARC committee also decided to make poster pitches optional in 2020 to increase students’ interest in participating.</p> <p>Each year, MARC aims to host 100 poster presenters from nearly 40 research groups across seven categories. Participating students are required to go through at least one round of abstract feedback and edits, maintaining MARC’s reputation of high-quality writing. “Communications training is an essential part of the conference. We train students in abstract writing, poster design, and pitch preparation,” says Saravanapavanantham.
“This helps MARC participants prepare to submit their work at future conferences.”</p> <p>Abstracts are divided into eight categories that are reviewed by the 2020 session chairs. Topics include electronic and quantum devices, energy harvesting, medical devices, biotechnology, and photonics, to name a few. The five MIT students reviewing this year’s abstract submissions, all current EECS PhD students, are Mohamed Ibrahim, Kruthika Kikkeri, Jane Lai, Haozhe Wang, and Qingyun Xie.</p> <p>The student committee is also charged with identifying and securing keynote speakers who are experts in their field. The 2020 keynote lectures will focus on driving innovation at all scales. The speakers include Reed Sturtevant, who, as general partner of The Engine, a venture capital firm built by MIT, facilitates the launch of new technologies through startup incubation; and Mark Rosker, director of the U.S. Defense Advanced Research Projects Agency’s Microsystems Technology Office, which sets direction for micro/nanoelectronics research at a national level.</p> <p>Saravanapavanantham and Yang are excited to see the committee’s planning efforts come to life at MARC 2020. The two are also looking ahead, documenting their processes and reflecting on their visions for the future: “We hope this conference will continue to grow as a platform to inspire ideas and to foster research collaboration between MIT and industry,” says Yang.</p> Left to right: MARC committee members and MIT graduate students Navid Abedzadeh, Mayuran Saravanapavanantham, Haozhe Wang, Elaine McVay, Qingyun Xie, Jatin Patil, Jessica Boles, Rachel Yang, and Rishabh Mittal.
Photo: Navid Abedzadeh How to verify that quantum chips are computing correctly A new method determines whether circuits are accurately executing complex operations that classical computers can’t tackle. Mon, 13 Jan 2020 10:59:03 -0500 Rob Matheson | MIT News Office <p>In a step toward practical quantum computing, researchers from MIT, Google, and elsewhere have designed a system that can verify when quantum chips have accurately performed complex computations that classical computers can’t.</p> <p>Quantum chips perform computations using quantum bits, called “qubits,” that can represent the two states corresponding to classic binary bits — a 0 or 1 — or a “quantum superposition” of both states simultaneously. The unique superposition state can enable quantum computers to solve problems that are practically impossible for classical computers, potentially spurring breakthroughs in material design, drug discovery, and machine learning, among other applications.</p> <p>Full-scale quantum computers will require millions of qubits, which isn’t yet feasible. In the past few years, researchers have started developing “Noisy Intermediate Scale Quantum” (NISQ) chips, which contain around 50 to 100 qubits. That’s just enough to demonstrate “quantum advantage,” meaning the NISQ chip can run certain algorithms that are intractable for classical computers. Verifying that the chips performed operations as expected, however, can be very inefficient. 
The chip’s outputs can look entirely random, so it takes a long time to simulate steps to determine if everything went according to plan.</p> <p>In a paper published today in <em>Nature Physics</em>, the researchers describe a novel protocol to efficiently verify that an NISQ chip has performed all the right quantum operations. They validated their protocol on a notoriously difficult quantum problem running on a custom quantum photonic chip.</p> <p>“As rapid advances in industry and academia bring us to the cusp of quantum machines that can outperform classical machines, the task of quantum verification becomes time critical,” says first author Jacques Carolan, a postdoc in the Department of Electrical Engineering and Computer Science (EECS) and the Research Laboratory of Electronics (RLE). “Our technique provides an important tool for verifying a broad class of quantum systems. Because if I invest billions of dollars to build a quantum chip, it sure better do something interesting.”</p> <p>Joining Carolan on the paper are researchers from EECS and RLE at MIT, as well as from the Google Quantum AI Laboratory, Elenion Technologies, Lightmatter, and Zapata Computing. &nbsp;</p> <p><strong>Divide and conquer</strong></p> <p>The researchers’ work essentially traces an output quantum state generated by the quantum circuit back to a known input state. Doing so reveals which circuit operations were performed on the input to produce the output. Those operations should always match what researchers programmed. If not, the researchers can use the information to pinpoint where things went wrong on the chip.</p> <p>At the core of the new protocol, called “Variational Quantum Unsampling,” lies a “divide and conquer” approach, Carolan says, that breaks the output quantum state into chunks. “Instead of doing the whole thing in one shot, which takes a very long time, we do this unscrambling layer by layer. 
This allows us to break the problem up to tackle it in a more efficient way,” Carolan says.</p> <p>For this, the researchers took inspiration from neural networks — which solve problems through many layers of computation —&nbsp;to build a novel “quantum neural network” (QNN), where each layer represents a set of quantum operations.</p> <p>To run the QNN, they used traditional silicon fabrication techniques to build a 2-by-5-millimeter NISQ chip with more than 170 control parameters — tunable circuit components that make manipulating the photon path easier. Pairs of photons are generated at specific wavelengths from an external component and injected into the chip. The photons travel through the chip’s phase shifters — which change the path of the photons — and interfere with each other. This produces a random quantum output state —&nbsp;which represents what would happen during computation. The output is measured by an array of external photodetector sensors.</p> <p>That output is sent to the QNN. The first layer uses complex optimization techniques to dig through the noisy output to pinpoint the signature of a single photon among all those scrambled together. Then, it “unscrambles” that single photon from the group to identify what circuit operations return it to its known input state. Those operations should match exactly the circuit’s specific design for the task. All subsequent layers do the same computation — removing from the equation any previously unscrambled photons — until all photons are unscrambled.</p> <p>As an example, say the input state of qubits fed into the processor was all zeroes. The NISQ chip executes a bunch of operations on the qubits to generate a massive, seemingly randomly changing number as output. (An output number will constantly be changing as it’s in a quantum superposition.) The QNN selects chunks of that massive number. Then, layer by layer, it determines which operations revert each qubit back down to its input state of zero. 
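The layer-by-layer reversal can be illustrated with a small classical linear-algebra toy. This sketch is only an illustration of the idea — the actual protocol learns each inverse layer variationally from measurements rather than knowing the layers exactly — and all the sizes and layer counts below are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    # QR decomposition of a complex Gaussian matrix yields a random unitary;
    # rescaling by the phases of R's diagonal makes the distribution uniform.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n_qubits = 3
dim = 2 ** n_qubits

# The "circuit": several layers of operations applied to the all-zeroes state
layers = [random_unitary(dim) for _ in range(4)]
state = np.zeros(dim, dtype=complex)
state[0] = 1.0                     # input state |000>
for u in layers:
    state = u @ state              # the output amplitudes look random

# "Unscrambling": undo one layer at a time, in reverse order
for u in reversed(layers):
    state = u.conj().T @ state     # a unitary's inverse is its conjugate transpose

# If every inferred layer matches the applied one, the input is recovered exactly
assert np.allclose(state[0], 1.0) and np.allclose(state[1:], 0.0)
```

If any inverse layer were wrong, the final state would not return to all zeroes — which is exactly the mismatch signal the verification scheme looks for.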
If any operations are different from the original planned operations, then something has gone awry. Researchers can inspect any mismatches between the expected and recovered states, and use that information to tweak the circuit design.</p> <p><strong>Boson “unsampling”</strong></p> <p>In experiments, the team successfully ran a popular computational task used to demonstrate quantum advantage, called “boson sampling,” which is usually performed on photonic chips. In this exercise, phase shifters and other optical components will manipulate and convert a set of input photons into a different quantum superposition of output photons. Ultimately, the task is to calculate the probability that a certain input state will match a certain output state. That will essentially be a sample from some probability distribution.</p> <p>But it’s nearly impossible for classical computers to compute those samples, due to the unpredictable behavior of photons. It’s been theorized that NISQ chips can compute them fairly quickly. Until now, however, there’s been no way to verify that quickly and easily, because of the complexity involved with the NISQ operations and the task itself.</p> <p>“The very same properties which give these chips quantum computational power makes them nearly impossible to verify,” Carolan says.</p> <p>In experiments, the researchers were able to “unsample” two photons that had run through the boson sampling problem on their custom NISQ chip — and in a fraction of the time it would take traditional verification approaches.</p> <p>“This is an excellent paper that employs a nonlinear quantum neural network to learn the unknown unitary operation performed by a black box,” says Stefano Pirandola, a professor of computer science who specializes in quantum technologies at the University of York. “It is clear that this scheme could be very useful to verify the actual gates that are performed by a quantum circuit — [for example] by a NISQ processor. 
From this point of view, the scheme serves as an important benchmarking tool for future quantum engineers. The idea was remarkably implemented on a photonic quantum chip.”</p> <p>While the method was designed for quantum verification purposes, it could also help capture useful physical properties, Carolan says. For instance, certain molecules when excited will vibrate, then emit photons based on these vibrations. By injecting these photons into a photonic chip, Carolan says, the unscrambling technique could be used to discover information about the quantum dynamics of those molecules to aid in bioengineering molecular design. It could also be used to unscramble photons carrying quantum information that have accumulated noise by passing through turbulent spaces or materials. &nbsp;</p> <p>“The dream is to apply this to interesting problems in the physical world,” Carolan says.</p> Researchers from MIT, Google, and elsewhere have designed a novel method for verifying when quantum processors have accurately performed complex computations that classical computers can’t. They validate their method on a custom system (pictured) that’s able to capture how accurately a photonic chip (“PNP”) computed a notoriously difficult quantum problem. Image: Mihika Prabhu A new approach to making airplane parts, minus the massive infrastructure Carbon nanotube film produces aerospace-grade composites with no need for huge ovens or autoclaves. Mon, 13 Jan 2020 00:00:00 -0500 Jennifer Chu | MIT News Office <p>A modern airplane’s fuselage is made from multiple sheets of different composite materials, like so many layers in a phyllo-dough pastry. 
Once these layers are stacked and molded into the shape of a fuselage, the structures are wheeled into warehouse-sized ovens and autoclaves, where the layers fuse together to form a resilient, aerodynamic shell.</p> <p>Now MIT engineers have developed a method to produce aerospace-grade composites without the enormous ovens and pressure vessels. The technique may help to speed up the manufacturing of airplanes and other large, high-performance composite structures, such as blades for wind turbines.</p> <p>The researchers detail their new method in a paper published today in the journal <em>Advanced Materials Interfaces. </em></p> <p>“If you’re making a primary structure like a fuselage or wing, you need to build a pressure vessel, or autoclave, the size of a two- or three-story building, which itself requires time and money to pressurize,” says Brian Wardle, professor of aeronautics and astronautics at MIT. “These things are massive pieces of infrastructure. Now we can make primary structure materials without autoclave pressure, so we can get rid of all that infrastructure.”</p> <p>Wardle’s co-authors on the paper are lead author and MIT postdoc Jeonyoon Lee, and Seth Kessler of Metis Design Corporation, an aerospace structural health monitoring company based in Boston.</p> <p><strong>Out of the oven, into a blanket</strong></p> <p>In 2015, Lee led the team, along with another member of Wardle’s lab, in creating a method to make aerospace-grade composites without requiring an oven to fuse the materials together. Instead of placing layers of material inside an oven to cure, the researchers essentially wrapped them in an ultrathin film of carbon nanotubes (CNTs). 
When they applied an electric current to the film, the CNTs, like a nanoscale electric blanket, quickly generated heat, causing the materials within to cure and fuse together.</p> <p>With this out-of-oven, or OoO, technique, the team was able to produce composites as strong as the materials made in conventional airplane manufacturing ovens, using only 1 percent of the energy.</p> <p>The researchers next looked for ways to make high-performance composites without the use of large, high-pressure autoclaves — building-sized vessels that generate high enough pressures to press materials together, squeezing out any voids, or air pockets, at their interface.</p> <p>“There’s microscopic surface roughness on each ply of a material, and when you put two plies together, air gets trapped between the rough areas, which is the primary source of voids and weakness in a composite,” Wardle says. “An autoclave can push those voids to the edges and get rid of them.”</p> <p>Researchers including Wardle’s group have explored “out-of-autoclave,” or OoA, techniques to manufacture composites without using the huge machines. But most of these techniques have produced composites where nearly 1 percent of the material contains voids, which can compromise a material’s strength and lifetime. In comparison, aerospace-grade composites made in autoclaves are of such high quality that any voids they contain are negligible and not easily measured.</p> <p>“The problem with these OoA approaches is also that the materials have been specially formulated, and none are qualified for primary structures such as wings and fuselages,” Wardle says. 
“They’re making some inroads in secondary structures, such as flaps and doors, but they still get voids.”</p> <p><strong>Straw pressure</strong></p> <p>Part of Wardle’s work focuses on developing nanoporous networks — ultrathin films made from aligned, microscopic material such as carbon nanotubes, that can be engineered with exceptional properties, including color, strength, and electrical capacity. The researchers wondered whether these nanoporous films could be used in place of giant autoclaves to squeeze out voids between two material layers, as unlikely as that may seem.</p> <p>A thin film of carbon nanotubes is somewhat like a dense forest of trees, and the spaces between the trees can function like thin nanoscale tubes, or capillaries. A capillary such as a straw can generate pressure based on its geometry and its surface energy, or the material’s ability to attract liquids or other materials.&nbsp;</p> <p>The researchers proposed that if a thin film of carbon nanotubes were sandwiched between two materials, then, as the materials were heated and softened, the capillaries between the carbon nanotubes should have a surface energy and geometry such that they would draw the materials in toward each other, rather than leaving a void between them. Lee calculated that the capillary pressure should be larger than the pressure applied by the autoclaves.</p> <p>The researchers tested their idea in the lab by growing films of vertically aligned carbon nanotubes using a technique they previously developed, then laying the films between layers of materials that are typically used in the autoclave-based manufacturing of primary aircraft structures. They wrapped the layers in a second film of carbon nanotubes, to which they applied an electric current to heat it up. 
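The kind of comparison Lee made can be sketched with the Young-Laplace relation for a cylindrical capillary, ΔP = 2γcosθ/r. The numbers below are rough illustrative assumptions (a heated resin's surface tension, its contact angle on the nanotubes, an effective capillary radius, and a typical autoclave cure pressure), not values from the paper:

```python
import math

# Young-Laplace pressure in a cylindrical capillary: dP = 2 * gamma * cos(theta) / r
# All inputs are illustrative assumptions, not the researchers' measured values.
gamma = 0.04                # N/m, rough surface tension of a heated epoxy resin
theta = math.radians(20)    # contact angle: the resin wets the nanotubes well
radius = 50e-9              # m, assumed effective capillary radius between CNTs

capillary_pressure = 2 * gamma * math.cos(theta) / radius   # Pa

autoclave_pressure = 7e5    # Pa, on the order of a typical aerospace autoclave cure

print(f"capillary: {capillary_pressure / 1e6:.2f} MPa")     # ~1.5 MPa
print(f"autoclave: {autoclave_pressure / 1e6:.2f} MPa")     # 0.70 MPa
assert capillary_pressure > autoclave_pressure
```

Because the nanoscale radius sits in the denominator, even a modest surface tension yields a pressure exceeding what an autoclave applies — the crux of the "straw pressure" argument.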
They observed that as the materials heated and softened in response, they were pulled into the capillaries of the intermediate CNT film.</p> <p>The resulting composite lacked voids, similar to aerospace-grade composites that are produced in an autoclave. The researchers subjected the composites to strength tests, attempting to push the layers apart, the idea being that voids, if present, would allow the layers to separate more easily.</p> <p>“In these tests, we found that our out-of-autoclave composite was just as strong as the gold-standard autoclave process composite used for primary aerospace structures,” Wardle says.</p> <p>The team will next look for ways to scale up the pressure-generating CNT film. In their experiments, they worked with samples measuring several centimeters wide — large enough to demonstrate that nanoporous networks can pressurize materials and prevent voids from forming. To make this process viable for manufacturing entire wings and fuselages, researchers will have to find ways to manufacture CNT and other nanoporous films at a much larger scale.</p> <p>“There are ways to make really large blankets of this stuff, and there’s continuous production of sheets, yarns, and rolls of material that can be incorporated in the process,” Wardle says.</p> <p>He plans also to explore different formulations of nanoporous films, engineering capillaries of varying surface energies and geometries, to be able to pressurize and bond other high-performance materials.</p> <p>“Now we have this new material solution that can provide on-demand pressure where you need it,” Wardle says. “Beyond airplanes, most of the composite production in the world is composite pipes, for water, gas, oil, all the things that go in and out of our lives. 
This could make all those things, without the oven and autoclave infrastructure.”</p> <p>This research was supported, in part, by Airbus, ANSYS, Embraer, Lockheed Martin, Saab AB, Saertex, and Teijin Carbon America through MIT’s Nano-Engineered Composite aerospace Structures (NECST) Consortium.</p> MIT postdoc Jeonyoon Lee. Image: Melanie Gonick, MIT Julia Ortony: Concocting nanomaterials for energy and environmental applications The MIT assistant professor is entranced by the beauty she finds pursuing chemistry. Thu, 09 Jan 2020 14:00:01 -0500 Leda Zimmerman | MIT Energy Initiative <p>A molecular engineer,&nbsp;<a href="">Julia Ortony</a>&nbsp;performs a contemporary version of alchemy.</p> <p>“I take powder made up of disorganized, tiny molecules, and after mixing it up with water, the material in the solution zips itself up into threads 5 nanometers thick — about 100 times smaller than the wavelength of visible light,” says Ortony, the Finmeccanica Career Development Assistant Professor of Engineering in the Department of Materials Science and Engineering (DMSE). “Every time we make one of these nanofibers, I am amazed to see it.”</p> <p>But for Ortony, the fascination doesn’t simply concern the way these novel structures self-assemble, a product of the interaction between a powder’s molecular geometry and water. 
She is plumbing the potential of these nanomaterials for use in renewable energy and environmental remediation technologies, including promising new approaches to water purification and the photocatalytic production of fuel.</p> <p><strong>Tuning molecular properties</strong></p> <p>Ortony’s current research agenda emerged from a decade of work into the behavior of a class of carbon-based molecular materials that can range from liquid to solid.</p> <p>During doctoral work at the University of California at Santa Barbara, she used magnetic resonance (MR) spectroscopy to make spatially precise measurements of atomic movement within molecules, and of the interactions between molecules. At Northwestern University, where she was a postdoc, Ortony focused this tool on self-assembling nanomaterials that were biologically based, in research aimed at potential biomedical applications such as cell scaffolding and regenerative medicine.</p> <p>“With MR spectroscopy, I investigated how atoms move and jiggle within an assembled nanostructure,” she says. Her research revealed that the surface of the nanofiber acted like a viscous liquid, but as one probed further inward, it behaved like a solid. Through molecular design, it became possible to tune the speed at which molecules that make up a nanofiber move.</p> <p>A door had opened for Ortony. “We can now use state-of-matter as a knob to tune nanofiber properties,” she says. “For the first time, we can design self-assembling nanostructures, using slow or fast internal molecular dynamics to determine their key behaviors.”</p> <p><strong>Slowing down the dance</strong></p> <p>When she arrived at MIT in 2015, Ortony was determined to tame and train molecules for nonbiological applications of self-assembling “soft” materials.</p> <p>“Self-assembling molecules tend to be very dynamic, where they dance around each other, jiggling all the time and coming and going from their assembly,” she explains. 
“But we noticed that when molecules stick strongly to each other, their dynamics get slow, and their behavior is quite tunable.” The challenge, though, was to synthesize nanostructures in nonbiological molecules that could achieve these strong interactions.</p> <p>“My hypothesis coming to MIT was that if we could tune the dynamics of small molecules in water and really slow them down, we should be able to make self-assembled nanofibers that behave like a solid and are viable outside of water,” says Ortony.</p> <p>Her efforts to understand and control such materials are now starting to pay off.</p> <p>“We’ve developed unique, molecular nanostructures that self-assemble, are stable in both water and air, and — since they’re so tiny — have extremely high surface areas,” she says. Since the nanostructure surface is where chemical interactions with other substances take place, Ortony has leapt to exploit this feature of her creations — focusing in particular on their potential in environmental and energy applications.</p> <p><strong>Clean water and fuel from sunlight</strong></p> <p>One key venture, supported by Ortony’s Professor Amar G. Bose Fellowship, involves water purification. The problem of toxin-laden drinking water affects tens of millions of people in underdeveloped nations. Ortony’s research group is developing nanofibers that can grab deadly metals such as arsenic out of such water. 
The chemical groups she attaches to nanofibers are strong, stable in air, and in recent tests “remove all arsenic down to low, nearly undetectable levels,” says Ortony.</p> <p>She believes an inexpensive textile made from nanofibers would be a welcome alternative to the large, expensive filtration systems currently deployed in places like Bangladesh, where arsenic-tainted water poses dire threats to large populations.</p> <p>“Moving forward, we would like to chelate arsenic, lead, or any environmental contaminant from water using a solid textile fabric made from these fibers,” she says.</p> <p>In another research thrust, Ortony says, “My dream is to make chemical fuels from solar energy.” Her lab is designing nanostructures with molecules that act as antennas for sunlight. These structures, exposed to and energized by light, interact with a catalyst in water to reduce carbon dioxide to different gases that could be captured for use as fuel.</p> <p>In recent studies, the Ortony lab found that it is possible to design these catalytic nanostructure systems to be stable in water under ultraviolet irradiation for long periods of time. “We tuned our nanomaterial so that it did not break down, which is essential for a photocatalytic system,” says Ortony.</p> <p><strong>Students dive in</strong></p> <p>While Ortony’s technologies are still in the earliest stages, her approach to problems of energy and the environment are already drawing student enthusiasts.</p> <p>Dae-Yoon Kim, a postdoc in the Ortony lab, won the 2018 Glenn H. Brown Prize from the International Liquid Crystal Society for his work on synthesized photo-responsive materials and started a tenure track position at the Korea Institute of Science and Technology this fall. Ortony also mentors Ty Christoff-Tempesta, a DMSE doctoral candidate, who was recently awarded a Martin Fellowship for Sustainability. 
Christoff-Tempesta hopes to design nanoscale fibers that assemble and disassemble in water to create environmentally sustainable materials. And Cynthia Lo ’18 won a best-senior-thesis award for work with Ortony on nanostructures that interact with light and self-assemble in water, work that will soon be published. She is “my superstar&nbsp;<a href="">MIT Energy Initiative UROP</a>&nbsp;[undergraduate researcher],” says Ortony.</p> <p>Ortony hopes to share her sense of wonder about materials science not just with students in her group, but also with those in her classes. “When I was an undergraduate, I was blown away at the sheer ability to make a molecule and confirm its structure,” she says. With her new lab-based course for grad students — 3.65 (Soft Matter Characterization) — Ortony says she can teach about “all the interests that drive my research.”</p> <p>While she is passionate about using her discoveries to solve critical problems, she remains entranced by the beauty she finds pursuing chemistry. Fascinated by science starting in childhood, Ortony says she sought out every available class in chemistry, “learning everything from beginning to end, and discovering that I loved organic and physical chemistry, and molecules in general.”</p> <p>Today, she says, she finds joy working with her “creative, resourceful, and motivated” students. 
She celebrates with them “when experiments confirm hypotheses, and it’s a breakthrough and it’s thrilling,” and reassures them “when they come with a problem, and I can let them know it will be thrilling soon.”</p> <p><em>This article appears in the <a href="" target="_blank">Autumn 2019 issue of </a></em><a href="" target="_blank">Energy Futures</a>, <em>the magazine of the MIT Energy Initiative.</em></p> Julia Ortony is the Finmeccanica Career Development Assistant Professor of Engineering in the Department of Materials Science and Engineering. Photo: Lillie Paquette/School of Engineering Preventing energy loss in windows Mechanical engineers are developing technologies that could prevent heat from entering or escaping windows, potentially preventing a massive loss of energy. Mon, 06 Jan 2020 15:30:01 -0500 Mary Beth Gallagher | Department of Mechanical Engineering <p>In the quest to make buildings more energy efficient, windows present a particularly difficult problem. According to the U.S. Department of Energy, heat that either escapes or enters windows accounts for roughly 30 percent of the energy used to heat and cool buildings. Researchers are developing a variety of window technologies that could prevent this massive loss of energy.</p> <p>“The choice of windows in a building has a direct influence on energy consumption,” says Nicholas Fang, professor of mechanical engineering. “We need an effective way of blocking solar radiation.”</p> <p>Fang is part of a large collaboration that is working together to develop smart adaptive control and monitoring systems for buildings. 
The research team, which includes researchers from the Hong Kong University of Science and Technology and Leon Glicksman, professor of building technology and mechanical engineering at MIT, has been tasked with helping Hong Kong achieve its ambitious goal to reduce carbon emissions by 40 percent by 2025.</p> <p>“Our idea is to adapt new sensors and smart windows in an effort to help achieve energy efficiency and improve thermal comfort for people inside buildings,” Fang explains.</p> <p>His contribution is the development of a smart material that can be placed on a window as a film that blocks heat from entering. The film remains transparent when the surface temperature is under 32 degrees Celsius, but turns milky when it exceeds 32 C. This change in appearance is due to thermochromic microparticles that change phases in response to heat. The smart window’s milky appearance can block up to 70 percent of solar radiation from passing through the window, translating to a 30 percent reduction in cooling load.&nbsp;</p> <p>In addition to this thermochromic material, Fang’s team is hoping to embed windows with sensors that monitor sunlight, luminance, and temperature. “Overall, we want an integral solution to reduce the load on HVAC systems,” he explains.</p> <p>Like Fang, graduate student Elise Strobach is working on a material that could significantly reduce the amount of heat that either escapes or enters through windows. She has developed a high-clarity silica aerogel that, when placed between two panes of glass, is 50 percent more insulating than traditional windows and lasts up to a decade longer.</p> <p>“Over the course of the past two years, we’ve developed a material that has demonstrated performance and is promising enough to start commercializing,” says Strobach, who is a PhD candidate in MIT’s Device Research Laboratory. 
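What "50 percent more insulating" means for heat flow can be sketched with the steady-state conduction relation Q = U·A·ΔT. The sketch below assumes "50 percent more insulating" means 50 percent more thermal resistance, and the U-value, window area, and temperature difference are illustrative assumptions rather than AeroShield data:

```python
# Back-of-envelope: steady-state conductive heat loss through a window, Q = U * A * dT.
# All numbers are illustrative assumptions, not measurements from the researchers.
u_standard = 2.8               # W/(m^2*K), a typical double-pane window
r_standard = 1 / u_standard    # thermal resistance per unit area
r_aerogel = 1.5 * r_standard   # assume "50% more insulating" = 50% more resistance
u_aerogel = 1 / r_aerogel

area = 1.5                     # m^2, one window
delta_t = 20.0                 # K, indoor-outdoor temperature difference

q_standard = u_standard * area * delta_t
q_aerogel = u_aerogel * area * delta_t

print(f"{q_standard:.1f} W vs {q_aerogel:.1f} W")   # 84.0 W vs 56.0 W
```

Under these assumptions, the same window leaks a third less heat — which is why even modest improvements in window U-values translate into real building-scale savings.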
To help in this commercialization, Strobach has co-founded the startup <a href="">AeroShield Materials</a>.&nbsp;</p> <p>Lighter than a marshmallow, AeroShield’s material comprises 95 percent air. The rest of the material is made up of silica nanoparticles that are just 1-2 nanometers in diameter. This structure blocks all three modes of heat loss: conduction, convection, and radiation. When gas is trapped inside the material’s small voids, its molecules can no longer collide freely and transfer energy through convection. Meanwhile, the silica nanoparticles absorb radiation and re-emit it back in the direction it came from.</p> <p>“The material’s composition allows for a really intense temperature gradient that keeps the heat where you want it, whether it’s hot or cold outside,” explains Strobach, who, along with AeroShield co-founder Kyle Wilke, was named one of <a href="">Forbes’ 30 Under 30 in Energy</a>. Commercialization of this research is being supported by the MIT Deshpande Center for Technological Innovation.</p> <p>Strobach also sees possibilities for combining AeroShield technologies with other window solutions being developed at MIT, including Fang’s work and research being conducted by Gang Chen, Carl Richard Soderberg Professor of Power Engineering, and research scientist Svetlana Boriskina.</p> <p>“Buildings represent one third of U.S. energy usage, so in many ways windows are low-hanging fruit,” explains Chen.</p> <p>Chen and Boriskina previously worked with Strobach on the first iteration of the AeroShield material for their project developing a solar thermal aerogel receiver. More recently, they have developed polymers that could be used in windows or building facades to trap or reflect heat, regardless of color.&nbsp;</p> <p>These polymers were partially inspired by stained-glass windows. “I have an optical background, so I’m always drawn to the visual aspects of energy applications,” says Boriskina. 
“The problem is, when you introduce color, it affects whatever energy strategy you are trying to pursue.”</p> <p>Using a mix of polyethylene and a solvent, Chen and Boriskina added various nanoparticles to provide color. Once stretched, the material becomes translucent and its structure changes. Previously disorganized carbon chains re-form as parallel lines, which are much better at conducting heat.</p> <p>While these polymers need further development for use in transparent windows, they could possibly be used in colorful, translucent windows that reflect or trap heat, ultimately leading to energy savings. “The material isn’t as transparent as glass, but it’s translucent. It could be useful for windows in places you don’t want direct sunlight to enter — like gyms or classrooms,” Boriskina adds.</p> <p>Boriskina is also using these materials for military applications. Through a three-year project funded by the U.S. Army, she is developing lightweight, custom-colored, and unbreakable polymer windows. These windows can provide passive temperature control and camouflage for portable shelters and vehicles.</p> <p>For any of these technologies to have a meaningful impact on energy consumption, researchers must improve scalability and affordability. “Right now, the cost barrier for these technologies is too high — we need to look into more economical and scalable versions,” Fang adds.&nbsp;</p> <p>If researchers are successful in developing manufacturable and affordable solutions, their window technologies could vastly improve building efficiency and lead to a substantial reduction in building energy consumption worldwide.</p> A smart window developed by Professor Nicholas Fang includes thermochromic material that turns frosty when exposed to temperatures of 32 C or higher, such as when a researcher touches the window with her hand. 
Photo courtesy of the researchers.Mechanical engineering, School of Engineering, Materials Science and Engineering, Energy, Architecture, Climate change, Glass, Nanoscience and nanotechnology Workshop connects microscale mechanics to real-world alloy design “Micromechanics informed alloy design: Overcoming scale-transition challenges” focuses on bridging scale gaps. Tue, 17 Dec 2019 14:45:01 -0500 Materials Research Laboratory <p>New micro- and nanomechanical tests reveal the behavior of metal alloys at the micro- and nanoscale, but integrating these findings into engineering-scale metal-alloy designs and products remains a challenge.&nbsp;</p> <p>“I can go and test a tiny volume of a metal to learn about how it behaves.&nbsp;This is very interesting because it gives us insight about some of the fundamental characteristics of material, because as you can imagine, if you are probing smaller and smaller volumes, then you look at simpler and simpler structures,” says <a href="">C. Cem Taşan</a>, associate professor of metallurgy.&nbsp;</p> <p>“Still, at the macro world — the alloys, the materials that we all use — they have complicated microstructures. They are not simple at all,” he says. “The big challenge is, how do I connect the world of grains and atoms at the micro and nano scale to the deformations and crashes and impacts at the engineering macro scale.”</p> <p>More than 50 students and professors from multiple departments and universities, as well as representatives from industry, participated in the third annual <a href="">Alloy Design Workshop</a> at MIT on Dec. 6. The workshop, titled “Micromechanics informed alloy design: Overcoming scale-transition challenges,” focused on bridging scale gaps, enabling complex alloy design through the understanding of fundamental nanoscale mechanisms of plasticity and fracture mechanics. 
This year’s workshop sponsors were Allegheny Technologies Incorporated (<a href="">ATI</a>) and ExxonMobil.</p> <p>“There are specific challenges associated with carrying this information that is from the micro and nano scale to the engineering world, the scale you and I can see with our eye. That’s why we invited eight leading professors in the world to give talks,” Taşan says. The workshop ended with a panel discussion that included professors Timothy P. Weihs from Johns Hopkins University, Amy Clarke from Colorado School of Mines, Mitra Taheri from Johns Hopkins University, Sharvan Kumar from Brown University, Thomas Bieler from Michigan State University, and Motomichi Koyama from Tohoku University.</p> <p>In her presentation, Clarke described her work studying solidification of materials such as <a href="">aluminum-copper alloy melts</a>. Real-time imaging with synchrotron X-rays allows her to map out the processing space. These experiments also provide information that had been missing in aluminum-copper alloy simulations, or models, she noted.&nbsp;</p> <p>Humankind has been working metal for 4,000 years, mostly by trial-and-error up until the scientific age. “For some students, they may have the feeling maybe there isn’t so much new to be said in this field, a field that is thousands of years old,” Taşan observes. Yet, metals remain central to modern transportation, building, packaging, and many other key industries. “There is no projection I can think of in the near future where metals’ dominance in these structural applications is going to be significantly reduced,” Taşan notes. While newer composite materials may replace some metal components, “There is not a huge change coming, as we still need the properties metallic materials exhibit.”</p> <p>Taşan noted what Apple Materials Engineering Director Jim Yurko spoke about in his recent Wulff lecture at MIT. 
“Why is a company that produces phones and computers interested in casting and heat treatment of aluminum alloys, to optimize their microstructure and precipitation?” Taşan asks. “Because they use aluminum and they need to somehow produce it, and solve the small problems with it. We do not always realize it, but metals are widely incorporated in most engineering products around us.”</p> <p>“It’s very interesting that in this field — metallurgy and alloy design&nbsp;— challenges and solutions are distributed widely,” Taşan says. “In a single day, I may meet with a person from the jewelry industry and then somebody from the trucking or automotive industries. Very different materials, similar problems, and they all want solutions to their problems.”&nbsp;</p> <p>Car and truck makers seek steel designs that are higher in strength, because stronger steel allows them to use less steel, which lightens vehicles and cuts fuel consumption. “But there is an interesting dilemma,” Taşan says. “Typically, if you make a material stronger, it becomes more susceptible to cracking and fracture. You can increase strength, but the more you increase strength, the less you can form complex shapes during manufacturing.</p> <p>“This is an ongoing challenge. Researchers have been looking for different chemistries, different processing cycles, to be able to create microstructures that give both strength and ductility,” he says.</p> <p>Taşan created the Alloy Design Workshops to emphasize the continued importance of alloy design in modern materials science. 
The workshop is held each year on the last day of the Materials Research Society Fall Meeting in Boston, Massachusetts, to provide an opportunity for the MIT community and the materials community as a whole to congregate in an intimate setting to present and discuss new, unpublished research.</p> <p><a href="">Previous workshops</a> covered the topics of “New guidelines in alloy design: From atomistic simulations to combinatorial metallurgy” and “Sustainability through alloy design: Challenges and opportunities.”</p> Attendees joined a group photo at the 2019 MIT Alloy Design Workshop, which focused on bridging scale gaps, enabling complex alloy design through the understanding of fundamental nanoscale mechanisms of plasticity and fracture mechanics.Photo courtesy of Tasan Group.Materials Research Laboratory, Materials Science and Engineering, Metals, School of Engineering, Special events and guest speakers, Nanoscience and nanotechnology Paul McEuen delivers inaugural Dresselhaus Lecture on cell-sized robots Cornell University professor and physicist uses nanoscale parts to create smart, active microbots. Wed, 04 Dec 2019 15:30:01 -0500 Amanda Stoll | MIT.nano <p>Functional, intelligent robots the size of a single cell are within reach, said Cornell University Professor Paul McEuen at the inaugural Mildred S. Dresselhaus Lecture at MIT on Nov. 13.</p> <p>“To build a robot that is on the scale of 100 microns in size, and have it work, that’s a big dream,” said McEuen, the John A. Newman Professor of Physical Science at Cornell University and director of the Kavli Institute at Cornell for Nanoscale Science. “One hundred microns is a very special size. 
It is the border between the visible and the invisible, or microscopic, world.”</p> <p>In a talk entitled “Cell-sized Sensors and Robots” in front of a large audience in MIT’s 10-250 lecture hall, McEuen introduced his concept for a new generation of machines that work at the microscale by combining microelectronics, solar cells, and light.&nbsp;The microbots, as he calls them, operate&nbsp;using optical wireless integrated circuits and&nbsp;surface electrochemical actuators.</p> <div class="cms-placeholder-content-video"></div> <p><strong>Kicking off the Dresselhaus Lectures</strong></p> <p>Inaugurated this year to honor MIT professor and physicist Mildred "Millie" Dresselhaus, the Dresselhaus Lecture recognizes a significant figure in science and engineering whose&nbsp;leadership and impact echo the late Institute Professor's life, accomplishments, and values. The lecture will be presented annually in November, the month of her birth.</p> <p>Dresselhaus spent over 50 years at MIT, where she was a professor in the Department of Electrical Engineering and Computer Science (originally the Department of Electrical Engineering) as well as in the Department of Physics. She was MIT’s first female Institute Professor, co-organizer of the first MIT Women’s Forum, the first solo recipient of a Kavli Prize, and the first woman to win the National Medal of Science in the engineering category.</p> <p>Her research into the fundamental properties of carbon earned her the nickname the “Queen of Carbon Science.” She was also nationally known for her work to develop wider opportunities for women in science and engineering.</p> <p>“Millie was a physicist, a materials scientist, and an electrical engineer; an MIT professor, researcher, and doctoral supervisor; a prolific author; and a longtime leader in the scientific community,” said&nbsp;Asu Ozdaglar, current EECS department head, in her opening remarks. 
“Even in her final years, she was active in her field at MIT and in the department, attending EECS faculty meetings and playing an important role in developing the MIT.nano facility.”</p> <p><strong>Pushing the boundaries of physics</strong></p> <p>McEuen,&nbsp;who first met Dresselhaus when he attended graduate school at Yale University with her son, expressed what a privilege it was to celebrate Millie as the inaugural speaker. “When I think of my scientific heroes, it’s a very, very short list. And I think at the top of it would be Millie Dresselhaus.&nbsp;To be able to give this lecture in her honor means the world to me.”</p> <p>After earning his bachelor’s degree in engineering physics from the University of Oklahoma, McEuen continued his research at Yale University, where he completed his PhD in 1990 in applied physics. McEuen spent two years at MIT as a postdoc studying condensed matter physics, and then became a principal investigator at the Lawrence Berkeley National Laboratory. He spent eight years teaching at the University of California at Berkeley before joining the faculty at Cornell as a professor in the physics department in 2001.</p> <p>“Paul is a pioneer for our generation, exploring the domain of atoms and molecules to push the frontier even further. It is no exaggeration to say that his discoveries and innovations will help define the Nano Age,” said Vladimir Bulović, the founding faculty director of MIT.nano and the&nbsp;Fariborz Maseeh (1990) Professor in Emerging Technology.</p> <p><strong>“</strong><strong>The world is our oyster”</strong></p> <p>McEuen joked at the beginning of his talk that speaking of technology measured in microns sounds “so 1950s” in today’s world, in which researchers can manipulate at the scale of nanometers. One micron — an abbreviation for micrometer — is one millionth of a meter; a nanometer is one billionth of a meter.</p> <p>“[But] if you want a micro robot, you need nanoscale parts. 
Just as the birth of the transistor gave rise to all the computational systems we have now,” he said, “the birth of simple, nanoscale mechanical and electronic elements is going to give birth to a robotics technology at the microscopic scale of less than 100 microns.”</p> <p>The motto of McEuen and his research group at Cornell is “anything, as long as it’s small.” This focus includes fundamentals of nanostructures, atomically thin origami for metamaterials and micromachines, and microscale smart phones and optobots. McEuen emphasized the importance of borrowing from other fields, such as microelectronics technology, to build something new. Cornell researchers have used this technology to build an optical wireless integrated circuit (OWIC) — essentially a microscopic cellphone made of solar cells that power it and receive external information, a simple transistor circuit to serve as its brain, and a light-emitting diode to blink out data.</p> <p>Why make something so small? The first reason is cost; the second is its wide array of applications. Such tiny devices could measure voltage or temperature, making them useful for microfluidic experiments. In the future, they could be deployed&nbsp;as&nbsp;smart, secure anti-counterfeiting tags or invisible sensors for the internet of things, or used for neural interfacing to measure electrical activity in the brain.</p> <p>Adding a&nbsp;surface electrochemical actuator to these OWICs brings mechanical movement to McEuen’s microbots. By capping a very thin piece of platinum on one side and applying a voltage to the other, “we could make all kinds of cool things.”</p> <p>At the end of his talk, McEuen answered audience questions moderated by&nbsp;Bulović, such as how do the microbots communicate with one another and what is their functional lifespan. 
He closed&nbsp;with a final quote from Millie Dresselhaus: “Follow your interests, get the best available education and training, set your sights high, be persistent, be flexible, keep your options open, accept help when offered, and be prepared to help others.”</p> <p>Nominations for the 2020 Dresselhaus lecture can be submitted <a href="" target="_blank">on MIT.nano’s website</a>. Any significant figure in science and engineering from anywhere in the world may be considered.</p> Cornell University’s Paul McEuen gives the inaugural Mildred S. Dresselhaus Lecture on cell-sized sensors and robots.Photo: Justin KnightMIT.nano, Electrical engineering and computer science (EECS), Physics, School of Engineering, School of Science, Nanoscience and nanotechnology, Special events and guest speakers, Faculty, Women in STEM, Carbon, Robots, Robotics, History of MIT Toward more efficient computing, with magnetic waves Circuit design offers a path to “spintronic” devices that use little electricity and generate practically no heat. Thu, 28 Nov 2019 13:59:59 -0500 Rob Matheson | MIT News Office <p>MIT researchers have devised a novel circuit design that enables precise control of computing with magnetic waves — with no electricity needed. The advance takes a step toward practical magnetic-based devices, which have the potential to compute far more efficiently than electronics.</p> <p>Classical computers rely on massive amounts of electricity for computing and data storage, and generate a lot of wasted heat. In search of more efficient alternatives, researchers have started designing magnetic-based “spintronic” devices, which use relatively little electricity and generate practically no heat.</p> <p>Spintronic devices leverage the “spin wave” — a quantum property of electrons — in magnetic materials with a lattice structure. This approach involves modulating the spin wave properties to produce some measurable output that can be correlated to computation. 
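To make that encoding idea concrete, here is a toy numeric sketch (an illustration only, not the researchers' model): a wave state is represented by a complex amplitude, a modulating element applies an assumed 180-degree phase flip with slight attenuation, and the bit is read from the phase difference.

```python
import numpy as np

# Toy model: a spin wave state as a complex amplitude A * exp(i * phase).
# The 180-degree flip and 0.9 attenuation are assumed illustrative values.
def modulate(wave, phase_shift_deg=180.0, attenuation=0.9):
    """Apply a phase shift and amplitude attenuation to a wave state."""
    return wave * attenuation * np.exp(1j * np.deg2rad(phase_shift_deg))

def read_bit(reference, received, threshold_deg=90.0):
    """Decode 0 or 1 from the phase difference between two wave states."""
    delta = np.angle(received / reference, deg=True)
    return 1 if abs(delta) > threshold_deg else 0

wave_in = 1.0 + 0.0j          # unit-amplitude input wave
state_0 = wave_in             # unmodulated: phase unchanged, reads as 0
state_1 = modulate(wave_in)   # modulated: phase flipped, reads as 1

print(read_bit(wave_in, state_0), read_bit(wave_in, state_1))  # 0 1
```

A real device would also have to contend with magnitude changes and noise, but two well-separated, measurable wave states are what map onto classical 0s and 1s.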
Until now, modulating spin waves has required injecting electrical currents using bulky components that can cause signal noise and effectively negate any inherent performance gains.</p> <p>The MIT researchers developed a circuit architecture that uses only a nanometer-wide domain wall in layered nanofilms of magnetic material to modulate a passing spin wave, without any extra components or electrical current. In turn, the spin wave can be tuned to control the location of the wall, as needed. This provides precise control of two switchable spin wave states, which correspond to the 1s and 0s used in classical computing. A paper describing the circuit design was published today in <em>Science</em>.</p> <p>In the future, pairs of spin waves could be fed into the circuit through dual channels, modulated for different properties, and combined to generate some measurable quantum interference — similar to how photon wave interference is used for quantum computing. Researchers hypothesize that such interference-based spintronic devices, like quantum computers, could execute highly complex tasks that conventional computers struggle with.</p> <p>“People are beginning to look for computing beyond silicon. Wave computing is a promising alternative,” says Luqiao Liu, a professor in the Department of Electrical Engineering and Computer Science (EECS) and principal investigator of the Spintronic Material and Device Group in the Research Laboratory of Electronics. “By using this narrow domain wall, we can modulate the spin wave and create these two separate states, without any real energy costs. We just rely on spin waves and intrinsic magnetic material.”</p> <p>Joining Liu on the paper are Jiahao Han, Pengxiang Zhang, and Justin T. Hou, three graduate students in the Spintronic Material and Device Group; and EECS postdoc Saima A. Siddiqui.</p> <p><strong>Flipping magnons</strong></p> <p>Spin waves are ripples of energy with small wavelengths. 
Chunks of the spin wave, which are essentially the collective spin of many electrons, are called magnons. While magnons are not true particles like individual electrons, they can be measured similarly for computing applications.</p> <p>In their work, the researchers utilized a customized “magnetic domain wall,” a nanometer-sized barrier between two neighboring magnetic structures. They layered a pattern of cobalt/nickel nanofilms — each a few atoms thick — with certain desirable magnetic properties that can handle a high volume of spin waves. Then they placed the wall in the middle of a magnetic material with a special lattice structure, and incorporated the system into a circuit.</p> <p>On one side of the circuit, the researchers excited constant spin waves in the material. As the wave passes through the wall, its magnons immediately spin in the opposite direction: Magnons in the first region spin north, while those in the second region — past the wall —&nbsp;spin south. This causes a dramatic shift in the wave’s phase (angle) and a slight decrease in its magnitude (power).</p> <p>In experiments, the researchers placed a separate antenna on the opposite side of the circuit that detects and transmits an output signal. Results indicated that, at its output state, the phase of the input wave flipped 180 degrees. The wave’s magnitude — measured from highest to lowest peak —&nbsp;had also decreased by a significant amount.</p> <p><strong>Adding some torque</strong></p> <p>Then, the researchers discovered a mutual interaction between spin wave and domain wall that enabled them to efficiently toggle between two states. Without the domain wall, the circuit would be uniformly magnetized; with the domain wall, the circuit has a split, modulated wave.</p> <p>By controlling the spin wave, they found they could control the position of the domain wall. 
This relies on a phenomenon called “spin-transfer torque,” in which spinning electrons essentially jolt a magnetic material to flip its magnetic orientation.</p> <p>In their experiments, the researchers boosted the power of injected spin waves to induce a certain spin of the magnons. This actually draws the wall toward the boosted wave source. In doing so, the wall gets jammed under the antenna — effectively making it unable to modulate waves and ensuring uniform magnetization in this state.</p> <p>Using a special magnetic microscope, they showed that this method causes a micrometer-size shift in the wall, which is enough to position it anywhere along the material block. Notably, the mechanism of magnon spin-transfer torque was proposed, but not demonstrated, a few years ago. “There was good reason to think this would happen,” Liu says. “But our experiments prove what will actually occur under these conditions.”</p> <p>The whole circuit is like a water pipe, Liu says. The valve (domain wall) controls how the water (spin wave) flows through the pipe (material). “But you can also imagine making water pressure so high, it breaks the valve off and pushes it downstream,” Liu says. “If we apply a strong enough spin wave, we can move the position of domain wall — except it moves slightly upstream, not downstream.”</p> <p>Such innovations could enable practical wave-based computing for specific tasks, such as the signal-processing technique called the “fast Fourier transform.” Next, the researchers hope to build a working wave circuit that can execute basic computations. Among other things, they have to optimize materials, reduce potential signal noise, and further study how fast they can switch between states by moving around the domain wall. 
“That’s next on our to-do list,” Liu says.</p> An MIT-invented circuit uses only a nanometer-wide “magnetic domain wall” to modulate the phase and magnitude of a spin wave, which could enable practical magnetic-based computing — using little to no electricity.Image courtesy of the researchers, edited by MIT NewsResearch, Computer science and technology, Nanoscience and nanotechnology, Spintronics, electronics, Energy, Quantum computing, Materials Science and Engineering, Design, Research Laboratory of Electronics, Electrical Engineering & Computer Science (eecs), School of Engineering Smart systems for semiconductor manufacturing Lam Research Tech Symposium, co-hosted by MIT.nano and Microsystems Technology Lab, explores challenges, opportunities for the future of the industry. Mon, 25 Nov 2019 12:55:01 -0500 Amanda Stoll | MIT.nano <p>Integrating smart systems into manufacturing offers the potential to transform many industries.&nbsp;Lam Research, a founding member of the MIT.nano Consortium and a longtime member of the Microsystems Technology Lab (MTL) Microsystems Industrial Group, explored the challenges and opportunities smart systems bring to the semiconductor industry at its annual technical symposium, held at MIT in October.</p> <p>Co-hosted by MIT.nano and the MTL, the two-day event brought together Lam’s global technical staff, academic collaborators, and industry leaders with MIT faculty, students, and researchers to focus on software and hardware needed for smart manufacturing and process controls.</p> <p>Tim Archer, president and CEO of Lam Research, kicked off the first day, noting that “the semiconductor industry is more impactful to people's lives than ever before."&nbsp;</p> <p>“We stand at an innovation inflection point where smart systems will transform the way we work and live,” says Rick Gottscho, executive vice president and chief technology officer of Lam Research. 
“The event inspires us to make the impossible possible, through learning about exciting research opportunities that drive innovation, fostering collaboration between industry and academia to discover best-in-class solutions together, and engaging researchers and students in our industry. For all of us to realize the opportunities of smart systems, we have to embrace challenges, disrupt conventions, and collaborate.”</p> <p>The symposium featured speakers from MIT and Lam Research, as well as the University of California at Berkeley, Tsinghua University in Beijing, Stanford University, Winbond Electronics Corporation, Harting Technology Group, and GlobalFoundries, among others. Professors, corporate leaders, and MIT students came together over discussions of machine learning, micro- and nanofabrication, big data — and how it all relates to the semiconductor industry.</p> <p>“The most effective way to deliver innovative and&nbsp;lasting&nbsp;solutions is to combine our skills with others, working here on the MIT campus and beyond,” says Vladimir Bulović, faculty director of MIT.nano and the&nbsp;Fariborz Maseeh Chair in&nbsp;Emerging Technology. 
“The strength of this event was not only the fantastic mix&nbsp;of expertise and&nbsp;perspectives convened by Lam and MIT, but also the variety of&nbsp;opportunities it created for networking and connection.”</p> <p>Tung-Yi Chan, president of Winbond Electronics, a specialty memory integrated circuit company, set the stage on day one with his opening keynote, “Be a ‘Hidden Champion’ in the Fast-Changing Semiconductor Industry.” The second day’s keynote, given by&nbsp;Ron Sampson, senior vice president and general manager of US Fab Operations at GlobalFoundries, continued the momentum, addressing the concept that smart manufacturing is key to the future for semiconductors.</p> <p>“We all marvel at the seemingly superhuman capabilities that AI systems have recently demonstrated in areas of image classification, natural language processing, and autonomous navigation,” says Jesús del Alamo, professor of electrical engineering and computer science and former faculty director of MTL. “The symposium discussed the potential for smart tools to transform semiconductor manufacturing. This is a terrific topic for exploration in collaboration between semiconductor equipment makers and universities.”</p> <p>A series of plenary talks took place over the course of the symposium:</p> <ul> <li>“Equipment Intelligence: Fact or Fiction” – Rick Gottscho, executive vice president and chief technology officer at Lam Research</li> <li>“Machine Learning for Manufacturing: Opportunities and Challenges”&nbsp;– Duane Boning, the Clarence J. 
LeBel Professor in Electrical Engineering at MIT</li> <li>“Learning-based Diagnosis and Control for Nonequilibrium Plasmas”&nbsp;– Ali Mesbah, assistant professor of chemical and biomolecular engineering at the University of California at Berkeley</li> <li>“Reconfigurable Computing and AI Chips”<em>&nbsp;</em>– Shouyi Yin, professor and vice director of the Institute of Microelectronics at Tsinghua University</li> <li>“Moore’s Law Meets Industry 4.0”&nbsp;– Costas Spanos, professor at UC Berkeley</li> <li>“Monitoring Microfabrication Equipment and Processes Enabled by Machine Learning and Non-contacting Utility Voltage and Current Measurements”&nbsp;– Jeffrey H. Lang, the Vitesse Professor of Electrical Engineering at MIT, and Vivek R. Dave, director of technology at Harting, Inc. of North America</li> <li>“Big and Streaming Data in the Smart Factory”&nbsp;– Brian Anthony, associate director of MIT.nano and principal research scientist in the Institute of Medical Engineering and Sciences (IMES) and the Department of Mechanical Engineering at MIT</li> </ul> <p>Both days also included panel discussions. The first featured leaders in global development of smarter semiconductors: Tim Archer of Lam Research; Anantha Chandrakasan of MIT; Tung-Yi Chan of Winbond; Ron Sampson of GlobalFoundries; and Shaojun Wei of Tsinghua University. The second panel brought together faculty to talk about “graduating to smart systems”: Anette “Peko” Hosoi of MIT; Krishna Saraswat of Stanford University; Huaqiang Wu of Tsinghua University; and Costas Spanos of UC Berkeley.</p> <p>Opportunities specifically for startups and students to interact with industry and academic leaders capped off each day of the symposium. 
Eleven companies competed in a startup pitch session at the end of the first day, nine of which are associated with the MIT Startup Exchange — a program that promotes collaboration&nbsp;between MIT-connected startups and industry.&nbsp;Secure AI Labs, whose work focuses on easier data sharing while preserving data privacy, was deemed the winner by a panel of six venture capitalists. The startup received a convertible note investment provided by Lam Capital.&nbsp;HyperLight, a silicon photonics startup, and&nbsp;Southie Autonomy, a robotics startup, received honorable mentions, coming in second and third place, respectively.</p> <p>Day two concluded with a student poster session. Graduate students from MIT and Tsinghua University delivered 90-second pitches about their cutting-edge research in the areas of materials and devices, manufacturing and processing, and machine learning and modeling. The winner of the lightning pitch session was MIT’s Christian Lau for his work on a modern&nbsp;microprocessor built from complementary carbon nanotube transistors.</p> <p>The Lam Research Technical Symposium takes place annually and rotates locations between academic collaborators, MIT, Stanford University, Tsinghua University, UC Berkeley, and Lam’s headquarters in Fremont, California. 
The 2020 symposium will be held at UC Berkeley next fall.</p> The 2019 Lam Research Tech Symposium brought together Lam’s global technical staff, academic collaborators, and industry leaders with MIT faculty, students, and researchers for a two-day event on smart systems for semiconductor manufacturing.Photo: Lam ResearchMIT.nano, Manufacturing, Nanoscience and nanotechnology, Industry, Data, Computer science and technology, Electrical engineering and computer science (EECS), electronics, School of Engineering, Special events and guest speakers Clear, conductive coating could protect advanced solar cells, touch screens New material should be relatively easy to produce at an industrial scale, researchers say. Fri, 22 Nov 2019 13:59:59 -0500 David L. Chandler | MIT News Office <p>MIT researchers have improved on a transparent, conductive coating material, producing a tenfold gain in its electrical conductivity. When incorporated into a type of high-efficiency solar cell, the material increased the cell’s efficiency and stability.</p> <p>The new findings are reported today in the journal <em>Science Advances</em>, in a paper by MIT postdoc Meysam Heydari Gharahcheshmeh, professors Karen Gleason and Jing Kong, and three others.</p> <p>“The goal is to find a material that is electrically conductive as well as transparent,” Gleason explains, which would be “useful in a range of applications, including touch screens and solar cells.” The material most widely used today for such purposes is known as ITO, for indium tin oxide, but that material is quite brittle and can crack after a period of use, she says.</p> <p>Gleason and her co-researchers improved a flexible version of a transparent, conductive material two years ago and published their findings, but this material still fell well short of matching ITO’s combination of high optical transparency and electrical conductivity. 
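One practical way to compare transparent conductors is sheet resistance, which combines bulk conductivity with film thickness. The quick sketch below (with assumed, illustrative numbers rather than figures from the paper) shows why higher conductivity matters so much for layers only nanometers thick.

```python
# Sheet resistance R_s = 1 / (sigma * t): sigma is bulk conductivity in S/cm,
# t is film thickness. The 10 nm thickness is assumed for illustration only.
def sheet_resistance_ohm_per_sq(sigma_s_per_cm, thickness_nm):
    """Sheet resistance (ohms per square) of a thin conductive film."""
    thickness_cm = thickness_nm * 1e-7  # 1 nm = 1e-7 cm
    return 1.0 / (sigma_s_per_cm * thickness_cm)

# Hypothetical 10-nanometer films at two bulk conductivities:
print(round(sheet_resistance_ohm_per_sq(50, 10)))    # -> 20000
print(round(sheet_resistance_ohm_per_sq(3000, 10)))  # -> 333
```

At a fixed thickness, any gain in bulk conductivity cuts the film's sheet resistance by the same factor.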
The new, more ordered material, she says, is more than 10 times better than the previous version.</p> <p>The electrical conductivity of such materials is measured in units of siemens per centimeter. ITO ranges from 6,000 to 10,000, and though nobody expected a new material to match those numbers, the goal of the research was to find a material that could reach at least a value of 35. The earlier publication exceeded that by demonstrating a value of 50, and the new material has leapfrogged that result, now clocking in at 3,000; the team is still working on fine-tuning the process to raise that further.</p> <p>The high-performing flexible material, an organic polymer known as PEDOT, is deposited in an ultrathin layer just a few nanometers thick, using a process called oxidative chemical vapor deposition (oCVD). This process results in a layer in which the tiny crystals that form the polymer are all perfectly aligned horizontally, giving the material its high conductivity. Additionally, the oCVD method can decrease the stacking distance between polymer chains within the crystallites, which also enhances electrical conductivity.</p> <p>To demonstrate the material’s potential usefulness, the team incorporated a layer of the highly aligned PEDOT into a perovskite-based solar cell. Such cells are considered a very promising alternative to silicon because of their high efficiency and ease of manufacture, but their lack of durability has been a major drawback. With the new oCVD aligned PEDOT, the perovskite cell’s efficiency improved and its stability doubled.</p> <p>In the initial tests, the oCVD layer was applied to substrates that were 6 inches in diameter, but the process could be adapted directly to a large-scale, roll-to-roll industrial manufacturing process, Heydari Gharahcheshmeh says. “It’s now easy to adapt for industrial scale-up,” he says. 
That’s facilitated by the fact that the coating can be processed at 140 degrees Celsius — a much lower temperature than alternative materials require.</p> <p>The oCVD PEDOT coating is applied in a mild, single-step process, enabling direct deposition onto plastic substrates, as desired for flexible solar cells and displays. In contrast, the aggressive growth conditions of many other transparent conductive materials require an initial deposition on a different, more robust substrate, followed by complex processes to lift off the layer and transfer it to plastic.</p> <p>Because the material is made by a dry vapor deposition process, the thin layers produced can follow even the finest contours of a surface, coating them all evenly, which could be useful in some applications. For example, it could be coated onto fabric and cover each fiber but still allow the fabric to breathe.</p> <p>The team still needs to demonstrate the system at larger scales and prove its stability over longer periods and under different conditions, so the research is ongoing. But “there’s no technical barrier to moving this forward. It’s really just a matter of who will invest to take it to market,” Gleason says.</p> <p>The research team included MIT postdocs Mohammad Mahdi Tavakoli and Maxwell Robinson, and research affiliate Edward Gleason. The work was supported by Eni S.p.A. under the Eni-MIT Alliance Solar Frontiers Program.</p> Illustration shows the apparatus used to create a thin layer of a transparent, electrically conductive material, to protect solar cells or other devices. The chemicals used to produce the layer, shown in tubes at left, are introduced into a vacuum chamber where they deposit a layer on a substrate material at top of the chamber. 
Illustration courtesy of the authors, edited by MIT NewsResearch, School of Engineering, Mechanical engineering, Chemical engineering, Polymers, optoelectronics, Solar, Renewable energy, Materials Science and Engineering, Nanoscience and nanotechnology, Electrical Engineering & Computer Science (eecs) Improbability Walk at MIT.nano honors Mildred Dresselhaus Courtyard space celebrates beloved professor’s research and mentorship. Fri, 15 Nov 2019 13:50:01 -0500 Chad Galts | MIT.nano <p>The courtyard between the south-facing walls of buildings 4 and 8 and the recently constructed MIT.nano facility has the feel of a meditation space. Overlooked by the great dome, edged with pillars of bamboo, and lined by glass walls on all sides, the walkway has been christened the “Improbability Walk,” in honor of one of MIT’s most inspirational faculty members: the late Institute Professor Mildred “Millie” Dresselhaus.</p> <p>The name for the space was the idea of Vladimir Bulović, the Fariborz Maseeh (1990) Professor in Emerging Technology. “Millie often used the word ‘improbable’ to describe her success,” recalls Bulović, who is also the founding faculty director of MIT.nano. “And, especially later in her career, she often used the simple act of walking across campus as an opportunity to teach, and to learn from her students. Combining the idea of improbable journeys and walking as a form of mentorship and exchanging ideas seemed a fitting tribute to Millie.”</p> <p>The roots of walking as an avenue for teaching ran deep for Dresselhaus. While she was studying physics at the University of Chicago in the early 1950s, she lived in the same neighborhood as Nobel laureate Enrico Fermi and walked to campus with him every day, reviewing experiments and talking about their work. 
Throughout her life, she credited these walks with inspiring her journey to become a professor, researcher, and mentor.</p> <p>It is no surprise, therefore, that walking and talking became a lifelong practice at MIT with her colleagues and students. “On the way back from seminars, we would always talk about what we’d just learned. Or she would see something — like the Green Building, or a poster about an upcoming lecture — and she would want to tell us about it,” recalls Shengxi Huang PhD ’17 of her many strolls across campus with Dresselhaus. “And she was a really good storyteller.”&nbsp;</p> <p>Dresselhaus’s appetite for walking and talking, says Xi Ling, a postdoc in Dresselhaus’s lab from 2012 to 2016 and now an assistant professor of chemistry at Boston University, was not limited to the MIT campus, or even just Massachusetts. “We traveled to a lot of conferences, and she was always curious about everything and wanting to tell us stories,” says Ling. “When we went to New Mexico for a conference, she wanted to show us the plants. In Detroit, she wanted to explain the whole city.”</p> <p>Ling and Huang, however, take issue with Dresselhaus’s description of her path to success as improbable. “To us, seeing her work so hard — attending conferences, writing letters, sitting in the front row at every seminar — it makes perfect sense,” says Ling. “It is very hard for a normal person to do all of these things, and she did it for 60 years!”</p> <p>But Dresselhaus’s career really was launched by a succession of low-probability events. The child of newly arrived Eastern European immigrants, she grew up in the 1930s in “a dangerous, low-income neighborhood” in the Bronx, she wrote in 2012.
But her diligence, her love of math and science, and a series of fortunate coincidences and connections propelled her first to Hunter College, then Radcliffe College, and then the University of Chicago, where she studied physics with Fermi.</p> <p>When she joined the MIT faculty in 1967, Dresselhaus was one of only two women on the science and engineering faculty. Of the approximately 4,000 undergraduates at MIT at that time, roughly 200 were women — and only 28 were studying engineering. As she pursued her research in what would become some of the most important scientific endeavors of the late 20th and early 21st centuries, Dresselhaus also made increasing participation among women in science and engineering a personal responsibility.&nbsp;</p> <p>Thanks in part to her efforts, the landscape shifted. Today, roughly half of MIT undergraduates and more than a third of graduate students are women. And Dresselhaus’s contributions were not limited to the Institute. In 1975, she published “Some Personal Views on Engineering Education for Women” in&nbsp;<em>IEEE Transactions</em> — an open call to her academic colleagues to develop more effective approaches to providing women students with the best training opportunities. She went on to serve as the first chair of the Committee on Women in Science and Engineering for both the National Academy of Sciences and the National Academy of Engineering, inaugurating a range of national programs and activities that continue to this day.&nbsp;</p> <p>From the perspective of where she began, Dresselhaus’s scientific accomplishments seem just as improbable. In the 1960s, she opted to work in the electro-optics of semimetals, such as carbon, because she wanted, in part, “a less competitive research area while we had our babies,” she wrote. Her late-career coronation as the “queen of carbon science” was based on decades of work on the fundamental electronic properties of carbon.
Her research gave the world fullerenes, superconductors, and nanotubes — paving the way for whole new avenues of scientific inquiry in nearly every field of physical science and engineering. Working with more than 900 collaborators over the course of her career, Dresselhaus authored more than 1,700 scientific papers and co-wrote eight books.&nbsp;</p> <p>The catalog of awards and honors earned by Dresselhaus during her career also strains credulity: the Presidential Medal of Freedom, the National Medal of Science, the Kavli Prize, the Enrico Fermi Award, the Vannevar Bush Award, the IEEE Founders Medal, 38 honorary degrees, and many, many more. She also led the American Physical Society and the American Association for the Advancement of Science, advised three U.S. presidents, and chaired numerous influential national commissions.</p> <p>MIT.nano recently hosted the inaugural Mildred S. Dresselhaus Lecture, part of a new series of talks recognizing a significant figure in science and engineering from anywhere in the world whose leadership and impact echo Dresselhaus’s life, accomplishments, and values. The first speaker in the series was Paul McEuen, the John A. Newman Professor of Physical Science at Cornell University and director of the Kavli Institute at Cornell for Nanoscale Science, who presented on Nov. 13 on cell-sized sensors and robots. “When I think of my scientific heroes, it's a very, very short list, and I think at the top of it would be Millie Dresselhaus,” McEuen said in his opening remarks. “To be able to give this lecture in her honor means the world to me.”</p> <p>“The Improbability Walk is more than just a place. It’s a call for all of us to invest in the future of MIT, so we can allow people of all backgrounds to succeed — despite the odds,” says Bulović.
“It’s also a reminder to all of us that a few simple words, said at just the right moment, can change a person’s life.”</p> The section of the MIT.nano courtyard that runs along the south side of the building has been named the Improbability Walk, in honor of Mildred Dresselhaus.Image courtesy of Wilson ArchitectsMIT.nano, Electrical engineering and computer science (EECS), Nanoscience and nanotechnology, Community, Faculty, Women in STEM, Carbon, Physics, School of Science, School of Engineering, Campus buildings and architecture SMART discovers nondisruptive way to characterize the surface of nanoparticles New method overcomes limitations of existing chemical procedures and may accelerate nanoengineering of materials. Tue, 12 Nov 2019 11:25:01 -0500 Singapore-MIT Alliance for Research and Technology <p>Researchers from the Singapore-MIT Alliance for Research and Technology (SMART) have made a discovery that allows scientists to "look" at the surface density of dispersed nanoparticles. This technique enables researchers to understand the properties of nanoparticles without disturbing them, at a much lower cost and far more quickly than with existing methods.</p> <p>The new process is explained in a <a href="">paper</a> entitled “Measuring the Accessible Surface Area within the Nanoparticle Corona using Molecular Probe Adsorption,” published in the academic journal <em>Nano Letters.</em> It was led by Michael Strano, co-lead principal investigator of the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) research group at SMART and the Carbon P. Dubbs Professor at MIT, and MIT graduate student Minkyung Park. 
DiSTAP is a part of SMART, MIT’s research enterprise in Singapore, and develops new technologies to enable Singapore, a city-state dependent upon imported food and produce, to improve its agricultural yield and reduce external dependencies.</p> <p>The molecular probe adsorption (MPA) method is based on a noninvasive adsorption of a fluorescent probe on the surface of colloidal nanoparticles in an aqueous phase. Researchers are able to calculate the surface coverage of dispersants on the nanoparticle surface — which are used to keep the particle stable at room temperature — from the physical interaction between the probe and the nanoparticle surface.</p> <p>“We can now characterize the surface of the nanoparticle through its adsorption of the fluorescent probe. This allows us to understand the surface of the nanoparticle without damaging it, which is, unfortunately, the case with chemical processes widely used today,” says Park. “This new method also uses machines that are readily available in labs today, opening up a new, easy method for the scientific community to develop nanoparticles that can help revolutionize different sectors and disciplines.”</p> <p>The MPA method is also able to characterize a nanoparticle within minutes, compared with the several hours that the best chemical methods require today. Because it uses only fluorescent light, it is also substantially cheaper.&nbsp;</p> <p>DiSTAP has started to use this method for nanoparticle sensors in plants and nanocarriers for delivery of molecular cargo into plants.</p> <p>“We are already using the new MPA method within DiSTAP to aid us in creating sensors and nanocarriers for plants,” says Strano. “It has enabled us to discover and optimize more sensitive sensors and understand the surface chemistry, which in turn allows for greater precision when monitoring plants.
With higher-quality data and insight into plant biochemistry, we can ultimately provide optimal nutrient levels or beneficial hormones for healthier plants and higher yields.”</p> Schematic illustration of probe adsorption influenced by an attractive interaction within the coronaSingapore-MIT Alliance for Research and Technology (SMART), Nanoscience and nanotechnology, Agriculture, Imaging, Chemical engineering, School of Engineering, Research MIT.nano announces the Mildred S. Dresselhaus Lectures Cornell University’s Paul McEuen will inaugurate series to honor beloved MIT professor. Tue, 05 Nov 2019 14:30:01 -0500 Amanda Stoll | MIT.nano <p>Over a 50-year career as an MIT professor and pioneer in the field of nanoscience, Mildred Dresselhaus (1930-2017) helped unlock the secrets of carbon and paved the way for future scientists and engineers to study at the nanoscale. To pay enduring tribute to Dresselhaus and her extraordinary impact on MIT and the broader scientific community,&nbsp;MIT.nano has established the Mildred S. Dresselhaus Lectures.</p> <p>The new event — which will be held annually in November, the month of Dresselhaus’s birth — will recognize a significant figure in science and engineering from anywhere in the world whose leadership and impact echo Dresselhaus’s life, accomplishments, and values.&nbsp;</p> <p>The first distinguished lecturer, selected by a committee of MIT faculty, is Paul McEuen, the John A. Newman Professor of Physical Science at Cornell University and director of the Kavli Institute at Cornell for Nanoscale Science.</p> <p>McEuen, whose research group&nbsp;boasts the tag line “anything, as long as it’s small,”&nbsp;will present a public lecture at MIT on Wednesday, Nov. 
13, entitled “<a href="">Cell-sized Sensors and Robots</a>.” The talk will address a Cornell effort to combine microelectronics, optics, paper arts, and 2D materials to create a new generation of cell-sized smart, active sensors and microbots that are powered and communicate by light.</p> <p>“Paul’s explorations of&nbsp;the electronic, optical, and mechanical properties of nanoscale materials&nbsp;are helping to lead us into the Nano Age. His contributions as a scientist are equaled by his generosity of spirit as a colleague and mentor,” says Vladimir Bulović, the founding faculty director of MIT.nano and the&nbsp;Fariborz Maseeh (1990) Professor in Emerging Technology. “We are delighted to launch the new Dresselhaus Lectures with someone whose creativity, impact, and character offer such a strong reminder of why Millie was so special to us.”</p> <p>In addition to their research focus, Dresselhaus and McEuen share a connection through the Kavli Foundation. Dresselhaus received the Kavli Prize in Nanoscience in 2012 for her work studying phonons, electron-phonon interactions, and thermal transport in nanostructures.</p> <p>Dresselhaus held several scientific leadership roles, including president of the American Physical Society in 1984, of which McEuen is now a fellow. She was well-known both off and on MIT’s campus, where she remained an active presence late into her career. Starting at Lincoln Laboratory in 1960, Dresselhaus went on to have appointments in the departments of Electrical Engineering and Physics.&nbsp;In 1985 she became the first woman at MIT to be honored with the title of Institute Professor, an esteemed position held by no more than 12 MIT professors at one time. Dresselhaus co-authored eight books and about 1,700 papers, and supervised more than 60 doctoral students.</p> <p>Dresselhaus was also known for her mentorship and dedication to promoting gender equity in science and engineering. 
In 1971, she organized the first Women’s Forum at MIT to explore the roles of women in science and engineering. Such early efforts reflected a lifelong commitment to promoting gender equity in science and engineering and to encouraging women to enter these traditionally male-dominated fields.</p> <p>Dresselhaus was also a strong faculty supporter during the development of MIT.nano, an open-access facility for nanoscience and nanoengineering set in the heart of MIT where researchers from different departments can encounter one another, sharing knowledge and ideas. “The vision is that this nano building will change the exploration of many things,” said Dresselhaus in 2016. “You have to be in an environment that’s permissive of crazy thoughts and crazy directions, which can lead to something really great.”</p> <p>The inaugural Dresselhaus Lecture with Paul McEuen is free and open to the public. <a href="" target="_blank">Advance registration</a> is required.</p> Mildred Dresselhaus Photo: Ed QuinnMIT.nano, Lincoln Laboratory, Physics, School of Engineering, School of Science, Nanoscience and nanotechnology, Special events and guest speakers, Faculty, History of MIT, Women in STEM, Electrical Engineering & Computer Science (eecs), 2-D, Women, Carbon Nanoparticle orientation offers a way to enhance drug delivery Coating particles with “right-handed” molecules could help them penetrate cancer cells more easily. 
Tue, 05 Nov 2019 05:59:59 -0500 Anne Trafton | MIT News Office <p>MIT engineers have shown that they can enhance the performance of drug-delivery nanoparticles by controlling a trait of chemical structures known as chirality — the “handedness” of the structure.</p> <p>Many biological molecules can come in either right-handed or left-handed forms, which are identical in composition but are mirror images of each other.</p> <p>The MIT team found that coating nanoparticles with the right-handed form of the amino acid cysteine helped the particles to avoid being destroyed by enzymes in the body. It also helped them to enter cells more efficiently. This finding could help researchers to design more effective carriers for drugs to treat cancer and other diseases, says Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute for Integrative Cancer Research.</p> <p>“We are very excited about this paper because controlling chirality offers new possibilities for drug delivery and hence new medical treatments,” says Langer, who is one of the senior authors of the paper.</p> <p>Ana Jaklenec, a research scientist at the Koch Institute, is also a senior author of <a href="" target="_blank">the paper</a>, which appears in <em>Advanced Materials </em>on Nov. 4. The paper’s lead author is MIT postdoc Jihyeon Yeom. Other authors of the paper are former MIT postdocs Pedro Guimaraes and Kevin McHugh, MIT postdoc Quanyin Hu, and Koch Institute research affiliate Michael Mitchell. Hyo Min Ahn, BoKyeong Jung, and Chae-Ok Yun of Hanyang University in Seoul, South Korea, are also authors of the paper.</p> <p><strong>Chiral interactions</strong></p> <p>Many biologically important molecules have evolved to exist exclusively in either right-handed (“D”) or left-handed (“L”) versions, also called enantiomers. 
For example, naturally occurring amino acids are always “L” enantiomers, while DNA and glucose are usually “D.”</p> <p>“Chirality is ubiquitous in nature, imparting uniqueness and specificity to the biological and chemical properties of materials,” Yeom says. “For example, molecules formed with the same composition taste sweet or bitter and smell differently depending on their chirality, and one enantiomer is inactive or even toxic while the other enantiomer can serve an important biological function.”</p> <p>The MIT team hypothesized that it might be possible to take advantage of chiral interactions to improve the performance of drug-delivery nanoparticles. To test that idea, they created “supraparticles” consisting of clusters of 2-nanometer cobalt oxide particles whose chirality was provided by either the “D” or “L” version of cysteine on the surfaces.</p> <p>By flowing these particles along a channel lined with cancer cells, including myeloma and breast cancer cells, the researchers could test how well each type of particle was absorbed by the cells. They found that particles coated with “D” cysteine were absorbed more efficiently, which they believe is because they are able to interact more strongly with cholesterol and other lipids found in the cell membrane, which also have the “D” orientation.</p> <p>The researchers also believed that the “D” version of cysteine might help nanoparticles avoid being broken down by enzymes in the body, which are made of “L” amino acids. This could allow the particles to circulate in the body for longer periods of time, making it easier for them to reach their intended destinations.</p> <p>In a study of mice, the researchers found that “D”-coated particles did stay in the bloodstream longer, suggesting that they were able to successfully evade enzymes that destroyed the “L”-coated particles. 
About two hours after injection, the number of “D” particles in circulation was much greater than the number of “L” particles, and it remained higher over the 24 hours of the experiment.</p> <p>“This is a first step in looking at how chirality can potentially aid these particles in reaching cancer cells and increasing circulation time.&nbsp;The next step is to see if we could actually make a difference in cancer treatment,” Jaklenec says.</p> <p><strong>Modified particles</strong></p> <p>The researchers now plan to test this approach with other types of drug-delivery particles. In one project, they are investigating whether coating gold particles with “D” amino acids will improve their ability to deliver cancer drugs in mice. In another, they are using this approach to modify adenoviruses, which some of their collaborators are developing as a potential new way to treat cancer.</p> <p>“In this study, we showed that the ‘D’ chirality allows for longer circulation time and increased uptake by cancer cells. The next step would be to determine if drug-loaded chiral particles give enhanced or prolonged efficacy compared to free drug,” Jaklenec says. “This is potentially translatable to essentially any nanoparticle.”</p> <p>The research was funded by the Koch Institute’s Marble Center for Cancer Nanomedicine, the National Council for Scientific and Technological Development of Brazil, the Estudar Foundation, a Ruth L. 
Kirschstein National Research Service Award, a Burroughs Wellcome Fund Career Award at the Scientific Interface, a National Institutes of Health Director’s New Innovator Award, the American Cancer Society, an AACR-Bayer Innovation and Discovery Grant, and the National Research Foundation of Korea.</p> MIT engineers created clusters of nanoparticles that are coated with “right-handed” molecules of the amino acid cysteine.Image: Jihyeon YeomResearch, Drug delivery, Nanoscience and nanotechnology, Chemical engineering, Koch Institute, Institute for Medical Engineering and Science (IMES), School of Engineering, National Institutes of Health (NIH) 3 Questions: How to control biofilms in space MIT and University of Colorado researchers are collaborating on an experiment to be sent to the International Space Station. Fri, 01 Nov 2019 11:00:35 -0400 David L. Chandler | MIT News Office <p><em>Researchers from MIT will be collaborating with colleagues at the University of Colorado at Boulder on <a href="" target="_blank">an experiment</a> scheduled to be sent to the International Space Station (ISS) on Nov. 2. The experiment is looking for ways to address the formation of biofilms on surfaces within the space station. These hard-to-kill communities of bacteria or fungi can cause equipment malfunctions and make astronauts sick. </em>MIT News<em> asked professor of mechanical engineering Kripa Varanasi and doctoral student Samantha McBride to describe the planned experiments and their goals.</em></p> <p><strong>Q: </strong>For starters, tell us about the problem that this research aims to address.</p> <p><strong>Varanasi: </strong>Biofilms grow on surfaces in space stations, which initially was a surprise to me. Why would they grow in space? But it’s an issue that can jeopardize the key equipment — space suits, water recycling units, radiators, navigation windows, and so on — and can also lead to human illness.
It therefore needs to be understood and characterized, especially for long-duration space missions.</p> <p>On some of the early space stations, like Mir and Skylab, there were astronauts who were getting sick in space. I don’t know if we can say for sure it’s due to these biofilms, but we do know that there have been equipment failures due to biofilm growth, such as clogged valves.</p> <p>In the past there have been studies that show the biofilms actually grow and accumulate more in space than on Earth, which is kind of surprising. They grow thicker; they have different forms. The goal of this project is to study how biofilms grow in space. Why do they get all these different morphologies? Essentially, it’s the absence of gravity and probably other driving forces, such as convection.</p> <p>We also want to think about remediation approaches. How could you solve this problem? In our current collaboration with <a href="">Luis Zea</a> at UC Boulder, we are looking at biofilm growth on engineered substrates in the presence and absence of gravity. We make different surfaces for these biofilms to grow on, we apply some of our technologies developed in this lab, including liquid-impregnated surfaces [LIS] and superhydrophobic nanotextured surfaces, and we look at how biofilms grow on them. We found that after a year’s worth of experiments, here on Earth, the LIS surfaces did really well: There was no biofilm growth, compared with many other state-of-the-art substrates.</p> <p><strong>Q:</strong> So what will you be looking for in this new experiment to be flown on the ISS?</p> <p><strong>McBride:</strong> There are signs indicating that bacteria might actually increase their virulence in space, and so astronauts are more likely to get sick.
This is interesting because usually when you think of bacteria, you’re thinking of something that’s so small that gravity shouldn’t play that big a role.</p> <p>Professor Cynthia Collins’s group at RPI [Rensselaer Polytechnic Institute] did a previous experiment on the ISS showing that when you have normal gravity, the bacteria are able to move around and form these mushroom-like shapes, whereas in microgravity, mobile bacteria form this kind of canopy-shaped biofilm. So basically, they’re not as constrained any more and they can start to grow outward in this unusual morphology.</p> <p>Our current work is a collaboration with UC Boulder and Luis Zea as the principal investigator. So now instead of just looking at how bacteria respond to microgravity versus gravity on Earth, we’re also looking at how they grow on different engineered substrates. And also, more fundamentally, we can see why bacterial biofilms form the way they do on Earth, just by taking away that one variable of gravity.</p> <p>There are two different experiments, one with bacterial biofilms and one with fungal biofilms. Zea and his group have been growing these organisms in a test media in the presence of those surfaces, and then characterizing them by the biofilm mass, the thickness, morphology, and then the gene expression. These samples will now be sent to the space station to see how they grow there.</p> <p><strong>Q:</strong> So based on the earlier tests, what are you expecting to see when the samples come back to Earth after two months?</p> <p><strong>Varanasi: </strong>What we’ve found so far is that, interestingly, a great deal of biomass grows on superhydrophobic surfaces, which are usually thought to be antifouling. In contrast, on the liquid-impregnated surfaces, the technology behind <a href="">Liquiglide</a>, there was basically no biomass growth.
This produced the same result as the negative control, where there were no bacteria.</p> <p>We also did some control tests to confirm that the oil used on the liquid-impregnated surfaces is not biocidal. So we’re not just killing the bacteria; they’re actually just not adhering to the substrate, and they’re not growing there.</p> <p><strong>McBride: </strong>For the LIS surfaces, we’ll be looking at whether biofilms form on them or not. I think both results would be really interesting. If biofilms grow on these surfaces in space, but not on the ground, I think that’s going to tell us something very interesting about the behavior of these organisms. And of course, if biofilms don’t form and the surfaces prevent formation like they do on the ground, then that’s also great, because now we have a mechanism to prevent biofilm formation on some of the equipment in the space station.&nbsp;</p> <p>So we would be happy with either result, but if the LIS does perform as well as it did on the ground, I think it’s going to have a huge impact on future missions in terms of preventing biofilms and not getting people sick.&nbsp;</p> <p>Fundamentally, from a science point of view, we want to understand the growth of these films and understand all of the biomechanical, biophysical, and biochemical mechanisms behind the growth. By adding the surface morphology, texture, and other properties like the liquid-impregnated surfaces, we may see new phenomena in the growth and evolution of these films, and maybe actually come up with a solution to fix the problem.</p> <p><strong>Varanasi:</strong> And then that can lead to designing new equipment or even space suits that have these features.
So that’s where I think we would like to learn from this and then propose solutions.</p> NASA’s official mission patch for the upcoming space biofilms experiment, developed at MIT and the University of Colorado, which is scheduled to be sent to the International Space Station.Image courtesy of the researchers.Research, Microbes, Nanoscience and nanotechnology, Mechanical engineering, Surface engineering, NASA, Materials Science and Engineering, 3 Questions, Faculty, Students, Graduate, postdoctoral, School of Science, Space, astronomy and planetary science MIT engineers develop a new way to remove carbon dioxide from air The process could work on the gas at any concentration, from power plant emissions to open air. Thu, 24 Oct 2019 23:59:59 -0400 David Chandler | MIT News Office <p>A new way of removing carbon dioxide from a stream of air could provide a significant tool in the battle against climate change. The new system can work on the gas at virtually any concentration level, even down to the roughly 400 parts per million currently found in the atmosphere.</p> <p>Most methods of removing carbon dioxide from a stream of gas require higher concentrations, such as those found in the flue emissions from fossil fuel-based power plants. A few variations have been developed that can work with the low concentrations found in air, but the new method is significantly less energy-intensive and expensive, the researchers say.</p> <p>The technique, based on passing air through a stack of charged electrochemical plates, is described in a new paper in the journal <em>Energy &amp; Environmental Science</em>, by MIT postdoc Sahag Voskian, who developed the work during his PhD, and T.
Alan Hatton, the Ralph Landau Professor of Chemical Engineering.</p> <div class="cms-placeholder-content-video"></div> <p>The device is essentially a large, specialized battery that absorbs carbon dioxide from the air (or other gas stream) passing over its electrodes as it is being charged up, and then releases the gas as it is being discharged. In operation, the device would simply alternate between charging and discharging, with fresh air or feed gas being blown through the system during the charging cycle, and then the pure, concentrated carbon dioxide being blown out during the discharging.</p> <p>As the battery charges, an electrochemical reaction takes place at the surface of each of a stack of electrodes. These are coated with a compound called polyanthraquinone, which is composited with carbon nanotubes. The electrodes have a natural affinity for carbon dioxide and readily react with its molecules in the airstream or feed gas, even when it is present at very low concentrations. The reverse reaction takes place when the battery is discharged — during which the device can provide part of the power needed for the whole system — and in the process ejects a stream of pure carbon dioxide. The whole system operates at room temperature and normal air pressure.</p> <p>“The greatest advantage of this technology over most other carbon capture or carbon absorbing technologies is the binary nature of the adsorbent’s affinity to carbon dioxide,” explains Voskian. In other words, the electrode material, by its nature, “has either a high affinity or no affinity whatsoever,” depending on the battery’s state of charging or discharging. 
Other reactions used for carbon capture require intermediate chemical processing steps or the input of significant energy, such as heat or pressure differences.</p> <p>“This binary affinity allows capture of carbon dioxide from any concentration, including 400 parts per million, and allows its release into any carrier stream, including 100 percent CO<sub>2</sub>,” Voskian says. That is, as any gas flows through the stack of these flat electrochemical cells during the release step, the captured carbon dioxide will be carried along with it. For example, if the desired end-product is pure carbon dioxide to be used in the carbonation of beverages, then a stream of the pure gas can be blown through the plates. The captured gas is then released from the plates and joins the stream.</p> <p>In some soft-drink bottling plants, fossil fuel is burned to generate the carbon dioxide needed to give the drinks their fizz. Similarly, some farmers burn natural gas to produce carbon dioxide to feed their plants in greenhouses. The new system could eliminate that need for fossil fuels in these applications, and in the process actually be taking the greenhouse gas right out of the air, Voskian says. Alternatively, the pure carbon dioxide stream could be compressed and injected underground for long-term disposal, or even made into fuel through a series of chemical and electrochemical processes.</p> <p>The process this system uses for capturing and releasing carbon dioxide “is revolutionary,” he says. “All of this is at ambient conditions — there’s no need for thermal, pressure, or chemical input.
It’s just these very thin sheets, with both surfaces active, that can be stacked in a box and connected to a source of electricity.”</p> <p>“In my laboratories, we have been striving to develop new technologies to tackle a range of environmental issues that avoid the need for thermal energy sources, changes in system pressure, or addition of chemicals to complete the separation and release cycles,” Hatton says. “This carbon dioxide capture technology is a clear demonstration of the power of electrochemical approaches that require only small swings in voltage to drive the separations.”​</p> <p>In a working plant — for example, in a power plant where exhaust gas is being produced continuously — two sets of such stacks of the electrochemical cells could be set up side by side to operate in parallel, with flue gas being directed first at one set for carbon capture, then diverted to the second set while the first set goes into its discharge cycle. By alternating back and forth, the system could always be both capturing and discharging the gas. In the lab, the team has proven the system can withstand at least 7,000 charging-discharging cycles, with a 30 percent loss in efficiency over that time. The researchers estimate that they can readily improve that to 20,000 to 50,000 cycles.</p> <p>The electrodes themselves can be manufactured by standard chemical processing methods. While today this is done in a laboratory setting, it can be adapted so that ultimately they could be made in large quantities through a roll-to-roll manufacturing process similar to a newspaper printing press, Voskian says. “We have developed very cost-effective techniques,” he says, estimating that it could be produced for something like tens of dollars per square meter of electrode.</p> <p>Compared to other existing carbon capture technologies, this system is quite energy efficient, using about one gigajoule of energy per ton of carbon dioxide captured, consistently. 
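<p>The cycle-life figures above imply a very small loss per cycle. As a rough back-of-the-envelope check (an illustrative sketch that assumes the 30 percent loss accrues as a constant fractional loss per cycle, which the article does not specify):</p>

```python
# Reported lab result: at least 7,000 charge/discharge cycles,
# with a 30 percent loss in efficiency over the whole test.
retained = 0.70          # fraction of efficiency remaining after the test
cycles = 7000

# Assuming a constant fractional (geometric) loss per cycle:
per_cycle_retention = retained ** (1 / cycles)
per_cycle_loss_pct = (1 - per_cycle_retention) * 100

print(f"retention per cycle: {per_cycle_retention:.6f}")
print(f"loss per cycle: {per_cycle_loss_pct:.4f}%")
```

<p>Under that assumption, each cycle loses only about five thousandths of a percent of efficiency, which is why the researchers consider 20,000 to 50,000 cycles a realistic target.</p>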
Other existing methods have energy consumption that varies between 1 and 10 gigajoules per ton, depending on the inlet carbon dioxide concentration, Voskian says.</p> <p>The researchers have set up a company called Verdox to commercialize the process, and hope to develop a pilot-scale plant within the next few years, he says. And the system is very easy to scale up: “If you want more capacity, you just need to make more electrodes.”</p> <p>This work was supported by an MIT Energy Initiative Seed Fund grant and by Eni S.p.A.</p> In this diagram of the new system, air entering from top right passes to one of two chambers (the gray rectangular structures) containing battery electrodes that attract the carbon dioxide. Then the airflow is switched to the other chamber, while the accumulated carbon dioxide in the first chamber is flushed into a separate storage tank (at right). These alternating flows allow for continuous operation of the two-step process.Image courtesy of the researchersResearch, Chemical engineering, School of Engineering, Emissions, Carbon nanotubes, Nanoscience and nanotechnology, Climate change, Carbon dioxide, Sustainability, Carbon, Greenhouse gases Fireside chat with Don Eigler wraps up MIT.nano “Perspectives in Nanotechnology” seminars Series featured five experts who played seminal roles in understanding the nanoscale. Mon, 21 Oct 2019 12:55:01 -0400 Amanda Stoll | MIT.nano <p>On Sept. 16 MIT.nano hosted an informal public conversation with physicist Don Eigler, Kavli Laureate and former fellow of the IBM Almaden Research Center. The conversation marked the fifth and final event in the MIT.nano “Perspectives in Nanotechnology” seminar series, which began in the spring.</p> <p>Eigler was the founding leader of the Low Temperature Scanning Tunneling Microscopy Project at IBM. 
Among his accomplishments, he is recognized for his 1989 experiment in which he became the first person to manipulate individual atoms with precision, using a scanning tunneling microscope to spell out “I-B-M” from 35 individual xenon atoms. He is also known for creating the first quantum corrals.&nbsp;</p> <p>“Nature is not boring. If as an experimentalist you invent something that allows you to see something no one has seen before, you will find something interesting,” he told an audience of faculty, graduate students, alumni, and others.</p> <p>Following the conversation, MIT.nano and the Graduate Student Council hosted an exhibition featuring microscopy artwork by the MIT community.</p> <p>Organized by Farnaz Niroui, MIT assistant professor of electrical engineering and computer science, the “Perspectives in Nanotechnology” series featured a set of five lectures by experts who offered insight into current research and future directions based on their experiences in the field of nanoscience and nanotechnology.&nbsp;</p> <p>“This series was a great way to introduce MIT.nano&nbsp;not just as a nanoscale research facility, but as a place where we can have&nbsp;discussions around nanoscience and its different applications across disciplines within our community,” says Niroui. “Having these five pioneers&nbsp;talk about their research trajectories, and where they see the field of nanotechnology going in the future, has been inspiring.”​</p> <p>The first four speakers in the series were:</p> <p><strong>March 18: </strong><strong>Roger Howe of Stanford University</strong></p> <p>Roger Howe is the William E. Ayer Professor of Engineering at Stanford University. 
He was the faculty director of the Stanford Nanofabrication Facility from 2009 to 2017 and director of the National Science Foundation’s National Nanotechnology Infrastructure Network from 2011 to 2015.</p> <p>For the first presentation in the series, Howe discussed “the role that shared academic nano facilities, such as nano@Stanford and MIT.nano, can play in nucleating the tools and processes, as well as the community of internal and external researchers, that can accelerate the commercialization of nanotechnology.”</p> <p><strong>April 29: Paul Alivisatos of the University of California at Berkeley</strong></p> <p>Paul&nbsp;Alivisatos is the University of California at Berkeley's executive vice chancellor and provost, and Samsung Distinguished Professor of Nanoscience and Nanotechnology. He is also the director emeritus of Lawrence Berkeley National Laboratory, founding director of the Kavli Energy Nanoscience Institute, and a founder of two prominent nanotechnology companies.</p> <p>In his talk, Alivisatos discussed his research on colloidal nanocrystals, one of the several artificial building blocks&nbsp;for nanoscience and nanotechnology. He reflected on the question, “What will happen when artificial nanocrystals can be observed and controlled at the level of single atoms?”</p> <p><strong>May 16: </strong><strong>Eli Yablonovitch of the University of California at Berkeley</strong></p> <p>Eli Yablonovitch is a professor of electrical engineering and computer science at UC Berkeley, where he holds the James and Katherine Lau Chair in Engineering. Regarded as a father of the photonic bandgap concept, Yablonovitch coined the term "photonic crystal" and has significantly contributed to the fields of strained semiconductor lasers and photovoltaics.&nbsp;</p> <p>Yablonovitch is the director of the National Science Foundation Center for Energy Efficient Electronics Science, a multi-university center headquartered at Berkeley. 
For his perspectives presentation, he addressed the question, "What new device will replace the transistor?"</p> <p><strong>June 19: Robert Langer of MIT</strong></p> <p>Robert Langer is the David H. Koch Institute Professor at MIT. Author of more than 1,250 articles, he also has nearly 1,050 patents worldwide. He is the most-cited engineer in history.</p> <p>In a presentation entitled, “From Microtechnology to Nanotechnology: New Ways to Discover and Deliver Medicine to Treat Disease,” Langer addressed the numerous new technologies being developed that may impact the future of medicine.</p> <p>MIT.nano and Niroui will now kick off a continuing monthly seminar series exploring the frontiers of nanoscience and nanotechnology. For more information, visit <a href=""></a>.</p> Roger Howe, William E. Ayer Professor of Engineering, Stanford University (center), with Farnaz Niroui, assistant professor of electrical engineering and computer science and Perspectives in Nanotechnology series organizer (left), and Vladimir Bulović, faculty director of MIT.nano and Fariborz Maseeh Chair in Emerging Technology (right). Photo: Tom Gearty/MIT.nanoMIT.nano, Electrical engineering and computer science (EECS), School of Engineering, Nanoscience and nanotechnology, Research Laboratory of Electronics, Microsystems Technology Laboratories, Special events and guest speakers “Electroadhesive” stamp picks up and puts down microscopic structures New technique could enable assembly of circuit boards and displays with more minute components. Fri, 11 Oct 2019 13:59:59 -0400 Jennifer Chu | MIT News Office <p>If you were to pry open your smartphone, you would see an array of electronic chips and components laid out across a circuit board, like a miniature city. Each component might contain even smaller “chiplets,” some no wider than a human hair. 
These elements are often assembled with robotic grippers designed to pick up the components and place them down in precise configurations.</p> <p>As circuit boards are packed with ever smaller components, however, robotic grippers’ ability to manipulate these objects is approaching a limit. &nbsp;</p> <p>“Electronics manufacturing requires handling and assembling small components in a size similar to or smaller than grains of flour,” says Sanha Kim, a former MIT postdoc and research scientist who worked in the lab of mechanical engineering associate professor John Hart. “So a special pick-and-place solution is needed, rather than simply miniaturizing [existing] robotic grippers and vacuum systems.”</p> <p>Now Kim, Hart, and others have developed a miniature “electroadhesive” stamp that can pick up and place down objects as small as 20 nanometers wide — about 1,000 times finer than a human hair. The stamp is made from a sparse forest of ceramic-coated carbon nanotubes arranged like bristles on a tiny brush.</p> <p>When a small voltage is applied to the stamp, the carbon nanotubes become temporarily charged, forming prickles of electrical attraction that can attract a minute particle. By turning the voltage off, the stamp’s “stickiness” goes away, enabling it to release the object onto a desired location.</p> <p>Hart says the stamping technique can be scaled up to a manufacturing setting to print micro- and nanoscale features, for instance to pack more elements onto ever smaller computer chips. The technique may also be used to pattern other small, intricate features, such as cells for artificial tissues. 
And, the team envisions macroscale, bioinspired electroadhesive surfaces, such as voltage-activated pads for grasping everyday objects and for gecko-like climbing robots.</p> <p>“Simply by controlling voltage, you can switch the surface from basically having zero adhesion to pulling on something so strongly, on a per unit area basis, that it can act somewhat like a gecko’s foot,” Hart says.</p> <p>The team has published its results today in the journal <em>Science Advances</em>.</p> <p>The team also includes Michael Boutilier, a former postdoc at MIT and now an assistant professor at Western University in Ontario; MIT PhD student Nigamaa Nayakanti; MIT postdoc Changhong Cao; and collaborators from the University of&nbsp;Pennsylvania, including Professor Kevin Turner.</p> <p><strong>Like dry Scotch tape</strong></p> <p>Existing mechanical grippers are unable to pick up objects smaller than about 50 to 100 microns, mainly because at smaller scales surface forces tend to win over gravity. You may see this when pouring flour from a spoon — inevitably, some tiny particles stick to the spoon’s surface, rather than letting gravity drag them off.</p> <p>“The dominance of surface forces over gravity forces becomes a problem when trying to precisely place smaller things — which is the foundational process by which electronics are assembled into integrated systems,” Hart says.</p> <p>He and his colleagues noted that electroadhesion, the process of adhering materials via an applied voltage, has been used in some industrial settings to pick and place large objects, such as fabrics, textiles, and whole silicon wafers. But this same electroadhesion had never been applied to objects at the microscopic level, because a new material design for controlling electroadhesion at smaller scales was needed.</p> <p>Hart’s group has previously worked with carbon nanotubes (CNTs) — atoms of carbon linked in a lattice pattern and rolled into microscopic tubes. 
CNTs are known for their exceptional mechanical, electrical, and chemical properties, and they have been widely studied as dry adhesives.</p> <p>“Previous work on CNT-based dry adhesives focused on maximizing the contact area of the nanotubes to essentially create a dry Scotch tape,” Hart says. “We took the opposite approach, and said, ‘let’s design a nanotube surface to minimize the contact area, but use electrostatics to turn on adhesion when we need it.’”</p> <p><img alt="" src="/sites/" /></p> <p><em><span style="font-size:10px;">New electroadhesive stamp&nbsp;picks and places a 170-micrometer sized LED chiplet, using an external voltage of 30V to temporarily “stick” to the LED. Courtesy of the researchers</span></em></p> <p><strong>A sticky on/off switch</strong></p> <p>The team found that if they coated CNTs with a thin dielectric material such as aluminum oxide, when they applied a voltage to the nanotubes, the ceramic layer became polarized, meaning its positive and negative charges became temporarily separated. For instance, the positive charges of the tips of the nanotubes induced an opposite polarization in any nearby conducting material, such as a microscopic electronic element.</p> <p>As a result, the nanotube-based stamp adhered to the element, picking it up like tiny, electrostatic fingers. When the researchers turned the voltage off, the nanotubes and the element depolarized, and the “stickiness” went away, allowing the stamp to detach and place the object onto a given surface.</p> <p>The team explored various formulations of stamp designs, altering the density of carbon nanotubes grown on the stamp, as well as the thickness of the ceramic layer that they used to coat each nanotube. 
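<p>The strength of this voltage-switched grip can be illustrated with a simple parallel-plate, Maxwell-stress estimate. This is only an order-of-magnitude sketch: the 30-volt drive matches the LED demonstration above, but the effective gap and the dielectric constant of the alumina coating are assumed values, not figures from the paper:</p>

```python
# Illustrative parallel-plate estimate of electroadhesive pressure.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 9.0        # relative permittivity of aluminum oxide (assumed)
V = 30.0           # applied voltage, volts (as in the LED demo)
gap = 1e-6         # effective electrode-to-object gap, meters (assumed)

E = V / gap                              # field strength, V/m
pressure = 0.5 * EPS0 * eps_r * E ** 2   # Maxwell stress, Pa

print(f"adhesion pressure on the order of {pressure / 1e3:.0f} kPa")
```

<p>With these assumed numbers the estimate comes out in the tens of kilopascals — enough, per unit area, to hold a microscale chiplet against gravity and surface forces, and it vanishes when the voltage is switched off.</p>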
They found that the thinner the ceramic layer and the more sparsely spaced the carbon nanotubes were, the greater the stamp’s on/off ratio, meaning the greater the stamp’s stickiness was when the voltage was on, versus when it was off.</p> <p>In their experiments, the team used the stamp to pick up and place down films of nanowires, each about 1,000 times thinner than a human hair. They also used the technique to pick and place intricate patterns of polymer and metal microparticles, as well as micro-LEDs.</p> <p>Hart says the electroadhesive printing technology could be scaled up to manufacture circuit boards and systems of miniature electronic chips, as well as displays with microscale LED pixels.</p> <p>“With ever-advancing capabilities of semiconductor devices, an important need and opportunity is to integrate smaller and more diverse components, such as microprocessors, sensors, and optical devices,” Hart says. “Often, these are necessarily made separately but must be integrated together to create next-generation electronic systems. Our technology possibly bridges the gap necessary for scalable, cost-effective assembly of these systems.”</p> <p>This research was supported in part by the Toyota Research Institute, the National Science Foundation, and the MIT-Skoltech Next Generation Program.</p> Optical image of a pattern of silicon dioxide particles, each 5 micrometers in diameter, individually picked and placed using a new “electroadhesive” stamp.Image courtesy of the researchers Carbon nanotubes, electronics, Mechanical engineering, Nanoscience and nanotechnology, Research, School of Engineering, National Science Foundation (NSF) Diagnosing cellular nanomechanics SMART has developed a new way to study cells, paving the way for a better understanding of how cancers spread and become deadly. 
Mon, 07 Oct 2019 13:00:01 -0400 Singapore-MIT Alliance for Research and Technology <p>Researchers at Singapore-MIT Alliance for Research and Technology (SMART) and MIT’s Laser Biomedical Research Center (LBRC) have developed a new way to study cells, paving the way for a better understanding of how cancers spread and become killers.</p> <p>The new technology is explained in a <a href="" target="_blank">paper</a> published recently in <em>Nature Communications</em>. A new confocal reflectance interferometric microscope provides 1.5-micron depth resolution and better than 200-picometer height-measurement sensitivity for high-speed characterization of nanometer-scale nuclear envelope and plasma membrane fluctuations in biological cells. It enables researchers to use these fluctuations to understand key biological questions, such as the role of nuclear stiffness in cancer metastasis and genetic diseases.</p> <p>“Current methods for nuclear mechanics are invasive, as they either require mechanical manipulation, such as stretching, or require injecting fluorescent probes that ‘light up’ the nucleus to observe its shape. Both these approaches would undesirably change cells' intrinsic properties, limiting study of cellular mechanisms, disease diagnosis, and cell-based therapies,” say Vijay Raj Singh, SMART research scientist, and Zahid Yaqoob, LBRC principal investigator. “With the confocal reflectance interferometric microscope, we can study nuclear mechanics of biological cells without affecting their native properties.”</p> <p>While the scientists can study about a hundred cells in a few minutes, they believe that the system can be upgraded in the future to improve the throughput to tens of thousands of cells.</p> <p>“Today, many disease mechanisms are not fully understood because we lack a way to look at how cells’ nucleus changes when it undergoes stress,” says Peter So, SMART BioSyM principal investigator, MIT professor, and LBRC director. 
“For example, people often do not die from the primary cancer, but from the secondary cancers that form after the cancer cells metastasize from the primary site — and doctors do not know why cancer becomes aggressive and when it happens. Nuclear mechanics plays a vital role in cancer metastasis as the cancer cells must ‘squeeze’ through the blood vessel walls into the bloodstream, and again when they enter a new location. This is why the ability to study nuclear mechanics is so important to our understanding of cancer formation, diagnostics, and treatment.”</p> <p>With the new interferometric microscope, scientists at LBRC are studying cancer cells when they undergo mechanical stress, especially during the extravasation process, paving the way for new cancer treatments. Further, the scientists are also able to use the same technology to study the effect of “lamin mutations” on nuclear mechanics; these mutations cause rare genetic diseases such as progeria, which leads to rapid aging in young children.</p> <p>The confocal reflectance interferometric microscope also has applications in other sectors. For example, this technology has the potential for studying cellular mechanics within intact living tissues. With the new technology, the scientists could shed new light on biological processes within the body’s major organs, such as the liver, allowing safer and more accurate cell therapies. Cell therapy is a major focus area for Singapore, with the government recently announcing a <a href="">S$80 million (US $58 million) boost to the manufacturing of living cells as medicine</a>.</p> <p><strong>About BioSyM</strong></p> <p>BioSystems and Micromechanics (BioSyM) Inter-Disciplinary Research Group brings together a multidisciplinary team of faculty and researchers from MIT and the universities and research institutes of Singapore. 
<a href="">BioSyM</a>’s research deals with the development of new technologies to address critical medical and biological questions applicable to a variety of diseases with an aim to provide novel solutions to the health care industry and to the broader research infrastructure in Singapore. The guiding tenet of BioSyM is that accelerated progress in biology and medicine will critically depend upon the development of modern analytical methods and tools that provide a deep understanding of the interactions between mechanics and biology at multiple length scales — from molecules to cells to tissues — that impact maintenance or disruption of human health.</p> <p><strong>About Singapore-MIT Alliance for Research and Technology (SMART)</strong></p> <p>Singapore-MIT Alliance for Research and Technology (<a href="">SMART</a>) is MIT’s research enterprise in Singapore, established in partnership with the National Research Foundation of Singapore (NRF) since 2007. SMART is the first entity in the Campus for Research Excellence and Technological Enterprise (<a href="">CREATE</a>) developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore. Cutting-edge research projects in areas of interest to both Singapore and MIT are undertaken at SMART. 
SMART currently comprises an Innovation Centre and six Interdisciplinary Research Groups: Antimicrobial Resistance, BioSystems and Micromechanics, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive &amp; Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.</p> <p>SMART research is funded by the National Research Foundation Singapore under the CREATE program.</p> <p><strong>About the Laser Biomedical Research Center (LBRC)</strong></p> <p>Established in 1985, the Laser Biomedical Research Center is a National Research Resource Center supported by the National Institute of Biomedical Imaging and Bioengineering, a Biomedical Technology Resource Center of the National Institutes of Health. The LBRC’s mission is to develop the basic scientific understanding and new techniques required for advancing the clinical applications of lasers and spectroscopy. Researchers at the LBRC develop laser-based microscopy and spectroscopy techniques for medical applications, such as the spectral diagnosis of various diseases and investigation of biophysical and biochemical properties of cells and tissues. A unique feature of the LBRC is its ability to form strong clinical collaborations with outside investigators in areas of common interest that further the center’s mandated research objectives.</p> Vijay Raj Singh, SMART research scientist, and Zahid Yaqoob, MIT LBRC principal investigator, study tumor cells using the new confocal reflectance interferometric microscope.Photo: SMARTSingapore-MIT Alliance for Research and Technology (SMART), Chemistry, Biological engineering, Cancer, Cells, Medicine, International initiatives, Nanoscience and nanotechnology A new mathematical approach to understanding zeolites Study of minerals widely used in industrial processes could lead to discovery of new materials for catalysis and filtering. Mon, 07 Oct 2019 11:09:12 -0400 David L. 
Chandler | MIT News Office <p>Zeolites are a class of natural or manufactured minerals with a sponge-like structure, riddled with tiny pores that make them useful as catalysts or ultrafine filters. But of the millions of zeolite compositions that are theoretically possible, so far only about 248 have ever been discovered or made. Now, research from MIT helps explain why only this small subset has been found, and could help scientists find or produce more zeolites with desired properties.</p> <p>The new findings are being reported this week in the journal <em>Nature Materials</em>, in a paper by MIT graduate students Daniel Schwalbe-Koda and Zach Jensen, and professors Elsa Olivetti and Rafael Gomez-Bombarelli.</p> <p>Previous attempts to figure out why only this small group of possible zeolite compositions has been identified, and to explain why certain types of zeolites can be transformed into specific other types, have failed to come up with a theory that matches the observed data. Now, the MIT team has developed a mathematical approach to describing the different molecular structures. The approach is based on graph theory, which can predict which pairs of zeolite types can be transformed from one to the other.</p> <p>This could be an important step toward finding ways of making zeolites tailored for specific purposes. It could also lead to new pathways for production, since it predicts certain transformations that have not been previously observed. And, it suggests the possibility of producing zeolites that have never been seen before, since some of the predicted pairings would lead to transformations into new types of zeolite structures.</p> <p><strong>Interzeolite transformations</strong></p> <p>Zeolites are widely used today in applications as varied as catalyzing the “cracking” of petroleum in refineries and absorbing odors as components in cat litterbox filler. 
Even more applications may become possible if researchers can create new types of zeolites, for example with pore sizes suited to specific types of filtration.</p> <p>All kinds of zeolites are silicate minerals, similar in chemical composition to quartz. In fact, over geological timescales, they will all eventually turn into quartz — a much denser form of the mineral — explains Gomez-Bombarelli, who is the Toyota Assistant Professor in Materials Processing. But in the meantime, they are in a “metastable” form, which can sometimes be transformed into a different metastable form by applying heat or pressure or both. Some of these transformations are well-known and already used to produce desired zeolite varieties from more readily available natural forms.</p> <p>Currently, many zeolites are produced by using chemical compounds known as OSDAs (organic structure-directing agents), which provide a kind of template for their crystallization. But Gomez-Bombarelli says that if instead they can be produced through the transformation of another, readily available form of zeolite, “that’s really exciting. If we don’t need to use OSDAs, then it’s much cheaper [to produce the material]. The organic material is pricey. Anything we can make to avoid the organics gets us closer to industrial-scale production.”</p> <p>Traditional chemical modeling of the structure of different zeolite compounds, researchers have found, provides no real clue to finding the pairs of zeolites that can readily transform from one to the other. Compounds that appear structurally similar sometimes are not subject to such transformations, and other pairs that are quite dissimilar turn out to easily interchange. To guide their research, the team used an artificial intelligence system previously developed by the Olivetti group to “read” more than 70,000 research papers on zeolites and select those that specifically identify interzeolite transformations. 
They then studied those pairs in detail to try to identify common characteristics.</p> <p>What they found was that a topological description based on graph theory, rather than traditional structural modeling, clearly identified the relevant pairings. These graph-based descriptions, based on the number and locations of chemical bonds in the solids rather than their actual physical arrangement, showed that all the known pairings had nearly identical graphs. No such identical graphs were found among pairs that were not subject to transformation.</p> <p>The finding revealed a few previously unknown pairings, some of which turned out to match with preliminary laboratory observations that had not previously been identified as such, thus helping to validate the new model. The system also was successful at predicting which forms of zeolites can intergrow — forming combinations of two types that are interleaved like the fingers on two clasped hands. Such combinations are also commercially useful, for example for sequential catalysis steps using different zeolite materials.</p> <p><strong>Ripe for further research</strong></p> <p>The new findings might also help explain why many of the theoretically possible zeolite formations don’t seem to actually exist. Since some forms readily transform into others, it may be that some of them transform so quickly that they are never observed on their own. Screening using the graph-based approach may reveal some of these unknown pairings and show why those short-lived forms are not seen.</p> <p>Some zeolites, according to the graph model, “have no hypothetical partners with the same graph, so it doesn’t make sense to try to transform them, but some have thousands of partners” and thus are ripe for further research, Gomez-Bombarelli says.</p> <p>In principle, the new findings could lead to the development of a variety of new catalysts, tuned to the exact chemical reactions they are intended to promote. 
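<p>The graph-matching idea behind this screening can be sketched in a few lines of Python. Here each toy “framework” is reduced to an undirected bond graph, and two graphs are compared by brute-force isomorphism — a hypothetical miniature of the approach, since real zeolite graphs are far larger and require specialized isomorphism algorithms:</p>

```python
from itertools import permutations

def edge_set(adj):
    """Undirected edges of an adjacency-list graph."""
    return {frozenset((u, v)) for u, nbrs in adj.items() for v in nbrs}

def is_isomorphic(adj1, adj2):
    """Brute-force isomorphism check; practical only for toy graphs."""
    nodes1, nodes2 = list(adj1), list(adj2)
    if len(nodes1) != len(nodes2):
        return False
    # Quick rejection: the sorted degree sequences must match.
    if sorted(map(len, adj1.values())) != sorted(map(len, adj2.values())):
        return False
    e2 = edge_set(adj2)
    for perm in permutations(nodes2):
        mapping = dict(zip(nodes1, perm))
        remapped = {frozenset((mapping[u], mapping[v]))
                    for u, v in (tuple(e) for e in edge_set(adj1))}
        if remapped == e2:
            return True
    return False

# Toy "frameworks": a 4-ring under two labelings, and a 4-atom chain.
ring_a = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
ring_b = {"p": ["q", "s"], "q": ["p", "r"], "r": ["q", "s"], "s": ["p", "r"]}
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

print(is_isomorphic(ring_a, ring_b))  # → True: same topology, different labels
print(is_isomorphic(ring_a, chain))   # → False: different topology
```

<p>The two rings match despite their different labels and geometric embeddings, while the chain does not — the same spirit in which the MIT model pairs frameworks by bond topology rather than physical arrangement.</p>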
Gomez-Bombarelli says that almost any desired reaction could hypothetically find an appropriate zeolite material to promote it.</p> <p>“Experimentalists are very excited to find a language to describe their transformations that is predictive,” he says.</p> <p>This work is “a major advancement in the understanding of interzeolite transformations, which has become an increasingly important topic owing to the potential for using these processes to improve the efficiency and economics of commercial zeolite production,” says Jeffrey Rimer, an associate professor of chemical and biomolecular engineering at the University of Houston, who was not involved in this research.</p> <p>Manuel Moliner, a tenured scientist at the Technical University of Valencia, in Spain, who also was not connected to this research, says: “Understanding the pairs involved in particular interzeolite transformations, considering not only known zeolites but also hundreds of hypothetical zeolites that have not ever been synthesized, opens extraordinary practical opportunities to rationalize and direct the synthesis of target zeolites with potential interest as industrial catalysts.”</p> <p>This research was supported, in part, by the National Science Foundation and the Office of Naval Research.</p> Traditional structure-based representations of the many forms of zeolites, some of which are illustrated here, provide little guidance as to how they can convert to other forms, but a new graph-based system does a much better job.Illustrations courtesy of the researchersSchool of Engineering, Materials Science and Engineering, Materials Research Laboratory, Industry, Research, Nanoscience and nanotechnology, Machine learning, Energy storage, Photonics, MIT.nano, Artificial intelligence, DMSE, National Science Foundation (NSF) A new way to corrosion-proof thin atomic sheets Ultrathin coating could protect 2D materials from corrosion, enabling their use in optics and electronics. 
Fri, 04 Oct 2019 00:00:00 -0400 David L. Chandler | MIT News Office <p>A variety of two-dimensional materials that have promising properties for optical, electronic, or optoelectronic applications have been held back by the fact that they quickly degrade when exposed to oxygen and water vapor. The protective coatings developed thus far have proven to be expensive and toxic, and cannot be taken off.</p> <p>Now, a team of researchers at MIT and elsewhere has developed an ultrathin coating that is inexpensive, simple to apply, and can be removed by applying certain acids.</p> <p>The new coating could open up a wide variety of potential applications for these “fascinating” 2D materials, the researchers say. Their findings are reported this week in the journal <em>PNAS</em>, in a paper by MIT graduate student Cong Su; professors Ju Li, Jing Kong, Mircea Dinca, and Juejun Hu; and 13 others at MIT and in Australia, China, Denmark, Japan, and the U.K.</p> <p>Research on 2D materials, which form thin sheets just one or a few atoms thick, is “a very active field,” Li says. Because of their unusual electronic and optical properties, these materials have promising applications, such as highly sensitive light detectors. But many of them, including black phosphorus and a whole category of materials known as transition metal dichalcogenides (TMDs), corrode when exposed to humid air or to various chemicals. Many of them degrade significantly in just hours, precluding their usefulness for real-world applications.</p> <p>“It’s a key issue” for the development of such materials, Li says. “If you cannot stabilize them in air, their processability and usefulness is limited.” One reason silicon has become such a ubiquitous material for electronic devices, he says, is because it naturally forms a protective layer of silicon dioxide on its surface when exposed to air, preventing further degradation of the surface. 
But that’s more difficult with these atomically thin materials, whose total thickness could be even less than the silicon dioxide protective layer.</p> <p>There have been attempts to coat various 2D materials with a protective barrier, but so far they have had serious limitations. Most coatings are much thicker than the 2D materials themselves. Most are also very brittle, easily forming cracks that let through the corroding liquid or vapor, and many are also quite toxic, creating problems with handling and disposal.</p> <p>The new coating, based on a family of compounds known as linear alkylamines, avoids these drawbacks, the researchers say. The material can be applied in ultrathin layers, as little as 1 nanometer (a billionth of a meter) thick, and further heating of the material after application heals tiny cracks to form a contiguous barrier. The coating is not only impervious to a variety of liquids and solvents but also significantly blocks the penetration of oxygen. And it can be removed later, if needed, with certain organic acids.</p> <p>“This is a unique approach” to protecting thin atomic sheets, Li says, that produces an extra layer just a single molecule thick, known as a monolayer, that provides remarkably durable protection. “This gives the material a factor of 100 longer lifetime,” he says, extending the processability and usability of some of these materials from a few hours up to months. And the coating compound is “very cheap and easy to apply,” he adds.</p> <p>In addition to theoretical modeling of the molecular behavior of these coatings, the team made a working photodetector from flakes of TMD material protected with the new coating, as a proof of concept. 
The coating material is hydrophobic, meaning that it strongly repels water, which otherwise would diffuse into the coating and dissolve away a naturally formed protective oxide layer within the coating, leading to rapid corrosion.</p> <p>The application of the coating is a very simple process, Su explains. The 2D material is simply placed into a bath of liquid hexylamine, a form of the linear alkylamine, which builds up the protective coating after about 20 minutes, at a temperature of 130 degrees Celsius at normal pressure. Then, to produce a smooth, crack-free surface, the material is immersed for another 20 minutes in the vapor of the same hexylamine.</p> <p>“You just put the wafer into this liquid chemical and let it be heated,” Su says. “Basically, that’s it.” The coating “is pretty stable, but it can be removed by certain very specific organic acids.”</p> <p>The use of such coatings could open up new areas of research on promising 2D materials, including the TMDs and black phosphorus, but potentially also silicene, stanene, and other related materials. Since black phosphorus is the most vulnerable and easily degraded of all these materials, that’s what the team used for their initial proof of concept.</p> <p>The new coating could provide a way of overcoming “the first hurdle to using these fascinating 2D materials,” Su says. “Practically speaking, you need to deal with the degradation during processing before you can use these for any applications,” and that step has now been accomplished, he says.</p> <p>The team included researchers in MIT’s departments of Nuclear Science and Engineering, Chemistry, Materials Science and Engineering, Electrical Engineering and Computer Science, and the Research Laboratory of Electronics, as well as others at the Australian National University, the University of Chinese Academy of Sciences, Aarhus University in Denmark, Oxford University, and Shinshu University in Japan. 
The work was supported by the Center for Excitonics, an Energy Frontier Research Center funded by the U.S. Department of Energy, and by the National Science Foundation, the Chinese Academy of Sciences, the Royal Society, the U.S. Army Research Office through the MIT Institute for Soldier Nanotechnologies, and Tohoku University.</p> This diagram shows an edge-on view of the molecular structure of the new coating material. The thin layered material being coated is shown in violet at bottom, and the ambient air is shown as the scattered molecules of oxygen and water at the top. The dark layer in between is the protective material, which allows some oxygen (red) through, forming an oxide layer below that provides added protection. Illustration courtesy of the researchers. Controlling 2-D magnetism with stacking order MIT researchers discover why magnetism in certain materials is different in atomically thin layers and their bulk forms. Mon, 30 Sep 2019 15:25:01 -0400 Denis Paiste | Materials Research Laboratory <p>Researchers led by MIT Department of Physics Professor <a href="" target="_blank">Pablo Jarillo-Herrero</a> last year showed that rotating layers of hexagonally structured graphene at a particular “<a dir="ltr" href="" rel="noopener" target="_blank">magic angle</a>” could change the material’s electronic properties from an insulating state to a superconducting state. 
Now researchers in the same group and their collaborators have demonstrated that in a different ultra-thin material that also features a honeycomb-shaped atomic structure — chromium trichloride (CrCl<sub>3</sub>) — they can alter the material’s magnetic properties by shifting the stacking order of layers.</p> <p>The researchers peeled away two-dimensional (2-D) layers of chromium trichloride using tape in the same way researchers peel away graphene from graphite. Then they studied the 2-D chromium trichloride’s magnetic properties using electron tunneling. They found that the magnetism is different in 2-D and 3-D crystals due to different stacking arrangements between atoms in adjacent layers.</p> <p>At high temperatures, each chromium atom in chromium trichloride has a magnetic moment that fluctuates like a tiny compass needle. Experiments show that as the temperature drops below 14 kelvins (-434.47 degrees Fahrenheit), deep in the cryogenic temperature range, these magnetic moments freeze into an ordered pattern, pointing in opposite directions in alternating layers (antiferromagnetism). The magnetic direction of all the layers of chromium trichloride can be aligned by applying a magnetic field. But the researchers found that in its 2-D form, this alignment needs a magnetic force 10 times stronger than in the 3-D crystal. The&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">results</a>&nbsp;were recently published online in&nbsp;<em>Nature Physics</em>.</p> <p>“What we’re seeing is that it’s 10 times harder to align the layers in the thin limit compared to the bulk, which we measure using electron tunneling in a magnetic field,” says MIT physics graduate student Dahlia R. Klein, a National Science Foundation graduate research fellow and one of the paper’s lead authors. Physicists call the energy required to align the magnetic direction of opposing layers the interlayer exchange interaction. 
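(An editorial sanity check on the temperature quoted above, not part of the article: converting 14 kelvins to degrees Fahrenheit reproduces the figure in the text.)

```python
# Editorial check: convert the 14 K magnetic-ordering temperature to Fahrenheit.
def kelvin_to_fahrenheit(kelvin: float) -> float:
    """Exact conversion: degrees F = K * 9/5 - 459.67."""
    return kelvin * 9.0 / 5.0 - 459.67

print(round(kelvin_to_fahrenheit(14), 2))  # -434.47, as quoted in the text
```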
“Another way to think of it is that the interlayer exchange interaction is how much the adjacent layers want to be anti-aligned,” fellow lead author and MIT postdoc David MacNeill suggests.</p> <p>The researchers attribute this change in energy to the slightly different physical arrangement of the atoms in 2-D chromium trichloride. “The chromium atoms form a honeycomb structure in each layer, so it’s basically stacking the honeycombs in different ways,” Klein says. “The big thing is we’re proving that the magnetic and stacking orders are very strongly linked in these materials.”</p> <p>“Our work highlights how the magnetic properties of 2-D magnets can differ very substantially from their 3-D counterparts,” says senior author&nbsp;Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics. “This means that we have now a new generation of highly tunable magnetic materials, with important implications for both new fundamental physics experiments and potential applications in spintronics and quantum information technologies.”</p> <p>Layers are very weakly coupled in these materials, known as van der Waals magnets, which is what makes it easy to remove a layer from the 3-D crystal with adhesive tape. “Just like with graphene, the bonds within the layers are very strong, but there are only very weak interactions between adjacent layers, so you can isolate few-layer samples using tape,” Klein says.</p> <p>MacNeill and Klein grew the chromium trichloride samples, built and tested nanoelectronic devices, and analyzed their results. The researchers also found that as chromium trichloride is cooled from room temperature to cryogenic temperatures, 3-D crystals of the material undergo a structural transition that the 2-D crystals do not. 
This structural difference accounts for the higher energy required to align the magnetism in the 2-D crystals.</p> <p>The researchers measured the stacking order of 2-D layers through the use of Raman spectroscopy and developed a mathematical model to explain the energy involved in changing the magnetic direction. Co-author and Harvard University postdoc&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">Daniel T. Larson</a>&nbsp;says he analyzed a plot of Raman data that showed variations in peak location with the rotation of the chromium trichloride sample, determining that the variation was caused by the stacking pattern of the layers. “Capitalizing on this connection, Dahlia and David have been able to use Raman spectroscopy to learn details about the crystal structure of their devices that would be very difficult to measure otherwise,” Larson explains. “I think this technique will be a very useful addition to the toolbox for studying ultra-thin structures and devices.” Department of Materials Science and Engineering graduate student Qian Song carried out the Raman spectroscopy experiments in the lab of MIT assistant professor of physics&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">Riccardo Comin</a>. Both also are co-authors of the paper.</p> <p>“This research really highlights the importance of stacking order on understanding how these van der Waals magnets behave in the thin limit,” Klein says.</p> <p>MacNeill adds, “The question of why the 2-D crystals have different magnetic properties had been puzzling us for a long time. 
We were very excited to finally understand why this is happening, and it’s because of the structural transition.”</p> <p>This work builds on two years of prior research into 2-D magnets in which Jarillo-Herrero’s group collaborated with researchers at the University of Washington, led by Professor Xiaodong Xu, who holds joint appointments in the departments of Materials Science and Engineering, Physics, and Electrical and Computer Engineering, and others. Their work, which was published in a&nbsp;<em>Nature</em>&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">letter</a>&nbsp;in June 2017, showed for the first time that a different material with a similar crystal structure — chromium triiodide (CrI<sub>3</sub>) — also behaved differently in the 2-D form than in the bulk, with few-layer samples showing antiferromagnetism unlike the ferromagnetic 3-D crystals.</p> <p>Jarillo-Herrero’s group went on to show in a May 2018&nbsp;<em>Science</em>&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">paper</a>&nbsp;that chromium triiodide exhibited a sharp change in electrical resistance in response to an applied magnetic field at low temperature. This work demonstrated that electron tunneling is a useful probe for studying magnetism of 2-D crystals. Klein and MacNeill were also the first authors of this paper.</p> <p>University of Washington Professor Xiaodong Xu says of the latest findings, “The work presents a very clever approach, namely the combined tunneling measurements with polarization resolved Raman spectroscopy. The former is sensitive to the interlayer antiferromagnetism, while the latter is a sensitive probe of crystal symmetry. This approach gives a new method to allow others in the community to uncover the magnetic properties of layered magnets.”</p> <p>“This work is in concert with several other recently published works,” Xu says. 
“Together, these works uncover the unique opportunity provided by layered van der Waals magnets, namely engineering magnetic order via controlling stacking order. It is useful for arbitrary creation of new magnetic states, as well as for potential application in reconfigurable magnetic devices.”</p> <p>Other authors contributing to this work include Efthimios Kaxiras, the John Hasbrouck Van Vleck Professor of Pure and Applied Physics at Harvard University; Harvard graduate student Shiang Fang; Iowa State University Distinguished Professor (Condensed Matter Physics) Paul C. Canfield; Iowa State graduate student Mingyu Xu; and Raquel A. Ribeiro, of Iowa State University and the Federal University of ABC, Santo André, Brazil. This work was supported in part by the Center for Integrated Quantum Materials, the U.S. Department of Energy Office of Science Basic Energy Sciences Program, the Gordon and Betty Moore Foundation’s EPiQS Initiative, and the Alfred P. Sloan Foundation.</p> MIT physics graduate student Dahlia Klein (left) and postdoc David MacNeill showed that magnetic order and stacking order are very strongly linked in two-dimensional magnets such as chromium trichloride and chromium triiodide, giving engineers a tool to vary the material’s magnetic properties. Photo: Denis Paiste/Materials Research Laboratory. MIT.nano awards inaugural NCSOFT seed grants for gaming technologies Five software and hardware projects will launch the MIT.nano Immersion Lab Gaming Program. Mon, 30 Sep 2019 15:20:01 -0400 MIT.nano <p>MIT.nano has announced the first recipients of NCSOFT seed grants to foster hardware and software innovations in gaming technology. 
The grants are part of the new MIT.nano Immersion Lab Gaming program, with inaugural funding provided by video game developer NCSOFT, a founding member of the MIT.nano Consortium.</p> <p>The newly awarded projects address topics such as 3-D/4-D data interaction and analysis, behavioral learning, fabrication of sensors, light field manipulation, and micro-display optics.&nbsp;</p> <p>“New technologies and new paradigms of gaming will change the way researchers conduct their work by enabling immersive visualization and multi-dimensional interaction,” says MIT.nano Associate Director Brian W. Anthony. “This year’s funded projects highlight the wide range of topics that will be enhanced and influenced by augmented and virtual reality.”</p> <p>In addition to the sponsored research funds, each awardee will be given funds specifically to foster a community of collaborative users of MIT.nano’s Immersion Lab.</p> <p>The MIT.nano Immersion Lab is a new, two-story immersive space dedicated to visualization, augmented and virtual reality (AR/VR), and the depiction and analysis of spatially related data. Currently being outfitted with equipment and software tools, the facility will be available starting this semester for use by researchers and educators interested in using and creating new experiences, including the seed grant projects.&nbsp;</p> <p>The five projects to receive NCSOFT seed grants are:</p> <p><strong>Stefanie Mueller: connecting the virtual and physical world</strong></p> <p>Virtual game play is often accompanied by a prop — a steering wheel, a tennis racket, or some other object the gamer uses in the physical world to create a reaction in the virtual game. Build-it-yourself cardboard kits have expanded access to these props by lowering costs; however, these kits are pre-cut, and thus limited in form and function. 
What if users could build their own dynamic props that evolve as they progress through the game?</p> <p>Department of Electrical Engineering and Computer Science (EECS) Professor Stefanie Mueller aims to enhance the user’s experience by developing a new type of gameplay with tighter virtual-physical connection. In Mueller’s game, the player unlocks a physical template after completing a virtual challenge, builds a prop from this template, and then, as the game progresses, can unlock new functionalities to that same item. The prop can be expanded upon and take on new meaning, and the user learns new technical skills by building physical prototypes.</p> <p><strong>Luca Daniel and Micha Feigin-Almon: replicating human movements in virtual characters</strong></p> <p>Athletes, martial artists, and ballerinas share the ability to move their body in an elegant manner that efficiently converts energy and minimizes injury risk. Professor Luca Daniel, EECS and Research Laboratory of Electronics, and Micha Feigin-Almon, research scientist in mechanical engineering, seek to compare the movements of trained and untrained individuals to learn the limits of the human body with the goal of generating elegant, realistic movement trajectories for virtual reality characters.</p> <p>In addition to use in gaming software, their research on different movement patterns will predict stresses on joints, which could lead to nervous system models for use by artists and athletes.</p> <p><strong>Wojciech Matusik: using phase-only holograms</strong></p> <p>Holographic displays are optimal for use in augmented and virtual reality. However, critical issues show a need for improvement. Out-of-focus objects look unnatural, and complex holograms have to be converted to phase-only or amplitude-only in order to be physically realized. To combat these issues, EECS Professor Wojciech Matusik proposes to adopt machine learning techniques for synthesis of phase-only holograms in an end-to-end fashion. 
Using a learning-based approach, the holograms could display visually appealing three-dimensional objects.</p> <p>“While this system is specifically designed for varifocal, multifocal, and light field displays, we firmly believe that extending it to work with holographic displays has the greatest potential to revolutionize the future of near-eye displays and provide the best experiences for gaming,” says Matusik.</p> <p><strong>Fox Harrell: teaching socially impactful behavior</strong></p> <p>Project VISIBLE — Virtuality for Immersive Socially Impactful Behavioral Learning Enhancement — utilizes virtual reality in an educational setting to teach users how to recognize, cope with, and avoid committing microaggressions. In a virtual environment designed by Comparative Media Studies Professor Fox Harrell, users will encounter micro-insults, followed by major microaggression themes. The user’s physical response drives the narrative of the scenario, so one person can play the game multiple times and reach different conclusions, thus learning the various implications of social behavior.</p> <p><strong>Juejun Hu: displaying a wider field of view in high resolution</strong></p> <p>Professor Juejun Hu from the Department of Materials Science and Engineering seeks to develop high-performance, ultra-thin immersive micro-displays for AR/VR applications. These displays, based on metasurface optics, will allow for a large, continuous field of view, on-demand control of optical wavefronts, high-resolution projection, and a compact, flat, lightweight engine. 
While current commercial waveguide AR/VR systems offer a field of view of less than 45 degrees, Hu and his team aim to design a high-quality display with a field of view close to 180 degrees.</p> The MIT.nano Immersion Lab will provide an array of software and hardware for NCSOFT seed grant recipients and other researchers at MIT who are investigating augmented reality, virtual reality, and the display and analysis of spatially related data. Photo: Tom Gearty/MIT.nano. Quantum sensing on a chip Researchers integrate diamond-based sensing components onto a chip to enable low-cost, high-performance quantum hardware. Wed, 25 Sep 2019 00:00:00 -0400 Rob Matheson | MIT News Office <p>MIT researchers have, for the first time, fabricated a diamond-based quantum sensor on a silicon chip. The advance could pave the way toward low-cost, scalable hardware for quantum computing, sensing, and communication.</p> <p>“Nitrogen-vacancy (NV) centers” in diamonds are defects with electrons that can be manipulated by light and microwaves. In response, they emit colored photons that carry quantum information about surrounding magnetic and electric fields, which can be used for biosensing, neuroimaging, object detection, and other sensing applications. 
But traditional NV-based quantum sensors are about the size of a kitchen table, with expensive, discrete components that limit practicality and scalability.</p> <p>In a paper published in <em>Nature Electronics</em>, the researchers found a way to integrate all those bulky components — including a microwave generator, optical filter, and photodetector —&nbsp;onto a millimeter-scale package, using traditional semiconductor fabrication techniques. Notably, the sensor operates at room temperature with capabilities for sensing the direction and magnitude of magnetic fields.</p> <p>The researchers demonstrated the sensor’s use for magnetometry, meaning they were able to measure tiny shifts in the NV centers’ resonance frequency caused by surrounding magnetic fields, which could contain information about the environment. With further refinement, the sensor could have a range of applications, from mapping electrical impulses in the brain to detecting objects, even without a line of sight.</p> <p>“It’s very difficult to block magnetic fields, so that’s a huge advantage for quantum sensors,” says co-author Christopher Foy, a graduate student in the Department of Electrical Engineering and Computer Science (EECS). “If there’s a vehicle traveling in, say, an underground tunnel below you, you’d be able to detect it even if you don’t see it there.”</p> <p>Joining Foy on the paper are Mohamed Ibrahim, a graduate student in EECS; Donggyu Kim PhD ’19; Matthew E. 
Trusheim, a postdoc in EECS; Ruonan Han, an associate professor in EECS and head of the Terahertz Integrated Electronics Group, which is part of MIT's Microsystems Technology Laboratories (MTL); and Dirk Englund, an MIT associate professor of electrical engineering and computer science, a researcher in the Research Laboratory of Electronics (RLE), and head of the Quantum Photonics Laboratory.</p> <p><strong>Shrinking and stacking</strong></p> <p>NV centers in diamonds occur where carbon atoms in two adjacent places in the lattice structure are missing — one atom is replaced by a nitrogen atom, and the other space is an empty “vacancy.” That leaves missing bonds in the structure, where the electrons are extremely sensitive to tiny variations in electrical, magnetic, and optical characteristics in the surrounding environment.</p> <p>The NV center essentially functions as an atom, with a nucleus and surrounding electrons, and it can exist in different charge states — positive, neutral, and negative. It also has photoluminescent properties, meaning it absorbs and emits colored photons. Sweeping microwaves across the center changes the spin of its electrons, and the center then emits different amounts of red photons, depending on the spin.</p> <p>A technique called optically detected magnetic resonance (ODMR) measures how the number of emitted photons changes as the center interacts with the surrounding magnetic field; that interaction yields quantifiable information about the field. For all of that to work, traditional sensors require bulky components, including a mounted laser, power supply, microwave generator, conductors to route the light and microwaves, an optical filter and sensor, and a readout component.</p> <p>The researchers instead developed a novel chip architecture that positions and stacks tiny, inexpensive components in a certain way using standard complementary metal-oxide-semiconductor (CMOS) technology, so they function like those components. 
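The ODMR readout described above (sweep microwaves, watch the red-photon count dip at resonance, and convert the dip's shift into a field value) can be sketched in a few lines. This is an illustrative simulation, not the team's code: the 2.87 GHz zero-field splitting and roughly 28 GHz-per-tesla Zeeman shift are textbook NV values, while the linewidth, contrast, sweep range, and field value below are invented for the example.

```python
# Illustrative ODMR magnetometry sketch (not the authors' code).
D_HZ = 2.87e9          # NV zero-field splitting, a standard value
GAMMA_HZ_PER_T = 28e9  # NV Zeeman shift, roughly 28 GHz per tesla

def photon_counts(freq_hz, b_tesla, linewidth_hz=5e6, contrast=0.2):
    """Lorentzian ODMR dip: fewer red photons when microwaves hit resonance."""
    center = D_HZ + GAMMA_HZ_PER_T * b_tesla
    dip = contrast / (1.0 + ((freq_hz - center) / (linewidth_hz / 2)) ** 2)
    return 1.0 - dip  # normalized red-photon rate

def estimate_field(freqs_hz, counts):
    """Locate the dip and convert its shift from 2.87 GHz into a field value."""
    dip_freq = min(zip(counts, freqs_hz))[1]  # frequency of the lowest count
    return (dip_freq - D_HZ) / GAMMA_HZ_PER_T

# Sweep microwaves across the resonance, as the on-chip nanowire does.
true_b = 100e-6  # hypothetical 100-microtesla field
freqs = [D_HZ + (i - 500) * 1e4 for i in range(1001)]  # +/- 5 MHz sweep
counts = [photon_counts(f, true_b) for f in freqs]
print(f"estimated field: {estimate_field(freqs, counts) * 1e6:.1f} uT")
```

A real sensor would fit the full line shape rather than take the minimum sample, but the principle is the same: the dip position encodes the field.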
“CMOS technologies enable very complex 3-D structures on a chip,” Ibrahim says. “We can have a complete system on the chip, and we only need a piece of diamond and a green light source on top. But that can be a regular chip-scale LED.”</p> <p>NV centers within a diamond slab are positioned in a “sensing area” of the chip. A small green pump laser excites the NV centers, while a nanowire placed close to the NV centers generates sweeping microwaves in response to current. Basically, the light and microwaves work together to make the NV centers emit a different amount of red photons — with the difference being the target signal for readout in the researchers’ experiments.</p> <p>Below the NV centers is a photodiode, designed to eliminate noise and measure the photons. In between the diamond and photodiode is a metal grating that acts as a filter that absorbs the green laser photons while allowing the red photons to reach the photodiode. In short, this enables an on-chip ODMR device, which measures resonance frequency shifts with the red photons that carry information about the surrounding magnetic field.</p> <p>But how can one chip do the work of a large machine? A key trick is simply placing the conducting wire, which produces the microwaves, at an optimal distance from the NV centers. Even if the chip is very small, this precise distance enables the wire current to generate enough magnetic field to manipulate the electrons. The tight integration and codesign of the microwave conducting wires and generation circuitry also help. In their paper, the researchers were able to generate enough magnetic field to enable practical applications in object detection.</p> <p><strong>Only the beginning</strong></p> <p>In another paper presented earlier this year at the International Solid-State Circuits Conference, the researchers describe a second-generation sensor that makes various improvements on this design to achieve 100-fold greater sensitivity. 
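A hedged back-of-envelope, not from the paper, for why the number of NV centers matters so much for these sensitivity figures: in a shot-noise-limited magnetometer, the noise floor falls as the square root of the number of photons collected, and hence roughly as the square root of the number of contributing NV centers.

```python
import math

# Illustrative shot-noise scaling, not the authors' analysis: the magnetic
# noise floor of an ensemble sensor scales roughly as 1/sqrt(N), where N is
# the number of NV centers (equivalently, photons) contributing.
def relative_noise_floor(n_nv: int) -> float:
    """Noise floor relative to a single NV center; smaller is better."""
    return 1.0 / math.sqrt(n_nv)

# Under this scaling alone, a 100-fold sensitivity gain needs roughly 10,000
# times more contributing NV centers, and a 1,000-fold gain needs a million
# times more, which is why packing a higher density of centers into the chip
# is the lever for sensitivity.
print(relative_noise_floor(10_000))     # 0.01  -> 100x better than one center
print(relative_noise_floor(1_000_000))  # 0.001 -> 1,000x better
```

Real designs also trade off laser power, collection efficiency, and coherence time, so this square-root rule is only the leading-order story.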
Next, the researchers say they have a “roadmap” for how to increase sensitivity by 1,000 times. That basically involves scaling up the chip to increase the density of the NV centers, which determines sensitivity.</p> <p>If they do, the sensor could be used even in neuroimaging applications. That means putting the sensor near neurons, where it can detect the intensity and direction of firing neurons. That could help researchers map connections between neurons and see which neurons trigger each other. Other future applications include a GPS replacement for vehicles and airplanes. Because the magnetic field on Earth has been mapped so well, quantum sensors can serve as extremely precise compasses, even in GPS-denied environments.</p> <p>“We’re only at the beginning of what we can accomplish,” Han says. “It’s a long journey, but we already have two milestones on the track, with the first- and second-generation sensors. We plan to go from sensing to communication to computing. We know the way forward and we know how to get there.”</p> <p>“I am enthusiastic about this quantum sensor technology and foresee major impact in several fields,” says Ron Walsworth, a senior lecturer at Harvard University whose group develops high-resolution magnetometry tools using NV centers.</p> <p>“They have taken a key step in the integration of quantum-diamond sensors with CMOS technology, including on-chip microwave generation and delivery, as well as on-chip filtering and detection of the information-carrying fluorescent light from the quantum defects in diamond.&nbsp;The resulting unit is compact and relatively low-power.&nbsp;Next steps will be to further enhance the sensitivity and bandwidth of the quantum diamond sensor [and] integrate the CMOS-diamond sensor with wide-ranging applications, including chemical analysis, NMR spectroscopy, and materials characterization.”</p> MIT researchers have fabricated a diamond-based quantum sensor on a silicon chip using traditional fabrication 
techniques (pictured), which could enable low-cost quantum hardware. Image courtesy of the researchers. 3 Questions: Why sensing, why now, what next? Brian Anthony, co-leader of SENSE.nano, discusses sensing for augmented and virtual reality and for advanced manufacturing. Fri, 20 Sep 2019 13:00:01 -0400 MIT.nano <p><em>Sensors are everywhere today, from our homes and vehicles to medical devices, smart phones, and other useful tech. More and more, sensors help detect our interactions with the environment around us — and shape our understanding of the world.</em></p> <p><em>SENSE.nano&nbsp;is an MIT.nano Center of Excellence, with a focus on sensors, sensing systems, and sensing technologies.</em><em> The&nbsp;</em><a href=""><em>2019 SENSE.nano Symposium</em></a><em>, taking place on Sept. 30 at MIT</em><em>, will dive deep into the impact of sensors on two topics: sensing for augmented and virtual reality (AR/VR) and sensing for advanced manufacturing.&nbsp;</em></p> <p><em>MIT Principal Research Scientist Brian W. Anthony</em><em> is the associate director of MIT.nano and faculty director of the Industry Immersion Program in Mechanical Engineering. He weighs in on&nbsp;</em><em>why sensing is ubiquitous and how advancements in sensing technologies are linked to the challenges and opportunities of big data.</em></p> <p><strong>Q:&nbsp;</strong>What do you see as the next frontier for sensing as it relates to augmented and virtual reality?</p> <p><strong>A:</strong> Sensors are an enabling technology for AR/VR. When you slip on a VR headset and enter an immersive environment, sensors map your movements and gestures to create a convincing virtual experience.</p> <p>But sensors have a role beyond the headset. 
When we're interacting with the real world we're constrained by our own senses — seeing, hearing, touching, and feeling. But imagine sensors providing data within AR/VR to enhance your understanding of the physical environment, such as allowing you to see air currents, thermal gradients, or the electricity flowing through wires superimposed on top of the real physical structure. That's not something you could do any place else other than a virtual environment.</p> <p>Another example:&nbsp;<a href="">MIT.nano</a>&nbsp;is a massive generator of data. Could AR/VR provide a more intuitive and powerful way to study information coming from the metrology instruments in the basement, or the fabrication tools in the clean room? Could it allow you to look at data on a massive scale, instead of always having to look under a microscope or on a flat screen that's the size of your laptop? Sensors are also critical for haptics, which are interactions related to the sensation of touch. As I apply pressure to a device or pick up an object — real or virtual — can I receive physical feedback that conveys that state of interaction to me?</p> <p>You can’t be an engineer or a scientist without being involved with sensing instrumentation in some way. 
Recognizing the widespread presence of sensing on campus, SENSE.nano and MIT.nano — with MIT.nano’s new Immersion Lab providing the tools and facility — are trying to bring together researchers on both the hardware and software sides to explore the future of these technologies.</p> <p><strong>Q:&nbsp;</strong>Why is SENSE.nano focusing on sensing for advanced manufacturing?</p> <p><strong>A:</strong> In this era of big data, we sometimes forget that data comes from someplace: sensors and instruments.&nbsp;As soon as the data industry as a whole has solved the big data challenges we have now with the data that's coming from current sensors — wearable physiological monitors, or from factories, or from your automobiles — it is going to be starved for new sensors with improved functionality.</p> <p>Coupled with that, there are a large number of manufacturing technologies — in the U.S. and worldwide — that are either coming to maturity or receiving a lot of investment. For example, researchers are looking at novel ways to make integrated photonics devices combining electronics and optics for on-chip sensors; exploring novel fiber manufacturing approaches to embed sensors into your clothing or composites; and developing flexible materials that mold to the body or to the shape of an automobile as the substrate for integrated circuits or as a sensor. These various manufacturing technologies enable us to think of new, innovative ways to create sensors that are lower in cost and more readily immersed into our environment.</p> <p><strong>Q:&nbsp;</strong>You’ve said that a factory is not just a place that produces products, but also a machine that produces information. What does that mean?</p> <p><strong>A: </strong>Today’s manufacturers have to approach a factory not just as a physical place, but also as a data center. Seeing physical operation and data as interconnected can improve quality, drive down costs, and increase the rate of production. 
And sensors and sensing systems are the tools to collect this data and improve the manufacturing process.</p> <p>Communications technologies now make it easy to transmit data from a machine to a central location. For example, we can apply sensing techniques to individual machines and then collect data across an entire factory so that information on how to debug one computer-controlled machine can be used to improve another in the same facility. Or, suppose I'm the producer of those machines and I've deployed them to any number of manufacturers. If I can get a little bit of information from each of my customers to optimize the machine’s operating performance, I can turn around and share improvements with all the companies who purchase my equipment. When information is shared amongst manufacturers, it helps all of them drive down their costs and improve quality.&nbsp;</p> Brian AnthonyMIT.nano, Nanoscience and nanotechnology, Sensors, Research, Manufacturing, Augmented and virtual reality, 3 Questions, Data, Analytics, Staff, Mechanical engineering, School of Engineering, Industry Uncovering the hidden “noise” that can kill qubits New detection tool could be used to make quantum computers robust against unwanted environmental disturbances. Mon, 16 Sep 2019 10:41:02 -0400 Rob Matheson | MIT News Office <p>MIT and Dartmouth College researchers have demonstrated, for the first time, a tool that detects new characteristics of environmental “noise” that can destroy the fragile quantum state of qubits, the fundamental components of quantum computers. The advance may provide insights into microscopic noise mechanisms to help engineer new ways of protecting qubits. &nbsp;</p> <p>Qubits can represent the two states corresponding to the classic binary bits, a 0 or 1. 
But, they can also maintain a “quantum superposition” of both states simultaneously, enabling quantum computers to solve complex problems that are practically impossible for classical computers.</p> <p>But a qubit’s quantum “coherence” — meaning its ability to maintain the superposition state — can fall apart due to noise coming from the environment around the qubit. Noise can arise from control electronics, heat, or impurities in the qubit material itself, and can also cause serious computing errors that may be difficult to correct.</p> <p>Researchers have developed statistics-based models to estimate the impact of unwanted noise sources surrounding qubits to create new ways to protect them, and to gain insights into the noise mechanisms themselves. But, those tools generally capture simplistic “Gaussian noise,” essentially the collection of random disruptions from a large number of sources. In short, it’s like white noise coming from the murmuring of a large crowd, where there’s no specific disruptive pattern that stands out, so the qubit isn’t particularly affected by any single source. In this type of model, the probability distribution of the noise would form a standard symmetrical bell curve, regardless of the statistical significance of individual contributors.</p> <p>In a paper published today in the journal <em>Nature Communications</em>, the researchers describe a new tool that, for the first time, measures “non-Gaussian noise” affecting a qubit. This noise features distinctive patterns that generally stem from a few particularly strong noise sources.</p> <p>The researchers designed techniques to separate that noise from the background Gaussian noise, and then used signal-processing techniques to reconstruct highly detailed information about those noise signals. Those reconstructions can help researchers build more realistic noise models, which may enable more robust methods to protect qubits from specific noise types.
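The crowd analogy can be made concrete with a short numerical sketch (illustrative only, not the authors' analysis): summing many weak, independent sources produces a symmetric bell curve, while adding one dominant, asymmetric source leaves a measurable skew.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Many weak independent sources: the central limit theorem gives Gaussian noise.
crowd = rng.uniform(-1, 1, size=(50, n)).sum(axis=0) / np.sqrt(50)

# One dominant, asymmetric source (a zero-mean exponential "loud talker").
loud = rng.exponential(1.0, size=n) - 1.0
mixed = crowd + loud

def skewness(x):
    """Third standardized moment; approximately 0 for Gaussian noise."""
    x = x - x.mean()
    return np.mean(x**3) / np.std(x)**3

print(skewness(crowd))  # near 0: symmetric bell curve
print(skewness(mixed))  # clearly nonzero: a strong source stands out
```

The third standardized moment is the simplest non-Gaussian signature; the bispectrum measured in the paper generalizes this idea to correlations in time.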
There is now a need for such tools, the researchers say: Qubits are being fabricated with fewer and fewer defects, which could increase the presence of non-Gaussian noise.</p> <p>“It’s like being in a crowded room. If everyone speaks with the same volume, there is a lot of background noise, but I can still maintain my own conversation. However, if a few people are talking particularly loudly, I can’t help but lock on to their conversation. It can be very distracting,” says William Oliver, an associate professor of electrical engineering and computer science, professor of the practice of physics, MIT Lincoln Laboratory Fellow, and associate director of the Research Laboratory of Electronics (RLE). “For qubits with many defects, there is noise that decoheres, but we generally know how to handle that type of aggregate, usually Gaussian noise. However, as qubits improve and there are fewer defects, the individuals start to stand out, and the noise may no longer be simply of a Gaussian nature. We can find ways to handle that, too, but we first need to know the specific type of non-Gaussian noise and its statistics.”</p> <p>“It is not common for theoretical physicists to be able to conceive of an idea and also find an experimental platform and experimental colleagues willing to invest in seeing it through,” says co-author Lorenza Viola, a professor of physics at Dartmouth. “It was great to be able to come to such an important result with the MIT team.”</p> <p>Joining Oliver and Viola on the paper are: first author Youngkyu Sung, Fei Yan, Jack Y. Qiu, Uwe von Lüpke, Terry P. Orlando, and Simon Gustavsson, all of RLE; David K. Kim and Jonilyn L. Yoder of Lincoln Laboratory; and Félix Beaudoin and Leigh M. Norris of Dartmouth.</p> <p><strong>Pulse filters</strong></p> <p>For their work, the researchers leveraged the fact that superconducting qubits are good sensors for detecting their own noise.
Specifically, they used a “flux” qubit, which consists of a superconducting loop that is capable of detecting a particular type of disruptive noise, called magnetic flux, from its surrounding environment.</p> <p>In the experiments, they induced non-Gaussian “dephasing” noise by injecting engineered flux noise that disturbs the qubit and makes it lose coherence; that decoherence, in turn, served as the measuring tool. “Usually, we want to avoid decoherence, but in this case, how the qubit decoheres tells us something about the noise in its environment,” Oliver says.</p> <p>Specifically, they shot 110 “pi-pulses” — which are used to flip the states of qubits — in specific sequences over tens of microseconds. Each pulse sequence effectively created a narrow frequency “filter” that masked out much of the noise, except in a particular frequency band. By measuring the response of a qubit sensor to the bandpass-filtered noise, they extracted the noise power in that frequency band.</p> <p>By modifying the pulse sequences, they could move the filter up and down to sample the noise at different frequencies. Notably, in doing so, they tracked how the non-Gaussian noise distinctly causes the qubit to decohere, which provided a high-dimensional spectrum of the non-Gaussian noise.</p> <p><strong>Error suppression and correction</strong></p> <p>The key innovation behind the work is carefully engineering the pulses to act as specific filters that extract properties of the “bispectrum,” a two-dimensional representation that gives information about distinctive time correlations of non-Gaussian noise.</p> <p>Essentially, by reconstructing the bispectrum, they could find properties of non-Gaussian noise signals impinging on the qubit over time —&nbsp;ones that don’t exist in Gaussian noise signals.
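The filtering effect of a pi-pulse train can be sketched numerically (a simplified, unit-free illustration; the function name and parameter values are ours, not from the paper). The qubit's noise sensitivity flips sign at each pi-pulse, so a train of pulses spaced by tau behaves like a square wave whose spectrum is peaked near 1/(2*tau); changing the spacing moves the filter in frequency.

```python
import numpy as np

def filter_peak_frequency(tau, n_pulses=110, dt=1e-3):
    """Return the frequency where a pi-pulse train is most sensitive to noise.

    The sensitivity flips sign at each pulse (CPMG-style pulses at
    t = 0.5*tau, 1.5*tau, ...), giving a square wave of half-period tau,
    whose spectrum peaks near 1/(2*tau).
    """
    total = n_pulses * tau
    t = np.arange(0.0, total, dt)
    toggling = (-1.0) ** np.floor(t / tau + 0.5)  # +/-1 sensitivity function
    spectrum = np.abs(np.fft.rfft(toggling))
    freqs = np.fft.rfftfreq(len(t), d=dt)
    return freqs[np.argmax(spectrum)]

for tau in (1.0, 0.5, 0.25):
    print(tau, filter_peak_frequency(tau))  # peak near 1/(2*tau)
```

Shrinking the pulse spacing shifts the passband to higher frequency, which is how modifying the sequences samples the noise spectrum band by band.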
The general idea is that, for Gaussian noise, correlations exist only between pairs of points in time, which is referred to as a “second-order time correlation.” But, for non-Gaussian noise, the properties at one point in time will directly correlate to properties at multiple future points. Such “higher-order” correlations are the hallmark of non-Gaussian noise. In this work, the authors were able to extract noise with correlations between three points in time.</p> <p>This information can help programmers validate and tailor dynamical error suppression and error-correcting codes for qubits, which fix noise-induced errors and ensure accurate computation.</p> <p>Such protocols use information from the noise model to make implementations that are more efficient for practical quantum computers. But, because the details of noise aren’t yet well understood, today’s error-correcting codes are designed with that standard bell curve in mind. With the researchers’ tool, programmers can either gauge how effectively their code will work in realistic scenarios or start to zero in on non-Gaussian noise.</p> <p>Keeping with the crowded-room analogy, Oliver says: “If you know there’s only one loud person in the room, then you’ll design a code that effectively muffles that one person, rather than trying to address every possible scenario.”</p> MIT and Dartmouth College researchers developed a tool that detects new characteristics of non-Gaussian “noise” that can destroy the fragile quantum superposition state of qubits, the fundamental components of quantum computers.Photo: Sampson WilcoxResearch, Computer science and technology, Quantum computing, Superconductors, Nanoscience and nanotechnology, Research Laboratory of Electronics, Lincoln Laboratory, Physics, Electrical Engineering & Computer Science (eecs), School of Science, School of Engineering Creating new opportunities from nanoscale materials MIT Professor Frances Ross is pioneering new techniques to study materials
growth and how structure relates to performance. Thu, 05 Sep 2019 15:15:01 -0400 Denis Paiste | Materials Research Laboratory <p>A hundred years ago, “2d” meant a two-penny, or 1-inch, nail. Today, “2-D” encompasses a broad range of atomically thin flat materials, many with exotic properties not found in the bulk equivalents of the same materials, with graphene — the single-atom-thick form of carbon — perhaps the most prominent. While many researchers at MIT and elsewhere are exploring two-dimensional materials and their special properties,&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">Frances M. Ross</a>, the Ellen Swallow Richards Professor in Materials Science and Engineering, is interested in what happens when these 2-D materials and ordinary 3-D materials come together.</p> <p>“We’re interested in the interface between a 2-D material and a 3-D material because every 2-D material that you want to use in an application, such as an electronic device, still has to talk to the outside world, which is three-dimensional,” Ross says.</p> <p>“We’re at an interesting time because there are immense developments in instrumentation for electron microscopy, and there is great interest in materials with very precisely controlled structures and properties, and these two things cross in a fascinating way,” says Ross.&nbsp;</p> <p>“The opportunities are very exciting,” Ross says. “We’re going to be really improving the characterization capabilities here at MIT.” Ross specializes in examining how nanoscale materials grow and react in both gases and liquid media, by recording movies using electron microscopy. Microscopy of reactions in liquids is particularly useful for understanding the mechanisms of electrochemical reactions that govern the performance of catalysts, batteries, fuel cells, and other important technologies. 
“In the case of liquid phase microscopy, you can also look at corrosion where things dissolve away, while in gases you can look at how individual crystals grow or how materials react with, say, oxygen,” she says.<br /> <br /> Ross&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">joined</a>&nbsp;the Department of Materials Science and Engineering (DMSE) faculty last year, moving from the Nanoscale Materials Analysis department at the IBM Thomas J. Watson Research Center. “I learned a tremendous amount from my IBM colleagues and hope to extend our research in material design and growth in new directions,” she says.</p> <p><strong>Recording movies</strong></p> <p>During a recent visit to her lab, Ross explained an experimental setup donated to MIT by IBM. An ultra-high vacuum evaporator system arrived first, to be attached later directly onto a specially designed transmission electron microscope. “This gives powerful possibilities,” Ross explains. “We can put a sample in the vacuum, clean it, do all sorts of things to it such as heating and adding other materials, then transfer it under vacuum into the microscope, where we can do more experiments while we record images. So we can, for example, deposit silicon or germanium, or evaporate metals, while the sample is in the microscope and the electron beam is shining through it, and we are recording a movie of the process.”</p> <p>While waiting this spring for the transmission electron microscope to be set up, members of Ross’ seven-member research group, including materials science and engineering postdoc Shu Fen Tan and graduate student Kate Reidy, made and studied a variety of self-assembled structures. The evaporator system was housed temporarily on the fifth-level prototyping space of MIT.nano while Ross’s lab was being readied in Building 13. 
“MIT.nano had the resources and space; we were happy to be able to help,” says Anna Osherov, MIT.nano assistant director of user services.</p> <p>“All of us are interested in this grand challenge of materials science, which is:&nbsp;‘How do you make a material with the properties you want and, in particular, how do you use nanoscale dimensions to tweak the properties, and create new properties, that you can’t get from bulk materials?’” Ross says.</p> <p>Using the ultra-high vacuum system, graduate student Kate Reidy formed structures of gold and niobium on several 2-D materials. “Gold loves to grow into little triangles,” Ross notes. “We’ve been talking to people in physics and materials science about which combinations of materials are the most important to them in terms of controlling the structures and the interfaces between the components in order to give some improvement in the properties of the material,” she notes.</p> <p>Shu Fen Tan synthesized nickel-platinum nanoparticles and examined them using another technique, liquid cell electron microscopy. She could arrange for only the nickel to dissolve, leaving behind spiky skeletons of platinum. “Inside the liquid cell, we are able to see this whole process at high spatial and temporal resolutions,” Tan says. She explains that platinum is a noble metal and less reactive than nickel, so under the right conditions the nickel participates in an electrochemical dissolution reaction and the platinum is left behind.<br /> <br /> Platinum is a well-known catalyst in organic chemistry and fuel cell materials, Tan notes, but it is also expensive, so finding combinations with less-expensive materials such as nickel is desirable.</p> <p>“This is an example of the range of materials reactions you can image in the electron microscope using the liquid cell technique,” Ross says. 
“You can grow materials; you can etch them away; you can look at, for example, bubble formation and fluid motion.”</p> <p>A particularly important application of this technique is to study cycling of battery materials. “Obviously, I can’t put an AA battery in here, but you could set up the important materials inside this very small liquid cell and then you can cycle it back and forth and ask, if I charge and discharge it 10 times, what happens? It does not work just as well as before — how does it fail?” Ross asks. “Some kind of failure analysis and all the intermediate stages of charging and discharging can be observed in the liquid cell.”</p> <p>“Microscopy experiments where you see every step of a reaction give you a much better chance of understanding what’s going on,” Ross says.</p> <p><strong>Moiré patterns</strong></p> <p>Graduate student Reidy is interested in how to control the growth of gold on 2-D materials such as graphene, tungsten diselenide, and molybdenum disulfide. When she deposited gold on “dirty” graphene, blobs of gold collected around the impurities. But when Reidy grew gold on graphene that had been heated and cleaned of impurities, she found perfect triangles of gold. Depositing gold on both the top and bottom sides of clean graphene, Reidy saw in the microscope features known as&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">moiré patterns</a>, which are caused when the overlapping crystal structures are out of alignment.</p> <p>The gold triangles may be useful as photonic and plasmonic structures. “We think this could be important for a lot of applications, and it is always interesting for us to see what happens,” Reidy says. She is planning to extend her clean growth method to form 3-D metal crystals on stacked 2-D materials with various rotation angles and other mixed-layer structures. 
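The geometry behind moiré patterns can be illustrated with the standard small-twist relation for two identical overlaid lattices (a textbook formula, not from this article; the graphene lattice constant below is an assumed example value):

```python
import math

def moire_period(a, twist_deg):
    """Moire superlattice period for two identical lattices of constant a
    overlaid with a small relative twist angle, using the standard
    geometric relation  L = a / (2 * sin(theta / 2))."""
    theta = math.radians(twist_deg)
    return a / (2 * math.sin(theta / 2))

# Example: graphene on graphene (a = 0.246 nm) misaligned by 1 degree.
print(moire_period(0.246, 1.0))  # ~14 nm
```

A one-degree misalignment thus produces a superlattice period dozens of times larger than the atomic spacing, which is why moiré features show up so prominently in the electron microscope.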
Reidy is interested in the properties of graphene and hexagonal boron nitride (hBN), as well as two materials that are semiconducting in their 2-D single-layer form, molybdenum disulfide (MoS<sub>2</sub>) and tungsten diselenide (WSe<sub>2</sub>). “One aspect that’s very interesting in the 2-D materials community is the contacts between 2-D materials and 3-D metals,” Reidy says. “If they want to make a semiconducting device or a device with graphene, the contact could be ohmic for the graphene case or a Schottky contact for the semiconducting case, and the interface between these materials is really, really important.”</p> <p>“You can also imagine devices using the graphene just as a spacer layer between two other materials,” Ross adds.</p> <p>For device makers, Reidy says it is sometimes important to have a 3-D material grow with its atomic arrangement aligned perfectly with the atomic arrangement in the 2-D layer beneath. This is called epitaxial growth. Describing an image of gold grown together with silver on graphene, Reidy explains, “We found that silver doesn’t grow epitaxially, it doesn’t make those perfect single crystals on graphene that we wanted to make, but by first depositing the gold and then depositing silver around it, we can almost force silver to go into an epitaxial shape because it wants to conform to what its gold neighbors are doing.”</p> <p>Electron microscope images can also show imperfections in a crystal such as rippling or bending, Reidy notes. “One of the great things about electron microscopy is that it is very sensitive to changes in the arrangement of the atoms,” Ross says. “You could have a perfect crystal and it would all look the same shade of gray, but if you have a local change in the structure, even a subtle change, electron microscopy can pick it up. 
Even if the change is just within the top few layers of atoms without affecting the rest of the material beneath, the image will show distinctive features that allow us to work out what’s going on.”</p> <p>Reidy also is exploring the possibilities of combining niobium — a metal that is superconducting at low temperatures — with a 2-D topological insulator, bismuth telluride. Topological insulators have fascinating properties whose discovery resulted in the&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">Nobel Prize</a>&nbsp;in Physics in 2016. “If you deposit niobium on top of bismuth telluride, with a very good interface, you can make superconducting junctions. We’ve been looking into niobium deposition, and rather than triangles we see structures that are more dendritic looking,” Reidy says. Dendritic structures look like the frost patterns formed on the inside of windows in winter, or the feathery patterns of some ferns. Changing the temperature and other conditions during the deposition of niobium can change the patterns that the material takes.</p> <p>All the researchers are eager for new electron microscopes to arrive at MIT.nano to give further insights into the behavior of these materials. “Many things will happen within the next year, things are ramping up already, and I have great people to work with. One new microscope is being installed now in MIT.nano and another will arrive next year. The whole community will see the benefits of improved microscopy characterization capabilities here,” Ross says.</p> <p>MIT.nano’s Osherov notes that two&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">cryogenic transmission electron microscopes</a>&nbsp;(cryo-TEM) are installed and running. “Our goal is to establish a unique microscopy-centered community. 
We encourage and hope to facilitate a cross-pollination between the cryo-EM researchers, primarily focused on biological applications and ‘soft’ materials, and other research communities across campus,” she says. The latest addition, a scanning transmission electron microscope with enhanced analytical capabilities (ultrahigh energy resolution monochromator, 4-D STEM detector, Super-X EDS detector, tomography, and several&nbsp;in situ&nbsp;holders) brought in by the John Chipman Associate Professor of Materials Science and Engineering,&nbsp;<a dir="ltr" href="" rel="noopener" target="_blank">James M. LeBeau</a>, will substantially enhance the microscopy capabilities of the MIT campus once installed. “We consider Professor Ross to be an immense resource for advising us in how to shape the in situ approach to measurements using the advanced instrumentation that will be shared and available to all the researchers within the MIT community and beyond,” Osherov says.</p> <p><strong>Little drinking straws</strong></p> <p>“Sometimes you know more or less what you are going to see during a growth experiment, but very often there’s something that you don’t expect,” Ross says. She shows an example of zinc oxide nanowires that were grown using a germanium catalyst. Some of the long crystals have a hole through their centers, creating structures that are like little drinking straws, circular outside but with a hexagonally shaped interior. “This is a single crystal of zinc oxide, and the interesting question for us is why do the experimental conditions create these facets inside, while the outside is smooth?” Ross asks. “Metal oxide nanostructures have so many different applications, and each new structure can show different properties.
In particular, by going to the nanoscale you get access to a diverse set of properties.”</p> <p>“Ultimately, we’d like to develop techniques for growing well-defined structures out of metal oxides, especially if we can control the composition at each location on the structure,” Ross says. A key to this approach is self-assembly, where the material builds itself into the structure you want without having to individually tweak each component. “Self-assembly works very well for certain materials but the problem is that there’s always some uncertainty, some randomness or fluctuations. There’s poor control over the exact structures that you get. So the idea is to try to understand self-assembly well enough to be able to control it and get the properties that you want,” Ross says.</p> <p>“We have to understand how the atoms end up where they are, then use that self-assembly ability of atoms to make a structure we want. The way to understand how things self-assemble is to watch them do it, and that requires movies with high spatial resolution and good time resolution,” Ross explains. Electron microscopy can be used to acquire structural and compositional information and can even measure strain fields or electric and magnetic fields. “Imagine recording all of these things, but in a movie where you are also controlling how materials grow within the microscope. Once you have made a movie of something happening, you analyze all the steps of the growth process and use that to understand which physical principles were the key ones that determined how the structure nucleated and evolved and ended up the way it does.”</p> <p><strong>Future directions</strong></p> <p>Ross hopes to bring in a unique high-resolution, high vacuum TEM with capabilities to image materials growth and other dynamic processes. She intends to develop new capabilities for both water-based and gas-based environments. 
This custom microscope is still in the planning stages but will be situated in one of the rooms in the Imaging Suite in MIT.nano.</p> <p>“Professor Ross is a pioneer in this field,” Osherov says. “The majority of TEM studies to date have been static, rather than dynamic. With static measurements you are observing a sample at one particular snapshot in time, so you don’t gain any information about how it was formed. Using dynamic measurements, you can look at the atoms hopping from state to state until they find the final position. The ability to observe self-assembling processes and growth in real time provides valuable mechanistic insights. We’re looking forward to bringing these advanced capabilities to MIT.nano,” she says.</p> <p>“Once a certain technique is disseminated to the public, it brings attention,” Osherov says. “When results are published, researchers expand their vision of experimental design based on available state-of-the-art capabilities, leading to many new experiments that will be focused on dynamic applications.”</p> <p>Rooms in MIT.nano feature the quietest space on the MIT campus, designed to reduce vibrations and electromagnetic interference to as low a level as possible. “There is space available for Professor Ross to continue her research and to develop it further,” Osherov says. “The ability of in situ monitoring the formation of matter and interfaces will find applications in multiple fields across campus, and lead to a further push of the conventional electron microscopy limits.”</p> Left to right: Postdoc Shu Fen Tan, graduate student Kate Reidy, and Professor Frances Ross, all of the Department of Materials Science and Engineering, sit in front of a high vacuum evaporator system.
Photo: Denis Paiste/Materials Research LaboratoryMaterials Research Laboratory, Materials Science and Engineering, School of Engineering, Nanoscience and nanotechnology, Microscopy, Research, MIT.nano, DMSE, Faculty, Profile, 2-D, Graphene MIT engineers build advanced microprocessor out of carbon nanotubes New approach harnesses the same fabrication processes used for silicon chips, offers key advance toward next-generation computers. Wed, 28 Aug 2019 13:00:00 -0400 Rob Matheson | MIT News Office <p>After years of tackling numerous design and manufacturing challenges, MIT researchers have built a modern microprocessor from carbon nanotube transistors, which are widely seen as a faster, greener alternative to their traditional silicon counterparts.</p> <p>The microprocessor, described today in the journal <em>Nature</em>, can be built using traditional silicon-chip fabrication processes, representing a major step toward making carbon nanotube microprocessors more practical.</p> <p>Silicon transistors — critical microprocessor components that switch between 1 and 0 bits to carry out computations — have carried the computer industry for decades. As predicted by Moore’s Law, industry has been able to shrink down and cram more transistors onto chips every couple of years to help carry out increasingly complex computations. But experts now foresee a time when silicon transistors will stop shrinking, and become increasingly inefficient.</p> <p>Making carbon nanotube field-effect transistors (CNFET) has become a major goal for building next-generation computers. Research indicates CNFETs have properties that promise around 10 times the energy efficiency and far greater speeds compared to silicon. 
But when fabricated at scale, the transistors often come with many defects that affect performance, so they remain impractical.</p> <p>The MIT researchers have invented new techniques to dramatically limit defects and enable full functional control in fabricating CNFETs, using processes in traditional silicon chip foundries. They demonstrated a 16-bit microprocessor with more than 14,000 CNFETs that performs the same tasks as commercial microprocessors. The <em>Nature</em> paper describes the microprocessor design and includes more than 70 pages detailing the manufacturing methodology.</p> <p>The microprocessor is based on the RISC-V open-source chip architecture that has a set of instructions that a microprocessor can execute. The researchers’ microprocessor was able to execute the full set of instructions accurately. It also executed a modified version of the classic “Hello, World!” program, printing out, “Hello, World! I am RV16XNano, made from CNTs.”</p> <p>“This is by far the most advanced chip made from any emerging nanotechnology that is promising for high-performance and energy-efficient computing,” says co-author Max M. Shulaker, the Emanuel E Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Microsystems Technology Laboratories. “There are limits to silicon. If we want to continue to have gains in computing, carbon nanotubes represent one of the most promising ways to overcome those limits. [The paper] completely re-invents how we build chips with carbon nanotubes.”</p> <p>Joining Shulaker on the paper are: first author and postdoc Gage Hills, graduate students Christian Lau, Andrew Wright, Mindy D. 
Bishop, Tathagata Srimani, Pritpal Kanhaiya, Rebecca Ho, and Aya Amer, all of EECS; Arvind, the Johnson Professor of Computer Science and Engineering and a researcher in the Computer Science and Artificial Intelligence Laboratory; Anantha Chandrakasan, the dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science; and Samuel Fuller, Yosi Stein, and Denis Murphy, all of Analog Devices.</p> <p><strong>Fighting the “bane” of CNFETs</strong></p> <p>The microprocessor builds on a previous iteration designed by Shulaker and other researchers six years ago that had only 178 CNFETs and ran on a single bit of data. Since then, Shulaker and his MIT colleagues have tackled three specific challenges in producing the devices: material defects, manufacturing defects, and functional issues. Hills did the bulk of the microprocessor design, while Lau handled most of the manufacturing.</p> <p>For years, the defects intrinsic to carbon nanotubes have been a “bane of the field,” Shulaker says. Ideally, CNFETs need semiconducting properties to switch their conductivity on and off, corresponding to the bits 1 and 0. But unavoidably, a small portion of carbon nanotubes will be metallic, and will slow or stop the transistor from switching. To be robust to those failures, advanced circuits will need carbon nanotubes at around 99.999999 percent purity, which is virtually impossible to produce today. &nbsp;</p> <p>The researchers came up with a technique called DREAM (an acronym for “designing resiliency against metallic CNTs”), which positions metallic CNFETs in a way that they won’t disrupt computing. 
In doing so, they relaxed that stringent purity requirement by around four orders of magnitude — or 10,000 times —&nbsp;meaning they only need carbon nanotubes at about 99.99 percent purity, which is currently possible.</p> <p>Designing circuits basically requires a library of different logic gates attached to transistors that can be combined to, say, create adders and multipliers —&nbsp;like combining letters in the alphabet to create words. The researchers realized that the metallic carbon nanotubes impacted different pairings of these gates differently. A single metallic carbon nanotube in gate A, for instance, may break the connection between A and B. But several metallic carbon nanotubes in gate B may not impact any of its connections.</p> <p>In chip design, there are many ways to implement code onto a circuit. The researchers ran simulations to find all the different gate combinations that would — and would not — be robust to metallic carbon nanotubes. They then customized a chip-design program to automatically learn the combinations least likely to be affected by metallic carbon nanotubes. When designing a new chip, the program utilizes only the robust combinations and ignores the vulnerable ones.</p> <p>“The ‘DREAM’ pun is very much intended, because it’s the dream solution,” Shulaker says. “This allows us to buy carbon nanotubes off the shelf, drop them onto a wafer, and just build our circuit like normal, without doing anything else special.”</p> <p><strong>Exfoliating and tuning</strong></p> <p>CNFET fabrication starts with depositing carbon nanotubes in a solution onto a wafer with predesigned transistor architectures. However, some carbon nanotubes inevitably stick randomly together to form big bundles — like strands of spaghetti rolled into little balls — that create large particle contamination on the chip.
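The gate-screening idea behind DREAM — estimate which gate pairings tolerate occasional metallic nanotubes, then let the design program draw only from those — can be caricatured in a few lines. Everything here (gate names, nanotube counts, sensitivities, the failure budget) is invented for illustration; the actual work runs circuit-level simulations inside a customized chip-design program.

```python
# Toy sketch of DREAM-style screening. All names and numbers below are
# hypothetical; they only illustrate the "keep robust pairings, avoid
# vulnerable ones" selection step.

METALLIC_FRACTION = 1e-4   # ~99.99% purity, as relaxed by DREAM
CNTS_PER_GATE = 20         # hypothetical nanotube count per gate

# Hypothetical chance that a single metallic nanotube in the first
# gate breaks its connection to the second gate.
SENSITIVITY = {
    "NAND->NOR": 0.60,   # a badly exposed pairing
    "NAND->INV": 0.05,
    "NOR->INV": 0.02,
    "INV->NAND": 0.40,
}

FAILURE_BUDGET = 1e-3      # tolerable failure rate per connection

def estimated_failure_rate(sensitivity):
    # First-order estimate: expected metallic nanotubes per gate
    # times the chance that each one breaks this connection.
    return CNTS_PER_GATE * METALLIC_FRACTION * sensitivity

# A DREAM-style design tool would draw only from the robust pairings
# and ignore the vulnerable ones when laying out a new chip.
robust = sorted(p for p, s in SENSITIVITY.items()
                if estimated_failure_rate(s) < FAILURE_BUDGET)
vulnerable = sorted(set(SENSITIVITY) - set(robust))
print("robust:", robust)
print("avoid:", vulnerable)
```

With these made-up numbers, only the badly exposed NAND->NOR pairing exceeds the budget, so a design tool following this rule would route around it.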
&nbsp;</p> <p>To cleanse that contamination, the researchers created RINSE (for “removal of incubated nanotubes through selective exfoliation”). The wafer gets pretreated with an agent that promotes carbon nanotube adhesion. Then, the wafer is coated with a certain polymer and dipped in a special solvent. The solvent washes away the polymer, which carries away only the big bundles, while the single carbon nanotubes remain stuck to the wafer. The technique leads to about a 250-times reduction in particle density on the chip compared to similar methods.</p> <p>Lastly, the researchers tackled common functional issues with CNFETs. Binary computing requires two types of transistors: “N” types, which turn on with a 1 bit and off with a 0 bit, and “P” types, which do the opposite. Traditionally, making the two types out of carbon nanotubes has been challenging, often yielding transistors that vary in performance. To address this, the researchers developed a technique called MIXED (for “metal interface engineering crossed with electrostatic doping”), which precisely tunes transistors for function and optimization.</p> <p>In this technique, they attach certain metals to each transistor —&nbsp;platinum or titanium — which allows them to fix that transistor as P or N. Then, they coat the CNFETs in an oxide compound through atomic-layer deposition, which allows them to tune the transistors’ characteristics for specific applications. Servers, for instance, often require transistors that switch very fast but consume more energy and power. Wearables and medical implants, on the other hand, may use slower, low-power transistors. &nbsp;</p> <p>The main goal is to get the chips out into the real world. To that end, the researchers have now started implementing their manufacturing techniques into a silicon chip foundry through a program run by the Defense Advanced Research Projects Agency, which supported the research.
Although no one can say when chips made entirely from carbon nanotubes will hit the shelves, Shulaker says it could be fewer than five years. “We think it’s no longer a question of if, but when,” he says.</p> <p>The work was also supported by Analog Devices, the National Science Foundation, and the Air Force Research Laboratory.</p> A close up of a modern microprocessor built from carbon nanotube field-effect transistors.Image: Felice FrankelResearch, Computer science and technology, Microsystems Technology Laboratories, Nanoscience and nanotechnology, Carbon nanotubes, electronics, Defense Advanced Research Projects Agency (DARPA), National Science Foundation (NSF), Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering Ultrathin 3-D-printed films convert energy of one form into another Low-cost “piezoelectric” films produce voltage, could be used for flexible electronic components and more. Wed, 28 Aug 2019 12:12:03 -0400 Rob Matheson | MIT News Office <p>MIT researchers have developed a simple, low-cost method to 3-D print ultrathin films with high-performing “piezoelectric” properties, which could be used for components in flexible electronics or highly sensitive biosensors.</p> <p>Piezoelectric materials produce a voltage in response to physical strain, and they respond to a voltage by physically deforming. They’re commonly used for transducers, which convert energy of one form into another. Robotic actuators, for instance, use piezoelectric materials to move joints and parts in response to an electrical signal. And various sensors use the materials to convert changes in pressure, temperature, force, and other physical stimuli, into a measurable electrical signal.</p> <p>Researchers have been trying for years to develop piezoelectric ultrathin films that can be used as energy harvesters, sensitive pressure sensors for touch screens, and other components in flexible electronics. 
The films could also be used as tiny biosensors that are sensitive enough to detect the presence of molecules that are biomarkers for certain diseases and conditions.</p> <p>The material of choice for those applications is often a type of ceramic with a crystal structure that resonates at high frequencies due to its extreme thinness. (Higher frequencies basically translate to faster speeds and higher sensitivity.) But, with traditional fabrication techniques, creating ceramic ultrathin films is a complex and expensive process.</p> <p>In a paper recently published in the journal <em>Applied Materials and Interfaces</em>, the MIT researchers describe a way to 3-D print ceramic transducers about 100 nanometers thick by adapting an additive manufacturing technique that builds objects layer by layer, at room temperature. The films can be printed on flexible substrates with no loss in performance, and can resonate at around 5 gigahertz, which is high enough for high-performance biosensors.</p> <p>“Making transducing components is at the heart of the technological revolution,” says Luis Fernando Velásquez-García, a researcher in the Microsystems Technology Laboratories (MTL) in the Department of Electrical Engineering and Computer Science. “Until now, it’s been thought 3-D-printed transducing materials will have poor performances. But we’ve developed an additive fabrication method for piezoelectric transducers at room temperature, and the materials oscillate at gigahertz-level frequencies, which is orders of magnitude higher than anything previously fabricated through 3-D printing.”</p> <p>Joining Velásquez-García on the paper is first author Brenda García-Farrera of MTL and the Monterrey Institute of Technology and Higher Education in Mexico.</p> <p><strong>Electrospraying nanoparticles</strong></p> <p>Ceramic piezoelectric thin films, made of aluminum nitride or zinc oxide, can be fabricated through physical vapor deposition and chemical vapor deposition.
But those processes must be completed in sterile clean rooms, under high temperature and high vacuum conditions. That can be a time-consuming, expensive process.</p> <p>There are lower-cost 3-D-printed piezoelectric thin films available. But those are fabricated with polymers, which must be “poled” — meaning they must be given piezoelectric properties after they’re printed. Moreover, those materials usually end up tens of microns thick and thus can’t be made into ultrathin films capable of high-frequency actuation.</p> <p>The researchers’ system adapts an additive fabrication technique, called near-field electrohydrodynamic deposition (NFEHD), which uses high electric fields to eject a liquid jet through a nozzle to print an ultrathin film. Until now, the technique has not been used to print films with piezoelectric properties.</p> <p>The researchers’ liquid feedstock — raw material used in 3-D printing —&nbsp;contains zinc oxide nanoparticles mixed with some inert solvents, which forms into a piezoelectric material when printed onto a substrate and dried. The feedstock is fed through a hollow needle in a 3-D printer. As it prints, the researchers apply a specific bias voltage to the tip of the needle and control the flow rate, causing the meniscus — the curve seen at the top of a liquid —&nbsp;to form into a cone shape that ejects a fine jet from its tip.</p> <p>The jet is naturally inclined to break into droplets. But when the researchers bring the tip of the needle close to the substrate —&nbsp;about a millimeter —&nbsp;the jet doesn’t break apart. That process prints long, narrow lines on a substrate. They then overlap the lines and dry them at about 76 degrees Fahrenheit, hanging upside down.</p> <p>Printing the film precisely that way creates an ultrathin crystalline film with piezoelectric properties that resonates at about 5 gigahertz.
“If anything of that process is missing, it doesn’t work,” Velásquez-García says.</p> <p>Using microscopy techniques, the team was able to prove that the films have a much stronger piezoelectric response — the measurable signal they emit — than films made through traditional bulk fabrication methods. Those methods don’t really control the film’s piezoelectric axis direction, which determines the material’s response. “That was a little surprising,” Velásquez-García says. “In those bulk materials, they may have inefficiencies in the structure that affect performance. But when you can manipulate materials at the nanoscale, you get a stronger piezoelectric response.”</p> <p>“This very nice body of work demonstrates the feasibility of preparing functional piezoelectric films using 3-D printing techniques,” says Mark Allen, a professor specializing in microfabrication, nanotechnology, and microelectromechanical systems at the University of Pennsylvania. “Exploitation of this fabrication technique can lead to complex, three-dimensional, and low temperature fabrication of piezoelectric structures. I expect we will see new classes of microscale sensors, actuators, and resonators enabled by this exciting fabrication technology.”</p> <p><strong>Low-cost sensors</strong></p> <p>Because the piezoelectric ultrathin films are 3-D printed and resonate at very high frequencies, they can be leveraged to fabricate low-cost, highly sensitive sensors. The researchers are currently working with colleagues at Monterrey Tec as part of a collaborative program in nanoscience and nanotechnology to make piezoelectric biosensors to detect biomarkers for certain diseases and conditions.</p> <p>A resonating circuit is integrated into these biosensors, which makes the piezoelectric ultrathin film oscillate at a specific frequency, and the piezoelectric material can be functionalized to attract certain molecule biomarkers to its surface.
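The readout principle at work in these biosensors can be illustrated with a toy mass-loading model: for a thin, rigid layer bound to the resonator, the relative frequency shift is roughly proportional to the added areal mass (a Sauerbrey-style approximation, my illustration rather than the authors’ analysis). Only the roughly 5-gigahertz resonance comes from the article; the film mass and biomarker loading below are invented:

```python
# Toy mass-loading model of a resonant biosensor. Assumes a thin,
# rigid, uniformly bound layer so that df/f0 ~= -dm/m_film
# (Sauerbrey-style approximation). F0 is from the article; M_FILM
# and dm are hypothetical.

F0 = 5e9        # resonant frequency, Hz (about 5 GHz, per the article)
M_FILM = 1e-6   # hypothetical areal mass of the film, kg/m^2

def frequency_shift(added_areal_mass):
    """Downward shift caused by biomarkers binding to the surface."""
    return -F0 * added_areal_mass / M_FILM

def inferred_mass(measured_shift):
    """Invert the linear model: measured shift -> bound biomarker mass."""
    return -measured_shift * M_FILM / F0

dm = 2e-12  # kg/m^2 of captured biomarker (hypothetical)
df = frequency_shift(dm)
print(f"shift: {df:.0f} Hz")
print(f"recovered mass: {inferred_mass(df):.1e} kg/m^2")
```

In practice the proportionality constant would be calibrated empirically; the point is simply that at gigahertz operating frequencies, even a minuscule bound mass produces a readily measurable shift.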
When molecules stick to the surface, they slightly shift the oscillation frequency of the circuit. That small frequency shift can be measured and correlated with the amount of the molecule accumulating on the surface.</p> <p>The researchers are also developing a sensor to measure the decay of electrodes in fuel cells. That would function similarly to the biosensor, but the shifts in frequency would correlate with the degradation of a certain alloy in the electrodes. “We’re making sensors that can diagnose the health of fuel cells, to see if they need to be replaced,” Velásquez-García says. “If you assess the health of these systems in real time, you can make decisions about when to replace them, before something serious happens.”</p> MIT researchers have 3-D printed ultrathin ceramic films that convert energy from one form into another for flexible electronics and biosensors. Here, they’ve printed the piezoelectric films into a pattern spelling out “MIT.”Research, Microsystems Technology Laboratories, 3-D printing, Design, Manufacturing, Materials Science and Engineering, electronics, Disease, Health sciences and technology, Nanoscience and nanotechnology, Electrical Engineering & Computer Science (eecs), School of Engineering A new way to deliver drugs with pinpoint targeting Magnetic particles allow drugs to be released at precise times and in specific areas. Mon, 19 Aug 2019 10:59:59 -0400 David L. Chandler | MIT News Office <p>Most pharmaceuticals must either be ingested or injected into the body to do their work. Either way, it takes some time for them to reach their intended targets, and they also tend to spread out to other areas of the body.
Now, researchers at MIT and elsewhere have developed a minimally invasive system to deliver medical treatments that can be released at precise times and that ultimately could also deliver those drugs to specifically targeted areas, such as a particular group of neurons in the brain.</p> <p>The new approach is based on the use of tiny magnetic particles enclosed within a tiny hollow bubble of lipids (fatty molecules) filled with water, known as a liposome. The drug of choice is encapsulated within these bubbles, and can be released by applying a magnetic field to heat up the particles, allowing the drug to escape from the liposome and into the surrounding tissue.</p> <p>The findings are reported today in the journal <em>Nature Nanotechnology</em> in a paper by MIT postdoc Siyuan Rao, Associate Professor Polina Anikeeva, and 14 others at MIT, Stanford University, Harvard University, and the Swiss Federal Institute of Technology in Zurich.</p> <p>“We wanted a system that could deliver a drug with temporal precision, and could eventually target a particular location,” Anikeeva explains. “And if we don’t want it to be invasive, we need to find a non-invasive way to trigger the release.”</p> <p>Magnetic fields, which can easily penetrate through the body — as demonstrated by detailed internal images produced by magnetic resonance imaging, or MRI — were a natural choice. The hard part was finding materials that could be triggered to heat up by using a very weak magnetic field (about one-hundredth the strength of that used for MRI), in order to prevent damage to the drug or surrounding tissues, Rao says.</p> <p>Rao came up with the idea of taking magnetic nanoparticles, which had already been shown to be capable of being heated by placing them in a magnetic field, and packing them into these spheres called liposomes.
These are like little bubbles of lipids, which naturally form a spherical double layer surrounding a water droplet.</p> <p>When placed inside a high-frequency but low-strength magnetic field, the nanoparticles heat up, warming the lipids and making them undergo a transition from solid to liquid, which makes the layer more porous — just enough to let some of the drug molecules escape into the surrounding areas. When the magnetic field is switched off, the lipids re-solidify, preventing further releases. Over time, this process can be repeated, thus releasing doses of the enclosed drug at precisely controlled intervals.</p> <p>The drug carriers were engineered to be stable inside the body at the normal body temperature of 37 degrees Celsius, but able to release their payload of drugs at a temperature of 42 degrees. “So we have a magnetic switch for drug delivery,” and that amount of heat is small enough “so that you don’t cause thermal damage to tissues,” says Anikeeva, who holds appointments in the departments of Materials Science and Engineering and the Brain and Cognitive Sciences.</p> <p>In principle, this technique could also be used to guide the particles to specific, pinpoint locations in the body, using gradients of magnetic fields to push them along, but that aspect of the work is an ongoing project. For now, the researchers have been injecting the particles directly into the target locations, and using the magnetic fields to control the timing of drug releases. “The technology will allow us to address the spatial aspect,” Anikeeva says, but that has not yet been demonstrated.</p> <p>This could enable very precise treatments for a wide variety of conditions, she says. “Many brain disorders are characterized by erroneous activity of certain cells. 
When neurons are too active or not active enough, that manifests as a disorder, such as Parkinson’s, or depression, or epilepsy.” If a medical team wanted to deliver a drug to a specific patch of neurons and at a particular time, such as when an onset of symptoms is detected, without subjecting the rest of the brain to that drug, this system “could give us a very precise way to treat those conditions,” she says.</p> <p>Rao says that making these nanoparticle-activated liposomes is actually quite a simple process. “We can prepare the liposomes with the particles within minutes in the lab,” she says, and the process should be “very easy to scale up” for manufacturing. And the system is broadly applicable for drug delivery: “we can encapsulate any water-soluble drug,” and with some adaptations, other drugs as well, she says.</p> <p>One key to developing this system was perfecting and calibrating a way of making liposomes of a highly uniform size and composition. This involves mixing a water base with the fatty acid lipid molecules and magnetic nanoparticles and homogenizing them under precisely controlled conditions. Anikeeva compares it to shaking a bottle of salad dressing to get the oil and vinegar mixed, but controlling the timing, direction and strength of the shaking to ensure a precise mixing.</p> <p>Anikeeva says that while her team has focused on neurological disorders, as that is their specialty, the drug delivery system is actually quite general and could be applied to almost any part of the body, for example to deliver cancer drugs, or even to deliver painkillers directly to an affected area instead of delivering them systemically and affecting the whole body. 
“This could deliver it to where it’s needed, and not deliver it continuously,” but only as needed.</p> <p>Because the magnetic particles themselves are similar to those already in widespread use as contrast agents for MRI scans, the regulatory approval process for their use may be simplified, as their biological compatibility has largely been proven.</p> <p>The team included researchers in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, as well as the McGovern Institute for Brain Research, the Simons Center for the Social Brain, and the Research Laboratory of Electronics; the Harvard University Department of Chemistry and Chemical Biology and the John A. Paulson School of Engineering and Applied Sciences; Stanford University; and the Swiss Federal Institute of Technology in Zurich. The work was supported by the Simons Postdoctoral Fellowship, the U.S. Defense Advanced Research Projects Agency, the <a href="">Bose Research Grant</a>, and the National Institutes of Health.</p> Diagram illustrates the structure of the tiny bubbles, called liposomes, used to deliver drugs. The blue spheres represent lipids, a kind of fat molecule, surrounding a central cavity containing magnetic nanoparticles (black) and the drug to be delivered (red). When the nanoparticles are heated, the drug can escape into the body.Image courtesy of the researchersDrug delivery, Research, Materials Science and Engineering, Nanoscience and nanotechnology, Neuroscience, Medicine, DMSE, Brain and cognitive sciences, School of Engineering, School of Science, McGovern Institute, Advanced Research Projects Agency (DARPA), National Institutes of Health (NIH) A single-photon source you can make at home Shining light through household bleach creates fluorescent quantum defects in carbon nanotubes for quantum computing and biomedical imaging.
Fri, 09 Aug 2019 13:30:01 -0400 Daniel Darling | Department of Biological Engineering <p>Quantum computing and quantum cryptography are expected to offer far greater capabilities than their classical counterparts. For example, the computation power in a quantum system may grow at a double exponential rate instead of a classical linear rate due to the different nature of the basic unit, the qubit (quantum bit). Entangled particles enable unbreakable codes for secure communications. The importance of these technologies motivated the U.S. government to pass the National Quantum Initiative Act, which authorizes $1.2 billion over the following five years for developing quantum information science.</p> <p>Single photons can be an essential qubit source for these applications. To achieve practical usage, the single photons should be in the telecom wavelengths, which range from 1,260 to 1,675 nanometers, and the device should be functional at room temperature. To date, only a single fluorescent quantum defect in carbon nanotubes possesses both features simultaneously. However, the precise creation of these single defects has been hampered by preparation methods that require special reactants, are difficult to control, proceed slowly, generate non-emissive defects, or are challenging to scale.</p> <p>Now, research from Angela Belcher, head of the MIT Department of Biological Engineering, a member of the Koch Institute, and the James Crafts Professor of Biological Engineering, and postdoc Ching-Wei Lin, <a href="" target="_blank">published online</a> in <em>Nature Communications</em>, describes a simple solution to create carbon nanotube-based single-photon emitters, which are known as fluorescent quantum defects.</p> <p>“We can now quickly synthesize these fluorescent quantum defects within a minute, simply using household bleach and light,” Lin says.
“And we can produce them at large scale easily.”</p> <p>Belcher’s lab has demonstrated this remarkably simple method, which generates a minimum of non-fluorescent defects. Carbon nanotubes were submerged in bleach and then irradiated with ultraviolet light for less than a minute to create the fluorescent quantum defects.</p> <p>The availability of fluorescent quantum defects from this method has greatly reduced the barrier for translating fundamental studies to practical applications. Meanwhile, the nanotubes become even brighter after the creation of these fluorescent defects. In addition, the excitation/emission of these defect carbon nanotubes is shifted to the so-called shortwave infrared region (900-1,600 nm), which is an invisible optical window that has slightly longer wavelengths than the regular near-infrared. What’s more, operations at longer wavelengths with brighter defect emitters allow researchers to see through the tissue more clearly and deeply for optical imaging. As a result, the defect carbon nanotube-based optical probes (usually made by conjugating targeting materials to these defect carbon nanotubes) will greatly improve the imaging performance, enabling cancer detection and treatments such as <a href="">early detection</a> and <a href="">image-guided surgery</a>.</p> <p>Cancers were the second-leading cause of death in the United States in 2017. That comes out to around 500,000 people dying from cancer every year. The goal in the Belcher Lab is to develop very bright probes that work at the optimal optical window for looking at very small tumors, focusing primarily on ovarian and brain cancers. If doctors can detect the disease earlier, the survival rate can be significantly increased, according to statistics.
And now the new bright fluorescent quantum defects can be the right tool to upgrade current imaging systems, revealing even smaller tumors through the defect emission.</p> <p>“We have demonstrated a clear visualization of vasculature structure and lymphatic systems using 150 times less probe material than previous-generation imaging systems,” Belcher says. “This indicates that we have moved a step closer to early cancer detection.”</p> <p>In collaboration with contributors from Rice University, the researchers identified for the first time the distribution of quantum defects in carbon nanotubes using a novel spectroscopy method called variance spectroscopy. This method helped the researchers monitor the quality of the defect-containing carbon nanotubes and find the correct synthetic parameters more easily.</p> <p>Other co-authors include biological engineering graduate student Uyanga Tsedev and materials science and engineering graduate student Shengnan Huang, both of MIT, as well as Professor R. Bruce Weisman, Sergei Bachilo, and Zheng Yu of Rice University.</p> <p>This work was supported by grants from the Marble Center for Cancer Nanomedicine, the Koch Institute Frontier Research Program, the National Science Foundation, and the Welch Foundation.</p> A bright fluorescent quantum defect can be a tool to upgrade current biomedical imaging systems, looking at even smaller tumors through the defect emission.Image: Belcher LabQuantum computing, Biological engineering, Koch Institute, School of Engineering, Research, Nanoscience and nanotechnology, Carbon nanotubes, Cancer, Medicine Yearlong hackathon engages nano community around health issues Hacking Nanomedicine kicks off a series of events to develop an idea over time.
Fri, 09 Aug 2019 11:45:01 -0400 MIT.nano <p>A traditional hackathon&nbsp;focuses on computer science and programming,&nbsp;attracts coders in droves, and spans an entire weekend with three stages: problem definition, solution development, and business formation.&nbsp;</p> <p>Hacking Nanomedicine, however, recently brought together graduate and postgraduate students for a single morning of hands-on problem solving and innovation in health care while offering networking opportunities across departments and research interests. Moreover, the July hackathon was the first in a series of three half-day events structured to allow ideas to develop over time.</p> <p>This deliberately deconstructed, yearlong process promotes necessary ebb and flow as teams shift in scope and recruit new members throughout each&nbsp;stage.&nbsp;“We believe this format is a powerful combination of intense, collaborative, multidisciplinary interactions, separated by restful research periods for reflecting on new ideas, allowing additional background research to take place and enabling additional people to be pulled into the fray as ideas take shape,” says&nbsp;Brian Anthony, associate director of MIT.nano and principal research scientist in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Mechanical Engineering.</p> <p>Organized by Marble Center for Cancer Nanomedicine Assistant Director Tarek Fadel, Foundation Medicine’s Michael Woonton, and MIT Hacking Medicine Co-Directors Freddy Nguyen and Kriti Subramanyam, the event was sponsored by IMES, the Koch Institute’s Marble Center for Cancer Nanomedicine, and MIT.nano, the new 200,000-square-foot nanoscale research center that launched at MIT last fall.</p> <p>Sangeeta Bhatia, director of the Marble Center, emphasizes the importance of creating these communication channels between community members working in tangentially related research spheres.
“The goal of the event is to galvanize the nanotechnology community around Boston — including MIT.nano, the Marble Center, and IMES — to leverage the unique opportunities&nbsp;presented by miniaturization and to answer critical questions impacting health care,” says Bhatia, who is also the John J. and Dorothy Wilson Professor of Health Sciences and Technology at MIT.</p> <p>At the kickoff session, organizers sought to&nbsp;create a smaller, workshop-based event that would introduce students, medical residents, and trainees to the world of hacking and disruptive problem solving. Representatives from MIT Hacking Medicine started the day with a brief overview and case study on PillPack, a successful internet pharmacy startup&nbsp;created from a previous hackathon event.</p> <p>Participants then each had 30 seconds to develop and pitch problems highlighting critical health care industry shortcomings before forming into five teams based on shared interests. Groups pinpointed a wide array of timely topics, from the nation’s fight against obesity to minimizing vaccine pain.
Each cohort had two hours to work through multifaceted, nanotechnology-based solutions.&nbsp;</p> <p>Mentors Cicely Fadel, a clinical researcher at the Wyss Institute for Biologically Inspired Engineering and neonatologist at Beth Israel Deaconess Medical Center, and David Chou, a hematopathologist at Massachusetts General Hospital and clinical fellow at the Wyss Institute, roamed the room during the solution phase, offering feedback on feasibility based on their own clinical experience.</p> <p>At the conclusion of the problem-solving block, each of the five teams presented their solution to a panel of expert judges: Imran Babar, chief business officer of Cydan; Adama Marie Sesay, senior staff engineer of the Wyss Institute; Craig Mak, director of strategy at Arbor Bio; Jaideep Dudani, associate director of Relay Therapeutics; and Zen Chu, senior lecturer at the MIT Sloan School of Management and faculty director of MIT Hacking Medicine.&nbsp;</p> <p>Given the introductory nature of the event, judges opted to forgo the traditional scoring rubric and instead paired with each team to offer individualized, qualitative feedback. Event sponsors note that the decision to steer away from a black-and-white, ranked-placing system encourages participants to continue thinking about the pain points of their problem in anticipation of the next hackathon in the series this fall.</p> <p>During this second phase, participants will further develop their solution and explore the issue’s competitive landscape. Organizers plan to bring together local business and management stakeholders for a final event in the spring that will allow participants to pitch their project for acquisition or initial seed funding.&nbsp;</p> <p>Founded in 2011, MIT Hacking Medicine consists of both students and community members and aims to promote medical innovation to benefit the health care community.
The group recognizes that technological advancement is often born out of collaboration rather than isolation. Monday’s event accordingly encouraged networking among students and postdocs not just from MIT but from institutions all around Boston, creating lasting relationships rooted in a commitment to deliver crucial health care solutions.</p> <p>Indeed, these events have proven successful in fostering connections and propelling innovation. According to MIT Hacking Medicine’s website, more than 50 companies with over $240 million in venture funding have been created since June 2018 thanks to its hackathons, workshops, and networking gatherings. The organization’s events across the globe have engaged nearly 22,000 hackers eager to disrupt the status quo and think critically about health systems in place.</p> <p>This past weekend, MIT Hacking Medicine hosted its flagship Grand Hack event in Washington. Over the course of the weekend, like-minded students and professionals across a range of industries joined forces to tackle issues related to health care access, mental health and professional burnout, rare diseases, and more.&nbsp;Sponsors hope that Monday’s shorter, intimate event will garner enthusiasm for larger hackathons like this one to sustain communication among a diverse community of experts in their respective fields.&nbsp;</p> Hacking Nanomedicine participantsPhoto: Thomas Gearty/MIT.nanoMIT.nano, Institute for Medical Engineering and Science (IMES), Mechanical engineering, School of Engineering, Koch Institute, Health sciences and technology, Medicine, Students, Graduate, postdoctoral, Innovation and Entrepreneurship (I&E), Hackathon, Startups, Contests and academic competitions, Special events and guest speakers, Nanoscience and nanotechnology MIT “Russian Doll” tech lands $7.9M international award to fight brain tumors Researchers from MIT&#039;s Koch Institute will work with teams in the UK and Europe to use nanoparticles to carry multiple drug therapies
to treat glioblastoma. Fri, 26 Jul 2019 13:30:01 -0400 Koch Institute <p>Cancer Research UK awarded $7.9 million to MIT researchers as part of an international team to identify combinations of drugs that could effectively tackle glioblastoma — the most aggressive and deadly type of brain tumor. The team will then use tiny “Russian doll-like” particles — a technology developed at MIT — to deliver those combinations to brain tumors.</p> <p>The MIT team, based at the Koch Institute for Integrative Cancer Research, includes Paula Hammond, the David H. Koch Professor of Engineering and head of the Department of Chemical Engineering; Michael Yaffe, the David H. Koch Professor of Science and director of the MIT Center for Precision Cancer Medicine; and Forest White, the Ned C. and Janet Bemis Rice Professor of Biological Engineering.</p> <p>Brain tumors represent one of the hardest types of cancer to treat. There are just a few drugs approved to treat glioblastoma, but none of them are curative. Just last year, around 24,200 people in the United States were diagnosed with brain tumors, with around 17,500 deaths from brain tumors in the same year. Patients diagnosed with disease have a median life expectancy of less than 15 months.</p> <p>Treating glioblastoma is challenging in part because, like many other cancers, it can quickly develop resistance to cancer drugs. Some drug combinations deliver a powerful one-two punch that can overcome cancer cells’ ability to adapt to treatment.</p> <p>The international team aims to find potential drug combinations and targets using high-throughput small molecules and CRISPRi-based screens, mass spectrometry proteomic analysis, and computational modeling platforms for systems pharmacology developed at MIT for predicting the development and reversal of drug resistance in glioblastomas. 
The team will then test the effectiveness of newly-identified drug combinations in cell and mouse models, including two promising combinations already identified by researchers at the Koch Institute and the University of Edinburgh.</p> <p>Drugs that have already been approved, as well as experimental drugs that have passed initial safety testing in people, will be prioritized. Because of this, if an effective drug combination is found, the team won’t have to navigate the initial regulatory hurdles needed to get them into clinical testing, which could help get promising treatments to patients faster.</p> <p>But glioblastoma presents an additional obstacle to treatment: Even if the researchers find potential new treatments, the drugs must cross the blood-brain barrier, a structure that keeps a tight check on anything trying to get into the brain, drugs included. The team will deploy nanoparticles developed by Hammond at MIT to ferry new drug treatments across this barrier. The nanoparticles — one-thousandth the width of a human hair — are coated in a protein called transferrin, which helps them cross the blood-brain barrier.</p> <p>Not only are the nanoparticles able to access hard-to-reach areas of the brain, they have also been designed to carry multiple cancer drugs at once by holding them inside layers, similarly to the way Russian dolls fit inside one another.</p> <p>To make the nanoparticles even more effective, they will carry signals on their surface so that they are preferentially taken up by brain tumor cells. 
This means that healthy cells should be left untouched, which will minimize the side effects of treatment.</p> <p><a href="" target="_blank">Early research</a> by the Hammond and Yaffe labs has already shown that nanoparticles loaded with two different drugs were able to shrink glioblastomas in mice.</p> <p>“Glioblastoma is particularly challenging because we want to get highly effective but toxic drug combinations safely across the blood-brain barrier, but also want our nanoparticles to avoid healthy brain cells and only target the cancer cells," Hammond says. "We are very excited about this alliance between the MIT Koch Institute and our colleagues at Edinburgh and Oxford to address these critical challenges.”</p> <p>The MIT group and their collaborators in the UK are one of three international teams to have been given Cancer Research UK Brain Tumor Awards — in partnership with The Brain Tumour Charity — receiving $7.9 million of funding. The awards are designed to accelerate the pace of brain tumor research. Altogether, teams were awarded a total of $23 million.</p> <p>“The Cancer Research UK Brain Tumor Award provides us with a unique opportunity to unite perspectives in biology and engineering to create better options for patients with glioblastoma,” says Yaffe. 
“Each member of this international team brings a deep well of expertise— in the biology of brain tumors, signaling proteomics, high-throughput screening, drug combinations and systems pharmacology, and drug delivery technologies — that will be vital to overcoming the challenges of developing effective therapies for glioblastoma.”</p> <p><em>This article has been updated to reflect additional specificity regarding the project and its collaborators.</em></p> Cancer cells targeted with nanoparticles built in the Hammond laboratoryImage: Stephen Morton, Kevin Shopsowitz, Peter DeMuthKoch Institute, Chemical engineering, School of Engineering, Biological engineering, Faculty, Cancer, Medicine, Funding, Nanoscience and nanotechnology, Drug development, Pharmaceuticals Artificial “muscles” achieve powerful pulling force New MIT system of contracting fibers could be a boon for biomedical devices and robotics. Thu, 11 Jul 2019 14:00:00 -0400 David L. Chandler | MIT News Office <p>As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.</p> <p>While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. 
The findings are being reported today in the journal <em>Science</em>.</p> <p>The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.</p> <p>The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.</p> <p><img alt="" src="/sites/" style="width: 500px; height: 281px;" /></p> <p><em><span style="font-size:10px;">Credit: Courtesy of the researchers</span></em></p> <p>Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. But what happened next actually came as a surprise when the researchers first experienced it. “There was a lot of serendipity in this,” Anikeeva recalls.</p> <p>As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. 
In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.</p> <p><img alt="" src="/sites/" /></p> <p><span style="font-size:10px;"><em>Credit: Courtesy of the researchers</em></span></p> <p>One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.</p> <p>The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.</p> <p><img alt="" src="/sites/" /></p> <p><em><span style="font-size:10px;">Credit: Courtesy of the researchers</span></em></p> <p>The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.</p> <p>The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.</p> <p>For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. 
These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating it internally without having to rely on any outside heat source to activate the contraction of the “muscle.”</p> <p>Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their slight weight and fast response times could provide a significant advantage.</p> <p>Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. Such fibers might also find uses in tiny biomedical devices, such as a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.</p> <p>To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.</p> <p>Kanik says that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. 
He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.</p> <p>“The strength of this work is coming from its simplicity,” he says.</p> <p>The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintlio. The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.</p> The tiny coils in the fiber developed by MIT researchers curl even tighter when warmed up. This causes the fiber to contract, much like a muscle fiber.Credit: Felice Frankel, edited by MIT NewsResearch, Materials Science and Engineering, DMSE, Mechanical engineering, Nanoscience and nanotechnology, Research Laboratory of Electronics, McGovern Institute, Brain and cognitive sciences, School of Science, School of Engineering, National Science Foundation (NSF)