MIT Research News http://news.mit.edu/mitresearch-rss.xml MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community. en Tue, 10 Mar 2020 00:00:00 -0400 Why do banking crises occur? http://news.mit.edu/2020/banks-on-brink-singer-book-0310 In a new book, political scientist David Singer finds two key factors connected to financial-sector collapses around the globe. Tue, 10 Mar 2020 00:00:00 -0400 Peter Dizikes | MIT News Office http://news.mit.edu/2020/banks-on-brink-singer-book-0310 <p>Why did the U.S. banking crisis of 2007-2008 occur? Many accounts have chronicled the bad decisions and poor risk management at places like Lehman Brothers, the now-vanished investment bank. Still, plenty of banks have vanished, and many countries have had their own banking crises in recent decades. So, to pose the question more generally, why do modern banking crises occur?</p> <p>David Singer believes he knows. An MIT professor and head of the Institute’s Department of Political Science, Singer has spent years examining global data on the subject with his colleague Mark Copelovitch, a political scientist at the University of Wisconsin at Madison.</p> <p>Together, Singer and Copelovitch have identified two things, in tandem, that generate banking crises: One, a large amount of foreign investment surges into a country, and two, that country’s economy has a well-developed market in securities — especially stocks.</p> <p>“Empirically, we find that systemic bank failures are more likely when substantial foreign capital inflows meet a financial system with well-developed stock markets,” says Singer. “Banks take on more risk in these environments, which makes them more prone to collapse.”</p> <p>Singer and Copelovitch detail their findings in a new book, “Banks on the Brink: Global Capital, Securities Markets, and the Political Roots of Financial Crises,” published by Cambridge University Press. In it, they emphasize that the historical development of markets creates conditions ripe for crisis — it is not just a matter of a few rogue bankers engaging in excessive profit-hunting.</p> <p>“There wasn’t much scholarship that explored the phenomenon from both a political and an economic perspective,” Singer adds. “We sought to go up to 30,000 feet and see what the patterns were, to explain why some banking systems were more resilient than others.”</p> <p><strong>Where the risk goes: Banks or stocks?</strong></p> <p>Through history, lending institutions have often been prone to instability. But Singer and Copelovitch examined what makes banks vulnerable under contemporary conditions. They looked at economic and banking-sector data from 1976-2011, for the 32 countries in the Organization for Economic Cooperation and Development (OECD).</p> <p>That time period begins soon after the Bretton Woods system of international monetary-policy cooperation vanished, which led to a significant increase in foreign capital movement. From 1990 to 2005 alone, international capital flow increased from $1 trillion to $12 trillion annually.
(It has since slid back to $5 trillion, after the Great Recession.)</p> <p>Even so, a flood of capital entering a country is not enough, by itself, to send a banking sector under water, Singer says: “Why is it that some capital inflows can be accommodated and channeled productively throughout an economy, but other times they seem to lead a banking system to go awry?”</p> <p>The answer, Singer and Copelovitch contend, is that a highly active stock market is a form of competition for the banking sector, to which banks respond by taking greater risks.&nbsp;</p> <p>To see why, imagine a promising business needs capital. It could borrow funds from a bank. Or it could issue a stock offering, and raise the money from investors, as riskier firms generally do. If a lot of foreign investment enters a country, backing firms that issue stock offerings, bankers will want a piece of the action.</p> <p>“Banks and stock markets are competing for the business of firms that need to raise money,” Singer says. “When stock markets are small and unsophisticated, there’s not much competition. Firms go to their banks.” However, he adds, “A bank doesn’t want to lose a good chunk of its customer base to the stock markets. … And if that happens, banks start to do business with slightly riskier firms.”</p> <p><strong>Rethinking Canadian bank stability</strong></p> <p>Exploring this point in depth, the book develops contrasting case studies of Canada and Germany. Canada is one of the few countries to remain blissfully free of banking crises — something commentators usually ascribe to sensible regulation.</p> <p>However, Singer and Copelovitch observe, Canada has always had small, regional stock markets, and is the only OECD country without a national stock-market regulator.</p> <p>“There’s a sense that Canada has stable banks just because they’re well-regulated,” Singer says. “That’s the conventional wisdom we’re trying to poke holes in. And I think it’s not well-understood that Canada’s stock markets are as underdeveloped as they are.”</p> <p>He adds: “That’s one of the key considerations, when we analyze why Canada’s banks are so stable. They don’t face a competitive threat from stock markets the way banks in the United States do. They can be conservative and be competitive and still be profitable.”</p> <p>By contrast, German banks have been involved in many banking blowups in the last two decades. At one time, that would not have been the case. But Germany’s national-scale banks, feeling pressure from a thriving set of regional banks, tried to bolster profits through securities investment, leading to some notable problems.</p> <p>“Germany started off the period we study looking like a very bank-centric economy,” Singer says. “And that’s what Germany is often known for, close connections between banks and industry.” However, he notes, “The national banks started to feel a competitive threat and looked to stock markets to bolster their competitive advantage. … German banks used to be so stable and so long-term focused, and they’re now finding short-term trouble.”</p> <p>“Banks on the Brink” has drawn praise from other scholars in the field. 
Jeffry Frieden, a professor of government at Harvard University, says the book’s “careful logic, statistical analyses, and detailed case studies make compelling reading for anyone interested in the economics and politics of finance.”</p> <p>For their part, Singer and Copelovitch say they hope to generate more discussion about both the recent history of banking crises, and how to avoid them in the future.</p> <p>Perhaps surprisingly, Singer believes that separating commercial and investment banks from each other — which the Glass-Steagall Act used to do in the U.S. — would not prevent crises. Any bank, not just investment banks, can flounder if it goes profit-hunting in risky territory.</p> <p>Instead, Singer says, “We think macroprudential regulations for banks are the way to go. That’s just about capital regulations, making sure banks are holding enough capital to absorb any losses they might incur. That seems to be the best approach to maintaining a stable banking system, especially in the face of large capital flows.”</p> David Singer, an MIT professor and head of the Department of Political Science, is the co-author of a new book, “Banks on the Brink: Global Capital, Securities Markets, and the Political Roots of Financial Crises,” published by Cambridge University Press.Photo: M. Scott BrauerPolitical science, Banking, Finance, Books and authors, Faculty, Research, School of Humanities Arts and Social Sciences How the brain encodes landmarks that help us navigate http://news.mit.edu/2020/brain-encodes-landmarks-navigate-0310 Neuroscientists discover how a key brain region combines visual and spatial information to help us find our way. Tue, 10 Mar 2020 00:00:00 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2020/brain-encodes-landmarks-navigate-0310 <p>When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.</p> <p>While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well-understood. A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.</p> <p>“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”</p> <p>In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback of the mice’s own position along a track. Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.</p> <p>“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.</p> <p>Harnett is the senior author of the study, which appears today in the journal <em>eLife</em>.
Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.</p> <p><strong>Encoding landmarks</strong></p> <p>Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.</p> <p>The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.</p> <p>“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”</p> <p>In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while they watch a video screen that makes it appear they are running along a track. The speed of the video is determined by how fast the mice run.</p> <p>At specific points along the track, landmarks appear, signaling that there’s a reward available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.</p> <p>Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.</p> <p>There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.</p> <p>Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.</p> <p>When the researchers used optogenetics (a tool that can turn off neuron activity) to block activity in the RSC, the mice’s performance on the task became much worse.</p> <p><strong>Combining inputs</strong></p> <p>The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.</p> <p>Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. 
However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.</p> <p>“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.</p> <p>The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.</p> <p>The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.</p> MIT neuroscientists have identified a “landmark code” that helps the brain navigate our surroundings.Image: Christine Daniloff, MITResearch, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience, National Institutes of Health (NIH) Mathematical model could lead to better treatment for diabetes http://news.mit.edu/2020/math-model-glucose-insulin-diabetes-0609 A new model can predict which types of glucose-responsive insulin will work in humans and animals. Mon, 09 Mar 2020 10:01:46 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2020/math-model-glucose-insulin-diabetes-0609 <p>One promising new strategy to treat diabetes is to give patients insulin that circulates in their bloodstream, staying dormant until activated by rising blood sugar levels. However, no glucose-responsive insulins (GRIs) have been approved for human use, and the only candidate that entered the clinical trial stage was discontinued after it failed to show effectiveness in humans.</p> <p>MIT researchers have now developed a mathematical model that can predict the behavior of different kinds of GRIs in both humans and in rodents. They believe this model could be used to design GRIs that are more likely to be effective in humans, and to avoid drug designs less likely to succeed in costly clinical trials.</p> <p>“There are GRIs that will fail in humans but will show success in animals, and our models can predict this,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “In theory, for the animal system that diabetes researchers typically employ, we can immediately predict how the results will translate to humans.”</p> <p>Strano is the senior author of the study, which appears today in the journal <em>Diabetes</em>. MIT graduate student Jing Fan Yang is the lead author of the paper. Other MIT authors include postdoc Xun Gong and graduate student Naveed Bakh. Michael Weiss, a professor of biochemistry and molecular biology at Indiana University School of Medicine, and Kelley Carr, Nelson Phillips, Faramarz Ismail-Beigi of Case Western Reserve University are also authors of the paper.</p> <p><strong>Optimal design</strong></p> <p>Patients with diabetes typically have to measure their blood sugar throughout the day and inject themselves with insulin when their blood sugar gets too high. 
As a potential alternative, many diabetes researchers are now working to develop glucose-responsive insulin, which could be injected just once a day and would spring into action whenever blood sugar levels rise.</p> <p>Scientists have used a variety of strategies to design such drugs. For instance, insulin might be carried by a polymer particle that dissolves when glucose is present, releasing the drug. Or, insulin could be modified with molecules that can bind to glucose and trigger insulin activation. In this paper, the MIT team focused on a GRI that is coated with molecules called PBA, which can bind to glucose and activate the insulin.</p> <p>The new study builds on a <a href="http://news.mit.edu/2017/model-predicts-performance-glucose-responsive-insulin-0926">mathematical model</a> that Strano’s lab first developed in 2017. The model is essentially a set of equations that describes how glucose and insulin behave in different compartments of the human body, such as blood vessels, muscle, and fatty tissue. This model can predict how a given GRI will affect blood sugar in different parts of the body, based on chemical features such as how tightly it binds to glucose and how rapidly the insulin is activated.</p> <p>“For any glucose-responsive insulin, we can turn it into mathematical equations, and then we can insert that into our model and make very clear predictions about how it will perform in humans,” Strano says.</p> <p>Although this model offered helpful guidance in developing GRIs, the researchers realized that it would be much more useful if it could also work on data from tests in animals. They decided to adapt the model so that it could predict how rodents, whose endocrine and metabolic responses are very different from those of humans, would respond to GRIs.</p> <p>“A lot of experimental work is done in rodents, but it’s known that there are lots of imperfections with using rodents. Some are now quite wittily referring to this situation as ‘lost in [clinical] translation,’” Yang says.</p> <p>“This paper is pioneering in that we’ve taken our model of the human endocrine system and we’ve linked it to an animal model,” adds Strano.</p> <p>To achieve that, the researchers determined the most important differences between humans and rodents in how they process glucose and insulin, which allowed them to adapt the model to interpret data from rodents.&nbsp;</p> <p>Using these two variants of the model, the researchers were able to predict the GRI features that would be needed for the PBA-modified GRI to work well in humans and rodents. They found that about 13 percent of the possible GRIs would work well in both rodents and humans, while 14 percent were predicted to work in humans but not rodents, and 12 percent would work in rodents but not humans.</p> <p>“We used our model to test every point in the range of potential candidates,” Gong says. “There exists an optimal design, and we found where that optimal design overlaps between humans and rodents.”</p> <p><strong>Analyzing failure</strong></p> <p>This model can also be adapted to predict the behavior of other types of GRIs. To demonstrate that, the researchers created equations that represent the chemical features of a glucose-responsive insulin that Merck tested from 2014 to 2016, which ultimately did not succeed in patients. They now plan to test whether their model would have predicted the drug’s failure.</p> <p>“That trial was based on a lot of promising animal data, but when it got to humans it failed. 
The question is whether this failure could have been prevented,” Strano says. “We’ve already turned it into a mathematical representation and now our tool can try to figure out why it failed.”</p> <p>Strano’s lab is also collaborating with Weiss to design and test new GRIs based on the results from the model. Doing this type of modeling during the drug development stage could help to reduce the number of animal experiments needed to test many possible variants of a proposed GRI.</p> <p>This kind of model, which the researchers are making available to anyone who wants to use it, could also be applied to other medicines designed to respond to conditions within a patient’s body.</p> <p>“You can envision new kinds of medicines, one day, that will go in the body and modulate their potency as needed based on the real-time patient response,” Strano says. “If we get GRIs to work, this could be a model for the pharmaceutical industry, where a drug is delivered and its potency is constantly modulated in response to some therapeutic endpoint, such as levels of cholesterol or fibrinogen.”</p> <p>The research was funded by JDRF.</p> A new model of glucose-responsive insulin, developed by MIT researchers, could lead to better treatment for diabetes and could eliminate the need for regular manual glucose-level testing.Research, Chemical engineering, School of Engineering, Health sciences and technology, Diabetes, Medicine The elephant in the server room http://news.mit.edu/2020/data-feminism-catherine-dignazio-0309 Catherine D’Ignazio’s new book, “Data Feminism,” examines problems of bias and power that beset modern information. Mon, 09 Mar 2020 00:00:00 -0400 Peter Dizikes | MIT News Office http://news.mit.edu/2020/data-feminism-catherine-dignazio-0309 <p>Suppose you would like to know mortality rates for women during childbirth, by country, around the world. Where would you look? One option is the <a href="http://www.womanstats.org/" target="_blank">WomanStats</a> Project, the website of an academic research effort investigating the links between the security and activities of nation-states, and the security of the women who live in them.</p> <p>The project, founded in 2001, meets a need by patching together data from around the world. Many countries are indifferent to collecting statistics about women’s lives. But even where countries try harder to gather data, there are clear challenges to arriving at useful numbers — whether it comes to women’s physical security, property rights, and government participation, among many other issues. &nbsp;</p> <p>For instance: In some countries, violations of women’s rights may be reported more regularly than in other places. That means a more responsive legal system may create the appearance of greater problems, when it provides relatively more support for women. The WomanStats Project notes many such complications.</p> <p>Thus the WomanStats Project offers some answers — for example, Australia, Canada, and much of Western Europe have low childbirth mortality rates — while also showing what the challenges are to taking numbers at face value. This, according to MIT professor Catherine D’Ignazio, makes the site unusual, and valuable.</p> <p>“The data never speak for themselves,” says D’Ignazio, referring to the general problem of finding reliable numbers about women’s lives. “There are always humans and institutions speaking for the data, and different people have their own agendas. 
The data are never innocent.”</p> <p>Now D’Ignazio, an assistant professor in MIT’s Department of Urban Studies and Planning, has taken a deeper look at this issue in a new book, co-authored with Lauren Klein, an associate professor of English and quantitative theory and methods at Emory University. In the book, “<a href="http://datafeminism.io/" target="_blank">Data Feminism</a>,” published this month by the MIT Press, the authors use the lens of intersectional feminism to scrutinize how data science reflects the social structures it emerges from.</p> <p>“Intersectional feminism examines unequal power,” write D’Ignazio and Klein, in the book’s introduction. “And in our contemporary world, data is power too. Because the power of data is wielded unjustly, it must be challenged and changed.”</p> <p><strong>The 4 percent problem</strong></p> <p>To see a clear case of power relations generating biased data, D’Ignazio and Klein note, consider research led by MIT’s own Joy Buolamwini, who, as a graduate student in a class studying facial-recognition programs, observed that the software in question could not “see” her face. Buolamwini found that for the facial-recognition system in question, the software was based on a set of faces which were 78 percent male and 84 percent white; only 4 percent were female and dark-skinned, like herself.&nbsp;</p> <p>Subsequent media coverage of Buolamwini’s work, D’Ignazio and Klein write, contained “a hint of shock.” But the results were probably less surprising to those who are not white males, they think.&nbsp;&nbsp;</p> <p>“If the past is racist, oppressive, sexist, and biased, and that’s your training data, that is what you are tuning for,” D’Ignazio says.</p> <p>Or consider another example, from tech giant Amazon, which tested an automated system that used AI to sort through promising CVs sent in by job applicants. One problem: Because a high percentage of company employees were men, the algorithm favored men’s names, other things being equal.&nbsp;</p> <p>“They thought this would help [the] process, but of course what it does is train the AI [system] to be biased against women, because they themselves have not hired that many women,” D’Ignazio observes.</p> <p>To Amazon’s credit, it did recognize the problem. Moreover, D’Ignazio notes, this kind of issue is a problem that can be addressed. “Some of the technologies can be reformed with a more participatory process, or better training data. … If we agree that’s a good goal, one path forward is to adjust your training set and include more people of color, more women.”</p> <p><strong>“Who’s on the team? Who had the idea? Who’s benefiting?” </strong></p> <p>Still, the question of who participates in data science is, as the authors write, “the elephant in the server room.” As of 2011, only 26 percent of all undergraduates receiving computer science degrees in the U.S. were women. That is not only a low figure, but actually a decline from past levels: In 1985, 37 percent of computer science graduates were women, the highest mark on record.</p> <p>As a result of the lack of diversity in the field, D’Ignazio and Klein believe, many data projects are radically limited in their ability to see all facets of the complex social situations they purport to measure.&nbsp;</p> <p>“We want to try to tune people in to these kinds of power relationships and why they matter deeply,” D’Ignazio says. “Who’s on the team? Who had the idea? Who’s benefiting from the project?
Who’s potentially harmed by the project?”</p> <p>In all, D’Ignazio and Klein outline seven principles of data feminism, from examining and challenging power, to rethinking binary systems and hierarchies, and embracing pluralism. (Those statistics about gender and computer science graduates are limited, they note, by only using the “male” and “female” categories, thus excluding people who identify in different terms.)</p> <p>People interested in data feminism, the authors state, should also “value multiple forms of knowledge,” including firsthand knowledge that may lead us to question seemingly official data. Also, they should always consider the context in which data are generated, and “make labor visible” when it comes to data science. This last principle, the researchers note, speaks to the problem that even when women and other excluded people contribute to data projects, they often receive less credit for their work.</p> <p>For all the book’s critique of existing systems, programs, and practices, D’Ignazio and Klein are also careful to include examples of positive, successful efforts, such as the WomanStats project, which has grown and thrived over two decades.</p> <p>“For people who are data people but are new to feminism, we want to provide them with a very accessible introduction, and give them concepts and tools they can use in their practice,” D’Ignazio says. “We’re not imagining that people already have feminism in their toolkit. On the other hand, we are trying to speak to folks who are very tuned in to feminism or social justice principles, and highlight for them the ways data science is both problematic, but can be marshalled in the service of justice.”</p> Catherine D’Ignazio is the co-author of a new book, “Data Feminism,” published by MIT Press in March 2020. Image: Diana Levine and MIT PressData, Women, Faculty, Research, Books and authors, MIT Press, Diversity and inclusion, Ethics, Technology and society, Artificial intelligence, Machine learning, Computer science and technology, Urban studies and planning, School of Architecture and Planning Novel method for easier scaling of quantum devices http://news.mit.edu/2020/scaling-quantum-devices-quibits-0306 System “recruits” defects that usually cause disruptions, using them to instead carry out quantum operations. Thu, 05 Mar 2020 23:59:59 -0500 Rob Matheson | MIT News Office http://news.mit.edu/2020/scaling-quantum-devices-quibits-0306 <p>In an advance that may help researchers scale up quantum devices, an MIT team has developed a method to “recruit” neighboring quantum bits made of nanoscale defects in diamond, so that instead of causing disruptions they help carry out quantum operations.</p> <p>Quantum devices perform operations using quantum bits, called “qubits,” that can represent the two states corresponding to classic binary bits — a 0 or 1 — or a “quantum superposition” of both states simultaneously. The unique superposition state can enable quantum computers to solve problems that are practically impossible for classical computers, potentially spurring breakthroughs in biosensing, neuroimaging, machine learning, and other applications.</p> <p>One promising qubit candidate is a defect in diamond, called a nitrogen-vacancy (NV) center, which holds electrons that can be manipulated by light and microwaves. In response, the defect emits photons that can carry quantum information. 
Because of their solid-state environments, however, NV centers are always surrounded by many other unknown defects with different spin properties, called “spin defects.” When the measurable NV-center qubit interacts with those spin defects, the qubit loses its coherent quantum state — “decoheres” — and operations fall apart. Traditional solutions try to identify these disrupting defects to protect the qubit from them.</p> <p>In a paper published Feb. 25 in <em>Physical Review Letters</em>, the researchers describe a method that uses an NV center to probe its environment and uncover the existence of several nearby spin defects. Then, the researchers can pinpoint the defects’ locations and control them to achieve a coherent quantum state — essentially leveraging them as additional qubits.</p> <p>In experiments, the team generated and detected quantum coherence among three electronic spins — scaling up the size of the quantum system from a single qubit (the NV center) to three qubits (adding two nearby spin defects). The findings demonstrate a step forward in scaling up quantum devices using NV centers, the researchers say. &nbsp;</p> <p>“You always have unknown spin defects in the environment that interact with an NV center. We say, ‘Let’s not ignore these spin defects, which [if left alone] could cause faster decoherence. Let’s learn about them, characterize their spins, learn to control them, and ‘recruit’ them to be part of the quantum system,’” says the lead co-author Won Kyu Calvin Sun, a graduate student in the Department of Nuclear Science and Engineering and a member of the Quantum Engineering group. “Then, instead of using a single NV center [or just] one qubit, we can then use two, three, or four qubits.”</p> <p>Joining Sun on the paper are lead author Alexandre Cooper ’16 of Caltech; Jean-Christophe Jaskula, a research scientist in the MIT Research Laboratory of Electronics (RLE) and member of the Quantum Engineering group at MIT; and Paola Cappellaro, a professor in the Department of Nuclear Science and Engineering, a member of RLE, and head of the Quantum Engineering group at MIT.</p> <p><strong>Characterizing defects</strong></p> <p>NV centers occur where carbon atoms in two adjacent places in a diamond’s lattice structure are missing — one atom is replaced by a nitrogen atom, and the other space is an empty “vacancy.” The NV center essentially functions as an atom, with a nucleus and surrounding electrons that are extremely sensitive to tiny variations in surrounding electrical, magnetic, and optical fields. Sweeping microwaves across the center, for instance, makes it change, and thus control, the spin states of the nucleus and electrons.</p> <p>Spins are measured using a type of magnetic resonance spectroscopy. This method plots the frequencies of electron and nucleus spins in megahertz as a “resonance spectrum” that can dip and spike, like a heart monitor. Spins of an NV center under certain conditions are well-known. But the surrounding spin defects are unknown and difficult to characterize.</p> <p>In their work, the researchers identified, located, and controlled two electron-nuclear spin defects near an NV center. They first sent microwave pulses at specific frequencies to control the NV center. Simultaneously, they pulsed another microwave that probed the surrounding environment for other spins.
They then observed the resonance spectrum of the spin defects interacting with the NV center.</p> <p>The spectrum dipped in several spots when the probing pulse interacted with nearby electron-nuclear spins, indicating their presence. The researchers then swept a magnetic field across the area at different orientations. For each orientation, the defect would “spin” at different energies, causing different dips in the spectrum. Basically, this allowed them to measure each defect’s spin in relation to each magnetic orientation. They then plugged the energy measurements into a model equation with unknown parameters. This equation is used to describe the quantum interactions of an electron-nuclear spin defect under a magnetic field. Then, they could solve the equation to successfully characterize each defect.</p> <p><strong>Locating and controlling</strong></p> <p>After characterizing the defects, the next step was to characterize the interaction between the defects and the NV, which would simultaneously pinpoint their locations. To do so, they again swept the magnetic field at different orientations, but this time looked for changes in energies describing the interactions between the two defects and the NV center. The stronger the interaction, the closer they were to one another. They then used those interaction strengths to determine where the defects were located, in relation to the NV center and to each other. That generated a good map of the locations of all three defects in the diamond.</p> <p>Characterizing the defects and their interaction with the NV center allow for full control, which involves a few more steps to demonstrate. First, they pump the NV center and surrounding environment with a sequence of pulses of green light and microwaves that help put the three qubits in a well-known quantum state. Then, they use another sequence of pulses that ideally entangles the three qubits briefly, and then disentangles them, which enables them to detect the three-spin coherence of the qubits.</p> <p>The researchers verified the three-spin coherence by measuring a major spike in the resonance spectrum. The measurement of the spike recorded was essentially the sum of the frequencies of the three qubits. If the three qubits for instance had little or no entanglement, there would have been four separate spikes of smaller height.</p> <p>“We come into a black box [environment with each NV center]. But when we probe the NV environment, we start seeing dips and wonder which types of spins give us those dips. Once we [figure out] the spin of the unknown defects, and their interactions with the NV center, we can start controlling their coherence,” Sun says. “Then, we have full universal control of our quantum system.”</p> <p>Next, the researchers hope to better understand other environmental noise surrounding qubits. That will help them develop more robust error-correcting codes for quantum circuits. Furthermore, because on average the process of NV center creation in diamond creates numerous other spin defects, the researchers say they could potentially scale up the system to control even more qubits. “It gets more complex with scale. But if we can start finding NV centers with more resonance spikes, you can imagine starting to control larger and larger quantum systems,” Sun says.</p> An MIT team found a way to “recruit” normally disruptive quantum bits (qubits) in diamond to, instead, help carry out quantum operations. This approach could be used to help scale up quantum computing systems. 
Image: Christine Daniloff, MITResearch, Computer science and technology, Quantum computing, Nuclear science and engineering, Nanoscience and nanotechnology, Sensors, Research Laboratory of Electronics, Materials Science and Engineering, Physics, School of Engineering Showing robots how to do your chores http://news.mit.edu/2020/showing-robots-learn-chores-0306 By observing humans, robots learn to perform complex tasks, such as setting a table. Thu, 05 Mar 2020 23:59:59 -0500 Rob Matheson | MIT News Office http://news.mit.edu/2020/showing-robots-learn-chores-0306 <p>Training interactive robots may one day be an easy job for everyone, even those without programming expertise. Roboticists are developing automated robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores. In the workplace, you could train robots like new employees, showing them how to perform many duties.</p> <p>Making progress on that vision, MIT researchers have designed a system that lets these types of robots learn complicated tasks that would otherwise stymie them with too many confusing rules. One such task is setting a dinner table under certain conditions. &nbsp;</p> <p>At its core, the researchers’ “Planning with Uncertain Specifications” (PUnS) system gives robots the humanlike planning ability to simultaneously weigh many ambiguous —&nbsp;and potentially contradictory —&nbsp;requirements to reach an end goal. In doing so, the system always chooses the most likely action to take, based on a “belief” about some probable specifications for the task it is supposed to perform.</p> <p>In their work, the researchers compiled a dataset with information about how eight objects — a mug, glass, spoon, fork, knife, dinner plate, small plate, and bowl — could be placed on a table in various configurations. A robotic arm first observed randomly selected human demonstrations of setting the table with the objects. Then, the researchers tasked the arm with automatically setting a table in a specific configuration, in real-world experiments and in simulation, based on what it had seen.</p> <p>To succeed, the robot had to weigh many possible placement orderings, even when items were purposely removed, stacked, or hidden. Normally, all of that would confuse robots too much. But the researchers’ robot made no mistakes over several real-world experiments, and only a handful of mistakes over tens of thousands of simulated test runs. &nbsp;</p> <p>“The vision is to put programming in the hands of domain experts, who can program robots through intuitive ways, rather than describing orders to an engineer to add to their code,” says first author Ankit Shah, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro) and the Interactive Robotics Group, who emphasizes that their work is just one step in fulfilling that vision. “That way, robots won’t have to perform preprogrammed tasks anymore. Factory workers can teach a robot to do multiple complex assembly tasks. 
Domestic robots can learn how to stack cabinets, load the dishwasher, or set the table from people at home.”</p> <p>Joining Shah on the paper are AeroAstro and Interactive Robotics Group graduate student Shen Li and Interactive Robotics Group leader Julie Shah, an associate professor in AeroAstro and the Computer Science and Artificial Intelligence Laboratory.</p> <p><strong>Bots hedging bets</strong></p> <p>Robots are fine planners in tasks with clear “specifications,” which help describe the task the robot needs to fulfill, considering its actions, environment, and end goal. Learning to set a table by observing demonstrations is full of uncertain specifications. Items must be placed in certain spots, depending on the menu and where guests are seated, and in certain orders, depending on an item’s immediate availability or social conventions. Present approaches to planning are not capable of dealing with such uncertain specifications.</p> <p>A popular approach to planning is “reinforcement learning,” a trial-and-error machine-learning technique that rewards and penalizes robots for actions as they work to complete a task. But for tasks with uncertain specifications, it’s difficult to define clear rewards and penalties. In short, robots never fully learn right from wrong.</p> <p>The researchers’ system, called PUnS (for Planning with Uncertain Specifications), enables a robot to hold a “belief” over a range of possible specifications. The belief itself can then be used to dish out rewards and penalties. “The robot is essentially hedging its bets in terms of what’s intended in a task, and takes actions that satisfy its belief, instead of us giving it a clear specification,” Ankit Shah says.</p> <p>The system is built on “linear temporal logic” (LTL), an expressive language that enables robotic reasoning about current and future outcomes. The researchers defined templates in LTL that model various time-based conditions, such as what must happen now, must eventually happen, and must happen until something else occurs. The robot’s observations of 30 human demonstrations for setting the table yielded a probability distribution over 25 different LTL formulas. Each formula encoded a slightly different preference — or specification — for setting the table. That probability distribution becomes its belief.</p> <p>“Each formula encodes something different, but when the robot considers various combinations of all the templates, and tries to satisfy everything together, it ends up doing the right thing eventually,” Ankit Shah says.</p> <p><strong>Following criteria</strong></p> <p>The researchers also developed several criteria that guide the robot toward satisfying the entire belief over those candidate formulas. One, for instance, satisfies the most likely formula, which discards everything else apart from the template with the highest probability. Others satisfy the largest number of unique formulas, without considering their overall probability, or they satisfy several formulas that represent the highest total probability. Another simply minimizes error, so the system ignores formulas with high probability of failure.</p> <p>Designers can choose any one of the four criteria to preset before training and testing. Each has its own tradeoff between flexibility and risk aversion. The choice of criteria depends entirely on the task. In safety-critical situations, for instance, a designer may choose to limit the possibility of failure.
But where consequences of failure are not as severe, designers can choose to give robots greater flexibility to try different approaches.</p> <p>With the criteria in place, the researchers developed an algorithm to convert the robot’s belief — the probability distribution pointing to the desired formula — into an equivalent reinforcement learning problem. This model will ping the robot with a reward or penalty for an action it takes, based on the specification it’s decided to follow.</p> <p>In simulations asking the robot to set the table in different configurations, it only made six mistakes out of 20,000 tries. In real-world demonstrations, it showed behavior similar to how a human would perform the task. If an item wasn’t initially visible, for instance, the robot would finish setting the rest of the table without the item. Then, when the fork was revealed, it would set the fork in the proper place. “That’s where flexibility is very important,” Ankit Shah says. “Otherwise it would get stuck when it expects to place a fork and not finish the rest of table setup.”</p> <p>Next, the researchers hope to modify the system to help robots change their behavior based on verbal instructions, corrections, or a user’s assessment of the robot’s performance. “Say a person demonstrates to a robot how to set a table at only one spot. The person may say, ‘do the same thing for all other spots,’ or, ‘place the knife before the fork here instead,’” Ankit Shah says. “We want to develop methods for the system to naturally adapt to handle those verbal commands, without needing additional demonstrations.”&nbsp;&nbsp;</p> Roboticists are developing automated robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores.Image: Christine Daniloff, MITResearch, Computer science and technology, Algorithms, Artificial intelligence, Machine learning, Robots, Robotics, Assistive technology, Aeronautical and astronautical engineering, Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Engineering 3 Questions: Emre Gençer on the evolving role of hydrogen in the energy system http://news.mit.edu/2020/3-questions-emre-gencer-hydrogen-fuel-0305 With support from renewable energy sources, the MIT research scientist says, we can consider hydrogen fuel as a tool for decarbonization. Thu, 05 Mar 2020 15:05:01 -0500 Nafisa Syed | MIT Energy Initiative http://news.mit.edu/2020/3-questions-emre-gencer-hydrogen-fuel-0305 <p><em>As the world increasingly recognizes the need to develop more sustainable and renewable energy sources, low-carbon hydrogen has reemerged as an energy carrier with the potential to play a key role in sectors from transportation to power. </em></p> <p><em>At MITEI’s&nbsp;2019 Spring Symposium, MIT Energy Initiative Research Scientist Emre Gençer gave a presentation titled “</em><a href="http://energy.mit.edu/wp-content/uploads/2019/06/2019-MIT-Energy-Initiative-Spring-Symposium-Presentation-Emre-Gencer.pdf" target="_blank"><em>Hydrogen towards Deep Decarbonization</em></a><em>,” in which he elaborated on how hydrogen can be used across all energy sectors. Other themes </em><a href="http://energy.mit.edu/news/can-hydrogen-become-part-of-the-climate-solution/" target="_blank"><em>discussed</em></a><em> by experts at the symposium included industry’s role in promoting hydrogen, public safety concerns surrounding the hydrogen infrastructure, and the policy landscape required to scale hydrogen around the world. 
</em></p> <p><em>Here, Gençer shares his thoughts on the history of hydrogen and how it could be incorporated into our energy system as a tool for deep decarbonization to address climate change. </em></p> <p><strong>Q: </strong>How has public perception of hydrogen changed over time?<strong> </strong></p> <p><strong>A: </strong>Hydrogen has been in the public imagination since the 1870s. Jules Verne wrote that “water will be the coal of the future” in his novel “The Mysterious Island.”<em> </em>The concept of hydrogen has persisted in the public imagination for over a century, though interest in hydrogen has changed over time.</p> <p>Initial conversations about hydrogen focused on using it to supplement depleting fuel sources on Earth, but the role of hydrogen is evolving. Now we know that there is enough fuel on Earth, especially with the support of renewable energy sources, and that we can consider hydrogen as a tool for decarbonization.</p> <p>The first “hydrogen economy” concept was introduced in the 1970s. The term “hydrogen economy” refers to using hydrogen as an energy carrier, mostly for the transportation sector. In this context, hydrogen can be compared to electricity. Electricity requires a primary energy source and transmission lines to transmit electrons. In the case of hydrogen, energy sources and transmission infrastructure are required to transport protons.</p> <p>In 2004, there was a big initiative in the U.S. to involve hydrogen in all energy sectors to ensure access to reliable and safe energy sources. That year, the National Research Council and National Academy of Engineering released a report titled “<a href="https://www.nap.edu/catalog/10922/the-hydrogen-economy-opportunities-costs-barriers-and-rd-needs">The Hydrogen Economy: Opportunities, Costs, Barriers, and R&amp;D Needs</a>.” This report described how hydrogen could be used to increase energy security and reduce environmental impacts. Because its combustion yields only water vapor, hydrogen does not produce carbon dioxide (CO<sub>2</sub>) emissions. As a result, we can really benefit from eliminating CO<sub>2</sub> emissions in many of its end-use applications.</p> <p>Today, hydrogen is primarily used in industry to remove contaminants from diesel fuel and to produce ammonia. Hydrogen is also used in consumer vehicles with hydrogen fuel cells, and countries such as Japan are exploring its use in <a href="https://www.npr.org/2019/03/18/700877189/japan-is-betting-big-on-the-future-of-hydrogen-cars">public transportation</a>. In the future, there is ample room for hydrogen in the energy space. Some of the work I completed for my PhD in 2015 involved researching efficient hydrogen production via solar thermal and other renewable sources. This application of renewable energy is now coming back to the fore as we think about “deep decarbonization.”</p> <p><strong>Q: </strong>How can hydrogen be incorporated into our energy system?<strong> </strong></p> <p><strong>A: </strong>When we consider deep decarbonization, or economy-wide decarbonization, there are some sectors that are hard to decarbonize with electricity alone. They include heavy industries that require high temperatures, heavy-duty transportation, and long-term energy storage. We are now thinking about the role hydrogen can play in decarbonizing these sectors.</p> <p>Hydrogen has a number of properties that make it safer to handle and use than the conventional fuels used in our energy system today. Hydrogen is nontoxic and much lighter than air. 
In the case of a leak, its lightness allows for relatively rapid dispersal. All fuels have some degree of danger associated with them, but we can design fuel systems with engineering controls and establish standards to ensure their safe handling and use. As the number of successful hydrogen projects grows, the public will become increasingly confident that hydrogen can be as safe as the fuels we use today.</p> <p>To expand hydrogen’s uses, we first need to explore ways of integrating it into as many energy sectors as possible. This presents a challenge because the entry points can vary for different regions. For example, in colder regions like the northeastern U.S., hydrogen can help provide heating. In California, it can be used for energy storage and light-duty transportation. And in the southern U.S., hydrogen can be used in industry as a feedstock or energy source.</p> <p>Once the most strategic entry points for hydrogen are identified for each region, the supporting infrastructure can be built and used for additional purposes. For example, if the northeastern U.S. implements hydrogen as its primary source of residential heating, other uses for hydrogen will follow, such as for transportation or energy storage. At that point, we hope that the market will shift so that it is profitable to use hydrogen across all energy sectors.</p> <p><strong>Q: </strong>What challenges need to be overcome so that hydrogen can be used to support decarbonization, and what are some solutions to these challenges?</p> <p><strong>A: </strong>The first challenge involves addressing the large capital investment that needs to be made, especially in infrastructure. Once industry and policymakers are convinced that hydrogen will be a critical component for decarbonization, investing in that infrastructure is the next step. Currently, we have many hydrogen plants — we know how to produce hydrogen. But in order to move toward a semi-hydrogen economy, we need to identify the sectors or end users that really require or could benefit from using hydrogen. The way I see it, we need two energy vectors for decarbonization. One is electricity; we are sure about that. But it's not enough. The second vector can be, and should be, hydrogen.</p> <p>Another key issue is the nature of hydrogen production itself. Though hydrogen does not generate any emissions directly when used, hydrogen production can have a huge environmental impact. Today, close to 95 percent of its production is from fossil resources. As a result, the CO<sub>2</sub> emissions from hydrogen production are quite high.</p> <p>There are two ways to move toward cleaner hydrogen production. One is applying carbon capture and storage to the fossil fuel-based hydrogen production processes. In this case, usually a CO<sub>2</sub> emissions reduction of around 90 percent is feasible.</p> <p>The second way to produce cleaner hydrogen is by using electricity to produce hydrogen via electrolysis. Here, the source of electricity is very important. Our source of hydrogen needs to produce very low levels of CO<sub>2</sub> emissions, if not zero. Otherwise, there will not be any environmental benefit. 
If we start with clean, low-carbon electricity sources such as renewables, our CO<sub>2</sub> emissions will be quite low.</p> Emre Gençer discusses hydrogen at the MIT Energy Initiative’s 2019 Spring Symposium.Photo: Kelley TraversMIT Energy Initiative, Research, Energy, Sustainability, Alternative energy, Infrastructure, Policy, 3 Questions, Staff, Carbon, Emissions Historic migration patterns are written in Americans&#039; DNA http://news.mit.edu/2020/historic-migration-patterns-americans-dna-0305 Genetic, geographic, and demographic data from more than 30,000 Americans reveal more genetic diversity within ancestry groups than previously thought. Thu, 05 Mar 2020 14:11:03 -0500 Tom Ulrich | Broad Institute http://news.mit.edu/2020/historic-migration-patterns-americans-dna-0305 <p><em>The following press release was issued today by the Broad Institute of MIT and Harvard.</em></p> <p>Studies of DNA from ancient human fossils have helped scientists to trace human migration routes around the world thousands of years ago. But can modern DNA tell us anything about more recent movements, especially in an ancestrally diverse melting pot like the United States?</p> <p>To find out, researchers from the Broad Institute of MIT and Harvard, Massachusetts General Hospital (MGH), and Massachusetts Institute of Technology (MIT) analyzed data provided by more than 32,000 Americans as part of the National Geographic Society's Genographic Project. This project, launched in 2005, asked Americans to provide their DNA along with their geographic and demographic data, including birth records and family histories, to learn more about human migration.&nbsp;</p> <p>The research team found distinct genetic traces within many American populations that reflect the nation's complicated history of immigration, migration, and mixture.</p> <p>Writing in the <em><a href="https://www.cell.com/ajhg/fulltext/S0002-9297(20)30044-6" target="_blank">American Journal of Human Genetics</a></em>, the team also reported subtle but potentially important levels of diversity within certain groups, such as the Hispanic population.</p> <p>They also call on genetics researchers to increase the ancestral diversity of the participants in their studies so that their findings capture more of the genetic diversity in US populations. This will help ensure that precision medicine will benefit as many people as possible in the US.</p> <p>"Understanding the genetic structure of the US is important because it helps illuminate distinctions between populations that studies might not otherwise account for," said Alicia Martin, a geneticist in the Broad Institute's <a href="https://www.broadinstitute.org/node/8507/" target="_blank">Program in Medical and Population Genetics</a>, a research fellow in MGH's Analytical and Translational Genetics Unit, and co-senior author of the study with Carlo Ratti, director of MIT's Senseable City Lab. 
"If we want genetic technologies to benefit everyone, we need to rethink our current approach for genetic studies because at the moment, they typically miss a huge swath of American diversity."</p> <p>Martin, Ratti, and their colleagues, including study first author Chengzhen Dai of MIT's Department of Electrical Engineering and Computer Science, partnered with the Genographic project because they wanted to understand the geographic patterns of genetic ancestry and admixture across the US over time, and learn how much people’s genetics across the US reflect historic demographic events.</p> <p>Some findings caught the researchers by surprise. For instance, their analysis revealed a striking diversity in the geographic origins of participants who identified as Hispanic or Latino. The genetic patterns of these participants indicated a complex mixture of European, African, and Native American ancestries that varied widely depending on where participants lived, whether they were in California, Texas or Florida, for example.</p> <p>Results like this, Martin noted, could hold implications for precision medicine as it becomes available to more and more Americans.</p> <p>"There are subtle genetic differences within ancestry groups that arise from their population history,” she said. “Those differences will be important but challenging to account for, especially as genetic testing is used by more diverse groups of patients than have been studied so far."</p> An analysis of genetic, geographic, and demographic data provided by more than 32,000 Americans found distinct genetic traces within many American populations that reflect the nation's complicated history of immigration, migration, and mixture.Image: Susanna M. Hamilton, Broad CommunicationsResearch, Broad Institute, Genetics, School of Engineering, School of Architecture and Planning, Urban studies and planning, Electrical Engineering & Computer Science (eecs), DNA New approach to sustainable building takes shape in Boston http://news.mit.edu/2020/mass-timber-sustainable-building-boston-0305 A five-story mixed-use structure in Roxbury represents a new kind of net-zero-energy building, made from wood. Wed, 04 Mar 2020 23:59:59 -0500 David L. Chandler | MIT News Office http://news.mit.edu/2020/mass-timber-sustainable-building-boston-0305 <p>A new building about to take shape in Boston’s Roxbury area could, its designers hope, herald a new way of building residential structures in cities.</p> <p>Designed by architects from MIT and the design and construction firm Placetailor, the five-story building’s structure will be made from cross-laminated timber (CLT), which eliminates most of the greenhouse-gas emissions associated with standard building materials. It will be assembled on site mostly from factory-built subunits, and it will be so energy-efficient that its net carbon emissions will be essentially zero.</p> <p>Most attempts to quantify a building’s greenhouse gas contributions focus on the building’s operations, especially its heating and cooling systems. But the materials used in a building’s construction, especially steel and concrete, are also major sources of carbon emissions and need to be included in any realistic comparison of different types of construction.</p> <p>Wood construction has tended to be limited to single-family houses or smaller apartment buildings with just a few units, narrowing the impact that it can have in urban areas. 
But recent developments — involving the production of large-scale wood components, known as mass timber; the use of techniques such as cross-laminated timber; and changes in U.S. building codes — now make it possible to extend wood’s reach into much larger buildings, potentially up to 18 stories high.</p> <p>Several recent buildings in Europe have been pushing these limits, and now a few larger wooden buildings are beginning to take shape in the U.S. as well. The new project in Boston will be one of the largest such residential buildings in the U.S. to date, as well as one of the most innovative, thanks to its construction methods.</p> <p>Described as a Passive House Demonstration Project, the Boston building will consist of 14 residential units of various sizes, along with a ground-floor co-working space for the community. The building was designed by Generate Architecture and Technologies, a startup company out of MIT and Harvard University, headed by John Klein, in partnership with Placetailor, a design, development, and construction company that has specialized in building net-zero-energy and carbon-neutral buildings for more than a decade in the Boston area.</p> <p>Klein, who has been a principal investigator in MIT’s Department of Architecture and now serves as CEO of Generate, says that large buildings made from mass timber and assembled using the kit-of-parts approach he and his colleagues have been developing have a number of potential advantages over conventionally built structures of similar dimensions. For starters, even when factoring in the energy used in felling, transporting, assembling, and finishing the structural lumber pieces, the total carbon emissions produced would be less than half that of a comparable building made with conventional steel or concrete. Klein, along with collaborators from engineering firm BuroHappold Engineering and ecological market development firm Olifant, will be presenting a detailed analysis of these lifecycle emissions comparisons later this year at the annual Passive and Low Energy Architecture (<a href="https://www.plea2020.org/">PLEA</a>) conference in A Coruña, Spain, whose theme this year is “planning post-carbon cities.”</p> <p>For that study, Klein and his co-authors modeled nine different versions of an eight-story mass-timber building, along with one steel and one concrete version of the building, all with the same overall scale and specifications. Their analysis showed that materials for the steel-based building produced the most greenhouse emissions; the concrete version produced 8 percent less than that; and one version of the mass-timber building produced 53 percent less.</p> <p>The first question people tend to ask about the idea of building tall structures out of wood is: What about fire? But Klein says this question has been thoroughly studied, and tests have shown that, in fact, a mass-timber building retains its structural strength longer than a comparable steel-framed building. That’s because the large timber elements, typically a foot thick or more, are made by gluing together several layers of conventional dimensioned lumber. These will char on the outside when exposed to fire, but the charred layer actually provides good insulation and protects the wood for an extended period. 
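As a rough illustration of that char-layer reasoning (a sketch, not project data): fire design for heavy timber is often described with a nominal charring rate on the order of 0.6 to 0.7 millimeters per minute for softwood, and the rate, panel thickness, and exposure times below are assumptions chosen for illustration.
<pre><code>
# Illustrative sketch of the char-layer reasoning described above. The nominal
# charring rate and exposure times are textbook-style assumptions (roughly
# 0.6-0.7 mm per minute is commonly cited for softwood), not project data.
# Real fire design also discounts a thin heat-affected zone beyond the char,
# which is omitted here for simplicity.

CHAR_RATE_MM_PER_MIN = 0.65   # assumed nominal charring rate
PANEL_THICKNESS_MM = 305.0    # a roughly foot-thick timber element, per the article

for minutes in (30, 60, 90, 120):
    char_depth = CHAR_RATE_MM_PER_MIN * minutes      # depth lost on the exposed face
    residual = PANEL_THICKNESS_MM - char_depth       # sound wood still carrying load
    percent_left = 100.0 * residual / PANEL_THICKNESS_MM
    print(f"{minutes:3d} min of fire exposure: about {char_depth:4.0f} mm charred, "
          f"about {residual:3.0f} mm of sound wood left ({percent_left:.0f} percent)")
</code></pre>
Under these assumptions, even two hours of exposure consumes only a modest fraction of a foot-thick section, which is the intuition behind the claim that mass timber keeps its strength for an extended period.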
Steel buildings, by contrast, can collapse suddenly when the temperature of the fire approaches steel’s melting point and causes it to soften.</p> <p>The kit-based approach that Generate and Placetailor have developed, which the team calls Model-C, means that in designing a new building, it’s possible to use a series of preconfigured modules, assembled in different ways, to create a wide variety of structures of different sizes and for different uses, much like assembling a toy structure out of LEGO blocks. These subunits can be built in factories in a standardized process and then trucked to the site and bolted together. This process can reduce the impact of weather by keeping much of the fabrication process indoors in a controlled environment, while minimizing the construction time on site and thus reducing the construction’s impact on the neighborhood.</p> <p><img alt="" src="/sites/mit.edu.newsoffice/files/images/MIT-Mass-Timber.gif" style="width: 500px; height: 333px;" /></p> <p><em style="font-size: 10px;">Animation depicts the process of assembling the mass-timber building from a set of factory-built components. Courtesy of&nbsp;Generate Architecture and Technologies</em></p> <p>“It’s a way to rapidly deploy these kinds of projects through a standardized system,” Klein says. “It’s a way to build rapidly in cities, using an aesthetic that embraces offsite industrial construction.”</p> <p>Because the thick wood structural elements are naturally very good insulators, the Roxbury building’s energy needs for heating and cooling are reduced compared to conventional construction, Klein says. They also produce very good acoustic insulation for its occupants. In addition, the building is designed to have solar panels on its roof, which will help to offset the building’s energy use.</p> <p>The team won a wood innovation grant in 2018 from the U.S. Forest Service, to develop a mass-timber based system for midscale housing developments. The new Boston building will be the first demonstration project for the system they developed.</p> <p>“It’s really a system, not a one-off prototype,” Klein says. With the on-site assembly of factory-built modules, which includes fully assembled bathrooms with the plumbing in place, he says the basic structure of the building can be completed in only about one week per floor.</p> <p>“We're all aware of the need for an immediate transition to a zero-carbon economy, and the building sector is a prime target,” says Andres Bernal SM ’13, Placetailor’s director of architecture. “As a company that has delivered only zero-carbon buildings for over a decade, we're very excited to be working with CLT/mass timber as an option for scaling up our approach and sharing the kit-of-parts and lessons learned with the rest of the Boston community.”</p> <p>With U.S. building codes now allowing for mass timber buildings of up to 18 stories, Klein hopes that this building will mark the beginning of a new boom in wood-based or hybrid construction, which he says could help to provide a market for large-scale sustainable forestry, as well as for sustainable, net-zero energy housing.</p> <p>“We see it as very competitive with concrete and steel for buildings of between eight and 12 stories,” he says. Such buildings, he adds, are likely to have great appeal, especially to younger generations, because “sustainability is very important to them. 
This provides solutions for developers, that have a real market differentiation.”</p> <p>He adds that Boston has set a goal of building thousands of new units of housing, and also a goal of making the city carbon-neutral. “Here’s a solution that does both,” he says.</p> <p>The project team included&nbsp;Evan Smith and Colin Booth at Placetailor Development; in addition to Klein<strong>,</strong>&nbsp;Zlatan Sehovic, Chris Weaver, John Fechtel, Jaehun Woo, and Clarence Yi-Hsien Lee at Generate Design; Andres Bernal, Michelangelo LaTona, Travis Anderson, and Elizabeth Hauver at Placetailor Design<strong>; </strong>Laura Jolly and Evan Smith at Placetailor Construction<strong>; </strong>Paul Richardson and Wolf Mangelsdorf at Burohappold<strong>; </strong>Sonia Barrantes and Jacob Staub at Ripcord Engineering; and<strong> </strong>Brian Kuhn and Caitlin Gamache at Code Red.</p> Architect's rendering shows the new mass-timber residential building that will soon begin construction in Boston's Roxbury neighborhood.Images: Generate Architecture and TechnologiesResearch, Architecture, Building, Sustainability, Emissions, Cities, Energy, Greenhouse gases, Carbon, Startups, Innovation and Entrepreneurship (I&E), School of Architecture and Planning A new model of vision http://news.mit.edu/2020/computer-model-brain-vision-0304 Computer model of face processing could reveal how the brain produces richly detailed visual representations so quickly. Wed, 04 Mar 2020 14:00:00 -0500 Anne Trafton | MIT News Office http://news.mit.edu/2020/computer-model-brain-vision-0304 <p>When we open our eyes, we immediately see our surroundings in great detail. How the brain is able to form these richly detailed representations of the world so quickly is one of the biggest unsolved puzzles in the study of vision.</p> <p>Scientists who study the brain have tried to replicate this phenomenon using computer models of vision, but so far, leading models only perform much simpler tasks such as picking out an object or a face against a cluttered background. Now, a team led by MIT cognitive scientists has produced a computer model that captures the human visual system’s ability to quickly generate a detailed scene description from an image, and offers some insight into how the brain achieves this.</p> <p>“What we were trying to do in this work is to explain how perception can be so much richer than just attaching semantic labels on parts of an image, and to explore the question of how do we see all of the physical world,” says Josh Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM).</p> <p>The new model posits that when the brain receives visual input, it quickly performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face or other object. This type of model, known as efficient inverse graphics (EIG), also correlates well with electrical recordings from face-selective regions in the brains of nonhuman primates, suggesting that the primate visual system may be organized in much the same way as the computer model, the researchers say.</p> <p>Ilker Yildirim, a former MIT postdoc who is now an assistant professor of psychology at Yale University, is the lead author of the paper, which appears today in <em>Science Advances</em>. 
Tenenbaum and Winrich Freiwald, a professor of neurosciences and behavior at Rockefeller University, are the senior authors of the study. Mario Belledonne, a graduate student at Yale, is also an author.</p> <p><strong>Inverse graphics</strong></p> <p>Decades of research on the brain’s visual system has studied, in great detail, how light input onto the retina is transformed into cohesive scenes. This understanding has helped artificial intelligence researchers develop computer models that can replicate aspects of this system, such as recognizing faces or other objects.</p> <p>“Vision is the functional aspect of the brain that we understand the best, in humans and other animals,” Tenenbaum says. “And computer vision is one of the most successful areas of AI at this point. We take for granted that machines can now look at pictures and recognize faces very well, and detect other kinds of objects.”</p> <p>However, even these sophisticated artificial intelligence systems don’t come close to what the human visual system can do, Yildirim says.</p> <p>“Our brains don’t just detect that there’s an object over there, or recognize and put a label on something,” he says. “We see all of the shapes, the geometry, the surfaces, the textures. We see a very rich world.”</p> <p>More than a century ago, the physician, physicist, and philosopher Hermann von Helmholtz theorized that the brain creates these rich representations by reversing the process of image formation. He hypothesized that the visual system includes an image generator that would be used, for example, to produce the faces that we see during dreams. Running this generator in reverse would allow the brain to work backward from the image and infer what kind of face or other object would produce that image, the researchers say.</p> <p>However, the question remained: How could the brain perform this process, known as inverse graphics, so quickly? Computer scientists have tried to create algorithms that could perform this feat, but the best previous systems require many cycles of iterative processing, taking much longer than the 100 to 200 milliseconds the brain requires to create a detailed visual representation of what you’re seeing. Neuroscientists believe perception in the brain can proceed so quickly because it is implemented in a mostly feedforward pass through several hierarchically organized layers of neural processing.</p> <p>The MIT-led team set out to build a special kind of deep neural network model to show how a neural hierarchy can quickly infer the underlying features of a scene — in this case, a specific face. In contrast to the standard deep neural networks used in computer vision, which are trained from labeled data indicating the class of an object in the image, the researchers’ network is trained from a model that reflects the brain’s internal representations of what scenes with faces can look like.</p> <p>Their model thus learns to reverse the steps performed by a computer graphics program for generating faces. These graphics programs begin with a three-dimensional representation of an individual face and then convert it into a two-dimensional image, as seen from a particular viewpoint. These images can be placed on an arbitrary background image. 
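To make the forward-and-inverse idea concrete, here is a minimal, heavily simplified sketch; it is not the authors' published EIG model. A toy generator stands in for the graphics program, and a small feedforward network is trained to map rendered images back to the scene parameters that produced them. All names, dimensions, and the stand-in renderer are placeholders chosen for illustration.
<pre><code>
# Minimal, self-contained sketch of the general idea described above: a
# graphics-style generator maps latent face parameters (shape, texture, pose,
# lighting) to an image, and a feedforward network learns to run that mapping
# in reverse. This is NOT the authors' EIG model; the "renderer" below is a
# toy stand-in so the example runs anywhere.

import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT_DIM = 16      # stand-in for shape, texture, pose, and lighting parameters
IMAGE_DIM = 64 * 64  # stand-in for a flattened 2D rendering of a face

# Toy "graphics program": a fixed nonlinear map from latents to images.
# In the real setting this would be an actual face-rendering pipeline.
render_weights = torch.randn(LATENT_DIM, IMAGE_DIM)
def render(latents):
    return torch.tanh(latents @ render_weights)

# Feedforward "inverse graphics" encoder: image to estimated scene parameters.
encoder = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, LATENT_DIM),
)

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training data comes from the generator itself: sample scene parameters,
# render them, and ask the encoder to recover the parameters from the image.
for step in range(2000):
    true_latents = torch.randn(128, LATENT_DIM)
    images = render(true_latents)
    estimated = encoder(images)
    loss = loss_fn(estimated, true_latents)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 500 == 0:
        print(f"step {step:4d}   reconstruction loss {loss.item():.4f}")

# At inference time a single feedforward pass recovers the scene parameters,
# loosely analogous to the fast, mostly feedforward perception described above.
</code></pre>
The published model works with an actual face-graphics pipeline and a much richer network, but the broad pattern is the same: generate images from scene parameters, then learn the reverse mapping.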
The researchers theorize that the brain’s visual system may do something similar when you dream or conjure a mental image of someone’s face.</p> <p>The researchers trained their deep neural network to perform these steps in reverse — that is, it begins with the 2D image and then adds features such as texture, curvature, and lighting, to create what the researchers call a “2.5D” representation. These 2.5D images specify the shape and color of the face from a particular viewpoint. Those are then converted into 3D representations, which don’t depend on the viewpoint.</p> <p>“The model gives a systems-level account of the processing of faces in the brain, allowing it to see an image and ultimately arrive at a 3D object, which includes representations of shape and texture, through this important intermediate stage of a 2.5D image,” Yildirim says.</p> <p><strong>Model performance</strong></p> <p>The researchers found that their model is consistent with data obtained by studying certain regions in the brains of macaque monkeys. In a study published in 2010, Freiwald and Doris Tsao of Caltech recorded the activity of neurons in those regions and analyzed how they responded to 25 different faces, seen from seven different viewpoints. That study revealed three stages of higher-level face processing, which the MIT team now hypothesizes correspond to three stages of their inverse graphics model: roughly, a 2.5D viewpoint-dependent stage; a stage that bridges from 2.5 to 3D; and a 3D, viewpoint-invariant stage of face representation.</p> <p>“What we show is that both the quantitative and qualitative response properties of those three levels of the brain seem to fit remarkably well with the top three levels of the network that we’ve built,” Tenenbaum says.</p> <p>The researchers also compared the model’s performance to that of humans in a task that involves recognizing faces from different viewpoints. This task becomes harder when researchers alter the faces by removing the face’s texture while preserving its shape, or distorting the shape while preserving relative texture. The new model’s performance was much more similar to that of humans than computer models used in state-of-the-art face-recognition software, additional evidence that this model may be closer to mimicking what happens in the human visual system.</p> <p>“This work is exciting because it introduces interpretable stages of intermediate representation into a feedforward neural network model of face recognition,” says Nikolaus Kriegeskorte, a professor of psychology and neuroscience at Columbia University, who was not involved in the research. “Their approach merges the classical idea that vision inverts a model of how the image was generated, with modern deep feedforward networks. It’s very interesting that this model better explains neural representations and behavioral responses.”</p> <p>The researchers now plan to continue testing the modeling approach on additional images, including objects that aren’t faces, to investigate whether inverse graphics might also explain how the brain perceives other kinds of scenes. In addition, they believe that adapting this approach to computer vision could lead to better-performing AI systems.</p> <p>“If we can show evidence that these models might correspond to how the brain works, this work could lead computer vision researchers to take more seriously and invest more engineering resources in this inverse graphics approach to perception,” Tenenbaum says. 
“The brain is still the gold standard for any kind of machine that sees the world richly and quickly.”</p> <p>The research was funded by the Center for Brains, Minds, and Machines at MIT, the National Science Foundation, the National Eye Institute, the Office of Naval Research, the New York Stem Cell Foundation, the Toyota Research Institute, and Mitsubishi Electric.</p> MIT cognitive scientists have developed a computer model of face recognition that performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face. Image: courtesy of the researchers. Research, Computer vision, Brain and cognitive sciences, Center for Brains Minds and Machines, Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Science, School of Engineering, National Science Foundation (NSF), Artificial intelligence, Machine learning, Neuroscience